HCI


How do you do reporting in a usability test?

- Anonymize participants
- Background of participants
- Details of the tasks
- Write down: • What did they do? • Why did they do it? • What did they not do? • What was interesting? • What surprised you?
- Making mistakes is okay, but write them down and acknowledge them so the reader knows to take the results with a grain of salt and so the same mistakes are avoided in future studies.

How to prepare for focus groups?

- CHOOSE TOPICS: 3-5 topics (do not make them too broad)
- TARGET AUDIENCE
• Homogeneous audience - pick people who are fairly similar so that they feel comfortable expressing their actual thoughts; what "similar" means depends on the topic, e.g. if the topic is 1950s music then age will play a big part.
• Choose a subset of your target audience - the people you think will give the most useful feedback on your topics; you need a good user profile to find the right ones.
• Choose several different groups - to get more reliable answers; never rely on a single group or a single person, so run multiple focus groups to strengthen your findings.
- RECRUITING
• Find people who fit your profile.
• Don't invite people who know each other to the same group - they can read each other's signals and body language.
• Avoid regulars - people who frequently participate in focus groups may subconsciously give the answers they think are expected.
• Same level of knowledge on the topic - make sure no participant knows significantly more about any of the topics than the others.
• Do not invite people who are directly involved with the topic - you want honest, unfiltered answers.
- DEFINING SCOPE
• How many groups? Run as many as time allows (never just one focus group).
• How many people per group?
- WRITE A GUIDE (a script)
• Sequential - think about what you want to ask and put the questions in the right order.
• Non-directed - don't imply an answer or a judgement (no bias).
• Open-ended - ask questions that encourage people to open up and share experiences.
• Focus on specifics - ask specific questions to get specific answers.
• Personal - create a personal atmosphere where people can share their own views, values and experiences without generalizing to the public at large.
• Unambiguous - questions should be straightforward and not open to more than one interpretation.

What are the outcomes of contextual inquiries?

- Challenging assumptions (e.g. is the radio interface really the problem?)
- Specific, concrete details about actual use (e.g. you get to see how the radios are used)
- Hidden understanding can surface (e.g. it is not the interface, it is the gloves)

Which research methods should we use?

- Depends on the field you're in. Try to use both qualitative methods (attitudes, opinions, feelings - e.g. interviews, observations) and quantitative methods (numbers and statistics).
- Depends on factors like time, budget, available equipment etc.
- Research methods are guidelines. No situation is the same, so adapt and be creative.

Other things to keep in mind when conducting interviews?

- Do not force opinions.
- Be aware of your own expectations: things might surprise you - make sure that doesn't affect how you run the interview.
- Restate answers: verify that you understood the answer correctly by repeating what the participant said in different words.
- Follow up with examples, but wait for an undirected answer first: useful if they seem stuck in their ideas - an example might spark something new.
- Use artifacts to keep people focused and to trigger ideas: focusing on a specific object can help them recall missing details.
- Never say the participant is wrong.
- Listen carefully to the questions the interviewee poses to you: they say a lot about how they understand the product or situation. Answer their questions with a probing question (e.g. "is that how you think it should work?").
- Create a report immediately after the interview and always review your recordings to check whether you missed something.
- Listen, understand and then respond - that is what interviewing is about.

How to do personas?

- Internal interviews - within your company, to collect as much data as possible about e.g. the target group and existing users.
- Research with participants - do research with individual users or potential users to get more data for your personas.
- Market research - sales and marketing often have detailed demographic profiles and market research that give you a big-picture view of your audience.
- Usage data and customer feedback - look at customer forums, community sites and support systems, which can provide data about your users.

How do you prepare for interviews?

- It's important to organize the interviewing process in detail and rehearse it before beginning the formal study. • Choose a setting with the least distraction. • Prepare a method for recording data, e.g. taking notes or audio recording.
- The researcher needs to make sure respondents have: • Basic information about the purpose of the interview and the research project it is part of. • A clear idea of why they have been asked - brief them properly so they know why they're there before the interview starts. • Some idea of the probable length of the interview and that you would like to record it (explain why). Recording is recommended - you will miss things if you only take notes.

What to think about when editing and ordering your survey questions?

- Keep the survey short: test beforehand how long it takes and state it in the survey so respondents know. It shouldn't take longer than 20 minutes - about 20-30 questions.
- Edit and order questions: ask e.g. colleagues to try it out.
- Carefully consider question order so it's logical; if no logical order is possible, randomize your questions.
- Write instructions, including: • Importance: "your participation in this survey is very important to us" • Goals of the survey • Privacy: how you handle their personal information • Reward: whether they get something for participating • Responsible organization/contact: who to contact with questions or comments.
- Already start writing the report.

How to conduct focus groups?

- Layout - e.g. prepare snacks if people will sit there for a long time, but nothing loud like chips that can be distracting; choose a comfortable room with good ventilation, few distractions etc.
- Moderator (the one running the focus group) needs to: • always be in control • always move forward - if something takes too much time, move on to the next topic • be nonjudgmental • be respectful • keep distractions away • be prepared - know enough about the topic in case participants ask questions.
- Assistant moderator - someone who can help, e.g. welcome participants and make sure there are enough snacks.
- Moderating the discussion - balancing the participants' comfort level against keeping the discussion producing useful information for the research.
- Problems when conducting: • Misleading results - e.g. groupthink can arise and cause problems • Emotions - certain topics can trigger strong emotions in some participants, which can influence the rest of the group • Dismissing participants - sometimes people are late, turn out not to fit the target group, or bully others in the focus group.
- Managing observers: people from the development team should observe some focus groups and can communicate with the moderator through text, chat or notes if they have questions.
- Hiring experts: if you don't have time you can hire expert moderators, but it costs money and takes time because you need to brief them.

What are diary studies/probes?

- Longitudinal: runs over an extended period of time.
- Provides a view into people's lives.
- Participants track their own progress.
- Ask people to keep a diary while they are using a product over time. • Access to time-variant information and trends. • Reduces revision in retrospection and forgotten events, because the time between an event and its documentation is short.
- A "geographically distributed qualitative research method". • Potential to exploit cultural and geographic differences.

Surveys: test & field?

- Pilot test the survey as if it were the real thing to eliminate costly mistakes. • Use the same recruiting methods, data collection methods and analysis methods as the final survey.
- Incentives: should be provided and picked according to who your target group is - tailor them to the target audience. • Small amounts of money per participant may not be appealing. • Raffle something valuable, e.g. a PS3.
- It's important to find the right people to avoid bias and wrong results --> have a good user profile.
- Sampling frame = the subset of the population that you can hypothetically access, the people within your reach (e.g. JU). Sample = the people you actually reach, the group that filled out your survey (e.g. students at JTH). You want to comprehensively invite people who make up your user population, without missing or over-representing any groups.

What to think about when reading customer feedback?

- Read from the user's perspective.
- Focus on facts: e.g. what is wrong and how it is wrong, even if the comment sounds angry.
- Do not jump to conclusions: just because a couple of people say they like or dislike something doesn't mean they represent the population.
- Not everything is a must-have fix: complaints are just pointers to things you might want to look at.
- Take comments with a grain of salt: people who leave feedback are usually the ones with the strongest emotions.
You want to look for patterns in your feedback:
- Who are the users: is the feedback coming from people in your target group or not?
- What are they trying to do: what goals are they trying to reach?
- How do they approach a problem: what strategies do they use?
- What are the problems: are they having similar problems or not?

How do you recruit and plan for a contextual inquiry?

- Recruit extreme users - they will have the most outspoken opinions about your product.
- Plan enough time - e.g. if you need to follow someone for a whole day you'll only get data from one person, so it takes time.
- Get consent - make sure you're allowed to come, visit and observe.
- Learn the domain - know things beforehand so you can ask more to-the-point questions (there are dumb questions).
- Make expectations explicit - tell the person you're observing what you'll be doing so they know what to expect.
- Prepare for the visit - test all your equipment (e.g. cameras) beforehand so everything is ready when you start.

What to think about when setting your research plan format?

- Set expectations: be realistic, i.e. don't oversell it.
- Set schedules for responsibilities: who is doing what, when? Be specific for short-term goals and more general for long-term goals, because they may change depending on how the short-term ones go.
- Specify goals: e.g. when you are expected to deliver what, so everyone knows what to expect and when.
- Specify outputs: these should be tangible, measurable deliverables - reports, presentations, workshops.

How do you document usability tests?

- Specifications - how did you run the test, what was the room like etc., in case someone wants to redo your study to see if they get the same result.
- Hardware and software used.
- Detailed description of the prototypes.
- Environmental conditions.
- Skills of the users.

How to write a scenario?

- The scenario should take the person's perspective --> a "day in the life" feel.
- Stick to the task. • Only include the most important actions and system responses - don't list every interaction the person might bump into during the scenario, just the ones that matter. • Introduce constraints (limitations) one by one and see how the story changes. • High-level scenarios - describe the scenarios at a high level.

What are the different types of diary studies?

- Usage diaries: participants document specific moments of interaction with a product or service.
- Spotter diaries: get participants to identify where and how the presence of companies/products/services matters in people's lives. You ask people to spot things, e.g. how many ATMs do you see on your way to school?
- Process/purchase diaries: what the process that led to a decision looked like, e.g. buying a new product (checking the budget etc.).
- Behavior diaries: examine the activities or objects that make up a specific topic, e.g. to see how people deal with money you look at activities like earning, spending and saving.

Why and how can we be neutral when interviewing?

- Be non-directive: don't lead the questions a certain way or imply an answer, because that biases the responses. We want to know people's thoughts, not confirm our own biases.
- It's difficult to be entirely neutral, but to get closer you can: • Distance yourself from the product - be as rigorous (stern) as possible in your neutrality. • Forget about company pride and all the effort and commitment that went into the product.
- Focus on immediate and concrete experiences: ask questions in a way that gets longer answers from your participant. • So not "is this interesting to you?" but "if it were available to you right now, would you use it? Why?"
- Focus on a single topic and avoid packing multiple questions into one: ask, then listen, then respond, and move on to the next topic.
- Keep questions open-ended: • So not "which feature is most important to you?" but "does the product do anything that is useful to you? If so, what is it? What makes it useful?"
- Avoid binary (yes/no) questions.

Goals are the first part of a research plan. How do they work?

1. Collect issues and present them as goals - identify stakeholders. Every department (and even people within a department) will have different priorities and issues, e.g. the CEO, marketing, programmers and customer relations may care about optimizing revenue, strong branding, bug-free code and minimizing customer complaints. Not all priorities will benefit the end user. Once you have collected all these issues you can...
2. Prioritize goals - importance x severity = priority (e.g. does the product crash, or does it just cause a brief pause for the user?). A small sketch of this calculation follows below.
3. Rewrite as questions - once the goals are set you can rewrite them as research questions. It's up to you to decide which questions are important for the product.
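A minimal Python sketch of the importance x severity rule of thumb, using made-up issues and 1-5 ratings:

```python
# Hypothetical sketch: turning collected issues into a prioritized list
# using the "importance x severity = priority" rule of thumb.
issues = [
    {"issue": "App crashes when saving",         "importance": 5, "severity": 5},
    {"issue": "Brief pause after pressing Save", "importance": 5, "severity": 2},
    {"issue": "Logo colour slightly off-brand",  "importance": 2, "severity": 1},
]

for item in issues:
    item["priority"] = item["importance"] * item["severity"]

# Highest priority first; each issue can then be rewritten as a research question.
for item in sorted(issues, key=lambda i: i["priority"], reverse=True):
    print(f'{item["priority"]:>3}  {item["issue"]}')
```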

Three principles when designing:

1. Early focus on users and tasks: know what they want and how they will use it. 2. Empirical measurement of product usage: users get to do usability tests on a prototype. 3. Iterative design: short development cycles, going back and forth, working together, continuously improving the product.

What does the interview structure look like?

1. Introduction - brief participants properly so they know why they're there. 2. Warm-up - get people to focus on the product and on the work of answering questions. 3. General issues - a first round of questions concentrating on their experience with the product and their expectations, attitudes and assumptions about it. 4. Deep focus - the product is introduced, and people concentrate on the details of what it does, how it works, whether they can use it and what their immediate experience of it is. 5. Retrospective - people can now evaluate the product. 6. Wrap-up - summaries and administration (they usually need to sign some form before they go).

What do we need to know to create a good system for the users?

5 things: - Who the users are - What their needs are - What they want - How they currently do things (are they making mistakes, do they have problems?) - How they would like to do them

What are some designer mistakes?

6 things: - Featurism: cramming in more and more features instead of focusing on what users actually need - Machine-oriented design: engineers sometimes create from their own way of thinking and don't consider other perspectives - Premature product release - "Next bench" design: designing for a friend or colleague, who will test the product the way they think you want it tested --> biased - The designer is not the user - Technical vs. human-oriented skills: you also need social skills to be able to get information from the users

What is a Norman door?

A door that is difficult or confusing for people to use, named after Don Norman, a professor of computer science, psychology etc., who used such doors as examples of bad design.

What is technological complexity?

The technological level needed to manufacture an industrial product. This is not as critical for users anymore, because products are now designed to be as easy as possible for us to understand, e.g. we don't always need manuals. We no longer adapt to the technology; it's the other way around.

What is a task analysis and how does it work?

A variation of contextual inquiry where you focus on one specific task. - You want to know how users solve a known problem: • What is the sequence of actions? • Where are the flexibilities? • What tools are involved? • How are the tasks learned? • Where are the tasks performed?

What is a user journey?

A visualization of steps a user takes in order to accomplish a goal. What the user is doing, thinking and feeling is typically represented in a user journey.

What is A/B usability testing and what are the good/bad things about it?

A/B testing, also called split testing, is a type of usability testing where you create two versions of a website with a small difference between them; some users get the version with the new feature while others get the existing version, and you compare how each performs. The good: - clear evidence - lets you test a new idea - answers specific questions. The bad: - only works for small changes - can easily go wrong. A sketch of comparing the two versions follows below.
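A hedged sketch of how results from the two versions could be compared - a simple two-proportion z-test on made-up conversion counts:

```python
# Hedged sketch (not a full analysis): comparing conversion rates from an A/B test
# with a two-proportion z-test. The numbers below are made up for illustration.
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return z statistic and two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Version A (existing design): 120 of 2400 visitors converted.
# Version B (new feature):     162 of 2380 visitors converted.
z, p = ab_test(120, 2400, 162, 2380)
print(f"z = {z:.2f}, p = {p:.4f}")   # a small p suggests the difference is not just chance
```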

What is Usability?

Whether a product is useful and does what it's intended to do, making it easy for users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.

What scales are being used in surveys?

After you have your questions you need to decide which scales to use. - Open response (the least recommended): an open text field where people can write whatever they want. The respondent has to put more effort into answering and you have to analyze the meaning behind the written text --> it becomes more of a qualitative method. - Partially open response: multiple-choice questions with standard answer options; make sure you include the most common options. - Closed response: • Semantic differential scales: about attitudes, with two opposite ends (e.g. terrible vs. great); respondents choose a point on the scale. • Likert scales: how much you agree or disagree with a statement. • Ranking scales: you rank the answer options. • Checklists: you ask people to select multiple options.

What is the circle of life?

An iterative process that starts all over again, either with a new product or, if there are problems, by going some steps back. It follows these steps: 1. Plan = stakeholder interviews. Look at competitors; talk to management about their vision, goals and target group. 2. Research = user research. Work with personas and scenarios, then look at motivations, goals, needs, triggers, preferences. 3. Design = ideation. Prototypes, architecture, wireframes, workflow. 4. Adapt = implementation. Implement the system and focus on technical testing, development, visual design. 5. Measure = evaluation. User testing: usability, user experience - did it actually work?

What is eye tracking and what are the good/bad things about it?

Another type of usability test, where a camera tracks the movements of the participant's eyes. It's usually hard to tell whether a person actually saw what they were supposed to look at but failed to read or understand it, or whether they missed it entirely - eye tracking can address this. The good: - distinguishes seeing from noticing - heat maps: show where participants tend to focus the most and the least - gaze plots: show where on the screen the eyes had fixations and the paths (saccades) the eyes took between them. The bad: - analysis used to be hard (less of a problem now) - expensive (e.g. equipment, training/skills).

Bad vs Good design?

Bad designs are easy to find because they stand out. Good design is important and usually harder to notice because it blends into your life. Good design (very subjective): - easy, natural & engaging interaction - usefulness is often context-dependent - users can carry out their required tasks - accounts for human limitations.

Why is it good to know cultural differences?

Because in different cultures specific signs, words etc. can mean totally different things compared to your own country. You need to know this before you design something, so you don't offend anyone.

What types of bias can you get in your survey answers?

Biases you can get while conducting surveys:
- Sample bias: getting the wrong people.
- Non-responder bias: a certain type of people don't respond to your survey.
- Timing/duration bias: choose the right time to send out your survey, otherwise the responses can be skewed, e.g. if you send out a shopping survey during Christmas you won't capture average shopping behavior.
- Invitation/incentive bias: adapt the language of the invitation to who you're sending it to, e.g. be more formal.
- Self-selection bias: a special kind of invitation bias where you let people choose whether to participate without explicitly inviting them, e.g. "click here to take our survey".
- Presentation bias: how you present your survey, both technically and aesthetically, determines who feels like answering it - some people are more attracted by what the survey looks like, so keep it clean.
- Expectation bias: people who try to guess what you want to hear.

Facts about contextual inquiry?

Contextual inquiry: almost like an interview, but more like an apprenticeship - you go with someone to their workplace, they teach you and tell you about their work, and you ask questions about how they do things. - Users needed: ~10 - Interviews in the work context - Master-apprentice relationship (you see the person being interviewed as the master and yourself as the apprentice) - Work objects are available - Easier to address realistic issues - But: interruptions (e.g. loud surroundings).

What to think about when it comes to the budget of researching your product?

Cost: - Staff time x hourly rate - Recruiting and incentive costs (no one participates for free) - Equipment costs (incl. location rental) - Incentives differ depending on the user population. As a rule of thumb, user-centered research can be estimated at around 10% of the total development cost. A small example calculation follows below.
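A minimal sketch of the cost items listed above, assuming hypothetical figures:

```python
# Minimal sketch of the cost items above; all figures are hypothetical.
def research_budget(staff_hours, hourly_rate, participants, incentive_per_participant,
                    recruiting_cost, equipment_cost):
    return (staff_hours * hourly_rate
            + participants * incentive_per_participant
            + recruiting_cost
            + equipment_cost)

cost = research_budget(staff_hours=80, hourly_rate=60,
                       participants=12, incentive_per_participant=25,
                       recruiting_cost=500, equipment_cost=700)
print(f"Estimated research cost: {cost}")   # 80*60 + 12*25 + 500 + 700 = 6300
# Rule of thumb from above: this should land near ~10% of total development cost.
```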

Where can you get customer feedback and what will it tell you?

Customer feedback gives you information about users' experiences. You can access feedback through sources such as: - forums (e.g. comments left by users) - social media - review pages. And they can tell you about: - expectations: whether those expectations were met or not - "points of pain": the issues users experienced when using your product - suggestions: e.g. how to improve your product.

What is the dual process theory?

Describes how people use two modes of thinking: - System 1 is intuition and instinct: you just know how to use something. It is unconscious, fast, associative - running on autopilot. - System 2 is rational thinking where you have to concentrate: it takes more effort and is slower, logical, lazy and indecisive, and things that rely on it take more effort to learn.

History in 1995?

Don Norman joined Apple as a User Experience Architect, in what many consider to be the first dedicated UX job title.

Examples on when to use these research methods?

Early design & requirements gathering: - internal discovery - surveys - log file analysis - user profiling - usability testing of existing products - contextual inquiry/task analysis - focus groups. Development & design: - usability tests. After release (once everything is implemented, you test with the final product): - surveys and log files - diary studies - contextual inquiry.

History in 1430?

An early example of UX design: Leonardo da Vinci created working conveyor belts and a giant oven for a feast hosted by the Duke of Milan. They all failed, but only because of poor execution = a good lesson.

Facts about focus groups?

Focus groups: putting people together and asking for their opinions on the product; moderated group discussions. - Users needed: 6-9 per group - POBA talk (Perceptions, Opinions, Beliefs & Attitudes): asking how people feel about things - Lifecycle stage: early development, feature definition, user involvement - Main advantages: spontaneous reactions (you're in a group of people with different priorities, desires and anecdotes) and group dynamics (a good group can complement each other and spark new ideas) - Main disadvantages: • Groupthink (in a bad group, e.g. someone very dominant can get everyone to agree even if they hold different opinions; therefore select people very carefully - avoid people who dominate or who always agree and never give their own opinions) • Social demand characteristics • Desires can easily be misinterpreted as needs.

What does frustration from a product/system lead to and look like?

Frustration is the main cause of people abandoning a product. It looks like: - can't figure out how to do simple things - many rarely used functions - many hidden functions - operation outcomes that aren't visible (e.g. you push a button on a website and nothing happens).

What are personas?

Generalizations of people: you can't design for everybody individually, so you gather data from as many people as possible and combine them.

What to keep in mind when doing surveys on the web?

Web-based surveys give you more flexibility and freedom than paper-based surveys. Keep these things in mind: - Functionality: always check that everything works online, e.g. that a video displays in all browsers - Usability: do a basic usability test on the survey itself to make sure everything works and that people's experience matches their expectations - Error checking: if people forget to fill in a question you can show a pop-up telling them so - Timing: you don't want people to answer too fast, because then they probably haven't read the questions properly - Mortality: keep track of people who drop out of your survey after a certain question - Response rate: track how many people were offered the survey vs. how many actually filled it out. A small sketch of these last measures follows below.
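A minimal sketch, with hypothetical numbers, of how response rate and per-question drop-off (mortality) could be computed from survey logs:

```python
# Hedged sketch: computing response rate and per-question drop-off ("mortality")
# from hypothetical web-survey logs.
invited = 1200
started = 310
completions_per_question = [310, 302, 288, 251, 168, 161, 158]  # how many reached each question

response_rate = started / invited
print(f"Response rate: {response_rate:.1%}")          # 25.8%

for i in range(1, len(completions_per_question)):
    dropped = completions_per_question[i - 1] - completions_per_question[i]
    if dropped > 30:                                   # arbitrary threshold for illustration
        print(f"Large drop-off at question {i + 1}: lost {dropped} respondents")
```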

History in 1955?

Henry Dreyfuss, an industrial designer, elaborated on some of the key principles of usability for consumer products in his book "Designing for People".

How do we define and make a scenario?

How to make a scenario? - Decide which stories to tell: • Frequency - which activities are most frequently performed to achieve the goals? • Necessity - which activities are necessary to achieve them? • Sequence - which frequent and/or necessary activities take place as part of a single sequence?

How do you identify your competition?

Sort your competitors into these groups: - Tier 1 - your direct competitors (the most important) - Tier 2 - also competitors, but less important - Tier 3 - niche competitors, who compete with part of your product but not the whole thing. Then profile your competition so you know who you're dealing with: - Product description - of the competitor's product - Audience profile - what kinds of people use your competitors' products? - Define key dimensions/categories - so you can more easily compare your competitors.

What does the perfect world for a designer look like and why isn't it possible?

In a perfect world we'd make the users happy with a product that is desirable (why?), functional (what?) and efficient (how?). But it's not easy to make the perfect product because: - companies want profits - marketers want promotions. It's hard to find a balance without compromising the usefulness of the product too much.

History in 1970?

Interdisciplinary design. Xerox PARC (a Silicon Valley research lab) was responsible for e.g. the concept of the GUI and the computer mouse. Psychologists and engineers worked arm in arm to provide stunning experiences.

Why and when to do an interview?

Interviews are a good technique for getting information about the user's experience. When to do an interview: • when you need highly personalized data • when there are opportunities for probing, i.e. when you want to go more in depth • when a good return rate is important • when participants have reading difficulties or speak another language and wouldn't understand a survey.

Facts about interviews?

Interviews are a qualitative research method, which is about opinions, feelings, beliefs etc. - Users needed: ~5 - Lifecycle stage: task and environmental analysis, early design stages - Main advantages: flexible, in-depth probing (= searching/exploring) of attitudes and experiences; you can also see how people respond - Main disadvantages: time-consuming; you have to go through the interviews and invest time because people sometimes express things differently but mean the same thing; also hard to analyze and compare - Variations: contextual inquiry (e.g. tech tours).

How to recruit people for your research?

It's important to recruit the right people, because otherwise you'll get feedback that doesn't represent the views and behaviors of the actual users.
1. Selection - profile your users, e.g. age, gender, education, income, technology experience etc. There's no fixed list; it's up to you to decide which of these aspects matter for your product. Look online to figure out who the people using your product are.
2. Finding your users - Start with friends and family, but make sure they don't know too much about your goals and what you want to achieve with the product. Then maybe parents, colleagues, housemates... Be creative and persistent, use different platforms and resources, and expand your horizon.
3. Schedule the study - Define a scheduling window in which the study will take place; make it long enough and let people choose - you should adapt to their schedule, not the other way around. - Write and send invitations: • be clear about the goal of the study without giving away too much detail • be clear about the time/date, location and incentive • be clear about the voluntary nature of the study and the protection of personal privacy. - Confirm and reconfirm. - Schedule potential backup candidates (in case someone doesn't show up). - Participants are allowed to stop and not finish, because it's voluntary, and they still get the gift card (if you promised one) even if they don't finish.
4. Recruiting pitfalls - mistakes that can happen when recruiting: - The wrong people. - No-shows: you can schedule extra people to avoid this. • Severity depends on how critical their role and input are (an expert is harder to replace than a student). • Double-schedule the time slots to prevent no-shows - though that means double the work and double the incentives. • Floater = someone who hangs around all day and can step in if someone doesn't show up. • "Makeup slots": book more sessions than needed, so if people don't show up, or if some of the data turns out not to be good enough to use, you'll still have enough data. (What Bruce does - the best way to handle faulty data and no-shows.) - Bias: be aware of the potential bias in your sample; a truly random sample is practically impossible.

What type of process is user centered design?

An iterative design process = a cycle where you take smaller steps, keep testing and creating so you know you're on the right track. Plan --> Research --> Design --> Adapt --> Measure --> start over.

What factors should you look at when conducting usability tests?

Look at one or more of these factors: - Ease of learning • How fast can a user who has never seen the user interface learn it well enough to accomplish basic tasks? How steep is the learning curve? - Efficiency of use • Once an experienced user has learned the system, how fast can he or she accomplish tasks? - Memorability • If a user has used the system before, can he or she remember enough to use it effectively the next time, or does the user have to start over and relearn everything? - Error frequency and severity • How often do users make errors while using the system, how serious are these errors and how do users recover from them? - Subjective satisfaction (IBM usability questionnaire - Lewis 1995) • How much does the user like using the system?

History in early to mid 1900s?

Mechanical engineers and industrialists (e.g. Frederick Winslow Taylor, Henry Ford) created a framework for the relationship between workers and their tools: early examples of documented research on the relationship between users and their tools/products.

How can metrics be used on the web?

Metrics on the web - two ways to get quantitative data:
- Log files = files that contain information about usage patterns and activities and show whether resources are performing properly. This is on the server side and is more about errors within the system (e.g. missing pages).
- Page tagging = code inserted on the pages to track users. It tells you more about the behavior of the users visiting your website (e.g. Google Analytics). Practically everything about users' behavior can be captured, which is an advantage (e.g. how long they watch an ad).
These metrics tell you what people are interested in looking at and how they behave on the website. They don't tell you why they behave like this, so you should combine them with qualitative methods.
- Site-wide measurements: cover the largest window of information.
- Session-based statistics: reveal a richer set of user behaviors than simple site-wide stats.
- User-based statistics: help you learn how your users vary in their behavior.
- Clickstream analysis: shows the paths people followed through a site, going into more specifics than user statistics.
- Metrics for advertisements: statistics about advertising, so researchers can understand how to increase its effectiveness. A small parsing sketch follows below.
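A minimal sketch, assuming a simplified made-up log format, of how site-wide page views and session-based statistics could be derived:

```python
# Hedged sketch: deriving simple site-wide and session-based statistics from a
# simplified, made-up log format: "user_id timestamp path".
from collections import defaultdict
from datetime import datetime

log_lines = [
    "u1 2024-05-01T10:00:00 /home",
    "u1 2024-05-01T10:01:30 /pricing",
    "u2 2024-05-01T10:05:00 /home",
    "u1 2024-05-01T14:20:00 /home",      # later visit -> new session
    "u2 2024-05-01T10:06:10 /signup",
]
SESSION_GAP_MINUTES = 30

page_views = defaultdict(int)            # site-wide: views per page
visits = defaultdict(list)               # user-based: timestamps per user

for line in log_lines:
    user, ts, path = line.split()
    page_views[path] += 1
    visits[user].append(datetime.fromisoformat(ts))

print("Page views:", dict(page_views))

# Session-based: count sessions by splitting each user's visits on long gaps.
for user, times in visits.items():
    times.sort()
    sessions = 1 + sum(
        1 for a, b in zip(times, times[1:])
        if (b - a).total_seconds() > SESSION_GAP_MINUTES * 60
    )
    print(f"{user}: {sessions} session(s)")
```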

What are metrics and some good things to know about metrics?

Metrics are the specific measurements and types of statistics used to review the usability of your product. You analyze the metrics, and they can tell you what the problems are and where in the process they occur. It's good to know: - what data is collected (so you know what you're dealing with and how to interpret it) - the critical measurements - data access (someone to turn to for access) - historical data (how far back the data goes, so you know whether it is new and relevant or old).

What are the two different types of scenarios?

Narratives tied to personas that describe how a person thinks and behaves in connection with an activity or situation. Two types: - Current interactions (current problems people are having) --> context/problem scenarios - show how the current state can be improved (what to improve). - Future interactions (written more during the development process, trying to identify what happens if you change something in a development cycle) --> design scenarios - envision a proposal for change (what to change).

Facts about observations?

Observation is when you sit somewhere hidden (be as invisible as possible) and try to observe certain behaviors. - Users needed: 5+ - Lifecycle stage: task and environment analysis, follow-up studies - Main advantages: ecological validity; reveals users' real tasks and shows that the findings apply to the real world; suggests functions and features - Main disadvantages: you have very little control (and as a researcher you do want control); intrusive; the experimenter can affect user behavior; time-consuming data analysis; prone to being influenced by outside factors - Variations: participatory observation (when you immerse yourself in a social setting to observe).

How to define your personas?

Once everything is clustered you might get a persona, e.g. a forty-something woman, married with children, small business manager. - Then you add details to these characteristics --> e.g. Doris Walter, 45 years old, married to Steve, has two kids, Greg and Anna... - Then you create a storyline so that a reader can put themselves in her shoes. - Avoid making too many personas. You don't have to make personas for every target group, just the important ones: • target group • importance • frequency • expertise. - Prioritize: if you have multiple target groups, focus on the more challenging group --> you will solve problems for the average group as well. - Document and share: teach the whole team about the personas and how to use them. - Develop with personas: keep using them after they're finished, otherwise they won't remain useful. - Update: keep the personas up to date throughout development.

In what ways do we see structure and why?

Our brains try to make things easier to interpret, to run on autopilot and push things into System 1. Once we learn something it's very hard to unlearn it. 5 laws: - Law of proximity = objects close to each other tend to get grouped together - Law of similarity = objects similar to each other tend to get grouped together - Law of closure = when we see a complex arrangement we tend to look for coherence/a single recognizable pattern -> our brain can visually fill things in - Law of continuity = elements arranged on a line or curve are perceived as related - Law of symmetry = the mind perceives objects as being symmetrical.

What is User experience?

A person's perceptions and responses resulting from the use and/or anticipated use of a product, system or service - the emotional value that you're creating.

Good and bad parts with personas?

Personas - the good and the bad. - Good: conceptualizing differences - personas help the team understand the differences between user groups and how to prioritize them. - Good: simplification - they condense otherwise long and difficult reports. - Bad: preexisting associations - avoid them and be as neutral as possible. - Bad: personas need to be connected to needs. • E.g. "Jeff is late paying bills" (so what?) • E.g. "Jeff needs help paying the bill" (yes, we can!) - Bad: a persona comes with scenarios and doesn't stand on its own.

How do you prepare data and find patterns in a qualitative analysis?

Prepare data:
- Label photographs and videos - no one wants to look at photos on a camera screen, so move the photos/videos from the device to a file system and label them with important information like participant name, time and date.
- Transcribe audio and video - messy handwritten notes should also be transcribed.
- Break the data down into large chunks (clusters) and try to find the patterns.
Find the patterns:
- Sorting and categorizing: • use deductive reasoning when you have predetermined or pre-existing categories • use inductive reasoning when you allow your data to suggest new groups • decide on topic and context.
Examples: how you cluster your data depends on what you want to know. E.g. if you want to understand people, you might cluster on values, mental models, goals, behaviors, role, skill level, preferred or alternative tools, pain points, demographics. E.g. if you want to understand behaviors, you might cluster on resources, mistakes/corrections, decision points, outcomes, frequency, importance, risks, purpose, cues, options.

How to prepare a usability test?

Prepare: - Participant profile(s): find the right people - How many participants? - Choosing features - which features to test depends on several things; make a list of the features you want to test, such as: • Frequency: features that are used often • New: features that are new • Troublesome: features that seem to cause trouble or annoyance for users • "Dangerous": features that are potentially dangerous or have bad side effects if used incorrectly • Importance: features that users consider important.

Facts about probes/diary studies?

Probes/diary studies: see how attitudes change over time. You need to decide how much time you need to get proper results: e.g. if you meet every week you need a couple of weeks; if you meet every day, maybe a week is enough. - Users needed: ~5 clusters (e.g. families) - Lifecycle stage: environment analysis, early design stages - Main advantages: contextual (people do this in their own environment, which gives insight into real-life user behavior); over time; personalized; rich design inspiration; geographically distributed - Main disadvantages: little control (what if people miss a day?); response bias; hard (if not impossible) to analyze - Variations: cultural probes (Gaver), technology probes (InterLiving).

How does qualitative analysis work?

Qualitative data leaves you with a lot of information, and analyzing it can be time-consuming. - It's good to go through the data with a group. - Qualitative output is unstructured, sometimes chaotic and messy (people say different things but mean the same thing). - First decide what you want to get out of your research and data. - Then return to your data and structure it into something ordered, legible (readable) and intelligible (easily understood), e.g. using the affinity diagram method or other methods. - Create a clustered hierarchy of observations: with a team you cluster (stick related sticky notes together hierarchically) and group answers that mean the same thing. - First create notes: singular observations about tools, sequences, interactions, mental models... - Arrange: analysts, a whiteboard, markers, post-it notes in multiple colors. - Write notes on separate post-its and put related notes together. - Iteratively cluster and label into a hierarchy (a new color for each new level).

What is the Wizard of Oz within usability testing and what are the good/bad things about it?

You simulate that there is some interaction, without the participants knowing, to mimic the actual prototype you're trying to develop. This does not require that you have built the product yet - e.g. pretending that a TV is touchscreen while someone is actually operating it with a remote without the participant knowing. The good: - test more complex interactions - one participant at a time. The bad: - consistency - training the wizard (the person doing the simulation needs training).

How do you construct surveys?

Surveys - construction: - Brainstorm your questions - involve the department/team and write down all the questions you can think of. - Three types of questions you can ask: • Characteristic questions: who they are - age, where they come from • Behavioral questions: about their behavior - how often do you..., how long have you... • Attitudinal questions: asking for their opinions, what they want and believe - e.g. do you think this works well, do you like the product?

Facts about surveys?

Surveys: a quantitative method. - Users needed: at least 30 (more is better) - Lifecycle stage: early design, follow-up studies - Main advantages: finds subjective user preferences; quantitatively describes the audience; easy to repeat (large groups) and to analyze; anonymity - Main disadvantages: questionnaire development is time-intensive (item construction, scale construction, establishing reliability and validity - a pilot test is required); sample bias (who will respond?); response bias; no further probing (searching/exploring) possible, i.e. you can't ask why; you'll need to ask a lot of questions.

What are the good and bad things with Think aloud?

The good: - cheap - convincing: hearing what users actually think about the product can be a good motivator for the company to pay attention to usability - flexible: can be done anywhere and used at any stage of the development lifecycle. The bad: - a somewhat unnatural situation, because people are not used to speaking their minds out loud - filtered statements: people don't want to look dumb, so they may think through their comments before saying them, whereas we actually want to know what comes to mind the moment it happens.

What are the good and bad things about surveys?

The good: - a structured way of letting people describe themselves, their interests and preferences - a higher degree of certainty, because people answer in the same format, compared to interviews where an answer can be hard to interpret. The bad: - can easily go wrong, e.g. you need to make sure you ask the right questions and that people outside your target group aren't answering the survey - can lead to inaccurate, inconclusive and deceptive results (which can send you the wrong way when developing your product).

What's the second part of a research plan, after goals?

The scheduling. - Once you have decided on your questions you need to plan a schedule. - Integrate schedules: people within the business all operate on different schedules with which the research needs to be coordinated. - Adapt priorities. - Big issues first (tackling the big issues first may solve some of the others at the same time). - General methods first (before using specific methods for specific issues).

What are usability tests?

Usability tests are structured interviews focused on specific features in an interface prototype. Usability = the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. - They test specific features, but not all of them at the same time. - Do not start usability testing too late - start in the early to middle stages of development and test with prototypes, before a feature is locked in. There are four different types of usability testing: - Exploratory: tests preliminary concepts and evaluates their promise - Assessment: tests features during implementation - Comparison: assesses one design against another - Validation: tests whether everything meets certain standards.

What to think about when drawing your conclusion on surveys?

Things to keep in mind when drawing your conclusions: - Don't confuse correlation and causation - does A cause B, or do the two just coincide and vary together? - Don't forget to differentiate between subpopulations - sometimes what looks like a single trend is actually the result of multiple trends in different populations; if you can't find a conclusion, divide by gender, age or whatever is relevant and look at the smaller parts (a small numeric illustration follows below). - Don't confuse belief with truth - people's recollections may be false. - Be critical about survey responses: • people want everything - some people want every possible feature • people exaggerate and lie - they present how they want to be rather than how they are • people will try to outguess the survey - they guess the reasons behind a question and what the questioner expects to hear • people will choose an answer even if they don't feel strongly about it - therefore opt-out answers are important.
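A small illustration with made-up numbers of why subpopulations matter: every subgroup can prefer design A while the aggregated numbers favor design B (Simpson's paradox):

```python
# Hedged illustration with made-up numbers: an aggregated survey result can point
# the opposite way from every subpopulation it contains (Simpson's paradox).
data = {
    # (design, subgroup): (satisfied respondents, total respondents)
    ("A", "novices"): (81, 87),   ("B", "novices"): (234, 270),
    ("A", "experts"): (192, 263), ("B", "experts"): (55, 80),
}

def rate(pairs):
    sat = sum(s for s, _ in pairs)
    tot = sum(t for _, t in pairs)
    return sat / tot

for design in ("A", "B"):
    overall = rate([v for (d, _), v in data.items() if d == design])
    print(f"Design {design} overall: {overall:.0%}")
    for group in ("novices", "experts"):
        s, t = data[(design, group)]
        print(f"  {group}: {s / t:.0%}")
# A wins in both subgroups (93% vs 87%, 73% vs 69%) yet B looks better overall
# (83% vs 78%) because the group sizes differ -- so split by subpopulation first.
```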

Facts about think aloud?

Think aloud: you give a person a task and ask them to carry it out while verbalizing what goes on in their mind, so you understand their thought process during the task. - Users needed: 3-5 - Lifecycle stage: formative evaluation, iterative design - Main advantages: pinpoints user misconceptions; cheap to run; helps participants reflect, because people often act on autopilot and can't really explain why they do something that doesn't make sense - Main disadvantages: unnatural for users; it can be hard for users to verbalize what they're thinking - Variations: peer tutoring, co-discovery.

Why is the dual process theory used?

To make it easier to understand how people behave and what they want, so we know how to make an easy, good system for them to use. We want to create products for System 1, where users just know how to use them and simply act.

What are the three different kinds of interviews?

Unstructured: you sit down and have a casual conversation and see what information you can get; no real plan other than to see where it goes. Structured: like a survey - you ask a question and move on to the next one; you don't want depth, just answers to your questions. Semi-structured: make a plan for what you want to know, but allow yourself to deviate if you find something interesting.

What to look at when doing qualitative vs quantitative usability tests?

Usability tests can be both qualitative and quantitative depending on what data you're collecting. Qualitative: - observation of behavior - interviews: expectations, first impressions, evaluation - think aloud during task execution. Quantitative: - speed of task completion - number of errors made - speed and frequency of error recovery - number of people able to complete the tasks - satisfaction. A small summary sketch of the quantitative measures follows below.
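A minimal sketch, with hypothetical participant data, of how the quantitative measures above could be summarized for one task:

```python
# Minimal sketch (hypothetical data): summarizing the quantitative measures above
# for one task across several test participants.
from statistics import mean

results = [  # one dict per participant for the same task
    {"completed": True,  "seconds": 74,  "errors": 1, "satisfaction": 4},
    {"completed": True,  "seconds": 51,  "errors": 0, "satisfaction": 5},
    {"completed": False, "seconds": 180, "errors": 4, "satisfaction": 2},
    {"completed": True,  "seconds": 95,  "errors": 2, "satisfaction": 3},
]

completion_rate = sum(r["completed"] for r in results) / len(results)
completed = [r for r in results if r["completed"]]

print(f"Completion rate:   {completion_rate:.0%}")                 # 75%
print(f"Mean time on task: {mean(r['seconds'] for r in completed):.0f} s")
print(f"Mean errors:       {mean(r['errors'] for r in results):.1f}")
print(f"Mean satisfaction: {mean(r['satisfaction'] for r in results):.2f} / 5")
```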

Facts about user tests?

User tests: you ask the person to try the product. Can be used in the final stages or in a competitive setting where you compare against competing products. - Users needed: at least 10+ - Lifecycle stage: competitive analysis, benchmarking, final testing - Main advantages: controlled study; results are easy to interpret and compare (benchmarking); replicable; gives you quantitative data; can be done online and not only in a lab - Main disadvantages: tasks often feel artificial/fake and restricted; time-consuming.

When should we, and when should we not, do focus groups?

WHEN? Use focus groups when you want to understand what goes on in people's heads (values, desires, motivations, memories) - POBA talk (Perceptions, Opinions, Beliefs & Attitudes). 4 different types of focus groups: - Exploratory: explore general attitudes toward a given topic to help developers see how the eventual users of the product will understand it, talk about it and judge it. - Feature prioritization: you have a whole list of features you want to implement but don't know where to start - make a prioritization list from the feedback given by the focus groups. - Competitive analysis: ask about competitors' products to see what people like and don't like. - Trend explanation: after spotting a trend, these focus groups can help determine the cause of the trend/behavior. WHEN NOT? - Not to infer (guess what people actually do) but to understand. - Not to generalize but to determine range: focus groups don't generalize to a larger population the way e.g. surveys do, but they show the range of opinions and reactions that exist.

History in 1966?

Walt Disney emphasized the use of the latest technologies to improve people's lives, inspiring many user experience designers today. Design for utility AND joy.

What is the waterfall method and is it good?

The waterfall method is a development method where each phase is handed down to the next, meaning each phase depends on the deliverables of the previous one. It's not very iterative or flexible, it's hard to go backwards (a waterfall can't flow uphill), and when you do it usually gets very expensive. It was used a lot before iterative development came along. - Iterative development: flexibility, adaptability, shared vision.

Why bother doing research?

We want to know things before creating our product, otherwise mistakes will be made. We can also learn from others' mistakes if we do research. - E.g. to avoid making products that look too similar to other products and get mixed up, e.g. cooking spray vs. insect spray. - E.g. a computer mouse with the charging port placed such that you can't use the mouse while it charges.

Describe how to do coding/revisit in qualitative analysis?

We revisit our codes because at this point we probably have lots of big groups of items, and now it's time to dive deeper: - Look for internal patterns • Subcategories: see if there are any clear subcategories • Combining: some groups can probably be combined, or at least moved closer together so you remember they are related • Moving: if a piece of data no longer fits in a group, move it somewhere more suitable. Clustering is done with other people to get multiple views on the qualitative data, because you may perceive things differently. It usually takes a couple of rounds of clustering before you're done and have a result. Once you're done clustering you can use frameworks to reframe and make more sense of your clustered data: • Taxonomies: a hierarchical organization that brings together all your existing categories and subcategories. • Maps: placing your data into spatial (structured) representations. • Timelines: often chronological; timelines help you track and present sequences of activities over a day/month/decade and can reveal interesting patterns that would otherwise be difficult to see. • Flowcharts: a way to represent branching action pathways - processes that take different turns depending on an important decision point. • Spectrums: opposites can create a spectrum along which people vary. • Two-by-two matrices: use two spectrums.

What are field visits?

When you go to the places where people feel most comfortable (any place important to your target group, like offices, homes, gyms, shops etc.) so you can see when and how products are used and what they are used for. E.g. the firefighters who thought the radio was bad, but after observation it turned out that the buttons on the radio were too small. - Like a contextual inquiry, but instead of interviewing while a task is completed you mainly observe. - Observing people in their habitual places and activities. - Gives a rough idea of: how products are used, when products are used, what the products are used for.

How do we create our survey questions?

- Closed questions should be specific, exhaustive and mutually exclusive - ask things everyone understands the same way.
- Do not ask people to predict their future behavior - it's better to ask what they've done in the past, because past behavior is a better predictor than guesses about the future.
- Avoid negative questions - they can easily be mistaken for their positive version, e.g. "which ones are you not interested in?" can be read as "which ones are you interested in?".
- Do not overload questions - don't ask multiple questions within one question.
- Be specific - avoid fuzzy words like "some" and "a lot".
- Be consistent - in the way you format your survey questions and scales; ask questions the same way every time.
- Avoid extremes - "are you always doing this?" - because no one is always doing it.
- Include an opt-out option - do not force people to answer; offer an "I don't know / not applicable" option.
- Leave space for comments and suggestions at the end of the survey.

How to analyze your data to build your personas?

- Combine different data sources: • find the best way to cluster the data (see chapter 15); clustering = putting observations on post-it notes and grouping them • who's using the product the most? • find commonalities - a feeling of group or belonging.
- Prioritize attributes and patterns: • frequency of use • size of market - how large are the groups of people represented by each pattern? • historic or potential revenue - how financially important is each of those groups? • strategic importance - who are you trying to reach?

How to do contextual inquiry?

- Establishing rapport - the relationship is master/apprentice, NOT interviewer/interviewee, expert/novice, guest or critic.
- Introductions and warm-up - so the person you're following knows what to expect: describe the study, handle NDA/consent forms, get settled, set up equipment; general questions (e.g. a typical day, typical tasks).
- Observation = the main interview: user actions, tool use, a running description; record it.
- Follow-up interview.
- Wrap-up - summarize and go through the results, and see if they have anything to add.

How does diary studies with voice message work?

- Participants can use a mobile phone or landline to make reports to a dedicated voicemail service rather than writing on paper, e.g. if they're not comfortable with writing.
- Advantages of voicemail diaries over paper-based diaries: • less interruption of ongoing tasks (depends on the environment) • combines well with mobility - a mobile phone is easier to use as a tool since you carry it with you all the time, unlike a paper book • rich qualitative feedback - people may update the diary more often and give more concrete details when they don't have to write it down • interaction with the researcher through regular updates of prerecorded messages - personal interaction allows you to calibrate the appropriate level of user response and yields a higher response rate.

How to create tasks for usability tests?

· Reasonable - make sure the tasks are reasonable, something a normal person would do
· Describe tasks in terms of end goals - the product is just a tool to reach the goal; you want to know how people reach a goal, so don't tell them every step of the way how to do it
· Be specific - the task should have a concrete end goal, e.g. "book a ticket to Italy" vs. "book a ticket to Rome, Italy"
· Doable - the task should be doable, but it is also interesting to see where people struggle
· Reasonable length
· In a realistic sequence (browse, search, buy...)

Some facts about observations?

· Sensitive topics - observation is good when the topic is sensitive and people might not want to talk about it
- Are people uncomfortable or unwilling to answer questions about a particular subject? Self-reports on sensitive (social) topics often produce biased answers.
- In this case, observations are more likely to produce accurate data.
· Observable - you must be able to observe what is relevant to your study; e.g. you can't see attitudes
- Although you can observe behaviors and make inferences about attitudes.
- Also, you can't be everywhere; there are certain things you can't observe.
· Time - observational research can be time consuming
- To obtain reliability, behaviors must be observed several times.
- The observer's presence may change the behaviors being observed; over time subjects may grow accustomed to your presence.
· Process rather than outcome - when doing observations, you're more interested in the process behind a behavior than in the outcome
- Behaviors are (partly) unconscious
- It's hard for people to reliably report how they perform certain tasks or achieve certain goals
- It's especially difficult to gather such data with questionnaires
- Events involving multiple people interacting are even harder to capture via self-report
· Main advantages:
- Ecological validity
- Reveals users' real tasks
· Main disadvantages:
- No experimenter control (e.g. the weather might play a role) - unless...
- Intrusive: the experimenter can affect user behavior
- Only for overt behaviors
- Time-consuming to gather and analyze this type of observation data
· You need to be as invisible as possible to maintain high ecological validity and get people's real reactions and behaviors
· You can't rely on one observation; you need multiple observations to confirm that the behaviors are actually true and relevant

How do you prepare for diary studies?

· Takes a lot of time to prepare (you need to design and manufacture a customized digital/paper diary)
· Duration: depends on the frequency of the behavior and the entries
- E.g., if a product is used only once a week, you will need to run the study for about two months to observe any trends
· Sampling rate: how many entries do you want? This determines the level of detail you can observe. The more frequently people fill in the diary, the more subtle the changes and trends you can notice.
- However, don't interrupt too frequently. If you ask for entries too often, participants might drop out halfway. So try to find a good balance in how often they write in the diary.
· Using triggers (a rough scheduling sketch for interval-contingent reminders follows after this list):
- Interval-contingent - send reminders at regular intervals (e.g., before lunch you send out a reminder to fill in the diary, or send one right after an event, but find a balance so you don't bother them)
- Signal-contingent - triggered by the researcher and/or by sensor events in the environment or worn by the user
- Event-contingent - after a pre-specified event (e.g., use of product X)
· Invent good exercises (diary tasks) to get more elaborate answers, and also so participants don't get bored and drop out
· Unstructured diaries: open-ended forums for users, with only loose guidelines about what they can write
· Structured diaries: a limited set of possibilities to choose a response from, like a survey form you ask them to fill out
· Look at the forms as they come in:
- Online diary formats allow you to adapt the content while the study is in progress; it is useful to look at the data and steer the study in a slightly different direction if needed. Paper books do not let you do that.
- Analysis is similar to focus group analysis (coding, labeling etc.)
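As a small illustration of an interval-contingent trigger, here is a minimal sketch assuming reminders go out once a day at a fixed "before lunch" time; send_reminder(), the contact list and the time slot are hypothetical placeholders, not from the source:

```python
# Hypothetical sketch of an interval-contingent diary trigger: remind every
# participant once a day at a fixed time (11:30 here, i.e. "before lunch").
import time
from datetime import datetime, timedelta

PARTICIPANTS = ["p01@example.com", "p02@example.com"]  # assumed contact list
REMINDER_HOUR, REMINDER_MINUTE = 11, 30                # assumed daily slot


def send_reminder(contact: str) -> None:
    # Placeholder: in a real study this could be an SMS, email, or app push.
    print(f"{datetime.now():%Y-%m-%d %H:%M} reminder sent to {contact}")


def seconds_until_next_slot() -> float:
    now = datetime.now()
    slot = now.replace(hour=REMINDER_HOUR, minute=REMINDER_MINUTE,
                       second=0, microsecond=0)
    if slot <= now:                  # today's slot has already passed
        slot += timedelta(days=1)
    return (slot - now).total_seconds()


if __name__ == "__main__":
    # Runs for the duration of the study; stop it manually when the study ends.
    while True:
        time.sleep(seconds_until_next_slot())
        for contact in PARTICIPANTS:
            send_reminder(contact)
```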

Why do competitive research?

· There are always competitors (even if you think you've come up with something new and special). You can learn from your competitors and from the changes in their products.
· When producing requirements - research what the competitors' customers find useful and attractive, and where those products fail, to guide your own selection and prioritization of features.
· When redesigning - learn from the competitors' mistakes. If something of theirs is received very well, you might want to implement it as well.

The two notes to entry on User experience?

• Note 1: User experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use.
• Note 2: User experience is a consequence of brand image, presentation, functionality, system performance, interactive behavior and assistive capabilities of the interactive system, the user's internal and physical state resulting from prior experiences, attitudes, skills and personality, and the context of use.

What are the different layers within a product in the user experience design that creates the experience?

• Product objectives
• User needs
• Interaction design
• Usability
• Visual design
• Information architecture

What is HCI?

• Stands for Human-Computer Interaction = investigates how people interact with computers.
• A multi- and interdisciplinary field (a combination of multiple disciplines in one activity; it is not tied to one specific field, so people from e.g. computer science, design, psychology and engineering all contribute)
• Other related fields: human-technology interaction, brain-computer interaction, human-robot interaction

Technology as a mediator: what is important to focus on?

• WHAT? - What functions does the product need to create this experience? Determines the functionality that provides the experience.
• HOW? - How do these functions need to work (e.g. what buttons do we need)? Putting the functionality into action.
• WHY? - Clarify the needs and emotions involved in an activity, the meaning, the experience.

