UX Design Definitions

Content Strategy

(Editorial) Experience + (Content) Structure + Process/System

A/B Testing

(also known as "multivariate testing," "live testing," or "bucket testing"): a method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.
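The comparison behind an A/B test can be sketched in a few lines. This is a minimal illustration, not a full experimentation pipeline: the function name `ab_test` and the sample counts are made up for the example, and it assumes a simple two-proportion z-test on conversion counts from the two variants.

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts from an A/B test.

    conv_a/conv_b: number of converting users in variants A and B.
    n_a/n_b: number of users assigned to each variant.
    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical result: variant B converts 6.5% vs. A's 5.0%
z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

A small p-value suggests the difference in behavior between the randomly assigned groups is unlikely to be chance, which is the "scientifically testing" part of the definition above.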

Concept Testing

A UX researcher shares an approximation of a product that captures the key essence (the Value Proposition) of a new concept in order to determine if it meets the needs of the target audience. This can be done one-on-one or with larger numbers of participants, and either in person or online.

Design review

A blank is a milestone within a product development process whereby a design is evaluated against its requirements in order to verify the outcomes of previous activities and identify issues before committing to further work (and, if need be, re-prioritising it). The final blank, if successful, therefore triggers the product launch or product release.

Prototypes

A blank is a simulation or sample version of a final product, which is used for testing prior to launch. The goal of a blank is to test products (or product ideas) before investing lots of time and money in the final product.

Use Cases

A blank is a written description of how users will perform tasks in your app. It outlines, from a user's point of view, an app's behavior as it responds to a request. Each blank is represented as a sequence of simple steps, beginning with a user's goal and ending when that goal is fulfilled.

Moodboards

A collaborative collection of images and references that will eventually evolve into a product's visual style guide. A blank allows UX designers to show stakeholders and teammates a proposed look for the product before investing too much time or money in it.

Experience Maps

A diagram that explores the multiple steps taken by users as they engage with the product. A blank allows designers to frame the user's motivations and needs in each step of the journey, creating design solutions that are appropriate for each.

Online survey

A questionnaire that the target audience can complete over the Internet.

Value Proposition

A statement that maps out the key aspects of a product: what it is, who it is for, and how it will be used. This helps the team create consensus around what the product will be.

Task Analysis

A study of the actions required in order to complete a given task. This is helpful when designers and developers try to understand the current system and its information flows. It makes it possible to allocate tasks appropriately within the new system.

Accessibility Audit

A study to measure if the website can be used by everyone, including users with special needs. It should follow the W3C guidelines to make sure that all users are satisfied.
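One of the many checks in such an audit can be automated. As a minimal sketch (using only Python's standard library, and covering just a single WCAG text-alternatives check, not a full audit), this flags `img` tags that lack an `alt` attribute; the class name `AltTextChecker` and the sample page are invented for the example.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags with no alt attribute -- one small check
    drawn from the W3C/WCAG 'text alternatives' guideline."""
    def __init__(self):
        super().__init__()
        self.missing = []          # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing.append(attr_map.get("src", "<no src>"))

page = '<img src="logo.png" alt="Acme logo"><img src="hero.jpg">'
checker = AltTextChecker()
checker.feed(page)
# checker.missing now lists images a screen reader cannot describe
```

Automated checks like this only catch a subset of issues; the audit as defined above still requires testing with real users who have special needs.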

Eye Movement Tracking

A technology that analyzes the user's eye movements across the UI layout (e.g., a web page). This provides data about what keeps users interested on the screen and how their reading flow could be optimized by design.

Wireframes

A visual guide that represents the page structure, as well as its hierarchy and key elements. Blanks are useful when UX designers need to discuss ideas with team members/stakeholders, and to assist the work of visual designers and developers.

User Flow

A visual representation of the user's actions to complete tasks within the product. A visualized blank makes it easier to identify which steps should be improved or redesigned.

Storyboards

Blanks are illustrations representing shots that together tell a story. In UX, this story illustrates the series of actions that users need to take while using the product. Translating functionalities into real-life situations helps designers create empathy with the user.

Personas

A blank is a fictional character created to represent a user type that might use a product in a similar way. Blanks make it easier for designers to create empathy with users throughout the design process.

Sketches

A blank is a quick way of visualizing an idea (e.g., a new interface design) by using paper and pen. Blanks are useful to validate product concepts and design approaches both with team members and users.

Competitive-analysis report

A blank maps out competitors' existing features in a comparable way. The report helps you understand industry standards and identify opportunities to innovate in a given area.

Usability report

A blank summarizes usability findings in a clear, precise, and descriptive way that helps the product team identify the issue(s) and work toward a solution. When reporting results from a usability test, a UX designer should focus primarily on the findings and recommendations, differentiated by level of severity.

Surprise changes

Communicate future directions. Customers and users depend on what they are able to do and what they know how to do with the products and services they use. Change can be good, even when disruptive, but ......... ............ are often poorly received because they can break things that people are already doing. Whenever possible, ask, tell, test with, and listen to the customers and users you have. Consult with them rather than just announcing changes. Discuss major changes early, so what you hear can help you do a better job, and what they hear can help them prepare for the changes needed.

Low Fidelity or Paper Prototype

Less time to prepare a static prototype, more time to work on the design before the test. Creating a clickable prototype takes time. Without having to make the prototype work, you can spend more time designing more pages, menus, or content. (You still need to organize pages before the test so the "computer" can easily find the right one to present, but doing this is usually a lot faster than preparing a clickable prototype.)

You can make design changes more easily during the test. A designer can sketch a quick response, and erase or change part of a design between test sessions (or during a session) without worrying about linking the new page into the interactive prototype.

Blank prototypes put less pressure on users. If a design seems incomplete, users usually have no idea whether it took a minute or months to create it. They may better understand that you are indeed testing the design and not them, feel less obliged to be successful, and be more likely to express negative reactions.

Designers feel less wedded to blank prototypes. Designers are more likely to want to change a sketchy design than one with full interaction and aesthetics. Once we invest more time and sweat in a design, it's harder to give it up if it does not work well.

Stakeholders recognize that the work isn't finished yet. When people see a rough prototype, they don't expect it to ship tomorrow. Everybody on the team will expect changes before the design is finalized. (In contrast, when a design looks very polished, it's easy for an executive to fall into the trap of saying, "this looks good, let's make it go live now.")

Context of Product Use

Natural or near-natural use of the product
Scripted use of the product
Not using the product during the study
A hybrid of the above

Analytics report

Numbers provided by an analytics tool on how users interact with your product: clicks, user session time, search queries, etc. Blanks can also "uncover the unexpected," surfacing behaviors that aren't explicit in user tests.

Social Media

Pay attention to user sentiment. ......is a great place for monitoring user problems, successes, frustrations, and word-of-mouth advertising. When competitors emerge, ........... may be the first indication.

High Fidelity or Clickable Prototype

Prototypes with blank interactivity have realistic (faster) system response during the test. Sometimes it can take extra time for the person playing the computer, whether online or on paper, to find the right screen and respond to a user's click. Too long a lag between the user's action and the "computer's" response can break the user's flow and make them forget what they did last or expected to see next. A delay also gives users extra time to study the current page, so with a slow prototype, usability-test participants may notice more design details or absorb more content than they normally would with a live system.

TIP: If the page that is supposed to appear next is hard to find in a paper prototype or slow to load in a clickable prototype, take away the current screen the user is looking at, so she is instead looking at a blank page or area. When the next page is ready, first display the previous page for a few moments again so the user can get her bearings, then replace that screen with the next one. The test facilitator can help this process by saying just a few words to help the user recover the context, for example: "Just a recap, you clicked About Us."

With blank interactivity and/or visuals, you can test workflow, specific UI components (e.g., mega menus, accordions), graphical elements such as affordance, page hierarchy, type legibility, and image quality, as well as engagement.

Blank prototypes often look like "live" software to users. This means that test participants will be more likely to behave realistically, as if they were interacting with a real system, whereas with a sketchy prototype they may have unclear expectations about what is supposed to work and what isn't. (Though it's amazing how strong the suspension of disbelief is for many users in test situations where not everything is real.)

Blank interactivity frees the designer to focus on observing the test instead of thinking about what should come next. Nobody needs to worry during the test about making the prototype work.

Blank testing is less likely to be affected by human error. With a static prototype, there is a lot of pressure on the "computer" and a fair chance of making a mistake. Rushing, stress, nerves, paying close attention to user clicks, and navigating through a stack of papers can all make the "computer" panic or just slip during the test.

Quantitative Survey

Questions that produce numbers as results. A quick and inexpensive way of measuring the level of user satisfaction and collecting feedback about the product. A survey is a quick way to collect information from a large number of users, but its obvious limitation is the lack of any interaction between the researcher and the users.
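Turning such survey answers into numbers is straightforward. As a small sketch (the function name `summarize_likert`, the 1-to-5 scale, the "satisfied" threshold of 4, and the sample responses are all illustrative assumptions, not part of any standard), this reports the mean rating and the share of respondents at or above the satisfaction threshold:

```python
from statistics import mean

def summarize_likert(responses, satisfied_at=4):
    """Summarize Likert-scale ratings (e.g., 1..5): returns the mean score
    and the share of respondents rating at or above `satisfied_at`."""
    avg = mean(responses)
    top = sum(1 for r in responses if r >= satisfied_at) / len(responses)
    return round(avg, 2), round(top, 2)

# Hypothetical answers to "How satisfied are you with the product?"
avg, top_box = summarize_likert([5, 4, 3, 4, 5, 2, 4, 5])
```

The "top box" share is often easier for stakeholders to read than a raw mean, since it directly answers "what fraction of users are satisfied?"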

Diary study

Recording the process of using a product or service over a long period (a week to a month). Previously, diaries relied on self-report: subjects filled out a "diary" every day, describing their interaction with the product and their feelings about it. Now a more objective method is used: camera recording, or video recording of the device.

People

Recruit ....... for future research and testing. Actively encourage ....... to join your pool of volunteer testers. Offer incentives for participation and make signing up easy to do via your website, your newsletter, and other points of contact.

Training

Reduce the need for ..... ....... is often a workaround for difficult user interfaces, and it's expensive. Use ......... and help topics to look for areas ripe for design changes.

Phases of Product Development

STRATEGIZE: In the beginning phase, you typically consider new ideas and opportunities for the future. Research methods in this phase can vary greatly.
EXECUTE: Eventually, you will reach a "go/no-go" decision point, when you transition into a period when you are continually improving the design direction that you have chosen. Research in this phase is mainly formative and helps you reduce the risk of execution.
ASSESS: At some point, the product or service will be available for use by enough users so that you can begin measuring how well you are doing. This is typically summative in nature, and might be done against the product's own historical data or against its competitors.

Focus group

Several people who are similar in some way discuss a topic that is important to them: the product or service under research.

Qualitative vs. Quantitative Dimension

The distinction here is an important one, and goes well beyond the narrow view of qualitative as "open ended" as in an open-ended survey question. Rather, studies that are qualitative in nature generate data about behaviors or attitudes based on observing them directly, whereas in quantitative studies, the data about the behavior or attitudes in question are gathered indirectly, through a measurement or an instrument such as a survey.

In-depth interview

The researcher asks the user about the experience of using different products, about their impressions of use, the problems the user has met with and how they have solved and much more.

Usability testing

The user performs tasks that are typical for them in the product under investigation.

Think aloud

While carrying out the tasks, the user says their actions and thoughts out loud.

Stakeholders Interviews

These are conversations UX designers conduct with their key stakeholders: customers, bosses, subordinates, or peers both within and outside the organization. This allows UX designers to step into their interviewees' shoes and see their own role through the eyes of these stakeholders. It also helps prioritize features and define key performance indicators (KPIs).

Kickoff Meeting

This covers a high-level outline of the product's purpose, who is involved in designing and developing the product, how they'll work together and stay up to date on progress, and what the intended results or success metrics are. The kickoff meeting sets the stage for the success of your product.

Attitudinal vs. Behavioral Dimension

This distinction can be summed up by contrasting "what people say" versus "what people do" (very often the two are quite different). The purpose of attitudinal research is usually to understand or measure people's stated beliefs, which is why attitudinal research is used heavily in marketing departments. While most usability studies should rely more on behavior, methods that use self-reported information can still be quite useful to designers.

User Interview

This is a common user research technique used typically to get qualitative information from existing users. It helps UX designers better understand their users (users' emotions and opinions). This technique is especially useful when the target audience is new or unknown to the team.

Competitive Audit

This is a comprehensive analysis of competitor products that maps out their existing features in a comparable way. The goal is to discover what is working for other companies in your industry, so that you can make those strategies work for you, too, to gain a competitive advantage.

Heuristic Evaluation

This is a detailed analysis of a product that highlights good and bad design practices in an existing product. It helps UX designers visualize the current state of the product in terms of usability, accessibility, and effectiveness of the experience.

Card Sorting

This is a method used to help design or evaluate the information architecture of a product. The UX designer asks users to group content and functionalities into open or closed categories. The results give the UX designer input on content hierarchy, organization, and flow.

Focus Groups

This is a moderated discussion that typically involves 5 to 10 participants. You bring people to discuss issues and concerns about the features of a user interface. The group typically lasts about 2 hours and is run by a moderator who maintains the group's focus.

Product Roadmap

This is a product's evolution plan with prioritized features. It could be a spreadsheet, a diagram, or even a bunch of sticky notes. The UX designer shares the product strategy with the team and the road that needs to be taken to achieve its vision.

Cultural Probes

This is a technique used to inspire ideas in a design process. It serves as a means of gathering inspirational data about people's lives, values and thoughts. With minimal intrusion, researchers can glean insights into participants' environments that can help to identify problem statements, uncover new opportunities, and inspire the designer with new ideas and novel solutions.

Customer Journey Map

This is a visualization of the process that a person goes through in order to accomplish a goal. It's used for understanding and addressing customer needs and pain points.

In its most basic form, a journey map starts by compiling a series of user goals and actions into a timeline skeleton. Next, the skeleton is fleshed out with user thoughts and emotions in order to create a narrative. Finally, that narrative is condensed into a visualization used to communicate insights that will inform design processes.

This tool combines two powerful instruments: storytelling and visualization. Both are essential facets of journey mapping because they are effective mechanisms for conveying information in a way that is memorable, concise, and that creates a shared vision. Fragmented understanding is chronic in organizations where KPIs are assigned and measured per individual department or group, because many organizations never piece together the entire experience from the user's standpoint. This shared vision is a critical aim of the tool: without it, agreement on how to improve customer experience would never take place. It creates a holistic view of customer experience, and it's this process of bringing together and visualizing disparate data points that can engage otherwise disinterested stakeholders from across groups and spur collaborative conversation and change.

Field Studies

This is about going out and observing users "in the wild" so that behavior can be measured in the context where a product will actually be used. This technique can include ethnographic research, interviews and observations, plus contextual enquiry.

Guerrilla Testing

This is one of the simplest (and cheapest) forms of user testing. It usually means going into a coffee shop or another public place to ask people there about your product or prototype. It can be conducted anywhere (e.g., a cafe, library, or train station), essentially anywhere you can find a relevant audience.

Product Strategy

This is the foundation of a product life-cycle and the execution plan for further development. It allows UX designers to zero in on specific target audiences and draw focus on the product and consumer attributes.

Usability Testing

This is the observation of users trying to carry out tasks with a product. Testing can be focused on a single process or be much more wide ranging.

Brainstorming

This is widely used by teams as a method to generate ideas and solve problems. It allows the team to visualize a broad range of design solutions before deciding which one to stick with.

Contextual interview

The user and researcher are in the user's real environment. The researcher listens to the user and observes the context.

Remote unmoderated user testing

We give the person our product and ask them to perform tasks under conditions that are natural for them, and we record the session.

Tests using psychophysiological indicators

We equip the user with sensors and measure parameters such as galvanic skin response, muscle activity (myogram), respiration, and heart rate.

Eye tracking

We seat a person in front of a monitor with an eye tracker. While they perform tasks on the site, we see and record where and for how long they look.

Activities -Discover

What actionable part of the design process is this describing? Ongoing and strategic activities can help you get ahead of problems and make systemic improvements.
Find allies. It takes a coordinated effort to achieve design improvement. You'll need collaborators and champions.
Talk with experts. Learn from others' successes and mistakes. Get advice from people with more experience.
Follow ethical guidelines. The UXPA Code of Professional Conduct is a good starting point.
Involve stakeholders. Don't just ask for opinions; get people onboard and contributing, even in small ways. Share your findings, and invite them to observe and take notes during research sessions.
Hunt for data sources. Be a UX detective. Who has the information you need, and how can you gather it?
Determine UX metrics. Find ways to measure how well the system is working for its users.

Search-Log Analysis

What analysis is this? Your website's search engine can tell you what your web visitors want, how they look for it, and how well your content strategy meets their needs.
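The core of such an analysis is just tallying normalized queries. As a minimal sketch (the function name `top_queries` and the sample log lines are invented for the example; a real analysis would also handle stemming, misspellings, and zero-result queries), this surfaces the most frequent searches:

```python
from collections import Counter

def top_queries(log_lines, n=3):
    """Tally normalized search queries from raw log lines and
    return the n most frequent queries with their counts."""
    queries = (line.strip().lower() for line in log_lines if line.strip())
    return Counter(queries).most_common(n)

# Hypothetical search-log excerpt
log = ["refund policy", "Refund Policy", "shipping cost",
       "refund policy", "contact support", "shipping cost"]
top = top_queries(log)
```

Frequent queries reveal what visitors want; the vocabulary they use in those queries is equally valuable, since it shows what your visitors call things.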

5 steps to prepare for any presentation

What are these steps used for?
1. Define the goals of the presentation. Explain the goals of the presentation and what kind of feedback you're looking for. If you're showing a prototype, let listeners know at the outset that you want them to concentrate on the user flow, not the visual design. State what type of feedback you want to elicit and what questions you want answered.
2. Set context. Even when all listeners are familiar with a project, they may have different perspectives on it. To ensure everybody is on the same page, walk listeners through a short scenario: introduce the user, describe the use-case scenario, and explain the context of use. For example: Alex hasn't been feeling well recently. A doctor prescribed him some medication. Alex has to take pills twice a day.
3. Focus on the problem. Explain the user problem we're trying to solve. What are the user's goals? You can cite your users or show a recording from interviews with the user. For example: Alex isn't getting better. He keeps forgetting to take his pills.
4. Introduce the solution. Illustrate the reasons behind the decisions made instead of explaining features. Explain why this solution is better than alternatives: share observations from research, provide data, recap feedback from previous meetings, and clarify the impact on the business. For example: We've noticed Alex and other patients often forget to take their pills. We've also noticed Alex has his phone with him all the time. We can use his phone as a reminder tool, and gradually he'll become used to taking his pills on time.
5. Provide next steps. This helps lead the discussion and avoid irrelevant questions. Give listeners the future vision of the product and remind them of the current priorities. Finish the presentation with a question to steer listeners' attention towards the problem and the goal of the presentation.

A 5-Step Process For Conducting User Research

What is this 5-step process used for?
1. Objectives: These are the questions we are trying to answer. What do we need to know at this point in the design process? What are the knowledge gaps we need to fill?
2. Hypotheses: These are what we believe we already know. What are our team's assumptions? What do we think we understand about our users, in terms of both their behaviors and our potential solutions to their needs?
3. Methods: These address how we plan to fill the gaps in our knowledge. Based on the time and people available, what methods should we select? Once you've answered the questions above and factored them into a one-page research plan that you can present to stakeholders, you can start gathering the knowledge you need through the selected research methods:
4. Conduct: Gather data through the methods we've selected.
5. Synthesize: Answer our research questions, and prove or disprove our hypotheses. Make sense of the data we've gathered to discover what opportunities and implications exist for our design efforts.

SWOT Analysis

What is this analysis called? Various methods for assessing the Strengths, Weaknesses, Opportunities and Threats that impact the user experience of a product.

Analytical Review

What is this describing? An auditing procedure based on ratios among accounts that tries to identify significant changes. In order to make the most of analytics data, UX professionals need to integrate this data where it can add value to qualitative processes instead of distracting resources. The biggest issue with analytics is that it can very quickly become a distracting black hole of "interesting" data without any actionable insight.
Scope of metrics: So many things can be measured, but which are meaningful?
Difference between metrics: Which metrics best answer specific questions?
Interface complexity: How do you get the analytics system to tell you what you want to find out?

Test instructions

What is this describing?
Prepare your product or design to test
Find your participants
Write a test plan
Take on the role of moderator
Present your findings

Test Plan

What is this describing?
Study goals: The goals should be agreed upon with any stakeholders, and they are important for creating tasks.
Session information: This is a list of session times and participants. You can also include information about how stakeholders and designers can log into sessions to observe. For example, you can share (and record) sessions using WebEx or GoToMeeting.
Background information and non-disclosure information: Write a script to explain the purpose of the study to participants; tell them you're testing the software, not them; let them know if you'll be recording the sessions; and make sure they understand not to share what they see during the study (having participants sign a non-disclosure agreement as well is a best practice). Ask them to think aloud throughout the study so you can understand their thoughts.
Tasks and questions: Start by asking participants a couple of background questions to warm them up. Then ask them to perform the tasks that you want to test. For example, to learn how well a person was able to navigate to a type of product, you could have them start on the home page and say, "You are here to buy a fire alarm. Where would you go to do that?" Also consider any follow-up questions you might want to ask, such as "How easy or difficult was that task?", and provide a rating scale.
Conclusion: At the end of the study, you can ask any observers if they have questions for the participant, and ask if the participant has anything else they'd like to say.

Accessibility Evaluation

What is this describing? Web accessibility testing is a subset of usability testing where the users under consideration have disabilities that affect how they use the web. The end goal, in both usability and accessibility, is to discover how easily people can use a web site and feed that information back into improving future designs and implementations.

Research -Exploration methods

What method is this describing? These methods are for understanding the problem space and design scope and addressing user needs appropriately.
Compare features against competitors. Do design reviews.
Use research to build user personas and write user stories.
Analyze user tasks to find ways to save people time and effort.
Show stakeholders the user journey and where the risky areas are for losing customers along the way. Decide together what an ideal user journey would look like.
Explore design possibilities by imagining many different approaches, brainstorming, and testing the best ideas in order to identify best-of-breed design components to retain.
Obtain feedback on early-stage task flows by walking through designs with stakeholders and subject-matter experts. Ask for written reactions and questions (silent brainstorming), to avoid groupthink and to enable people who might not speak up in a group to tell you what concerns them.
Iterate designs by testing paper prototypes with target users, and then test interactive prototypes by watching people use them. Don't gather opinions. Instead, note how well designs work to help people complete tasks and avoid errors. Let people show you where the problem areas are, then redesign and test again.
Use card sorting to find out how people group your information, to help inform your navigation and information organization scheme.

Research- Testing and validation methods

What part of the design phase and what methods are described? These methods are for checking designs during development and beyond, to make sure systems work well for the people who use them.
Do qualitative usability testing. Test early and often with a diverse range of people, alone and in groups.
Conduct an accessibility evaluation to ensure universal access.
Ask people to self-report their interactions and any interesting incidents while using the system over time, for example with diary studies.
Audit training classes and note the topics, questions people ask, and answers given.
Test instructions and help systems.
Talk with user groups.
Staff social-media accounts and talk with users online. Monitor social media for kudos and complaints.
Analyze user-forum posts. User forums are sources for important questions to address and answers that solve problems. Bring that learning back to the design and development team.
Do benchmark testing: If you're planning a major redesign or measuring improvement, test to determine time on task, task completion, and error rates of your current system, so you can gauge progress over time.
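The benchmark metrics mentioned above (time on task, task completion, error rates) reduce to simple arithmetic over session data. This is a minimal sketch: the function name `benchmark`, the session-record shape, and the sample numbers are all illustrative assumptions.

```python
from statistics import mean

def benchmark(sessions):
    """Summarize usability-benchmark sessions.

    Each session is a dict with 'completed' (bool), 'seconds' (time on
    task), and 'errors' (error count). Mean time on task is computed
    over successful attempts only, a common convention.
    """
    done = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(done) / len(sessions),
        "mean_time_s": round(mean(s["seconds"] for s in done), 1),
        "mean_errors": round(mean(s["errors"] for s in sessions), 2),
    }

# Hypothetical sessions for one benchmark task
sessions = [
    {"completed": True,  "seconds": 42, "errors": 0},
    {"completed": True,  "seconds": 58, "errors": 1},
    {"completed": False, "seconds": 90, "errors": 3},
    {"completed": True,  "seconds": 50, "errors": 0},
]
result = benchmark(sessions)
```

Running the same computation on the current system and on each redesign gives the comparable numbers needed to gauge progress over time.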

Research -Listen

What part of the design phase is this describing? Listen throughout the research and design cycle to help understand existing problems and to look for new issues.
Analyze gathered data and monitor incoming information for patterns and trends.
Survey customers and prospective users.
Monitor analytics and metrics to discover trends and anomalies and to gauge your progress.
Analyze search queries: What do people look for and what do they call it? Search logs are often overlooked, but they contain important information.
Make it easy to send in comments, bug reports, and questions. Analyze incoming feedback channels periodically for top usability issues and trouble areas. Look for clues about what people can't find, their misunderstandings, and any unintended effects.
Collect frequently asked questions and try to solve the problems they represent.
Run booths at conferences that your customers and users attend so that they can volunteer information and talk with you directly.
Give talks and demos: capture questions and concerns.

Research -Discovery

What phase of the design process are you in while doing these tasks?
Conduct field studies and interview users: Go where the users are; watch, ask, and listen. Observe people in context interacting with the system or solving the problems you're trying to provide solutions for.
Run diary studies to understand your users' information needs and behaviors.
Interview stakeholders to gather and understand business requirements and constraints.
Interview sales, support, and training staff. What are the most frequent problems and questions they hear from users? What are the worst problems people have? What makes people angry?
Listen to sales and support calls. What do people ask about? What do they have problems understanding? How do the sales and support staff explain and help? What is the vocabulary mismatch between users and staff?
Do competitive testing. Find the strengths and weaknesses in your competitors' products. Discover what users like best.

Principles of interaction design

What principle tool set does this describe?
Use evidence-based design guidelines, especially when you can't conduct your own research. Usability heuristics are high-level principles to follow.
Design for universal access. Accessibility can't be tacked onto the end or tested in during QA. Access is becoming a legal imperative, and expert help is available. Accessibility improvements make systems easier for everyone.
Give users control. Provide the controls people need. Choice, but not infinite choice.
Prevent errors. Whenever an error occurs, consider how it might be eliminated through design change. What may appear to be user errors are often system-design faults. Prevent errors by understanding how they occur and design to lessen their impact.
Improve error messages. For remaining errors, don't just report system state. Say what happened from a user standpoint and explain what to do in terms that are easy for users to understand.
Provide helpful defaults. Be prescriptive with the default settings, because many people expect you to make the hard choices for them. Allow users to change the ones they might need or want to change.
Check for inconsistencies. Work-alike is important for learnability. People tend to interpret differences as meaningful, so make use of that in your design intentionally rather than introducing arbitrary differences. Adhere to the principle of least astonishment. Meet expectations instead.
Map features to needs. User research can be tied to features to show where requirements come from. Such a mapping can help preserve design rationale for the next round or the next team.
When designing software, ensure that installation and updating is easy. Make installation quick and unobtrusive. Allow people to control updating if they want to.
When designing devices, plan for repair and recycling. Sustainability and reuse are more important than ever.
Design for conservation. Avoid waste. Reduce and eliminate nonessential packaging and disposable parts. Avoid wasting people's time, also. Streamline.
Consider system usability in different cultural contexts. You are not your user. Plan how to ensure that your systems work for people in other countries. Translation is only part of the challenge.
Look for perverse incentives. Perverse incentives lead to negative unintended consequences. How can people game the system or exploit it? How might you be able to address that? Consider how a malicious user might use the system in unintended ways or to harm others.
Consider social implications. How will the system be used in groups of people, by groups of people, or against groups of people? Which problems could emerge from that group activity?

True-Intent Studies:

a method that asks random site visitors what their goal or intention is upon entering the site, measures their subsequent behavior, and asks whether they were successful in achieving their goal upon exiting the site.

Unmoderated Remote Panel Studies:

a panel of trained participants who have video recording and data collection software installed on their own personal devices uses a website or product while thinking aloud, having their experience recorded for immediate playback and analysis by the researcher or company.

Unmoderated UX Studies:

a quantitative or qualitative automated method that uses a specialized research tool to capture participant behaviors (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.

Card Sorting:

a quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users' mental models.

Interviews:

a researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.

Concept Testing:

a researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience; it can be done one-on-one or with larger numbers of participants, and either in person or online.

Email Surveys:

a survey in which participants are recruited from an email message.

Intercept Surveys:

a survey that is triggered during the use of a site or application.

Eyetracking:

a device is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.

Clickstream Analysis:

analyzing the record of screens or pages that users click on and see as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled.

Focus Groups:

groups of 3-12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.

Graphical User Interface (GUI /ˈɡuːi/):

a type of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels, or text navigation.

Customer Feedback:

open-ended and/or close-ended information provided by a self-selected sample of users, often through a feedback link, button, form, or email.

Usability-Lab Studies:

participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.

Diary/Camera Studies:

participants are given a mechanism (manual or mechanical) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience; diary studies are typically longitudinal and can only be done for data that is easily recorded by participants.

Participatory Design:

participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.

Desirability Studies:

participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list; these studies can be both qualitative and quantitative.

Ethnographic Field Studies:

researchers meet with and study participants in their natural environment, where they would most likely encounter the product or service in question.

Usability Benchmarking:

tightly scripted usability studies are performed with several participants, using precise and predetermined measures of performance.

Moderated Remote Usability Studies:

usability studies conducted remotely with the use of tools such as screen-sharing software and remote control capabilities.
