Types of Evaluation Data

- Quantitative/Objective: user performance metrics (benchmark tasks)
- Quantitative/Subjective: user opinion ratings (questionnaires)
- Qualitative/Objective: critical incidents
- Qualitative/Subjective: think-aloud technique

Qualitative Data in Empirical Methods

Critical incidents (objective):
- Effects of design flaws on users
- Obvious errors or breakdowns, but also hesitation, head shaking, etc.
- Capture information quickly, during the evaluation; don't wait until the session is over (see the logging sketch after this card)
Think-aloud (subjective):
- Also called a "verbal protocol"
- Users express their thoughts verbally, including motives, rationale, and perceptions of UX problems
- Not easy for all participants to do during task performance; may have to wait until just after
- Note that thinking aloud takes cognitive resources and may alter task performance
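
A minimal sketch of quick, timestamped incident capture during a session; the CriticalIncident record and its fields are invented for illustration, not a standard tool:

```python
# Sketch: capture critical incidents the moment they happen, so notes
# aren't reconstructed from memory after the session.
# CriticalIncident and its fields are illustrative, not a standard tool.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class CriticalIncident:
    task: str               # task the participant was performing
    observation: str        # error, breakdown, hesitation, head shaking, ...
    timestamp: datetime = field(default_factory=datetime.now)

session_log: List[CriticalIncident] = []

def log_incident(task: str, observation: str) -> None:
    """Record an incident immediately, during the evaluation."""
    session_log.append(CriticalIncident(task, observation))

log_incident("Checkout", "Hesitated ~10 s before finding the Pay button")
log_incident("Checkout", "Clicked browser Back and lost the cart")
```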

Design Walkthrough

- Can be used at almost any stage; especially effective early, before a functional prototype exists
- Need only:
  - Your conceptual design: scenarios, storyboards
  - Maybe some screen sketches, wireframes, or videos
- Not enough of the design exists yet for customers or users to interact with it
- Audience can include: design team, UX analysts, subject-matter experts, customer representatives, potential users
- Goal is to explore the design on behalf of users
- No user interaction, so you (evaluators on the design team) do the driving

Heuristic Evaluation

- Each inspector browses through each part of the interaction design:
  - Assessing compliance with the heuristics
  - Asking (of themselves) the heuristic questions
  - Noting where heuristics are supported and where violated, along with context (e.g., screenshots)
- Inspectors then get together as a team:
  - Discuss, compare, and merge problem lists (see the merge sketch below)
  - Brainstorm suggested solutions
  - Decide on recommendations
  - Write the report
- The most widely used HCI heuristics are Nielsen's:
  - Adapted for a great many platforms and areas
  - Search for and adapt them for your projects!
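
A minimal sketch of the merge step, grouping duplicate reports by heuristic and location so the team can discuss each problem once; the Finding shape and merge key are assumptions, not a standard format:

```python
# Sketch: merge several inspectors' problem lists, de-duplicating by
# (heuristic, location). Finding and the merge key are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Finding:
    heuristic: str      # e.g., "H1: Visibility of system status"
    location: str       # screen or widget where it was observed
    note: str           # what the inspector saw (plus screenshot refs)
    inspector: str

def merge_findings(findings: List[Finding]) -> Dict[Tuple[str, str], List[Finding]]:
    """Group duplicate reports of the same problem for team discussion."""
    merged: Dict[Tuple[str, str], List[Finding]] = {}
    for f in findings:
        merged.setdefault((f.heuristic, f.location), []).append(f)
    return merged

reports = [
    Finding("H1: Visibility of system status", "Upload screen", "No progress indicator", "Ana"),
    Finding("H1: Visibility of system status", "Upload screen", "User unsure upload started", "Ben"),
]
for (heuristic, location), dupes in merge_findings(reports).items():
    print(f"{heuristic} @ {location}: reported by {len(dupes)} inspector(s)")
```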

Usability Inspection

- Expert walkthrough based on usability guidelines, often working from a checklist
  - Generally want more than one expert (if affordable!)
- Indirectly assess usability/UX using heuristics and guidelines
  - Guidelines (and the walkthrough) can operate at many levels, e.g., screen layout, detailed analysis of cognitive states
- May or may not use a standard set of tasks
  - Depends on how comparable you want the judgments to be
- Summarize by listing the problems identified in each category, often also rating them for severity (see the summary sketch below)
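
A minimal sketch of the summary step, grouping problems by guideline category and sorting by severity; the categories and data are invented, and the 0-4 severity scale follows Nielsen's convention:

```python
# Sketch: summarize inspection results per category, highest severity
# first. Severity 0-4 follows Nielsen's scale (0 = not a problem,
# 4 = usability catastrophe); categories and data are invented.
from collections import defaultdict

problems = [
    ("Screen layout", "Primary action hidden below the fold", 3),
    ("Screen layout", "Inconsistent button alignment", 1),
    ("Terminology", "System-oriented jargon in error dialog", 2),
]

by_category = defaultdict(list)
for category, description, severity in problems:
    by_category[category].append((severity, description))

for category, items in by_category.items():
    print(category)
    for severity, description in sorted(items, reverse=True):
        print(f"  [severity {severity}] {description}")
```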

Discount Usability Evaluation

- Goal: get the most useful information for guiding redesign at the least cost
  - Pioneered by Jakob Nielsen (heuristic inspection)
- Do a little bit of each (analytic and empirical):
  - 3-4 experts find most of the guideline issues
  - 4-6 users experience most of the actual use problems (see the discovery-curve sketch below)
  - Between the two, you get a good sense of what to fix
- Not surprisingly a popular strategy; pretty much what you find in practice
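
The 4-6 users figure traces to Nielsen and Landauer's problem-discovery model, which estimates the share of problems found by n users as 1 - (1 - λ)^n; λ ≈ 0.31 was their reported average, and real values vary by project:

```python
# Problem-discovery model (Nielsen & Landauer): expected share of
# usability problems found by n users, if each user independently
# reveals a given problem with probability lam. lam = 0.31 is their
# reported average across projects; real values vary.
lam = 0.31

for n in range(1, 9):
    found = 1 - (1 - lam) ** n
    print(f"{n} users -> {found:.0%} of problems found")

# With lam = 0.31, five users already surface about 84% of the
# problems, which is why discount evaluation stops at 4-6 users.
```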

Quasi-Empirical Evaluation

- Involves taking data with volunteer participants, but:
  - No formal protocols or procedures
  - No quantitative data collected
  - Can be conducted anywhere (UX lab, conference room, office, in the field)
  - Very relaxed control of conditions (you can interrupt and intervene)
- Defined by the freedom given to the practitioner to innovate
- Flexible about goals and approaches:
  - Punctuated with impromptu changes of pace, direction, and focus
  - Jump on issues that arise and milk them to get the most information about problems, their effects, and potential solutions
- Best used by experienced practitioners

RITE UX Evaluation

- Rapid Iterative Testing and Evaluation (Wixon et al.)
- A quasi-empirical method (an abridged version of user-based testing)
- Fast, collaborative test-and-fix cycle (sketched below):
  - Pick low-hanging fruit at relatively low cost
  - Fix and retest in quick cycles
- Select a UX practitioner to be the facilitator (to direct the testing session):
  - Identify characteristics needed in participants
  - Decide tasks for participants to perform (agree on critical tasks)
  - Construct a test script based on those tasks
  - Decide how to collect qualitative user behavior data
  - Have one participant perform a small number of the selected tasks while thinking aloud
  - Record usability and UX problems and note their severity
- Come together (designers + evaluators) to discuss:
  - Identify problem causes and solutions, and determine which can be fixed
  - Implement fixes and bring them into the current prototype as soon as possible
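
A minimal sketch of the RITE rhythm (test with one participant, fix what is understood and cheap, retest); every helper here is a hypothetical placeholder for a human activity, not a real API:

```python
# Sketch of the RITE rhythm: one participant, fix, retest. Every
# helper below is a hypothetical placeholder for a human activity
# (running a session, editing the prototype), not a real API.

def run_session(participant, tasks):
    """Observe one think-aloud session; return observed problems.
    Placeholder: in practice this is a facilitated session, not code."""
    return []  # e.g., [{"desc": ..., "severity": 3, "cause_understood": True, "cheap_to_fix": True}]

def rite_cycle(prototype, participants, tasks):
    for participant in participants:
        problems = run_session(participant, tasks)
        if not problems:
            break  # nothing new surfaced; stop early
        for p in problems:
            # Fix only what the team understands and can fix cheaply,
            # and do it before the next participant arrives.
            if p["cause_understood"] and p["cheap_to_fix"]:
                prototype.apply_fix(p)  # hypothetical prototype method
    return prototype
```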

Analytic vs Empirical Eval

Analytic (intrinsic):
- Rooted in theory, models, guidelines
- Conducted by experts in HCI or the product domain
- Less expensive
- Carries more weight with developers
- Most likely formative
- More controlled
Empirical (payoff):
- Based on observations, surveys, interviews
- Conducted with the help of users
- More expensive
- Carries less weight with developers
- More summative
- More ecologically valid

Where eval methods fit

- Analytic/Rigorous: cognitive walkthrough, GOMS
- Analytic/Rapid: design walkthroughs, inspection techniques, heuristic evaluation
- Empirical/Rigorous: lab-based evaluation, field evaluation studies
- Empirical/Rapid: RITE, quasi-empirical evaluation, discount usability

Evaluation

Any analysis or empirical study of the UX of a prototype or system. The goal is to provide feedback during software development in support of an iterative development process: recognize problems, understand their underlying causes, and plan changes. Call it "evaluation" (UX evaluation, usability evaluation, user-based evaluation of a design), not "user testing": users don't like to be tested.

Formative vs Summative Eval

Formative evaluation helps you form the design:
- Diagnostic; uses qualitative data
- Immediate goal: identify UX problems and their causes
- Ultimate goal: fix the problems
Summative evaluation helps you sum up the design:
- Collecting data to assess the level of UX quality due to the design
- Especially for assessing improvement in user experience across iterations of formative evaluation and redesign

H1: Visibility of System Status

Keep users informed about what is going on. Appropriate visible feedback answers:
- What did I select?
- What mode am I in now?
- How is the system interpreting my actions?

H8: Aesthetic and Minimalist Design

No irrelevant information

Rapid vs Rigorous Eval

Rapid:
- Methods that are faster and less expensive, at some loss of effectiveness
- More appropriate for early stages of progress, initial reactions, and early feedback
Rigorous:
- Methods that maximize effectiveness and minimize the risk of errors, regardless of speed or cost
- Methods refrain from shortcuts or abridgements

H5: Error Prevention

Try to make errors impossible, e.g., allow only legal commands to be selected or legal data to be entered (see the sketch below).
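
A minimal sketch of the idea in code: constrain the interface so only legal values can be submitted, rather than validating after the fact; the shipping menu is invented for illustration:

```python
# Sketch: prevent errors by construction: offer a closed menu so an
# illegal value cannot be submitted at all. Menu contents are invented.
LEGAL_SHIPPING = ("standard", "express", "overnight")

def choose_shipping() -> str:
    """Only legal options can ever be returned."""
    while True:
        print("Shipping options:")
        for i, option in enumerate(LEGAL_SHIPPING, start=1):
            print(f"  {i}. {option}")
        raw = input(f"Pick 1-{len(LEGAL_SHIPPING)}: ").strip()
        if raw.isdigit() and 1 <= int(raw) <= len(LEGAL_SHIPPING):
            return LEGAL_SHIPPING[int(raw) - 1]
        print("Please choose one of the listed numbers.")  # guide, don't blame
```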

Nielsen's Heuristics

1. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
2. Match between system and the real world: The system should speak the users' language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
3. User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
4. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
5. Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
6. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
7. Flexibility and efficiency of use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user, such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
10. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

Dimensions of evaluation

Evaluation methods and data vary along several dimensions: formative or summative, rapid or rigorous, analytic or empirical, qualitative or quantitative, objective or subjective.

