Human Performance & Human-Automation Interaction - Quals


What primary factors underlie error forms and what does the underspecification hypothesis predict regarding the use of these underlying factors

Familiarity (similarity-matching) and frequency (frequency-gambling). The underspecification hypothesis says that familiarity is the predominant process used when 1) the retrieval cues are adequate to specify a unique knowledge item and 2) there is a large number of stored items; conversely, frequency is the predominant process used when 1) the retrieval cues are ambiguous or insufficient and 2) there is impoverished domain knowledge. Reason, 1990- Ch. 4

What is the theoretical framework for basic human error types

Generic error modeling system (GEMS)
- Monitoring failures (related to SB errors, precede problem detection): typically occur because of inattention (omitted attention checks) or overattention (mistimed attention checks)
- Problem-solving failures (RB & KB errors, follow problem detection): for RB errors, typically occur because of misapplication of a good rule or application of a bad rule (due to encoding deficiencies or action deficiencies); for KB errors, typically occur because of bounded rationality and an incomplete or inaccurate mental model of the problem representation
Reason, 1990- Ch.3

What are some methods for investigating human error

Naturalistic observation, questionnaires, laboratory experiments, simulation studies, and case studies

What are the major errors in SA at level 2 SA

Errors at this level typically occur as the result of an inability to properly integrate or comprehend the meaning of perceived data in light of an operator's goals:
1) Novices do not have the sufficient mental models necessary for determining which cues are relevant
2) An adapted or new mental model does not match the environment
3) Incorrect selection of the mental model that is used to interpret perceived data
4) Even with a correctly selected model, a mismatch between the model, or a lack of match of elements to the model, can result in errors
5) Overreliance on default values embedded in a model
6) No relevant model in memory, thus the need to rely on WM to develop one; WM limitations can then lead to errors
Endsley, 1995

What factors influence automation bias

1. System properties: LOA, reliability, consistency
2. Task context: concurrent tasks, workload, social accountability
Parasuraman & Manzey, 2010

What are pressure-induced performance decrements (i.e., choking under pressure) and why do they occur?

Performing more poorly than expected given one's skill level in situations in which incentives for optimal performance are at a maximum (i.e., pressure). Likely occurs because pressure creates worry about the situation and its consequences, consuming WM resources that an individual needs to perform at an optimal level. Beilock & DeCaro, 2007

General Problem Solver (GPS)

A problem-solving simulation program created by Newell and Simon that embodies MEANS-ENDS ANALYSIS: setting a high-level goal, looking for differences between the current state of the world and the goal state, looking for a method that would reduce this difference, setting as a subgoal the application of that method, and then recursively applying means-ends analysis until the final state is reached. Reason, 1990
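To make the recursive difference-reduction loop concrete, here is a minimal Python sketch of means-ends analysis (the operator table, state representation, and names are invented for illustration; this is not Newell and Simon's program, which worked over a much richer difference table):

```python
# Minimal sketch of means-ends analysis: states are sets of facts, and each
# operator is (facts it adds, facts it requires). Illustrative toy only.
def means_ends_analysis(state, goal, operators, depth=10):
    if depth == 0:
        return None                          # recursion bound reached; give up
    if goal.issubset(state):
        return []                            # no difference left between state and goal
    for diff in goal - state:                # 1) pick a difference to reduce
        for name, (adds, preconds) in operators.items():
            if diff not in adds:
                continue                     # 2) find a method relevant to the difference
            # 3) make achieving the method's preconditions a subgoal, then recurse
            sub_plan = means_ends_analysis(state, state | preconds, operators, depth - 1)
            if sub_plan is None:
                continue
            new_state = state | preconds | adds
            rest = means_ends_analysis(new_state, goal, operators, depth - 1)
            if rest is not None:
                return sub_plan + [name] + rest
    return None

# Hypothetical toy operator table: getting from home to work.
operators = {
    "walk_to_car":   ({"at_car"},   set()),
    "unlock_car":    ({"car_open"}, {"at_car"}),
    "drive_to_work": ({"at_work"},  {"car_open"}),
}
print(means_ends_analysis(set(), {"at_work"}, operators))
# -> ['walk_to_car', 'unlock_car', 'drive_to_work']
```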

Define error

Generic term that encompasses all occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome. Reason, 1990- Ch. 1

What is Hockey's (1984) cognitive state model

Different stressors induce different cognitive patternings of information-processing change. It is the most widely accepted framework for systematizing stress and performance data. Matthews, 2001

How does the DSA model compare/differ from Endsley & Jones team SA model

DSA, developed by Salmon et al. (2008), describes SA as being distributed in the world. Differences between the models:
- DSA says it is sufficient that the needed SA is distributed somewhere in the system, whereas team SA says it is not sufficient if the information is out there but the person who needs it is not aware of it
- Team SA sees information distributed amongst technological agents as repositories from which human decision makers gather information, not as themselves having SA, because they are not cognizant decision makers
Endsley, 2015

What happens when MWL is too high or too low

High levels of MWL can lead to errors and system failure, whereas low levels can lead to complacency and errors. Tsang & Wilson, 1997

Under what conditions is automation complacency found and what is the implication of these conditions regarding conclusions of automation complacency

High task load and constant automation reliability. Implications:
- Complacency is not a passive state that operators fall into, but rather an active reallocation of attention away from the monitoring task to manual tasks under conditions of high workload
- There is a close relationship between trust, attention, and complacency. Specifically, attention allocation is likely influenced by an initial orientation of trust, which is reinforced when the automation maintains the same constant level of reliability
Parasuraman & Manzey, 2010

What is subjective expected utility theory (SEU) and its basic assumptions about decision makers

Humans are assumed to make decisions by drawing inferences from evidence in accordance with logical principles and to make uncertain judgments in the manner of intuitive scientists employing statistical decision theory. Basic assumptions about decision makers:
1) They have a clear and defined utility function, which allows them to index their preference for each range of future outcomes
2) They possess a clear and exhaustive view of the possible alternative strategies open to them
3) They can create a consistent joint probability distribution of scenarios for the future associated with each strategy
4) They will choose between alternatives and/or possible strategies in order to maximize their subjective expected utility

How are SA errors detected?

The main clue to erroneous SA is when a new piece of information/data does not fit well with expectancies based on the operator's internal model. Endsley, 1995

What does each level of SA encompass

Perception (level 1): involves perceiving the status, attributes, and dynamics of relevant elements in the environment (what is considered relevant differs based on an individual's operational role and the domain)
Comprehension (level 2): involves a synthesis of the elements identified at level 1, including an understanding of the significance of each element in relation to the operator's goal(s); also serves to provide a holistic view of the environment
Projection (level 3): projecting the near-future actions of the elements in the environment (places a very large demand on WM resources if a schema doesn't already exist for the situation)
Endsley, 1995

How can we engineer resilience into organizations?

Preliminary ideas:
- Finding means to invest in safety even under pressure of scarcity and competition, while also dealing with imperfect knowledge and uncertain technology
- Organizational monitoring and learning
- Fixating on higher-order variables
- Adding a new level of intelligence and analysis to the incident reporting and error counting done today
(These are preliminary ideas given in the conclusion of the chapter; Dekker says more about this in the following chapters.)
Dekker, 2005

How does pressure/stress impact WM ability

Pressure/stress results in worry about the situation and its consequences, consuming WM resources needed for optimal performance. The impact is typically greater for high-WM individuals, who typically rely on WM-intensive rule-based processing; they usually must switch their strategy to less WM-intensive associative processing to combat the increased strain on their WM resources from the stressful situation. Beilock & DeCaro, 2007

What is the ROC curve and what do values on it represent

Receiver (or relative) operating characteristic (ROC) curve. Axes are the hit and FA rates. The area under the curve indicates sensitivity (can be calculated as Ad' for a linear function or Az for a curvilinear function). The major diagonal indicates chance performance (area = .5); curves bowing toward the upper left indicate better sensitivity (up to an area of 1). The placement of a point along the curve indicates response bias, with points toward the lower left (low hit and FA rates) indicating a conservative bias. Macmillan & Creelman, 2005; Stanislaw & Todorov, 1999
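A minimal sketch of how d' relates to the ROC and its area under the equal-variance Gaussian model (the specific d' value and variable names are illustrative, not taken from the cited sources):

```python
# Trace an equal-variance Gaussian ROC for a given d' and estimate its area,
# then compare with the closed-form Az = Phi(d'/sqrt(2)). Illustrative sketch.
import numpy as np
from scipy.stats import norm

d_prime = 1.0
criteria = np.linspace(-5, 5, 2001)          # sweep the decision criterion
fas = norm.sf(criteria)                      # P(respond "yes" | noise), noise mean 0
hits = norm.sf(criteria - d_prime)           # P(respond "yes" | signal), signal mean d'

# Trapezoidal area under the ROC, with the FA rate ordered from 0 to 1
x, y = fas[::-1], hits[::-1]
auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))

az = norm.cdf(d_prime / np.sqrt(2))          # closed-form area for this model
print(round(auc, 3), round(az, 3))           # both ~0.76; chance performance would be 0.5
```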

How is human-human trust similar/different to human-automation trust

Similarities:
- Conceptually
- Neurologically
- People's trust in a system may represent their trust in the designer of the system
Differences:
- Depends on different attributes: interpersonal trust depends on ability, integrity, and benevolence; human-automation trust depends on the performance, process, and purpose of an automated system
- Different progression of trust formation: interpersonal trust is based first on predictability, then dependability, then faith; human-automation trust progresses in the reverse order
Hoff & Bashir, 2015

What is the cry wolf effect

general syndrome whereby excessive alarms, many of them seemingly unnecessary (i.e., FA), lead to distrust or disuse of the alarm system, which may further lead to disregard or late response to some true alarms Wickens et al., 2009

How does the articulatory loop functioning explain the acoustic similarity effect and the word-length effect

- The acoustic similarity effect can be explained by confusion among phonologically similar items within the passive store
- The word-length effect can be explained by the fact that memory span is a function of both the durability of item traces within the passive store and the rate at which contents can be refreshed by subvocal rehearsal; short words can be said subvocally more quickly than long words, so more of them can be rehearsed before their traces fade from the passive store
Reason, 1990- Ch. 2

What factors impact trust formation

- Affective processing has the greatest influence
- Analogic thought processes use social norms and the opinions of others to determine trustworthiness
- Analytic processing involves rational evaluation of a trustee's salient characteristics (like KB performance level); typically used only when cognitive resources are available, otherwise, when resources are limited, affective and analogic processes dominate
Lee & See, 2004; Hoff & Bashir, 2015

What is the Norman-Shallice attention to action model

Control systems/structures that allow for the relative autonomy of well-established motor programs:
1) Horizontal threads: each is comprised of a self-sufficient strand of specialized processing structures (schemas)
2) Vertical threads: interact with horizontal threads to provide the means by which attentional or motivational factors can modulate schema activation values; these come into play when novel or critical conditions render currently active schemas insufficient to achieve the current goal
Reason, 1990- Ch. 2

How does the underspecification of cognitive activity result in greater use of similarity and frequency biases

When cognitive operations are underspecified, they manifest in both frequency and similarity biases to a greater degree than they would have if the "small print" of the activity had been more precisely stated by either top-down or bottom-up processes Reason, 1990-Ch. 4

What is error-trapping

When decision automation is very high, keep action automation sufficiently low so that the automated decision choice must be executed by the operator, giving the operator a chance to trap errors before they take effect. But this must be balanced against the associated increase in workload and potential for manual errors. Parasuraman et al., 2000

What is WM capacity and what are some ways it can be assessed?

A STM system that is involved in the control, regulation, and active maintenance of a limited amount of information with immediate relevance to the task at hand. Can be assessed with complex span tasks such as the OSPAN and RSPAN. Beilock & DeCaro, 2007

What is automation abuse and potential countermeasures/considerations

Automation of functions by designers and implementation by managers without due regard for the consequences for human performance. Potential countermeasures/considerations include:
1) Define operator roles based on operator responsibilities and capabilities, rather than as a byproduct of how automation is implemented
2) The decision to apply automation should account for the importance of keeping the operator involved to provide safety benefits such as opportunities to intervene
3) Consider the cost of designer errors
Parasuraman & Riley, 1997

What is transparency

Degree to which the inner workings or logic used by the automated system are known to the human operators, assisting their understanding of the system. Hoff & Bashir, 2015

What are the most popular tasks used to study SDT performance

1) Yes/no task: one or more signals are presented during signal trials and one or more noise stimuli during noise trials, and subjects must indicate whether a signal was present; responses on this task are determined by a decision variable and a criterion
2) Rating task: only one stimulus type is presented during each trial, but a graded response is required rather than a yes/no response
3) Forced-choice task: each trial presents one signal and one or more noise stimuli, and the subject must indicate which stimulus was the signal
Stanislaw & Todorov, 1999

Define trust

1) Persistence of natural and social laws, 2) technically competent role performance, and 3) fiduciary obligations and responsibilities
- 1) Expectation of both natural order (physical and biological dimensions) and moral-social order (humankind will be good and decent)
- 2) Ability of those with whom we interact in relationships to perform their roles safely and effectively
- 3) Our partners in interaction will place other individuals' interests before their own
(Bailey & Scerbo, 2007)
An attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability (Lee & See, 2004)

What are methods for calculating SDT measures

1) Tabular methods (weakest method; use only as a last resort because of poor accuracy due to the need to round values)
2) SDT software
3) General-purpose software
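As a sketch of the general-purpose-software route (standard equal-variance formulas; the function name, example counts, and correction note are illustrative, not taken from the sources):

```python
# Compute d', criterion c, and likelihood-ratio beta from raw trial counts.
# Minimal illustrative sketch using SciPy's inverse-normal (ppf) and density (pdf).
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    h = hits / (hits + misses)            # hit rate
    f = fas / (fas + crs)                 # false-alarm rate
    # In practice, rates of exactly 0 or 1 are adjusted first (e.g., 1/(2N) or
    # log-linear corrections), since z(0) and z(1) are undefined.
    zh, zf = norm.ppf(h), norm.ppf(f)
    d_prime = zh - zf                     # sensitivity
    c = -(zh + zf) / 2                    # criterion location (response bias)
    beta = norm.pdf(zh) / norm.pdf(zf)    # likelihood ratio at the criterion
    return d_prime, c, beta

print(sdt_measures(hits=40, misses=10, fas=15, crs=35))   # ~ (1.37, -0.16, 0.81)
```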

What are the secondary evaluative criteria to consider when thinking about implementing automation

1) Automation reliability
- Benefits of automation on human performance may be eliminated if automation is unreliable and diminishes human trust
2) Costs of decision/action outcomes
- Aspects that have low risk can usually be fully automated
- High-level automation is justified for decision and action automation 1) in highly time-critical situations when there is insufficient time for an operator to respond and take appropriate action and 2) for decisions that have high risk
- If there is a chance a human operator would need to intervene in the event of an automation failure, then only low to moderate levels of automation are recommended
Parasuraman et al., 2000

What is adaptive automation

level (and possibly the type) of automation can be designed to vary depending on situational demands during operational use Parasuraman et al., 2000

What are the ways in which a person chooses a response bias

- Maximize the proportion correct
- Maximize a weighted combination of hits and correct rejections (e.g., an x-ray technician)
- Maximize expected value (reward for hits and CRs, penalty for misses and FAs)
Macmillan & Creelman, 2005
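As a worked note (the standard ideal-observer result from SDT, not a quotation from Macmillan & Creelman): the expected-value-maximizing observer places the criterion where the likelihood ratio equals

```latex
% Optimal likelihood-ratio criterion for maximizing expected value.
% V = value of a correct response, C = cost of an error.
\beta_{\mathrm{opt}} = \frac{P(\text{noise})}{P(\text{signal})}
\cdot \frac{V_{\mathrm{CR}} + C_{\mathrm{FA}}}{V_{\mathrm{H}} + C_{\mathrm{M}}}
```

With equal payoffs this reduces to the base-rate ratio, which corresponds to maximizing proportion correct.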

What is automation disuse, the factors that influence it, and potential countermeasures/considerations

Refers to the neglect or underuse of automation (e.g., ignoring or turning off alarms). Often occurs because of false alarms, which are influenced by the value of the decision criterion and low base rates of hazardous conditions. Countermeasures/considerations include 1) the decision threshold and base rate of hazardous conditions must be considered in terms of preventing hazardous conditions as well as operator trust and potential disuse of the system, and 2) consider the use of likelihood alarms. Parasuraman & Riley, 1997

What is automation misuse, the factors that influence it, and potential countermeasures/considerations

refers to overreliance on automation, which can result in failures of monitoring (i.e., automation complacency) or decision biases (i.e., automation bias) Factors that influence misuse include workload, automation reliability and consistency, and the saliency of automation state indicators Potential countermeasures include 1) Provide salient feedback regarding system states, 2) Reduce workload, 3) Train operators to recognize biases and compensate for them, 4) Require some level of active operator involvement in the process (increases SA), 5) Ensure other operator attentional demands do not encourage them to ignore automated processes Parasuraman & Riley, 1997

What is automation use, the factors that influence it, and potential countermeasures/considerations

refers to the voluntary activation or disengagement of automation by human operators Influenced by attitudes toward automation, reliability of automation, mental workload (including cognitive overhead), trust, self-confidence, and risk Countermeasures include 1) Better operator knowledge about how the system works, 2) Teach operators to make rational automation use decisions, 3) Ensure automation is not difficult or time consuming to turn on or off (i.e., low cognitive overhead) Parasuraman & Riley, 1997

What is an active error and latent error

Active errors: effects are felt almost immediately; tend to arise from front-line operators
Latent errors: effects may lie dormant for a long time and become evident only when they combine with other factors to breach the system's defenses; tend to arise from those removed in time or space from direct control of the interface (managers, maintenance personnel); also considered the greatest threat to the safety of complex systems
Reason, 1990- Ch. 7

Define forcing function

Something that prevents forward progression until the problem has been corrected. Ex. Word not letting you close a document until it is saved or you verify that you do not want to save it. Reason, 1990- Ch. 6

Ernst Mach (1905) quote regarding human error

"Knowledge and error flow from the same mental sources, only success can tell one from the other" Reason, 1990- Ch. 1

What is important for error detection

Immediacy and validity of feedback are critical in the detection of errors. At low levels of cognition (i.e., slips/SB performance), this information is supplied by hard-wired neural mechanisms, but it is not as available at higher levels of cognition (i.e., KB performance). Reason, 1990-Ch. 6

What are the major errors in SA at level 1 SA

- A person may fail to perceive certain information that is important for SA (incomplete SA): 1) lack of detectability or discriminability of an element, 2) physical obstruction preventing perception of an element, 3) failure of the system designer to make information available, 4) failure of information sampling, 5) forgetting information already sampled, or 6) individual differences in the ability to allocate attention between sources
- A person may believe the value of something is different from what it actually is (inaccurate SA): 1) misperception of a signal, 2) erroneous expectations
Endsley, 1995

What is SA?

- a cognitive product comprising the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future Considered a level of awareness that operators have of the situation that they are engaging in; how operators develop and maintain a sufficient understanding of 'what is going on' Endsley, 1995

What questions should a designer ask to evaluate the potential mismatch between the behavior of a machine and the corresponding information presented to the user via the interface and training material

1) Does the display enable users to discern between machine configurations that are part of the user's task?
2) Comparing the user's mental model with the machine's behavior, does the user's model make it possible to predict the next configuration of the machine?
3) Does the display provide the user with all the information necessary to reliably predict when a transition will take place?
Degani et al., 1999

What are the 3 classifications of modes

1) Interface modes: specifies the behavior of the interface
2) Functional modes: specifies the distinct behavior of the various functions of a machine
3) Supervisory modes: specifies the level of user and machine involvement in supervising the process; ranges from manual to semiautomatic to fully automatic
Degani et al., 1999

What are the interface design guidelines for optimizing SA

1) Provide information that is processed and integrated in terms of level 2 and 3 SA because of limited attention and WM (e.g., show time and distance of fuel remaining rather than requiring the operator to calculate this from other related variables)
2) Organize and present information so that it is compatible with operators' goals (collocate information and ensure its format directly answers the major decisions associated with a goal)
3) Make critical cues that elicit mental models and schemata salient
4) Avoid salient cues for noncritical events to avoid bottom-up directed attention toward these noncritical events
5) Provide global SA at all times while also providing detailed information related to immediate goals
6) Filter or reduce extraneous information to prevent overload and facilitate SA
7) Provide system-generated support for projecting future events and states of the system
8) Support parallel processing of information (related to PCP)
Endsley, 1995

Problems/flaws in SEU theory

1) Subjective utility varies from one type of decision to the next
2) Human decision makers rarely consider all possible alternatives and their probabilities, and instead tend to contemplate only a few alternatives
3) Further, reflection on past decisions is often clouded by hindsight bias

What are aspects of automatic control systems that are problematic to users and can lead to confusion and errors

1) The interface does not provide adequate information to the operator concerning the machine configuration 2) Interactions among many components of a complex machine may be confusing and intricate 3) Operator has an insufficient mental model of the machine's behavior Degani et al., 1999

What is goal-directed task analysis (GDTA) and what are its uses in the context of SA

1) Used to determine the meaningful elements to be perceived, the comprehensions that are needed, the projections to be made in a given domain 2) Can be used to determine overlapping SA requirements that need to be shared among two or more team members via a comparison of their GDTA's that lay out their individual SA requirements Endsley, 2015

What elements can predict a user's expectations regarding a system's mode?

1) User tasks: mode error will not occur if discerning between two or more machine configurations is not part of the user's task
2) User knowledge: users need a sufficient mental model of machine behaviors to track and anticipate machine configurations
3) User ability to sense inputs that prompt mode transitions: the user must be able to sense events that trigger mode transitions via information presented on the interface; if information is not adequately displayed, mode confusion and mode errors can occur
Degani et al., 1999

What factors influence automation complacency?

1. Automation reliability
- Complacency is more likely under constant reliability
- It has been estimated that if automation reliability falls below 70% (+/- 14%), there are neither costs nor benefits associated with the automation, primarily under high workload
- More likely under high reliability (Bailey & Scerbo, 2007)
2. First-failure effect: the complacency effect is most pronounced for the first automation failure; it dissipates after that failure but can return as the operator slowly regains trust in the automation
3. Expertise
- Complacency effect found for both experts and novices
4. Training
- Complacency effect is not reduced via extended training on the task
- However, the complacency effect may be reduced if variable training is used (80% to task A and 20% to task B), due to better multitasking ability
5. Individual differences
- Considerable differences between individuals, possibly due to "complacency potential"
6. Trust (Bailey & Scerbo, 2007)
7. Workload/task complexity (Bailey & Scerbo, 2007)
Parasuraman & Manzey, 2010

Types of rationality and how they relate to human error

1. Bounded rationality (Simon, 1975): the capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world, or even for a reasonable approximation of such objective rationality (i.e., only a keyhole view of the problem); leads to satisficing
2. Imperfect rationality: overreliance on simplifying heuristics operating within the schematic knowledge base; produces consistent errors in reasoning, such as poor handling of negative statements or a tendency to verify generalizations rather than falsify them
3. Reluctant rationality: reluctance to engage in the laborious yet computationally powerful processes involved in analytic reasoning in favor of excessive reliance on familiar cues and well-tried problem solutions; restricts the possible solutions used
4. Irrationality: e.g., groupthink resulting in irrational planning processes
Reason, 1990

What are the human performance consequences that must be considered when thinking about implementing automation and what are the general findings related to each aspect of human performance and automation

1. Mental workload: typically decreases MWL, but may increase it when the automation is difficult to initiate and engage
2. Situation awareness
- Tends to decrease situation awareness when decision-making functions are automated, because humans tend to be less aware of changes in the environment or system state when those changes are under the control of another agent
- Automation can also inhibit development of a good situation model of the information sources in the environment when the automation is consistently and repeatedly selecting and executing decisions, because the operator is not actively engaged in evaluating the information sources
3. Complacency: operator overreliance on the automation that results in the operator failing to monitor the automation and its information sources and then failing to detect the occasional times when the automation fails
4. Skill degradation: forgetting and skill decay occur with disuse
Parasuraman et al., 2000

What are the major elements of an error theory and what can error theories tell us

1. Nature of the task and environment
2. Mechanisms governing performance
3. Nature of the individual
An error theory enables us to forecast the CONDITIONS under which an error will occur and the particular FORM the error will take. Reason, 1990- Ch. 1

What is an intentional action without prior intention?

1. Spontaneous action: like hitting someone on the spur of the moment
2. Subsidiary action: like the small steps required to drive to work (open the car door, sit down, buckle up); only the overarching task has prior intention, while these subsidiary actions do not
Reason, 1990- Ch. 1

What are the major fallacies regarding the 1995 SA model as well as the correct interpretation of the model corresponding to each fallacy

1. The three levels of SA are linear
a. The three levels of SA represent ascending levels of SA, not linear stages
b. Relatedly, there are links between the components across the levels of SA (i.e., the information obtained at level 1 SA is integrated to form a whole picture at level 2 SA)
c. Finally, the levels of SA are closely related and integrated, not discrete from one another
2. The model is a data-driven information-processing model
a. The model is not just data-driven; it clearly discusses the dynamic interplay between bottom-up and top-down processing in the formation of SA
3. There is a dichotomy between the product of SA versus the process of achieving SA
a. The process of SA and the product of SA are closely intertwined, not a strict dichotomy
b. The product of SA often influences the process of acquiring SA
4. The model of SA is not cyclical or dynamic
a. The model addresses the dynamic nature of real-world situations and thus the constant need to update SA, treating the time and temporal aspects of the situation as important to SA, particularly levels 2 (comprehension) and 3 (projection)
5. The SA model fails to take into account meaning
a. Even the most basic level 1 data are meaningful in a particular domain and operational role
b. Further, levels 2 and 3 are higher-order assessments that give "meaning" to an individual
c. Finally, the elements considered meaningful for a particular domain and operational role help to dictate research, valid metrics of SA, and approaches for improving systems and training programs
6. SA is all contained in WM
a. WM is a bottleneck on SA only for novices and novel situations; otherwise, LTM and schemata can overcome the limits of WM
b. There is an integrated relationship between WM and LTM
c. Little to no relationship between WM and SA for experts
i. Except for level 3 SA, when predictions cannot be automatically accessed from schemas and experts must rely on WM to maintain present conditions, future conditions, the rules used to generate the latter from the former, and the actions that are appropriate to the future conditions
7. The SA model only represents an "in the head" view of the world and does not encompass the wider socio-technical environment
a. The model clearly takes into account various environmental and system factors that affect SA, including the capabilities of the system for gathering needed information, the system interface for presenting that information effectively, the effects of stress and workload, the complexity of the system, and the characteristics of automation
Endsley, 2015

Define automation

A device or system that accomplishes (partially or fully) a function that previously was, or conceivably could be, carried out (partially or fully) by a human operator (Parasuraman et al., 2000; Parasuraman & Riley, 1997)
Technology that actively selects data, transforms information, makes decisions, or controls processes (Hoff & Bashir, 2015)

What is drift into failure and what fuels this drift

A slow, incremental movement of a system's operations toward the edge of its safety envelope Fueled by scarcity and competition Dekker, 2005

What strategies do dual-process theories propose are used to facilitate performance on reasoning and DM tasks

ASSOCIATIVE PROCESSES: similarity-based associations that have been built up over repeated exposure to concurrent events; occur spontaneously and place low demands on WM
RULE-BASED PROCESSES: rely on symbolically represented explicit knowledge as conventions to guide processing; place heavy demands on WM
Beilock & DeCaro, 2007

What is the distinction between error forms by dimensions

(Additional information not in the table)
Type of activity: SB errors precede detection of a problem; RB & KB errors follow problem detection
Control mode: SB & RB errors are characterized by feedforward control (emanating from stored knowledge structures); KB errors are characterized by feedback control
Reason, 1990- Ch. 3

How do MWL and SA relate? Are they the same?

Attaining a high level of SA can be mentally demanding and compete with actual task demands for the same limited resources, adding to workload, which could in turn impact SA. The concepts do not replace or encompass one another; however, they are affected by many of the same variables. Tsang & Wilson, 1997

Why is associative processing relatively immune to pressure-inducing situations?

Because it circumvents the need to maintain and manipulate the immediate problem steps (i.e., low WM demands) Beilock & DeCaro, 2007

Why does pressure/stress not impact the performance of individuals with lower WM capacity but does impact individuals with higher WM capacity

Because pressure/stress induces worry about the situation and its consequences and this worry consumes WM resources - Thus it does not impact low WM individuals as much because they rely more heavily on associative processes that have a low tax on WM capacity so pressure-induced consumption of WM does not disrupt performance - And it impacts high WM capacity individuals because they are more likely to rely on rule-based processing which has a high WM demand Beilock & DeCaro, 2007

Why do high WM individuals seem to be less likely to recognize and use simpler associative processing under low performance pressure compared to low WM individuals?

Because they are especially good at focusing attention on select task properties and ignoring others, however, this comes at the expense of making these individuals less likely to detect alternative problem solutions Beilock & DeCaro, 2007

What are the three classification levels of error?

Behavioral: observable characteristics
Contextual: contextual triggers
Conceptual: causal mechanisms
Reason, 1990- Ch. 1

What determines how complex and error prone a system is

Complexity of interactions (high interactivity increases complexity; linear interactions decrease it)
Tightness of coupling (tight coupling increases complexity; loose coupling decreases it)
Reason, 1990- Ch. 7

What is situated SA and what are its limitations according to Endsley

Developed by Chiappe et al. (2011); says that operators maintain their understanding of dynamic situations by relying on minimal internal representations and engaging in frequent interactions with a structured environment. Operators sample limited amounts of information from the environment in cycles and extract its relevance by combining it with an easily accessible context, as per relevance theory (RT).
Limitations according to Endsley (2015):
1. It says that only limited information constituting SA is held in WM and that this is separate from LTM, which cannot hold SA; this is not in line with the majority of research on expert SA
2. The model dismisses the role of LTM, mental models, and schemas; however, the volume of information to be acquired cannot be explained by the limited number of offloading techniques available (heuristics, environmental cues)
3. Relevance theory is not capable of clarifying which cognitive mechanisms are used to determine relevance or of determining a priori which information will be relevant to a decision maker
4. Relevance theory also states that people will use minimal effort to acquire and process information; however, as seen with the relationship between confidence and decisions, extra effort is often used to raise confidence in some piece of information
Endsley, 2015

What are the main components of the team SA model

Developed by Endsley & Jones (2001)
1) Team SA requirements
- Related to sharing the level 1-3 SA elements that are in common across team members
- Data (system, environment, other team members), comprehension (status relevant to one's own goals/requirements, status relevant to team members' goals/requirements, impact of one's own actions on others, self, and mission), and projection (actions of other team members)
2) Team SA devices
- Related to communication, shared displays, and a shared environment
3) Team SA mechanisms
- Related to mechanisms that improve shared SA, like shared mental models
4) Team SA processes
- How team SA is developed: self-checking, coordinating, prioritizing, questioning, etc.
Endsley, 2015

What is the data-frame model of sensemaking and how does it compare/differ to/from the 1995 SA model

Developed by Klein (2006); considers more cognitive aspects of SA than the original sensemaking models.
How Klein claims they differ (but they actually don't):
1) Looks at SA as both a process and a product; however, this fallacy has been discussed, as the 1995 model also does both
2) Considers the frames people use to determine what information is important in the first place; however, this is also a fallacy discussed earlier, as the 1995 model discusses how mental models and schemas serve the same function
3) Considers functions such as problem detection, problem identification, anticipatory thinking, forming explanations, seeing relationships, and projecting the future; however, these are all functions that are discussed in the 1995 model as well (typically corresponding to levels 2 and 3)
How they actually differ:
1) Sensemaking is focused only on deliberate efforts to understand events, not the more dynamic, rapid, and automatic situation assessment
2) Sensemaking fails to explain many of the aspects of cognition explained by the 1995 model, including how mental models/frames are developed, modified, linked to goals and plans, and supported during active replanning; how expectancies and preconceptions are developed and affect this process; and the many task, system, and environmental factors that can affect this process
Endsley, 2015

What is sensemaking and how does it compare to the 1995 SA model

Developed by Weick (1995); focused on how people make sense of the information and situations in which they find themselves, largely at the organizational level with respect to explaining organizational accidents or unusual events.
Comparison to the 1995 SA model:
1) The 1995 model encapsulates that SA can be achieved via a conscious, deliberate process as well as a highly automatic process of situation recognition via schemas; sensemaking considers only the former (the deliberate process)
2) Sensemaking is generally backward-looking, whereas the 1995 model is generally forward-looking
3) Sensemaking ends with whatever explanation it derives, whereas the 1995 model also includes how people use those explanations to inform their fuller understanding of the situation
Endsley, 2015

What is the three-layered framework for conceptualizing trust variability

Dispositional trust: a person's enduring tendency to trust automation
- Age, gender, culture, personality
Situational trust: depends on the context of an interaction
- Internal variability: self-confidence, expertise, mood, attentional capacity
- External variability: type of system, system complexity, task difficulty, workload, perceived risks and benefits of using automation, organizational setting, and framing of a task
Learned trust: based on past experience with a specific automated system
- Initial learned trust: preexisting knowledge, including attitudes/expectancies, reputation of the system/brand, experience with the system or similar technology, and understanding of the system
- Dynamic learned trust: system performance (reliability, validity, dependability, predictability, timing of errors, difficulty of errors, type of errors, usefulness) and design features (ease of use, transparency/feedback, communication style, level of control)
Hoff & Bashir, 2015

What is the difference between error types and error forms

Error type: presumed origin of errors, corresponding to the cognitive stages involved in conceiving and carrying out an action sequence (planning: mistake; storage: lapse; execution: slip)
Error form: recurrent varieties of fallibility that appear in all kinds of cognitive activity, irrespective of error type
Reason, 1990- Ch. 1

What are some limitations of post-hoc accident investigations that makes them unlikely to prevent future accidents

- The events that occurred are typically a unique combination of factors, each necessary but singly insufficient, that is unlikely to occur again
- Hindsight bias
- Fundamental attribution error
Reason, 1990-Ch. 7

What is the cognitive balance sheet of errors and an example

Every psychological advantage has a corresponding debit (correct performance and systematic errors are two sides of the same coin). Stems from the Norman-Shallice attention-to-action model (Ch. 2). Ex. automaticity makes slips inevitable. Reason, 1990- Ch. 1 & 2

Aside from WM capacity, what is another way that pressure can result in choking

Explained by explicit monitoring theories, for proceduralized/automatized tasks, pressure can increase self-consciousness and in turn increase inward focus of attention to the step-by-step process of completing an automatic task, impeding performance Beilock & DeCaro, 2007

What are some early influences on human error

The Freudian slip; Meringer and speech errors; Bartlett, schemata, and their related systematic errors. Reason, 1990- Ch. 2

What is a control model and how does it relate back to drift into failure

A functional account of living processes that coevolve with respect to a set of environmental conditions and that maintain a dynamic and reciprocal relationship with those conditions. Specifically, it uses the idea of hierarchies and constraints to represent the emergent interactions of a complex system.
- The sociotechnical system consists of different levels, where each superordinate level imposes constraints on the subordinate levels
- But even this conceptualization is not totally accurate, as it oversimplifies the system by focusing only on one-dimensional hierarchical representations
Regarding drift into failure: drift is a gradual erosion of the quality or the enforcement of safety constraints on the behavior of subordinate levels (e.g., the number of flights allowed before lubrication was needed kept being extended, i.e., the constraint eroded over time)
Dekker, 2005

What is complacency potential, what factors influence it, and how is it measured

Has been defined as a behavioral tendency to react in a less attentive manner when interacting with a specific automated system; however, it has also been more broadly defined as general attitudes of reliance and trust toward automation. Factors that influence it include the reliability and consistency of the system, LOA, the operator's previous experience with the system, and individual characteristics of the operator. Measured via the complacency potential rating scale (CPRS). Parasuraman & Manzey, 2010

What is it considered when there is no prior intention or intention in action and how does it relate to errors?

Involuntary or Nonintentional actions- like hitting someone during an epileptic seizure These are not classified as errors because errors are tied to prior intentions Reason, 1990, Ch. 1

What is 6 sigma and how does it relate to the chance of an accident

It represents 6 standard deviations from the mean (i.e., 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects). It means that most systems that achieve this are relatively safe and have a very low likelihood of failure. Dekker, 2005
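A quick arithmetic check (assuming the conventional industry definition of Six Sigma, which allows a 1.5-sigma shift of the process mean, so the defect-free rate is the normal probability below 6 - 1.5 = 4.5 SD):

```python
# Where the 99.99966% figure comes from under the conventional Six Sigma
# definition (1.5-sigma mean shift), versus a literal one-sided 6-sigma tail.
from scipy.stats import norm

print(norm.cdf(4.5))   # ~0.9999966   -> 99.99966% defect-free (about 3.4 defects per million)
print(norm.cdf(6.0))   # ~0.999999999 -> literal 6-sigma tail (roughly 1 defect per billion)
```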

How should a designer determine the type and level of automation for a particular system

An iterative process:
- First, decide which stage within the framework (corresponding to the simplified model of human information processing) should be automated
- Next, decide what level of automation should be applied within each functional domain
- Consider human performance consequences (MWL, SA, complacency, skill degradation)
- Decide on a level of automation based on the primary criteria and then evaluate it against the secondary criteria, iterating if necessary
- Consider secondary evaluative criteria (reliability of automation as well as the costs of decision/action consequences)
- Decide on the final type and level of automation that meets both primary and secondary evaluative criteria
Parasuraman et al., 2000

Define MWL

Many definitions exist - portion of an operator's limited capacity actually required to perform a particular task (underlying assumption is that an operator has a limited processing capacity or limited processing resources) Tsang & Wilson, 1997

What was the methodology and major findings of Wickens et al., 2009 study regarding the cry wolf effect

Method: Aircraft track and alert-system behavior data surrounding 495 conflict alerts were analyzed to identify true and false alerts, trajectory type, and controller behavior. Forty-five percent of the alerts were false (ranging from .28 to .58 across centers). Results: Although centers with more false alerts contributed more nonresponses, there was no evidence that these were nonresponses to true alerts or that response times were delayed in those centers. Instead, controllers showed desirable anticipatory behavior (i.e., less reliance) by issuing trajectory changes prior to the alert. Trajectory pairs whose conflicts were more difficult to visualize induced more reliance on, and less compliance with, the alerting system. Conclusion: The high false-alarm rate does not appear to induce cry-wolf behavior in the context of en route ATC conflict alerts. Wickens et al., 2009

What is prior intention and intended action but failure to achieve the desired outcome

Mistake This is a PLANNING failure because there was a mismatch between prior intention and the intended consequence Reason, 1990- Ch. 1

What is a mode and how are they triggered to change

Mode: A machine configuration that corresponds to a unique behavior Modes are triggered to change via input, which can be done manually by an operator or automatically by the machine Degani et al., 1999

Define automation complacency and common features across automation complacency definitions

Poorer detection of system malfunctions under automation control compared with manual control (Parasuraman & Manzey, 2010); a psychological state characterized by a low index of suspicion (Bailey & Scerbo, 2007); operator overreliance on automation that results in the operator failing to monitor the automation and its information sources and failing to detect the occasional times when the automation fails (Parasuraman et al., 2000)
Common features across definitions:
- Involves a human operator monitoring automation
- The frequency of monitoring is lower than some standard or optimal value
- As a result of substandard monitoring, there is an observable performance decrement, typically a missed or delayed response to a signal
Parasuraman & Manzey, 2010; Bailey & Scerbo, 2007; Parasuraman et al., 2000

What are the cognitive processes that impede error detection

Relevance bias
- Especially due to bounded rationality, only some cues can be examined; those that are examined tend to be relevant to the current hypothesis, and those that appear disconfirming tend to be ignored
Partial explanations
- Errors are not detected because people are willing to accept only a rough agreement between the actual state of the world and their theory about it
Overlap of world and mental model
- A person's mental model of a problem space is likely to correspond in large part to reality, even though it may be wrong in some respects
- Thus most of the things one does will produce results predicted by the mental model, and having expectations frequently confirmed reduces the sensitivity of the error detection mechanisms
Reason, 1990-Ch. 6

What are aspects of automation dependence, what is anticipatory behavior, and how do these concepts relate?

Reliance: circumstances in which an operator does not respond when the alerting system is silent
Compliance: circumstances in which an operator responds when an alert occurs
Anticipatory behavior: the human acts in response to a dangerous event prior to an alert
These concepts relate because anticipatory behavior is considered a lack of reliance
Wickens et al., 2009

Support for setting a high versus low threshold for signals

Research that supports a high threshold (few FAs, possible misses or delays) as advantageous:
- Many FAs can disrupt an operator's concurrent tasks because the operator must either carry out the alert-triggered action or cross-check raw data to confirm the alert was false, so reducing FAs can be useful
Research that supports a low threshold (many FAs, few misses):
- Operators can still confirm the system is working well if FAs are deemed "acceptable" by an operator when the alert is premature but accurate; however, "bad" FAs that are seemingly unrelated to the state of the raw data can reduce operator trust in the system
Wickens et al., 2009

What variables are used to measure response bias and sensitivity, what are the ranges of values

Response bias
β: assumes responses are based on a likelihood ratio
- The numerator is the likelihood of obtaining x on a signal trial (i.e., the height of the signal distribution at x); the denominator is the likelihood of obtaining x on a noise trial (i.e., the height of the noise distribution at x)
- Subjects respond "yes" when the likelihood ratio exceeds β and "no" otherwise
- 1 indicates no response bias; < 1 indicates a liberal response bias; > 1 indicates a conservative response bias
- Affected by changes in sensitivity
c: assumes responses are based on the decision variable (not a likelihood ratio)
- Defined as the distance between the criterion and the neutral point (no bias); if the criterion is located at the neutral point, there is no bias; negative values indicate a liberal bias; positive values indicate a conservative bias
- Assumes that subjects respond "yes" when the decision variable exceeds the criterion and "no" otherwise
- Unaffected by changes in sensitivity
Sensitivity
d'
- 0 indicates an inability to distinguish signal from noise; higher values indicate better ability; negative values indicate sampling error or response confusion
- Does not vary with response bias, as long as the assumptions are met (signal and noise distributions are normal and have the same SD)
A': used if the assumptions are not met (nonparametric)
Stanislaw & Todorov, 1999
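In formula form (the standard equal-variance Gaussian expressions; z denotes the inverse of the standard normal CDF and φ its density):

```latex
d' = z(H) - z(F), \qquad
c  = -\tfrac{1}{2}\left[ z(H) + z(F) \right], \qquad
\beta = \frac{\varphi(z(H))}{\varphi(z(F))} = e^{\,c\,d'}
```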

What is automation bias, what are the corresponding errors it can produce, and what factors contribute to automation bias

Results when an operator's use of a decision aid is a heuristic replacement for vigilant information seeking and processing
Errors:
- Errors of omission, where the operator does not notice or respond to a critical situation because the aid did not alert them to it
- Errors of commission, where the operator follows an incorrect recommendation from the aid
Contributing factors:
- Cognitive miser hypothesis: humans seek to minimize cognitive effort in DM by using heuristics and decision rules
- Perceived trust humans have in decision aids as powerful agents with superior analysis capability
- Diffusion of responsibility akin to social loafing
Parasuraman & Manzey, 2010

What is the difference between SA and situation assessment

SA: the PERCEPTION (Level 1 SA) of the elements in the environment within a volume of space and time (dynamic), COMPREHENSION (Level 2 SA) of their meaning, and the PROJECTION (Level 3 SA) of their status in the near future (i.e., the product) Situation assessment: the process of achieving, acquiring, or maintaining SA Endsley, 1995

What is a mental model, situational model, schema, and script

Schema: coherent frameworks for understanding information, encompassing highly complex system components, states, and functioning
Script: a special type of schema that provides sequences of appropriate actions for different types of task performance
Mental model: a mechanism whereby humans are able to generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future states (can be thought of as a schema for a certain system)
Situational model: a schema depicting the current state of the system model
Endsley, 1995

What are the three modes of error detection and modes of error detection within each of the overarching modes

Self-monitoring:
1) Standard check (SC): a centrally evoked mechanism that emerges as part of the problem-solving technique; most KB mistakes are detected this way
2) Direct error hypothesis (DEH): a spontaneous, data-driven process involving a match between a stored representation of a past error and the currently observed one; most slips are detected this way
3) Error suspicion (ES): a spontaneous, data-driven process involving general expectations and problem-solving efforts triggering detection of the problem; most RB mistakes are detected via a mix of this and the DEH mode
Environmental cueing:
- Forcing functions
- Cued discovery
Other people
Reason, 1990-6

What is sensitivity and response bias in terms of SDT and how are they measured/calculated

Sensitivity is a measure of accuracy in discrimination, or the degree of overlap between the signal and noise distributions; measured by the standardized difference between the H and FA rates. Response bias is a person's tilt toward one response or another, determined by the location of the criterion; measured from the (negative half of the) sum of the standardized H and FA rates. Macmillan & Creelman, 2005; Stanislaw & Todorov, 1999

What are the stages of human information processing and their associated types of automation

Sensory processing: acquisition
Perception and working memory: analysis
Decision making: decision
Response selection: action
Parasuraman et al., 2000

What is prior intention but an unintended action (i.e., the action did not go as planned)

Slip or lapse This is an EXECUTION failure because there was a mismatch between the intended action and what was executed Reason, 1990- Ch. 1

What are the three types of errors and details regarding each

Slip: execution failure
Lapse: storage failure
Mistake: planning failure
- Failure of expertise: rule-based error
- Lack of expertise: knowledge-based error
Reason, 1990- Ch. 1

What are specific and general activators that activate schemata

Specific activators: bring a given schema into play at a particular time; these include intention, context (physical and semantic), and related schemas
General activators: provide background activation to schemata irrespective of current intentional state or context; these include recency, frequency, shared elements, and affective change
Reason, 1990- Ch.4

What is supervisory control and what are the levels of supervisory control

Supervisory control: initiating, monitoring, and adjusting processes in systems that are otherwise automatically controlled
Levels of supervisory control include:
- Task-interactive system (TIS): closed-loop control over hardware components of the task (propeller, pump, switch, valve)
- Human-interactive system (HIS): controls the TIS and communicates the state of the system to the operator (via controls and displays)
Reason, 1990-7

What factors impact error detection rates

Task complexity: detection rates tend to decrease as complexity increases
Error types:
- SB errors occur the most often, followed by RB, then KB
- Error types are detected at similar frequencies (when considering relative detection rates)
- Error correction seems to be highest for SB, then RB, then KB errors
- However, omitting a step is the most commonly undetected and uncorrected SB error
Reason, 1990- Ch. 6

What is team SA, shared SA, and distributed SA

Team SA: the degree to which every team member possesses the SA required for their responsibilities
Shared SA: the degree to which team members have the same SA on shared SA requirements
Distributed SA: SA in teams in which members are separated by distance, time, and/or obstacles (Stanton et al., 2013 define this as distributed cognition across the overall system, which includes both human and technological agents; they also provide a DSA model)
Endsley (1995; 2015)

What is mode awareness and what happens when mode awareness is lacking

The ability of an operator to track and anticipate the behavior of the automated system Insufficient mode awareness results in degraded ability to allocate attention effectively and to detect errors, failures, or miscommunications Sarter & Woods, 1995

How can you increase the hit rate without also increasing the false alarm rate

The only way to increase the hit rate without also increasing the FA rate is to reduce the overlap between the signal and noise distributions (i.e., increase sensitivity). Stanislaw & Todorov, 1999

What does it mean if two machines are coupled

The output of one is the input to the other that triggers an automatic mode change Degani et al., 1999

What is the relationship between coping and self-regulation and what is self-regulation

They are closely related.
Self-regulation: the proximate motivational process by which a person influences the direction, amount, and form of committed effort during task engagement
Organized at three levels:
- Lowest level is automatic and generates intrusive thoughts
- Executive level that regulates coping
- Schema-like self-knowledge in LTM
Matthews, 2001

What have root cause analyses told us about the nature/cause of most errors

They are due to maintenance-related omissions and very few are initiated by front-line personnel Reason, 1990- Ch. 7

What is a structuralist model, discuss its use in relation to modeling sociotechnical systems

Structuralist models assume systems are controlled by rigid mechanical structures (static) and that examining the independent components of a system does not distort the analysis by taking the whole apart. They are limited in modeling sociotechnical systems because these systems are internally plastic, flexible, and organic, with functioning controlled by dynamic relations and ecological adaptation. Dekker, 2005- Ch 1 & 2

What are some notable cases of human error throughout history

Three Mile Island, Bhopal, the Challenger explosion, the Chernobyl disaster, the King's Cross tube station fire
Reason, 1990- Ch. 1

Define violation and describe the types of violations

Violation: a deliberate but not necessarily reprehensible deviation from those practices deemed necessary to maintain the safe operation of a potentially hazardous system
Types:
Routine: largely habitual, forming an established part of an individual's behavioral repertoire
Exceptional: singular violations occurring in a particular set of circumstances
Reason, 1990- Ch. 7

How do errors differ from violations

Violations consider the social context surrounding the behavior (i.e., operating procedures, rules, etc.), whereas errors are concerned with the cognitive processes of the individual; thus they are mediated by different cognitive mechanisms
§ Violations involve the willful disregard of rules (often intrinsically driven), whereas decision errors result from a lack of knowledge (Shappell et al., 2007)
§ Additionally, potential mitigation strategies differ (Shappell et al., 2007):
· Decision errors can be improved with scenario-based training, planning aids, and education
· Violations can potentially be mitigated by enforcing standards and increasing accountability
Reason, 1990- Ch. 7

When is SDT used

Whenever two possible stimulus types must be discriminated Stanislaw & Todorov, 1999

What are the individual factors that underlie and affect SA

Abilities (preattentive processing, attention, WM), experience & training (LTM, mental models, schema/script), preconceptions/expectations (perception), and goals and objectives
1. Preattentive processing- determines which portions of the environment are attended to based on cue salience
2. Attention- a major limit on SA because it constrains a person's ability to perceive multiple items in parallel. However, there are many ways these attentional constraints on SA can be overcome, including: a) sequential information sampling, b) slightly increasing physiological arousal, c) presenting information in different formats in accordance with MRT, d) attention sharing as an individual skill, e) automaticity
3. Perception- expectancies and preconceived notions impact the perception of the elements in an environment. Further, because perceived information is classified by LTM stores, experts can extract more information from an environment compared to novices
4. WM- when perceived information is integrated and compared with existing information, this forms the basis of level 2 SA. Further, when projecting future states, information regarding current and future states, the rules used to compare them, and subsequent actions must be held in WM (unless a schema already exists for the situation, in which case WM would not be needed)
5. LTM- structures in LTM (mental models, schemas, scripts) help to overcome the limits of WM. These structures are developed via training and experience
6. Automaticity- enables the use of mental models and schemata via the automatic matching of critical cues in the environment to internal models, helping to overcome attention and WM limitations. However, this process also results in lower SA, especially for novel situations (like intending to stop at the gas station on your way home from work but forgetting because it is not part of your drive-home-from-work schema)
7. Goals- help people actively seek environmental information in light of their goals (i.e., top-down processing), and similarly, help them recognize patterns in the environment that indicate whether new/updated plans are needed to achieve goals (a dynamic process between goals, mental models, schemas/scripts, and environmental information)
Endsley, 1995

What is the banality-of-accidents thesis and what does it tell us about the relationship between drifting into failure and incident reporting

accidents, and the drift that precedes them, are associated with normal people doing normal work in normal organizations Because accidents are the product of normal people doing normal work, incident reporting will almost never capture drift into failure because the things that precede this drift are all considered normal and not noteworthy Dekker, 2005

What is systems thinking and how does it relate to accidents/failures

According to systems thinking, failures are seen as an emergent feature of organic, ecological, transactive processes rather than the end point of a trajectory through holes in layers of defense
Systems thinking says that if we want to understand failures past 6-sigma, we need to stop looking for failures that precede them, because it is normal work (banality-of-accidents thesis) and not failure that leads to failure
We need to focus on emergent properties, which can be seen only at a higher level, not by dissecting individual components and looking at them in isolation
Dekker, 2005

What is clumsy automation

clumsy use of technology, such as proliferation of modes, creates the potential for new forms of human-machine failure Sarter & Woods, 1995

What is an automated decision aid, what types of cognitive support do they provide, and how are they misused so that they may result in automation bias

A device that supports human DM in complex environments.
Provides support in information analysis (via alerts regarding situational changes that may warrant action) or in decision making (via recommendations and advice on choice and action).
Misused because the aid generates cues that are salient and draw user attention, OR because users ascribe greater power and authority to the aid compared to other sources of advice.
Parasuraman & Manzey, 2010

What is organizational resilience

not a property, it is a capability; a capability to recognize the boundaries of safe operations, a capability to steer back from them in a controlled manner, a capability to recover from loss of control if it does occur Dekker, 2005

What are the basic and human elements in a general view of accident causation

o Basic elements of production
§ The decision makers: Managers who decide on how goals should be met, including allocation of resources
§ Line management: Department specialists who implement the strategies of the decision makers
§ Preconditions: Reliable equipment, skilled and motivated workforces, etc.
§ Productive activities: Integration of human and mechanical elements
§ Defenses: Safeguards against foreseeable hazards
o The human elements
§ Fallible decisions: inevitable part of the design and management process, often fueled by the conflicting goals of production and safety
§ Line management deficiencies: May help to mitigate or exacerbate poor decisions made by upper-level management
§ Psychological precursors of unsafe acts: Latent states
§ Unsafe acts: errors or violations committed in the presence of a potential hazard; an act is not considered unsafe without the potential hazard
§ Inadequate defenses: the limited window of accident opportunity
Reason, 1990- Ch. 7

What elements influence the appropriateness of trust

o Calibration: correspondence between a person's trust in the automation and the automation's capabilities
§ Overtrust: poor calibration in which trust exceeds the system's capabilities
§ Distrust: poor calibration in which trust falls short of the automation's capabilities
o Resolution: how precisely a judgement of trust differentiates levels of automation capability
§ Low resolution indicates that large changes in automation capability map onto a small range of trust
o Specificity: degree to which trust is associated with a particular component or aspect of the trustee
§ Functional specificity: differentiation of functions, subfunctions, and modes of automation
§ Temporal specificity: changes of trust as a function of the situation or over time
Lee & See, 2004

What are the major steps in selecting an appropriate MWL assessment strategy

o Delineate the objective(s) of the MWL assessment
§ Major objectives include prediction, evaluation, diagnosis
o Perform a task/mission/system analysis
o Assess the constraints and resource availability
§ Time, cost, equipment, and expertise
o Select the types of workload measures to be used
o Select the specific workload assessment technique(s)
o Familiarize oneself with the chosen technique
o Formulate the design for evaluation
o Reexamine the appropriateness of the selected workload measures
Tsang & Wilson, 1997

Countermeasures for complacency

o Design automated systems to clearly identify the purpose of the automation, specify the processes and algorithms used by showing intermediate and final results to the user, and tie this information to the operational context and the operator's goals
o Train operators regarding the expected reliability and the factors that govern the automated system's behavior, and allow operators to experience how the performance of the automated system depends on contextual/situational factors
o Adaptive automation: systems where the level, functionality, and/or number of automated systems can be modified in real time, allowing for a restructuring of the task environment based on evolving situational demands
§ Can provide the best match between task demands and the cognitive resources available to an operator
§ Thus can improve monitoring by improving attentional resources and enhancing the quality of information processing supporting task performance
Bailey & Scerbo, 2007

Broadly, what are some strategies for reducing error

o Eliminate affordances for error o Intelligent decision support systems (these support the plan formulation stage) o Memory aids for maintenance personnel (these support plan storage and execution stages) o Training o Ecological interface design o Self-knowledge about error types and mechanisms Reason, 1990-8

At the KB level, what are the primary failure modes and their corresponding consequences

o Failures arise from bounded rationality and an incomplete or inaccurate mental model of the problem space
o Selectivity: mistakes can occur as a result of selective processing of task information, especially when attention is directed to salient aspects of the problem configuration rather than logically important aspects
o Workspace limitations: the cognitive strain on the workspace varies with the form of the problem presentation because information is recalled from the workspace in the order it was presented (first in, first out; FIFO)
o Out of sight, out of mind: a manifestation of the availability heuristic; not only is information that comes to mind easily weighted heavily, but information that is not immediately present is ignored
o Confirmation bias: the current hypothesis is not relinquished, even in the face of disconfirming evidence
o Overconfidence in evaluating the correctness of knowledge
o Biased reviewing of the selected course of action
o Illusory correlation: the mistaken belief that there is a relationship between two variables when one does not exist
o Halo effects: people have difficulty processing independently two separate orderings of the same person or object, so they reduce discrepant orderings to a single ordering
o Problems with causality: tendency to oversimplify
o Problems with complexity:
§ Delayed feedback, even minimal, can result in virtually no improvement in performance even with practice
· Subjects fail to form a predictive model of the situation and are primarily data-driven
§ Insufficient consideration of processes in time: subjects fail to consider how something developed over time to be where it is now
§ Difficulty with exponential developments: subjects tend to underestimate exponential rates of change
§ Thinking in causal series, not causal nets: people tend to see things in a linear sequence rather than considering the side effects that ripple outward
§ Thematic vagabonding: moving from issue to issue, treating each superficially
· Like escape behavior, where they leave a challenging topic alone so they don't have to face it more than necessary
§ Encysting: almost the opposite of thematic vagabonding; people linger over certain topics and small details while disregarding more important issues
Reason, 1990- Ch. 3

What is the catch 22 of supervisory control

o Humans are used to troubleshoot emergencies and novel situations; however, under stress, humans are more likely to use effortless, parallel, preprogrammed operations and their associated heuristics
o Further, the solution would seem to be to have operators build an extensive repertoire of recovery routines; however, these routines would be unlikely to be relevant in the case of a novel situation
Reason, 1990- Ch. 7

At the SB level, what are the primary failure modes and their corresponding consequences

o Inattention (omitted attention checks)
§ Double-capture slips: can result in strong habit intrusion or exclusion, branching errors, overshooting a stopping rule
§ Omissions following interruptions: can lead to program counter failures, where an RB correction to an SB hiccup is counted as part of the planned sequence, and so the original sequence is picked up one or two steps further along
§ Reduced intentionality: happens due to a delay between intention formulation and execution; errors include what-am-I-doing-here and I-should-be-doing-something-but-I-can't-remember-what experiences
§ Perceptual confusions: degradation of acceptance criteria results in accepting a look-alike for the intended object
§ Interference errors: when two currently active plans become entangled; can result in blends of speech and action or spoonerisms
o Overattention (mistimed checks)
§ Omissions
§ Repetitions
§ Reversals
Reason, 1990- Ch. 3

Potential countermeasures for automation complacency and bias

o LOA and type of processing supported by automation can influence its costs and benefits as well as complacency and SA (discussed in Parasuraman et al., 2000) o Automation bias may be mitigated via aiding information analysis rather than decision support o Changing situational conditions § Raise perceived accountability of operators § Use flexible strategies of function allocation § Importance of negative feedback loop o Provide practical experience with an automated system Parasuraman & Manzey, 2010

At the RB level, what are the primary failure modes and their corresponding consequences

o Misapplication of good rules (a good rule is one that has proven utility, but it is misapplied when environmental conditions share common features with the appropriate states but also possess elements that merit a different set of actions)
§ First exceptions: it is likely that the first encounter an individual has with a situation will be an exception to a general rule, and the occurrence of an error in applying the rule is what leads to the development of a lower-order, more specific rule
§ Signs, countersigns, and nonsigns
· Signs are inputs that satisfy some or all of the conditional aspects of an appropriate rule
· Countersigns are inputs that indicate the more general rule is inapplicable (they may not gain attention, or if they do, they are sometimes argued away)
· Nonsigns are inputs that do not relate to any existing rules but constitute noise within the pattern recognition system
§ Informational overload: the operator can process only a limited number of indicators, so those selected are likely to match the conditional components of several rules
· Adds to the difficulty in detecting countersigns
§ Rule strength: the cognitive system is biased toward the strength of the rule, which is determined by the number of times it has achieved a successful outcome in the past
§ General rules are likely to be stronger because they are encountered more frequently than the lower-level rules
· There is likely a relationship between rule level in the hierarchy and rule strength
§ Redundancy: some pieces of information will allow the operator to identify that certain signs tend to co-occur, so the operator learns to identify key signs and sees the remaining signs as redundant (related to partial matching)
· Thus certain elements within a rule will acquire greater strength relative to other elements (meaning they carry more weight in the matching process)
· Can bias the operator to attend to some cues more than others, with the bias favoring informative signs rather than rarer countersigns
§ Rigidity: a stubborn tendency to apply a familiar but cumbersome solution when a simpler one is readily available
· Like the Beilock & DeCaro study
o Application of bad rules
§ Encoding deficiencies:
· Certain features of the problem space are not encoded at all
o Often occurs during the acquisition of complex skills, when the cognitive demands of some components of the overall activity screen out rule sets associated with other equally important components (rule structures are fragmented)
· Certain features are misrepresented in the conditional component of the rule
o Feedback necessary to disconfirm bad rules may be misconstrued or absent altogether
o An erroneous general rule may be protected by the existence of domain-specific exception rules
§ Likely when the problem solver encounters relatively few exceptions to the erroneous general rule
§ Action deficiencies:
· Wrong rules
· Inelegant or clumsy rules
· Inadvisable rules
Reason, 1990

What is a theoretical framework for MWL

o Multiple resource model (Wickens) § The higher the similarity in resource demands among task components, the more severe competition for similar resources, resulting in higher levels of workload Tsang & Wilson, 1997
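
As a rough illustration only (a toy overlap score under assumed task profiles, not Wickens' published computational model), the core idea can be sketched by counting how many resource dimensions two hypothetical tasks share: the more dimensions they have in common, the more severe the competition and the higher the expected workload.

def overlap_score(task_a, task_b):
    # task profiles are dicts over resource dimensions (e.g., modality, code, stage)
    shared = sum(1 for dim in task_a if task_a[dim] == task_b.get(dim))
    return shared / len(task_a)                   # 0 = disjoint demands, 1 = identical demands

driving = {"modality": "visual", "code": "spatial", "stage": "perception"}
map_reading = {"modality": "visual", "code": "spatial", "stage": "perception"}
conversation = {"modality": "auditory", "code": "verbal", "stage": "perception"}

print(overlap_score(driving, map_reading))        # 1.0 -> heavy competition, higher workload
print(overlap_score(driving, conversation))       # ~0.33 -> partial competition, lower workload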

What are limitations of the dynamic stress and sustained attention model

o Not yet useful for intermittent forms of stress, better for continuous sources o Consideration of individual differences has yet to be fully accounted for in the model o The model is still limited in its ability to provide predictions for multivariate forms of stress Hancock & Warm, 1989

What is a problem configuration, what error/performance level does it relate to, what are the various problem configurations, and what is the importance in knowing about different problem configurations

o Problem configuration: the set of cues, indicators, signs, symptoms, and calling conditions that are immediately available to a problem solver and upon which they work to find a solution (associated with KB performance)
§ Static configurations: problems in which the physical characteristics of the problem space remain fixed regardless of the activities of the problem solver
§ Reactive-dynamic configurations: the problem configuration changes as a direct consequence of the problem solver's actions
§ Multiple dynamic configurations: the problem configuration can change both as the result of the problem solver's activities and spontaneously due to independent or situational factors
· Bounded problems are further defined by variability that arises from limited and known sources (the other player's chess moves in a game)
· Complex problems are those where variability can stem from many sources, some of which may be little understood or anticipated
§ It is important to understand the various problem configurations because they require different strategies and elicit different forms of problem-solving pathology
Reason, 1990- Ch. 3

How do rules at the RB level change when exceptions are encountered

o Rules are organized as a hierarchy with the most prototypical rule at the top level § When exceptions to the rule are encountered, increasingly more specific rules are generated at lower levels within the hierarchy § The addition of these rules at lower levels in the hierarchy increase the complexity and adaptability of the overall model Reason, 1990- Ch. 3

What is accident proneness theory and why has it been discredited

o Says some individuals have greater likelihood to be involved in an accident, and this greater likelihood is due to some enduring feature of the individual, like personality, information processing deficiency, etc. o Theory failed because could not a priori determine who these accident prone individuals were or what underlying enduring characteristics they shared that contributed to their greater likelihood of being in an accident Reason, 1990-7

What are the properties of choosing a MWL measure

o Sensitivity: how well a measure detects changes in MWL
o Diagnosticity: how precisely a measure can reveal the nature of the workload
o Intrusiveness: whether the measure interferes with performance of the task
o Validity: whether the measure actually measures MWL
§ Concurrent validity- agreement with other measures
§ Predictive validity
o Reliability: whether the measure is stable and consistent over time
o Ease of use: how easy it is to collect and analyze data from a measure
o Operator acceptance
Tsang & Wilson, 1997

How does adjusting the criterion in terms of SDT impact hits and FA

o Setting the criterion lower (at a more liberal value) means the criterion will almost always be exceeded on signal trials (producing many hits), but also on many noise trials (producing many false alarms)
o Setting the criterion higher (at a more conservative value) means the criterion will rarely be exceeded on either signal or noise trials (low FA rate, but also low hit rate)
Stanislaw & Todorov, 1999
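
A minimal numerical sketch of this trade-off (assuming equal-variance Gaussian signal and noise distributions and an illustrative d' of 1.5; these values are not from the source): moving the criterion shifts hit and false-alarm rates together, so neither a liberal nor a conservative placement can raise hits without also raising FAs.

from scipy.stats import norm

d_prime = 1.5                                     # assumed separation between noise and signal means
for criterion in (-0.5, 0.0, 0.5, 1.0):           # liberal -> conservative placements
    hit_rate = norm.sf(criterion, loc=d_prime)    # P(evidence > criterion | signal)
    fa_rate = norm.sf(criterion, loc=0.0)         # P(evidence > criterion | noise)
    print(f"c = {criterion:+.1f}  hit rate = {hit_rate:.2f}  FA rate = {fa_rate:.2f}")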

What are the types of MWL measures and the advantages/disadvantages associated with each

o Subjective: operators rate the level of mental effort they feel is required to accomplish a task
§ Advantages: ease of use, face validity, operator acceptance
§ Disadvantages: susceptible to memory decay and operator bias
o Performance: rely on operator behavior to determine workload
§ Primary task measures
· Advantages: face validity
· Disadvantages: do not work well for extremely low workload conditions
§ Secondary task measures
· Must compete for the same processing resources as the primary task to be sensitive
· Advantages: diagnosticity
· Disadvantages: intrusiveness
o Physiological: measure changes in operator physiology associated with cognitive task demands
§ Heart rate, heart rhythm, heart rate variability, blink rate, EEG, ERP
§ Advantages: nonintrusive, continuous
§ Disadvantages: ease of use, operator acceptance
o Analytic: rely on modeling the workload situation
§ Mathematical, engineering, and psychological models designed to be used in a predictive and evaluative fashion
· Timeline analysis (although this assumes a single-channel processor)
§ Advantages: each parameter and assumption of the model must be made explicit
§ Disadvantages: ease of use
Tsang & Wilson, 1997

Why is sustained attention a source of stress

o The vigilance task itself is a major source of stress § Likely related to the impoverished displays observers must monitor o Often results in fatigue and drowsiness § May stem from need to maintain high level of alertness and having no control over events that may occur § Control is a big factor in stress on vigilance task because of its role in the coping and appraisal process Hancock & Warm, 1989

What are the upper and lower bounds of automation

o Upper bound: the highest level the automation could be designed to go but should not exceed, but also not necessarily the required level Lower bound: minimal level of automation required for acceptable system performance Parasuraman et al., 2000

What are the potential applications of knowing how individual WM capacity impacts strategy selection in decision making, reasoning, and problem solving, and how it impacts performance under pressure

o Used to develop models of executive functioning that capture the complexity of real-world performance o Used to develop training regimens and performance strategies that can maximize skill success and minimize failure, particularly in stress-inducing situations such as when optimal performance is highly incentivized Beilock & DeCaro, 2007

What is analysis automation and examples at different levels of automation

o involves cognitive processes such as working memory and inferential processes and these operations occur prior to the point of decision § Low levels may include algorithms that can be applied to incoming data that can be extrapolated over time (i.e., prediction) § Higher levels may involve integration: several input variables are combined into a single value · Ex. Use a display with emergent perceptual features § More complex levels involve an information manager that provide context-dependent summaries of data to the user Parasuraman et al., 2000

What is action automation and examples at different levels of automation

o involves execution of the action choice § Typically replaces the hand or voice of the human § Different levels of action automation can be defined by the relative amount of manual versus automatic activity involved in executing the response Parasuraman et al., 2000

What is decision automation and examples at different levels of automation

o involves selection from among decision alternatives
§ Automation at this stage involves varying levels of augmentation or replacement of human selection among decision options with machine decision making
§ Expert systems are designed with conditional logic (production rules) to prescribe a specific decision choice if particular conditions exist
§ Different levels of decision automation are defined by the 10-level scale proposed by Sheridan
Parasuraman et al., 2000

What is acquisition automation and examples at different levels of automation

o refers to sensing and registration of multiple sources of input data and is equivalent to the sensation stage of human information processing § Low levels of automation may consist of mechanically moving sensors in order to scan and observe § Moderate levels of automation may involve organizing information by some criteria and highlighting some part of the information · Highlighting preserves the visibility of the raw data § High levels of automation may involve filtering, or selecting and bringing certain of items of information to the operator's attention · Filtering does not preserve the visibility of the raw data Parasuraman et al., 2000
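
A small illustrative sketch (hypothetical contact-list data, not from the source) of the distinction stated above: highlighting at a moderate LOA marks items while keeping all raw data visible, whereas filtering at a higher LOA removes non-selected items from the operator's view entirely.

contacts = [
    {"id": "A1", "range_km": 12, "hostile": True},
    {"id": "B7", "range_km": 48, "hostile": False},
    {"id": "C3", "range_km": 5, "hostile": True},
]

def highlight(items, key):
    # moderate LOA: every item is still shown; flagged items are simply marked
    return [{**item, "highlighted": key(item)} for item in items]

def filter_view(items, key):
    # higher LOA: only selected items survive; visibility of the raw data is lost
    return [item for item in items if key(item)]

print(highlight(contacts, lambda c: c["hostile"]))    # 3 items shown, 2 marked
print(filter_view(contacts, lambda c: c["hostile"]))  # only 2 items remain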

What is a transformation error, and what transformation errors tend to occur at each transformation from real-world to SA

real-world--> system knowledge--> interface knowledge--> SA 1. From real-world to system knowledge: the system may not acquire all the needed information from the real world 2. From system knowledge to interface knowledge: of the information acquired by the system, not all of it may be displayed to the operator 3. From interface knowledge to SA: Of the information displayed by the interface, there may be inaccurate or incomplete transmission to the operator (due to perceptual, attentional, or WM constraints) Endsley, 1995

What is Lazarus and Folkman's (1984) transactional theory of stress

stress is a relationship between the person and the environment that is appraised by the person as taxing or exceeding their resources and endangering their well-being Appraisal Primary: personal significance of events Secondary: coping ability Categories of processing Task/problem-focused coping: directed toward changing external reality Emotion-focused coping: directed toward changing the way the person thinks or feels about stress Matthews, 2001

What are the task/system factors that underlie and affect SA

System capability, interface design, stress and workload, complexity, and automation
1. System design- people gain information from the environment after it has undergone several transformations (real-world --> system knowledge --> interface knowledge --> SA), and a transformation error, or loss of information, can occur at each of these transformations
2. Interface design- impacts SA by determining what information can be acquired, how accurately it can be acquired, and to what degree it is compatible with the operator's SA needs
3. Stress- slight levels of stress can increase arousal and have a positive impact on performance; however, high levels of stress negatively impact performance via narrowing of the field of attention (cognitive tunneling and premature closure) or by decreasing WM capacity and retrieval
4. Workload- SA and WL are essentially independent constructs; only under high WL levels are decrements in SA expected
A. low SA, low WL: inattentiveness, vigilance problems, low motivation
B. low SA, high WL: operator can attend to only a subset of information and has erroneous or incomplete perception and integration of information
C. high SA, low WL: ideal state
D. high SA, high WL: operator is working hard, but is successful in achieving an accurate and complete picture of the situation
5. Complexity- increased complexity negatively impacts an operator's WL, and subsequently their SA
6. Automation- automation of unnecessary manual work can positively impact WL and SA; however, automation can also result in out-of-the-loop performance decrements (i.e., diminished ability to detect errors and perform the task manually if the automation fails) for several reasons:
A. loss of vigilance and increased complacency associated with assumption of a monitoring role
B. difference between being an active processor of information versus a passive recipient of information
C. loss of or change in the type of feedback provided to the operator concerning the state of the system
Endsley, 1995

Metatrust

trust a person has that the other person's trust in the automation is appropriate Lee & See, 2004

Why can it be difficult to determine the MWL redline and what are some approaches to how this can be done

§ Difficult to determine because of the multifaceted nature of humans and lack of sufficient understanding of all the variables that contribute to human performance § One approach to addressing this redline problem is to use past experience to derive a level of subjective WL that can be used to indicate a problem area · SWAT has been used for this purpose and determined that this score is 40 +/- 10 § Another approach is to collect MWL values for an existing and acceptable system to provide a standard or criteria to compare a new and upgraded system with Tsang & Wilson, 1997
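
A minimal sketch of both approaches (illustrative phase names and numbers; only the SWAT value of roughly 40 comes from the flashcard above): flag conditions that exceed the experience-based redline, and compare the new system's workload against values measured on an accepted baseline system.

REDLINE = 40                                      # SWAT-derived criterion cited above (+/- 10)

baseline_swat = {"cruise": 22, "approach": 35}    # accepted legacy system (illustrative)
new_system_swat = {"cruise": 28, "approach": 47}  # upgraded system under evaluation (illustrative)

for phase, score in new_system_swat.items():
    print(phase,
          "| exceeds redline:", score > REDLINE,
          "| higher than baseline:", score > baseline_swat[phase])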

What is the vector model of stress

§ Each input stress can be represented by a vector where the orientation of the vector represents the qualitative nature of stress and the length of the vector represents the intensity of the stress § Vectors can be summed to represent interactions among stressors quantitatively and enables predictions regarding the location of points of maximal adaptability for various stress combinations Hancock & Warm, 1989
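
A minimal numerical sketch of the vector idea (the directions and intensities below are assumed for illustration, not values from Hancock & Warm): each stressor is a vector whose orientation encodes its qualitative nature and whose length encodes its intensity, and summing the vectors gives a combined stress estimate.

import numpy as np

heat = np.array([3.0, 0.0])           # assumed: one qualitative direction, moderate intensity
noise = np.array([1.0, 2.0])          # assumed: a different qualitative direction

combined = heat + noise               # vector summation of concurrent stressors
intensity = np.linalg.norm(combined)  # length of the summed vector = combined intensity
orientation = np.degrees(np.arctan2(combined[1], combined[0]))  # qualitative nature of the mix

print(f"combined intensity = {intensity:.2f}, orientation = {orientation:.1f} degrees")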

What is the vigilance decrement function

§ First documented by Norman Mackworth § Represents how the capacity of an individual initially begins to decline, followed by a relative plateau in which the capacity to respond does not change substantially § is the result of the average performance of individuals, whose individual performance can vary greatly from the function ("superstars" and "sleepers") § Can be thought of as a form of iatrogenic disease, that is, the primary pattern of behavior that characterized vigilance is actually a result of the conditions created initially by the contemporary system and display designers Hancock, 2013

What conditions result in vigilance and its associated decrement (i.e., as a technology-induced iatrogenic disease)

§ Generated via externally imposed compulsion and poorly designed equipment Hancock, 2013

What is the invigilant function and what determines the level of this invigilant function

§ Hancock argues that the vigilance decrement is actually an invigilant increment whereby operators are adapting their response to a new, constraining, and most frequently uninteresting situation § The invigilant function is proportional to the iatrogenic level of display poverty Hancock, 2013

How does attention as a commodity relate to vigilance

§ If there are ever greater sources of information and humans continuously adapt to environmental expectancies by engaging in strategies such as multitasking, can they then sit and watch a single (often uninteresting) source of stimulation for very rare cues for action for an extended period of time? § Will there be a generational or even evolutionary difference because of how one grew up (pda, smartphone, tv, etc.), and if so, can they ever retrace and focus back on a single source alone? § Hancock thinks not; as soon as cognitive stimulation falls below an individual's "set point", he thinks they will seek out additional/alternate sources of information (Hancock, 2017)

What are some sources of cognitive underspecification

§ Incomplete or ambiguous inputs § Fragmented retrieval cues § Incomplete or inaccurate knowledge § Losses from prospective memory § Failures of attentional monitoring Reason, 1990- Ch. 4

How does individual WM ability impact strategy selection for decision making and reasoning tasks

§ Individuals with low WM capacity are thought to be more likely to rely on associative processing because they have less of the capacity needed to support rule-based processing
§ Individuals with high WM capacity are thought to be more likely to rely on rule-based processing
- Two experiments support these findings
Beilock & DeCaro, 2007

What is the trinity of stress

§ Input approach: considering the physical environment · Deterministic · Stress signature § Adaptive/compensatory approach: focus on the appraisal and coping mechanisms of the exposed individual · Nomothetic/lawful tendencies that apply to all § Output approach: concerns the output of the individual, typically related to ongoing changes to different bodily functions and performance efficiency · Idiographic/individualistic Goal-directed behavior Hancock & Warm, 1989

What is Smith & Hancock's (1995) Perceptual cycle model of SA

§ Interaction with the world is directed by internally held schemata, which results in individual sampling environment to confirm that the situation conforms to expectations, and the outcome of this sampling leads to modifications of the schema, which directs further exploration · Process continues in an infinite cyclical nature (direct, sample, modify, repeat) Stanton et al., 2013

What is a source of compulsion and how does it impact vigilance and the associated decrement

§ Intrinsic/internal: individual can control the situation themselves (i.e., when to continue or stop the vigil) and thus the associated level of stress · This is what distinguishes sustained attention from vigilance § Extrinsic/external: your job and associated punishments force individual to continue vigil, increasing the associated levels of stress · Vigilance and its associated decrement are derived from this external imposition of the need to sustain attention Hancock, 2013

What are the major errors in SA at level 3 SA

§ Lack of a good mental model, despite a good understanding of the situation, can make future projections difficult § Some people are not good at mental simulation, possibly because of attention and memory limitations Endsley, 1995

Why is unitary arousal theory insufficient for explaining the relationship between stress and vigilance

§ Lacks predictive power § Often applied post-hoc Hancock & Warm, 1989

What is malleable attentional resource theory and how can it explain the effects of automation induced complacency

§ Malleable attentional resource theory (MART): posits that during times of low WL, the attentional capacity of an operator shrinks in much the same way it is exhausted when task demands are high Bailey & Scerbo, 2007

What are potential countermeasures for vigilance and the associated decrement

§ Mitigating the externally/extrinsically imposed compulsion of tasks · Hedonistic tasks that result in self-imposed monitoring (like video games) · Integrating monitoring task with another more appealing task · Hiding the monitoring task behind another ongoing task § Reducing display poverty Hancock, 2013

What is a mode error and mode ambiguity and how do they relate

§ Mode error: occurs when a situation is falsely classified, resulting in an action that is intended and appropriate for the perceived situation but inappropriate for the actual situation (i.e., a type of mistake)
§ Mode ambiguity: when the same user input elicits two different interpretations depending on the mode
§ Mode ambiguity does not always lead to mode errors; it does so only when a user has a false expectation about the result of their action
Degani et al., 1999

DSA model by Salmon et al., 2008

§ Model describes SA as being distributed in the world · No one member has overall SA, it is distributed around the system · SA is viewed as an emergent property of a system rather than existing in the mind of an individual Endsley, 2015

What are narrow and broadband approaches to studying vigilance and stress

§ Narrow-band approach: examines the effects of a variety of stresses on a single task § Broad-band approach: examines the effect of a single source of stress on multiple distinct tasks Hancock & Warm, 1989

What are the major components of the dynamic model of stress and sustained attention

§ Normative zone: region that demands no adaptation because input stress is insufficient to initiate compensatory activity
§ Input stress levels vary from underload/hypostress to overload/hyperstress
· Zone of comfort: located at the center of the continuum
§ Minor levels of stress do not impact output because the stress is absorbed by adaptive capabilities
· Maximal adaptability: the level of input stress that can be tolerated by an individual without subsequent output disturbance
1) Physiological zone of maximal adaptability: akin to homeostatic adjustment
2) Psychological zone of maximal adaptability: akin to attentional resources; this zone is smaller than the physiological zone
§ High levels of stress that exceed the zones of maximal adaptability result in dynamic instability
§ An extension of this model includes information rate and structure/meaning
1) Information rate: temporal flow of the environment
2) Structure: meaning given by the individual perceiver, based on past experience as well as expected future actions tied to the aims and goals of the individual operator
Hancock & Warm, 1989
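
A toy sketch of the nested-zone structure (the threshold values are assumed for illustration only; the model itself specifies no numeric boundaries, and real limits depend on the person and task): as input stress deviates further from the comfort zone, the smaller psychological zone is exceeded before the larger physiological zone, and beyond both lies dynamic instability.

def adaptability_zone(stress, comfort=0.0, normative=1.0, psych=2.0, physio=3.0):
    deviation = abs(stress - comfort)             # distance from the center of the comfort zone
    if deviation <= normative:
        return "normative zone (no compensation needed)"
    if deviation <= psych:
        return "psychological zone of maximal adaptability"
    if deviation <= physio:
        return "physiological zone of maximal adaptability"
    return "dynamic instability (adaptive capacity exceeded)"

for level in (0.5, 1.5, 2.5, 3.5):                # increasing hyperstress (illustrative values)
    print(level, "->", adaptability_zone(level))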

Countermeasures for coping with mode errors and aiding mode awareness

§ Reduce the number and complexity of the modes
§ Support the new knowledge demands created by complex automated resources through new approaches to training human supervisory controllers
· Enhance attentional control skills
· Enhance knowledge activation IN CONTEXT
· Encourage knowledge relevant to difficult but infrequently occurring situations
§ Interface and design changes that support mode awareness by revealing what the automation is doing, why it is doing it, and what it will do next
· Provide salient auditory or kinesthetic feedback to indicate mode transitions
· Provide a history of interactions as a trace of past system behavior
· Implement forcing functions: something that prevents the behavior from continuing until the problem has been corrected
· Use a lower level of decision automation so that the human must agree with a mode transition before it takes place
Sarter & Woods, 1995

Is complacency a fixation or attention failure and what evidence supports this view?

§ Related to attentional failure over just fixation failure as forced eye fixations to the automated task did not eliminate the complacency effect Parasuraman & Manzey, 2010

At the RB level, what factors determine which rule of several competing rules will be applied

§ Rule should match salient features of the environment § Strength of the rule (how frequently it has performed successfully in the past) § How specific a rule is (more specific means it is more likely to be selected) § Degree of support the rule receives from competing rules (degree of compatibility it has with currently active information) Reason, 1990- Ch.3

Methods for measuring mode or situation awareness

§ Subjective ratings
· Situation awareness rating technique (SART)
· Limited by bias in reporting accuracy and memory deficiencies, so they need to be combined with other techniques
§ Explicit performance measures
· Situation awareness global assessment technique (SAGAT)
· Highly intrusive
§ Implicit performance measures
· Experimental scenarios that include tasks and events that probe the participant's SA
· Low intrusiveness, but probes must be operationally significant (i.e., provide cues to the operator that, if perceived, should lead to an observable change in behavior)
Sarter & Woods, 1995

Define vigilance

§ The ability of an organism to maintain their focus of attention and remain alert to stimuli over prolonged periods of time (Warm et al., 2008)
§ Reconceptualized as an adaptive adjustment by the observer to the stressful and externally imposed need to constrain what is normally the free-roving, self-directed, information-acquiring capacity of attention (Hancock, 2013)

What are the findings of thermal and acoustic stress on performance

§ Thermal stress: any change in internal body temperature that disrupts the boundaries of thermal homeostasis results in a significant breakdown in capability
· Bidirectional impact on vigilance (too high and too low are both bad)
· Direct physiological analog in internal body temperature
· Many other functions of the human system are captured by and synchronized to the temperature rhythm
o One such variable is the perception of duration: durations are perceived to be shorter when body temperature is increasing, and duration perception slows when body temperature is decreasing
§ Acoustic stress: not well established; interactive effects are likely so contingent on specific circumstances that broad generalizations are meaningless
· Unidirectional impact on vigilance (low level good, high level bad)
· Also difficult to measure because there is no direct physiological analog
· Although there are more consistent findings regarding the effect of continuous noise on vigilance performance:
o Performance is degraded when the level of white noise is high and processing demands are high
o Performance is facilitated when the varied noise level is low and processing demands are low
Hancock & Warm, 1989

Rasmussen's skill-knowledge-rule framework and errors related to each level

§ Framework levels correspond to decreasing levels of familiarity with the environment or task
Skill-based level (akin to slips and lapses): performance is governed by stored patterns of preprogrammed instructions represented as analog structures in a time-space domain
o Errors at this level are related to the intrinsic variability of force, space, or time coordination
Rule-based level (akin to mistakes, specifically failures of expertise): familiar problem solutions are governed by stored rules (if X, then Y)
o Errors at this level are related to incorrect recall of procedures or misclassification of the situation leading to application of the wrong rule
Knowledge-based level (akin to mistakes, specifically lack of expertise): novel situations require actions to be planned on-line, using conscious analytical processes and stored knowledge
o Errors at this level arise from resource limitations (bounded rationality) and incomplete or incorrect knowledge
Reason, 1990; Rasmussen, 1983

How do the human elements in the general view of accident causation interact, and based on their interaction, what is the best approach to reducing the chance of an accident

§ human elements can be thought of hierarchically, where each superordinate level can manifest in numerous subordinate failure types · This means that typically stopping the superordinate problems eliminates many subordinate problems, rather than trying to approach the problem from the lowest level without addressing its overarching cause · Best way this is done is via feedback loops that are responded to rapidly Reason, 1990-7

What factors contribute to display poverty and therefore the invigilant function

· Absence of target comparator, forcing operators to hold long-standing representation of searched for target in memory · High rate of searchable items · Unpredictable signal-to-noise ratio · Absence of current knowledge of performance and subsequent knowledge of results · Distracting sources of stress · Monotonous, boring, repetitive nature of displays Hancock, 2013

What do the vigilance studies related to source of compulsion tell us about vigilance and its associated decrement

· Findings from these studies show that stress associated with vigilance is strongly mediated by the nature of compulsion (external), whereas the nature of the decline is more associated with the physical characteristics of the task itself Hancock, 2013

What are the possible system responses to error

· Gagging: forcing function to prevent the user from expressing an unrealizable intention
· Warning: inform the user of a potentially dangerous situation
· Do nothing
· Self-correct: system attempts to correct the error
· Let's talk about it: system responds to the error by beginning a dialogue
· Teach me: system asks the user what they had in mind
Reason, 1990- Ch. 6

What are factors that contribute to lack of mode awareness

· Lack of salient or timely feedback · Gaps or misconceptions in operator mental model · Lack of good design to make operator tasks like monitoring and integration easier · When multiple operators are involved · Behavior is complex · Transitions are frequent Sarter & Woods, 1995

What are some methods for measuring automation reliability

· Methods for determining automation reliability o Fault tree analysis o Event tree analysis o Software reliability analysis Parasuraman et al., 2000

What is the taxonomy of vigilance

· Parasuraman and Davies (1977) developed a 2 (event rate: low vs high) X 2 (task type: successive versus simultaneous comparison) matrix · Matrix helped to localize the vigilance decrement function (i.e., occurred only in the combination of high event rate and successive comparisons that heavily tax memory) Hancock, 2017

Under what conditions do measures of MWL tend to dissociate

· Performance and subjective measures tend to dissociate under conditions of LOW WORKLOAD (because performance measures are not sensitive to low WL levels)
· Also when subjects are performing DATA-LIMITED TASKS (i.e., more resources will not improve performance because the operator is limited by the DATA they have- knowledge), because performance will continue to decline while subjective estimates have likely already reached the ceiling
· Occasionally performance and subjective measures dissociate when greater effort (higher subjective WL) increases performance, because typically higher subjective workload corresponds with lower performance
· NUMBER OF TASKS typically increases subjective workload, so two easy tasks would be rated subjectively higher (and produce better performance) than a single difficult task (that produces lower performance)
· Performance measures are typically sensitive to the SEVERITY OF RESOURCE COMPETITION, whereas subjective measures are not
· Subjective measures are more influenced by INFORMATION AVAILABLE TO CONSCIOUSNESS (i.e., central processing demands like WM) than by demands that are not well represented consciously (i.e., response execution processing demands), and thus can dissociate from performance measures that are diagnostic of these WL differences
· When a workload measure is affected by FACTORS OTHER THAN MENTAL DEMAND, like physical activity (heart rate), changes in luminance (pupillometry, blink rate), familiarity, training, fatigue, motivation, personal matters, illness, and biases
Tsang & Wilson, 1997

Why might a high number of FAs not result in a cry-wolf effect? (especially as it relates to the Wickens et al., 2009 study)

· Possibly because these FA were forgivable or acceptable in that they could be seen as resulting from a conservative threshold, reinforcing trust in the system · OR because cry-wolf effect is typically found in dual-task setting where controller has to switch attention and this task is just a single monitoring task · OR because CA system uses only visual (not auditory) alarms, which may have been less intrusive and annoying · OR because other confounding factors masked the effect Wickens et al., 2009

According to Schaefer et al., (2016) what factors influence trust in automation

· The human element
o Traits (stable): age, gender, ethnicity, personality, and trust propensity
o States (dynamic): stress, fatigue, attentional control
o Cognitive factors: understanding of how the automation works, self-perceived ability to use the automation, and expectancy concerning the automation
o Emotive factors: confidence in the automation, attitudes toward automation, commitment to/satisfaction with the automation, and comfort with the automation
· The automation
o Features of the automation: level of the automation, mode of automation (fixed, adjustable, adaptable), aesthetics, mode of communication (visual, auditory, tactile)
o Capabilities of the automation: reliability/errors, cues/feedback/alarms, behavior/consistency/expectancy
· The environment
o Team collaboration: role interdependence, team composition, mental models, cultural/societal impacts, in-group membership
o Task/context: risk/uncertainty, context/task type, physical environment
*Highlighted variables/factors were found to be the most important from a meta-analysis

