Automation


The algorithms involved in recent Financial Crises relied on which decision-making theory?

The algorithms were based on Normative Decision-Making Theory and did not account for heuristics.

How does automation fundamentally change the nature of a human's role in the system?

Automation can reduce workload, stress, and errors, but it can also increase them. It does not eliminate cognitive work; it merely changes the nature of the cognitive tasks. The human operator transforms from an agent actually performing the work into a monitor of the automation, continually watching and making sure that it is working correctly.

What is the difference between trust in automation and automation dependence?

Trust in automation: a cognitive/affective state of the user with regard to the automation's capabilities; in short, a person's feeling of confidence in the machine to do what is asked of it. It is measured subjectively. Automation dependence: a behavior pattern; an objective behavior that can be measured by investigating human-technology interactions. The two concepts are distinct but usually correlated, and both are subject to individual differences.

In what context is complacency most likely to occur?

Complacency is typically found in multiple-task-load scenarios that combine automated and manual tasks competing for operator attention. When heavy workload leads people to put their attention toward something else, complacency occurs.

What is the most important factor in establishing trust in automation?

Reliability is the #1 factor in determining trust: the more reliable the automation, the more it is trusted. However, 100% reliability is extremely rare.

What are the two anchor points of the automation continuum?

- Fully manual performance (no automation): the human does everything without technological assistance.
- Full automation: the machine does everything.

Automation is not an all-or-nothing proposition; it runs along a continuum, and between these two anchor points are many different levels.

Know the automation issues inherent to the Royal Majesty case.

- Non-salient change in automation state: the loss of GPS signal was conveyed by a change in a single letter on a complex LCD display, and the crew, subject to change blindness, never noticed it.
- Over-trust in the automation: the crew followed the dead-reckoning course for over a day and ignored other indicators (other vessels, the lights of the shore).

Background: The Royal Majesty was a commercial cruise liner involved in a 1995 accident that cost millions of dollars in property damage. The ship was equipped with an Automatic Radar Plotting Aid (ARPA) that performed the navigational functions based on signals from the Global Positioning System; the crew's duty was to monitor its progress as a secondary task. On one voyage, the cable connecting ARPA to the GPS antenna became frayed and the GPS signal was lost. Without a signal, ARPA defaulted to 'dead reckoning' mode: it would continue in a straight line until told to adjust by a human, with no autocorrection for important nautical considerations such as tides and prevailing winds. As a result, the ship ran aground near Nantucket Island.

What is the calibration curve?

A plot of trust on a 0-1 scale (0 = no trust, 1 = highest trust) against the automation's actual reliability, scored by the number of errors it has made relative to the number of opportunities for error. A diagonal line signifies perfectly calibrated trust: the operator trusts the automation exactly as far as it can be trusted. The curve is dynamic and points can shift with experience: the first time the automation fails, trust sinks into severe under-trust, but as error-free use continues, it will shift back up.
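A minimal sketch of the arithmetic, assuming reliability is taken as 1 minus the error rate described above (the function names are illustrative):

```python
def reliability(n_errors: int, n_opportunities: int) -> float:
    """Objective reliability on a 0-1 scale: 1 minus the error rate."""
    return 1.0 - n_errors / n_opportunities

def calibration_gap(subjective_trust: float, n_errors: int, n_opportunities: int) -> float:
    """Distance from the diagonal: positive = over-trust, negative = under-trust."""
    return subjective_trust - reliability(n_errors, n_opportunities)

# Example: 2 failures in 100 opportunities; the operator reports 0.99 trust.
print(f"{calibration_gap(0.99, 2, 100):+.2f}")  # +0.01 -> slight over-trust
```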

What are the four stages of automation?

1. Information Acquisition: sensory processing; gathering and pre-processing multiple information sources with selective attention. Low-level example: scanning and observing via cameras. High-level example: organizing incoming stimuli into a priority list.
2. Information Analysis: manipulation and integration of information in working memory, along with other memory operations (rehearsal, integration with long-term memory, inference). Low-level example: trend or predictor displays showing the projected future course of data. High-level example: integration and prediction that summate data into a single value or message.
3. Decision Selection: evaluation of the material, formulation of an optimal strategy, and choice implementation. The automation can provide a list of alternatives, a prioritized list, or a single choice.
4. Action Implementation: machine execution consistent with the decision of Stage 3. Low-level example: photocopying. High-level example: the Da Vinci surgical robot.
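A toy sketch of how a system's automation profile can be represented as one level per stage, anticipating Parasuraman's point (later in this set) that automation can sit at a different level at each stage. The 0-10 scale and the THAAD-style numbers are illustrative assumptions:

```python
from enum import Enum

class Stage(Enum):
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4

# Hypothetical THAAD-like profile: 0 = fully manual, 10 = fully autonomous.
# High automation for sensing, analysis, and decision selection, but a low
# level at action implementation so the human keeps control of firing.
profile = {
    Stage.INFORMATION_ACQUISITION: 9,
    Stage.INFORMATION_ANALYSIS: 9,
    Stage.DECISION_SELECTION: 8,
    Stage.ACTION_IMPLEMENTATION: 2,
}
print(profile[Stage.ACTION_IMPLEMENTATION])  # 2 -> human retains the trigger
```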

What is the difference between adaptive aiding and adaptive task allocation?

Adaptive aiding: a certain task component is made simpler via automation; the human keeps performing the task, but automation takes it over in part. Adaptive task allocation: an entire task (from a larger multi-task context) is shifted to the automation because the demands are too much for the human operator. Which option is most appropriate depends on whichever reduces workload on the human to the greatest extent.

What is the 'alarm false alarm' problem?

An alarm activates when no failure condition actually exists, leading the operator to under-calibrate the true value of the alarm. The consequences of the resulting mistrust can be relatively minor, such as productivity lower than it could be with the aid of automation, or they can be major and life-or-death. For instance, so many false alarms were going off in locomotive engines that train engineers would place duct tape over the speakers so that they could not hear any auditory input from the system.
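A back-of-the-envelope sketch of why frequent false alarms erode trust: when true failures are rare, even a sensitive alarm is usually wrong when it sounds. The numbers are illustrative assumptions:

```python
# Assumed rates: failures are rare (1 in 1000 trials); the alarm catches 99%
# of real failures but also fires on 5% of perfectly normal trials.
base_rate, hit_rate, false_alarm_rate = 0.001, 0.99, 0.05

p_alarm = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
p_failure_given_alarm = hit_rate * base_rate / p_alarm
print(f"P(real failure | alarm) = {p_failure_given_alarm:.3f}")  # ~0.019
```

Under these assumptions roughly 98% of alarms are false, so the operator learns that the alarm's true value is far lower than its face value.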

What is adjustable autonomy?

Adjustable autonomy allows for both human-directed and machine-directed changes in the system's relative autonomy, so that both components have a say. Who should ultimately decide to implement, change, or remove automation? The short answer is that science does not yet know. Should it be the automation, deciding based on the three inference sources? Should it be the human? Human assessment of our own capabilities is sadly lacking due to concerns like overconfidence bias and illusory superiority, so we cannot say for sure who should have this authority. Human factors researchers continue to perform experiments to derive an appropriate answer; in the meantime, adjustable autonomy is the compromise.

Know the automation issues inherent to the Financial Crises of 2008 & 2010.

- Computerized derivatives trading: automated algorithms made the buying and selling decisions for stocks (for instance, once the price of Stock X reaches a certain value, sell it).
- Automated transactions at extremely high frequency: millions of trades per day in markets around the world.
- No human oversight, neither by the traders who put the algorithms into use nor by the United States Securities and Exchange Commission (SEC).
- The algorithms were based on normative decision-making theory, which is not how the real world operates and does not account for the heuristics and biases with which such decisions are made.

This misuse of automation played a significant role in the financial crises and recession of 2008 and 2010.
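A toy sketch of the kind of threshold rule described above; it is entirely illustrative (no real trading API) and simply shows how such a rule acts with no human in the loop:

```python
def threshold_rule(price: float, sell_at: float) -> str:
    """A normative-style rule: act the instant the threshold is crossed,
    with no human oversight and no allowance for context or heuristics."""
    return "SELL" if price >= sell_at else "HOLD"

# Executed millions of times per day across markets, rules like this can
# cascade: each automated sale pushes prices past other systems' thresholds.
for price in (98.0, 99.9, 100.0, 101.5):
    print(price, threshold_rule(price, sell_at=100.0))
```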

What is mistrust?

Disuse of automation, even when it is accurate. Mistrust occurs when operators refuse to use automation even when it proves accurate and would help their performance.

What is dynamic function allocation?

The division of labor between the human and the machine is changeable, flexible, and dependent on the context in which the work must be performed. Adaptive automation therefore allows humans to re-structure their work to account for what should be automated, how to infer when workload is too high, and when these shifts in workload take place.

What is the generation effect?

It is easier to remember an action that you chose and performed yourself than one you merely saw done.

Successfully designed human-automation systems should use which type of approach?

A joint human-automation systems approach: the focus should be on joint human-automation performance, addressing the technical aspects of the system together with human-centered automation.

What are some of the factors that determine the severity of complacency?

- Frequency of automation failures: complacency is more severe when failures are infrequent.
- The first failure effect, based on previous experience (see the calibration curve).
- Time: the longer the period of error-free functioning between failures, the more complacent people tend to be.

Complacency is also more likely to manifest when the person is working under higher workload, with several tasks competing for their limited attentional resources.

Know the different ways that feedback may be deficient for the user.

Feedback must convey both the automation's functioning and the underlying reasons for it, and it must be delivered effectively to foster good performance. Types of deficiencies:
- No feedback at all ('silent' automation), which is completely unacceptable in human factors.
- Insufficiently salient: a warning is provided but does not draw the operator's attention (e.g., Eastern Flight 401).
- Ambiguous, which increases confusion.
- Irrelevant.
- Lacking in detail.
- Inflexible (e.g., an automated phone menu).

What is the role of the 'task manager'?

The task manager gauges human mental workload and adjusts the division of labor accordingly: it assigns more work to the automation when human workload is high and more to the human when workload is low. The task manager itself may be a human, automation, or some combination of the two. Such complex tasks usually involve dynamic function allocation: the division of labor between human and machine is changeable, flexible, and context-dependent, letting humans re-structure their work around what should be automated, how to infer when workload is too high, and when these shifts take place. A minimal sketch of this logic appears below.
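The thresholds and the 0-10 level scale in this sketch are assumptions for illustration:

```python
def task_manager(workload: float, level: int) -> int:
    """Adjust the level of automation (0 = fully manual, 10 = fully
    autonomous) from a normalized human-workload estimate in [0, 1]."""
    if workload > 0.8:               # operator overloaded: shift work to automation
        return min(level + 1, 10)
    if workload < 0.3:               # operator underloaded: shift work back to human
        return max(level - 1, 0)
    return level                     # workload acceptable: hold current allocation

level = 5
for wl in (0.9, 0.9, 0.5, 0.2):      # workload samples over time
    level = task_manager(wl, level)
    print(f"workload={wl:.1f} -> automation level {level}")
```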

What are the different methods by which one can design a more effective human-automation interaction?

Approaches that either increase routine performance or decrease operator workload.

What is automation etiquette?

Knowing when and when not to interrupt. If shifts in automation level are called for, good automation etiquette determines where, when, how, and how much of the shift occurs so as not to harm performance.

What is out of the loop unfamiliarity?

Out-of-the-loop unfamiliarity (OOTLUF) arises via complacency and involves losses in detection, awareness/diagnosis, and manual skill. In its throes, operators are less aware of what is going on and of what they need to do to secure task success, with profound consequences for performance and safety.

What are some methods of mitigating mistrust?

- Make the automation's role and functioning more transparent, e.g., with good visual displays that provide information about system status.
- Train operators in how to supervise/monitor the algorithms, adding understanding of how the automation does the work it is supposed to do.
- Implement likelihood displays: two or more graded levels of certainty that a critical condition exists. Instead of the binary choice that fuels the alarm/false-alarm problem (warn or do not warn), the system can say 'I am not 100% sure what is going on, but there is a condition here you need to be aware of and keep an eye on.'
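A minimal sketch of a graded likelihood display; the cutoffs and alert labels are illustrative assumptions:

```python
def likelihood_display(p_condition: float) -> str:
    """Map the system's estimated probability of a critical condition to a
    graded alert instead of a binary alarm/no-alarm."""
    if p_condition >= 0.9:
        return "WARNING: critical condition detected"
    if p_condition >= 0.5:
        return "CAUTION: condition likely, please verify"
    if p_condition >= 0.2:
        return "ADVISORY: possible condition, keep an eye on it"
    return "no alert"

for p in (0.95, 0.6, 0.25, 0.05):
    print(p, "->", likelihood_display(p))
```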

What are the two main issues regarding automation complexity?

1. Observability: the more embedded the automation, the less observable it is to the human, yet humans who interact with these systems must know how they are supposed to operate. Mutual intelligibility is important, so good automation must provide feedback about the system state. Agent-based systems should therefore be reserved for simple, low-risk tasks.
2. 'Automation surprises': the automation's way of functioning is dissimilar to the human's method of doing the same work. The mismatch between expectations of how the automation should work and how it actually does increases surprise, uncertainty, and suspicion; it can lead to assumptions of system error or failure, and in many domains the consequences can be fatal.

Context: although automation reduces human error, a higher number of non-human components (hardware and software) increases the probability of system error. The Boeing 787 Dreamliner, for example, runs on millions of lines of code, and 'bugs' can lead to problems.

What is deskilling?

The operator's ability to complete the task manually declines over time (a use-it-or-lose-it principle). Good operators therefore choose to perform the task by hand occasionally.

What are Grice's Conversational Maxims?

Originally formulated for verbal communication, they apply to human-automation interaction as well:
- Quantity: make your contribution only as informative as required; not too much, not too little.
- Quality: be accurate when saying something; if you are not sure, do not say it.
- Relation: be relevant to the situation.
- Manner: avoid obscurity, avoid ambiguity, be brief, be orderly.

What is complacency?

Complacency is over-trusting automation: a tendency not to monitor the automation or the information sources it uses; an active re-allocation of attention away from the automation toward other manual tasks in cases of high workload. It can lead to inadequate monitoring, fewer system checks, and insufficient time or awareness to intervene appropriately (see also the generation effect). It is typically found in multiple-task-load scenarios where automated and manual tasks compete for operator attention. Note that the value of monitoring is extremely high even though its expectancy is low.

What are stages of automation and which researcher(s) conceived of them?

Parasuraman (2000; 2008) extended Sheridan and Verplanck's taxonomy to account for changes over time and for the sub-stages of a task. He conceived of stages of automation: the level of automation can change over time depending on workload or the nature of the sub-tasks to be accomplished. These ideas form a four-stage model, conceptually similar to the three-stage model of decision-making, in which automation can be applied at a different level, spanning the whole spectrum, at each of the four stages. For example, System A, the Theater High Altitude Area Defense (THAAD) that fires at and intercepts ballistic missiles, runs at high levels for Stages 1-3 but a low level for Stage 4, preserving high human control over the firing mechanism. System B, Robonaut, runs at high levels across the board.

What is automation and what is the goal of its use?

The performance by machines (typically computers) of functions that were previously carried out, fully or partially, by humans. It encompasses:
- Sensing of the environment via artificial sensors
- Data processing/decision-making by computers
- Mechanical action by motors/devices that effect change in the environment
- Communication of processed information to people

The goal of automation is to reduce both physical and mental workload.

What factors constitute a trade-off in human-automation interaction?

Productivity and performance outcomes on the one hand versus safety, user satisfaction, and error recoverability on the other; recommendations seek to maximize the latter. If one must prioritize between the two, the priority must be on safety. To this end, the issues that need addressing are the nature of feedback, the appropriate level and stage of automation for a given task, rules of etiquette for human-automation interaction, and how to appropriately calibrate trust in such systems.

What are some methods of mitigating over-trust or complacency?

Provide the operator with information about the automation's reliability, and expose them during training to the common possible system failures so that they can readily recognize and react to them when they occur.

What is the difference between static and adaptive automation?

Static: the characteristics of the automation are set and fixed at the design stage and executed in the same fashion during operational use (e.g., an ATM). Adaptive: the level and/or stage is not fixed and can change during system operation (e.g., Volvo's self-driving car).

What is supervisory control?

A system that the human does not operate directly but instead interacts with via an intermediary, most often a computer. More sophisticated automation is leading humans to adopt supervisory control over systems: the intermediary has effectors (such as robotic arms) that act on the environment in the human's stead.

What are the five main reasons why humans choose to automate tasks?

1. Tasks that humans cannot perform: the task is impossible or too dangerous (e.g., booster rocket guidance, disaster rescue robots). Here automation is essential and unavoidable, regardless of risks or costs.
2. Compensating for human performance limitations: humans can do the task, but with poor performance or at significant workload cost (e.g., autopilot, the Ground Proximity Warning System (GPWS)).
3. Augmenting/assisting human performance: the human still performs the task, but automation helps with necessary sub-tasks or mental operations to accomplish the overall goal, alleviating limited working-memory capacity (e.g., 'decluttering' a visual display).
4. Economics: cheaper than people; needs programming rather than training. Not necessarily better! User friendliness and user satisfaction may suffer (e.g., portable traffic lights, automated telephone menus).
5. Productivity: limited manpower is available (e.g., the US Army's unmanned aerial vehicles: with a limited number of pilots, one pilot flies more drones, at an increase in workload).

What is the cry wolf effect?

Under-trusting automation as a result of repeated false alarms, which can lead operators to disable it, even without prior experience of it failing.

What are the three sources for inferring necessary changes in levels or stages of automation?

1. Gauging the environment
2. Continuous assessment of system performance
3. Continuous assessment of human workload

Such inferences are predicated on the understanding that both the human operator and the automated system are fully informed of each other's current capabilities, performance output, and operational state.

What are levels of automation and which researcher(s) conceived of them?

Sheridan and Verplanck (1978) created a taxonomy of ten levels of automation, with higher levels representing increased computer autonomy. Ten is an arbitrary number; the point is that a higher level means more responsibility for the technology and less cognitive work for the user. Even at the highest levels of autonomy, humans and automation remain inter-dependent: the human does not refrain from working just because the machine is doing more; their workload changes to a different kind of work, monitoring.

Know the automation issues inherent to the Eastern Airlines Flight 401 case.

- The crew was preoccupied with a gear indication light and missed the autopilot disconnect.
- Poor feedback from the system regarding automation state: no warning indicated that the autopilot had disengaged.
- The NTSB dictated that disengagement should be clearly indicated and that the pilot confirm whether it was intentional or unintentional. Modern autopilot systems therefore produce an auditory or visual alarm upon disconnect and require a second command to acknowledge and halt the alert.

Background: In 1972, an L-1011 aircraft crashed in the Florida Everglades. The cockpit crew (captain, co-captain, and navigator) was concerned by an illuminated gear indicator light, which suggested that the landing gear may not have come down completely; it is difficult to land safely without landing gear. While all three tried to determine whether the landing gear itself or merely the light had malfunctioned, they missed the fact that the autopilot had disengaged, and no warning went off to inform them. As they investigated the light, the flight kept sinking lower and lower until it crashed.

What is automation bias?

A heuristic replacement for vigilant information seeking and processing: we ascribe greater power and authority to automation than to other sources of information and advice. It is a maladaptive consequence of over-trust and creates attentional tunneling. In one study (Wickens & Alexander, 2009), 52% of pilots complied with automated directions into an obstacle, even though the obstacle could be seen in the raw data and out the window. Why? Thinking is effortful, and individuals tend to overestimate how much the automation is capable of intervening when things go wrong.

