ITE302c_All

Question 1 What causes bias? A. Bias is caused by the media. B. Biases are caused by the opinions of our families. C. Biases are learned from our families, our social groups, and the media. D. Bias is biological; we are born with it.

C

Question 1 Which of the following is a normative ethics theory that states that maximizing happiness is the primary standard for determining what is right or wrong? A. Categorical imperative B. Deontology C. Utilitarianism D. Virtue ethics

C

Question 10 How can a visual contract be easier to understand than a written contract? A. The use of pictures without any text makes it easier for most people to comprehend the details of a contract. B. The use of pictures is more accessible to all people, whereas a written contract may be difficult for someone with a reading disability to understand. C. The use of pictures with simple text makes it easier for the layperson to understand the details of a contract. D. The use of pictures explains the contract in greater detail than a written contract, making it easier for anyone to understand the particulars.

C

Which of the following is not a valid risk response technique? A. Accept B. Avoid C. Ignore D. Transfer

C

Why does the trolley problem pose an ethical predicament? A. You have to choose between two courses of action, and either choice leads to loss of life. B. There are so many potential outcomes that it becomes difficult to choose one that is most ethical. C. The moral responsibility is split between you and the person controlling the trolley. D. You as the actor don't have sufficient control over the circumstance.

A

4. At what point should ethical consideration ideally be applied to emerging technologies? A. From their inception, through maintenance, to applying foresight regarding their decommissioning. B. Upon delivery, with appropriate warranties where necessary. C. Once an ethical issue has received negative feedback in public media. D. During periodic reviews, with ongoing customer feedback solicited.

A

In AI, the principle of privacy is most commonly referred to in the context of which of the following concepts? A. Data protection B. Transparency C. Personal protection D. Human control

A

Question 10 Which of the following is one explanation for why cognitive biases exist? A. We receive too much information and are overloaded. B. We use them to help remember things. C. They help us think logically instead of emotionally. D. They are taught to us in school.

A

Question 2 What is the fundamental attribution error? A. When you say your bad behavior is caused by the situation, but when other people display the same bad behavior, it is caused by a personality trait. B. When you believe your ideas are normal and that the majority of people agree with you. C. When you incorrectly assume a cause and effect relationship for two correlated variables. D. When you believe your chances of experiencing something negative are lower and your chances of experiencing something positive are higher than others.

A

Question 2 Which of the following describes why explainability is important? A. It provides accountability and trust. B. It enables you to explain a system for shareholder purposes. C. It is necessary so that internal stakeholders can understand how a system works. D. It provides interpretations of a system's actions.

A

Question 6 What is the difference between beneficence and non-maleficence? A. Beneficence refers to "do only good" and non-maleficence refers to "do no harm." B. Non-maleficence refers only to malevolent artificial general intelligence (AGI), while beneficence can refer to any "good" emerging technology. C. Beneficence is a less important goal for the field of AI than non-maleficence. D. Beneficence and non-maleficence are quite similar and often interchangeable.

A

Question 7 What does it mean to call a click-through agreement a "contract of adhesion"? A. One party is forced into a "take-it-or-leave-it" situation. B. One party is forced into using the service after agreeing. C. Both parties are legally bound by the agreement. D. Both parties are equally responsible for ensuring the agreement is adhered to.

A

Question 9 Your organization has developed an AI system that recommends treatments for hospital patients. Some questions have been raised about the ethics of how these treatments are determined. What applied ethics domain do these concerns fall under? A. Bioethics B. Environmental ethics C. Engineering ethics D. Business ethics

A

Which of the following is often in opposition to moral relativism? A. Evidence-based policy B. Customs and conventions C. Subjective perspectives D. Cultural mores

A

Which of the following resources does the National Institute of Standards and Technology (NIST) provide to organizations? A. Reference materials B. Security tools C. Industrial configurations D. Measurement technologies

A

Which school of philosophical thought primarily advocates for the greatest good for the greatest amount of people? A. Utilitarianism B. Deontology C. Kantian ethics D. Virtue ethics

A

Why is the question of robot rights and emancipation less pressing than addressing issues of bias, privacy, transparency, and the other principles discussed in the various ethical frameworks? A. These rights necessitate that robots become sentient entities, which is currently not feasible. B. There is no legal precedent for granting rights to entities that are not humans. C. Robots are mechanical instruments and therefore don't deserve to have rights. D. Humans are anthropocentric and don't want to extend rights to other sentient entities.

A

Your business handles the personal data of California residents. Which of the following regulations would enable a resident to request that their data be deleted from your company's files? A. CCPA B. OECD Privacy Guidelines C. PCI DSS D. COPPA

A

Which of the following are tasks in the overall risk management process? (Select three.) A. Deployment B. Identification C. Elimination D. Mitigation E. Analysis

BDE

How can confirmation bias impact us socially? (Select two.) A. It can impede socio-political cooperation. B. It can lead to groupthink, which can in turn halt forward progress. C. It can lead to a diverse set of friends. D. It can prevent us from being social with other people.

AB

Question 10 Which of the following are examples of how AI can limit human autonomy? (Select two.) A. Weapon systems can limit human autonomy as humans may not have decision-making capability or understand the decision making. B. AI systems might impact certain vulnerable groups such as the elderly and children differently than the rest of the population, which could limit those groups' autonomy. C. AI systems can assist individuals with automated, repetitive, or dangerous tasks. D. AI systems can perform tasks that humans cannot, such as processing millions of data records in a matter of seconds.

AB

Which of the following are advantages to adopting standards frameworks like ISO 27000? (Select two.) A. Formal certification processes that provide competitive advantage B. International support, recognition, and involvement C. Technology-specific focus and precise implementation instructions D. Regulatory weight and legal enforcement

AB

Which of the following are important aspects of resolving complex and confounding business pressures? (Select two.) A. Engaging with multiple stakeholders to understand their particular needs B. Managing expectations that not everyone can get what they want, when they want it C. Assuring everyone that their desires can be accommodated without compromise D. Prioritizing ethical and safety concerns over business desires in all cases and situations

AB

6. Which of the following are important ethical elements to safeguard within ethical AI systems? (Select two.) A. Accountability and management of bias. B. The number of layers, tensors, or parameters used in a model. C. Transparency and explainability, balanced with privacy. D. Performance and optimization.

AC

Question 8 Which of the following are ways to participate in engineering activism? (Select two.) A. Follow a code of ethics. B. Perform all tasks required of you. C. Engage with the public. D. Follow ethics by design.

AC

Which of the following are logical arguments in favor of an organization maintaining compliance? (Select two.) A. Avoidance of reputational damage B. Reduced costs of development C. Long-term cost savings due to avoiding fines D. Reduced time to deployment

AC

Which of the following describe corporate hegemony? (Select two.) A. Consolidating interests through mergers and acquisitions B. Spending large sums on corporate branding and marketing C. Locking out smaller players, leading to monopolies or cartels D. Making multiple investments in a similar space to improve the chances of success

AC

Which of the following ethical considerations should have priority in an emergency situation like the use of contact-tracing solutions during a pandemic? (Select two.) A. Accountability B. Bias C. Privacy D. Explainability

AC

Which of the following statements are correct about a variable that is normally distributed? (Select two.) A. The mean, median, and mode of all measurements is the same, and all are located at the center of the distribution. B. The tails of a normal distribution are denser than the center. C. The variable's distribution, when graphed, exhibits a symmetrical bell shape. D. Less than half of all measurements fall within one standard deviation of the mean.

AC
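
These two properties are easy to check numerically. A minimal sketch, assuming Python with numpy and a hypothetical simulated sample (not data from the question):

    import numpy as np

    # Hypothetical sample drawn from a normal distribution
    rng = np.random.default_rng(42)
    sample = rng.normal(loc=100, scale=15, size=100_000)

    mean, median, std = sample.mean(), np.median(sample), sample.std()
    within_one_sd = np.mean(np.abs(sample - mean) <= std)

    print(mean, median)   # nearly identical, at the center of the bell shape
    print(within_one_sd)  # roughly 0.68, i.e. well over half within one standard deviation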

You plan on streamlining your company's product experience, but you also want to uphold the agency and autonomy of your users. Which of the following actions would uphold these principles? (Select two.) A. Refraining from guiding users into something they didn't wish for or intend B. Enabling government agencies to have a personalized interface with software C. Respecting the right of the user to choose and customize their experiences D. Applying machine intelligence to simulate customer behavior

AC

Question 2 Which of the following statements are promoted by the categorical imperative? (Select three.) A. Each person must use reason to will moral laws. B. You have a moral duty to choose your actions based on their potential outcomes. C. Act in such a way that your actions may become a universal law. D. Don't treat people as a means to an end; treat them always as an end.

ACD

Question 10 What does it mean to say that human rights are inalienable? (Select two.) A. Inalienable rights are inherent in all human beings. B. Inalienable rights are derived from tradition. C. Inalienable rights are conditional. D. Inalienable rights cannot be taken away except in extreme circumstances.

AD

Question 7 Which of the following are valid concerns regarding electronic personalities? (Select two.) A. That they would create unfair advantages, as not everyone has access to the same rights. B. That individuals would start applying for electronic personalities. C. That they will make it more difficult to access information. D. That they would absolve manufacturers of liability.

AD

1. Which of the following defines the AI black box problem? A. Not being able to know how something crashed or failed B. The challenge of understanding the inner workings of opaque systems C. Machine intelligence making something illusory, like pulling a rabbit from a hat D. A dangerous machine intelligence put in a digital prison

B

7. Which of the following is the generally agreed upon current state of the art of AI? A. Superintelligence B. Narrow AI C. Perceptrons D. Strong AI

B

How do AI and other data-driven technologies use probability? A. By determining the objective likelihood of some event happening B. By providing a model of belief about the likelihood of some event happening C. By guaranteeing that some event will occur with 100% likelihood D. By estimating the likelihood of some event happening without input data

B

How does increasing AI performance often conflict with the desire for explainability? A. Increasing AI performance sometimes reduces the transparency of input data used in training, making it more difficult to explain decision-making processes. B. Increasing AI performance sometimes leads to greater model complexity, making it more difficult to explain decision-making processes. C. Increasing AI performance sometimes leads to certain evaluation metrics no longer being useful, making it more difficult to explain decision-making processes. D. Increasing AI performance sometimes removes human-in-the-loop (HITL) methods, making it more difficult to explain decision-making processes.

B

If an AI-enabled system enables addictive behavior, which of the following makes for the most compelling argument to stop development work on that system? A. The process for obtaining consent has not been made transparent to the user. B. The system, as designed, acts counter to the well-being of the users. C. There is a lack of accountability on the part of the user since they overuse the service. D. The user will share more data with the system because of increased use.

B

If you are attempting to build a new framework for the research and development (R&D) of AI, which of the following frameworks might you look at first for its emphasis in this area? A. The G20 AI Principles B. The Beijing AI Principles C. The American Medical Association's definition of artificial intelligence D. The Montreal Declaration for a Responsible Development of Artificial Intelligence

B

In the following scatter plot, the GrossIncome variable is plotted against the Revenue variable. What type of correlation does this plot suggest? A. There is a weak positive correlation between both variables. B. There is a strong positive correlation between both variables. C. There is a weak negative correlation between both variables. D. There is a strong negative correlation between both variables.

B
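
The scatter plot itself is not reproduced in this study set. The strength and direction of such a relationship is commonly summarized with the Pearson correlation coefficient; here is a rough sketch assuming Python with numpy and hypothetical GrossIncome/Revenue values:

    import numpy as np

    # Hypothetical values standing in for the plotted variables
    revenue = np.array([10, 20, 30, 40, 50, 60], dtype=float)
    gross_income = np.array([4, 9, 13, 21, 24, 31], dtype=float)

    r = np.corrcoef(revenue, gross_income)[0, 1]
    print(r)  # values close to +1 indicate a strong positive correlation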

In using AI-enabled solutions within the context of medical imaging analysis, which of the following is the most important ethical consideration? A. Privacy B. Explainability C. Bias D. Security

B

Question 1 What does "ethics by design" mean? A. A reference to one of the tenets of engineering activism. B. An approach in which ethics is considered from the initial design stage. C. A reference to the framework set forth by IEEE's Ethically Aligned Design. D. A creative design approach as the focus for ethics.

B

Question 3 Which of the following describes an opt-out policy in regard to the collection of private data? A. Data about the user is always collected, regardless of the user's consent. B. Data about the user is automatically collected unless that user explicitly states that you should not do so. C. Data about that user isn't collected until that user explicitly states you are allowed to. D. Data about the user is never collected, regardless of the user's consent.

B

Question 8 Which of the following is a type of technology contract that establishes the goals of both parties and describes how those goals will be achieved? A. Terms of Service (ToS) B. Service-level agreement (SLA) C. Software as a Service (SaaS) D. End-user license agreement (EULA)

B

Question 9 How can AI uphold justice? A. The more AI-based products being used in the justice system, the more justice can be upheld. B. AI systems can be designed from the start to help promote fairness and minimize bias. C. AI systems can replace human judges, who are often biased. D. AI can automate many of the clerical tasks involved in the justice system.

B

Question 9 How does a smart contract differ from a traditional contract? A. Smart contracts serve a different purpose than traditional contracts. B. Smart contracts eliminate the need for a central authority. C. Smart contracts guarantee that all parties are anonymous. D. Smart contracts are more effective than traditional contracts.

B

Which of the following frameworks primarily promotes human rights? A. The Montreal Declaration B. The Toronto Declaration C. The Asilomar AI Principles D. The Beijing AI Principles

B

Which of the following is the most important argument in favor of content moderation in online platforms? A. It prevents the development of monopolies in terms of content creators. B. It prevents the spread of disinformation that can cause harm to vulnerable populations. C. It creates adequate incentives for everyone to share their opinions. D. It helps uphold freedom of expression for everyone and doesn't give anyone special rights.

B

You have a dataset of customers that includes each customer's gender, location, and other personal attributes. The label you are trying to predict is how much sales revenue each customer is likely to generate for the business based on these attributes. What type of machine learning outcome is this problem suited for? A. Dimensionality reduction B. Regression C. Clustering D. Classification

B
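
Because the label (sales revenue) is a continuous number, this is a regression task. A minimal sketch, assuming Python with scikit-learn and hypothetical, already-encoded customer features:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical encoded attributes (e.g., gender, location) and revenue labels
    X = np.array([[0, 1], [1, 0], [0, 0], [1, 1], [0, 2]], dtype=float)
    y = np.array([120.0, 340.5, 80.0, 410.0, 150.0])  # continuous target -> regression

    model = LinearRegression().fit(X, y)
    print(model.predict([[1, 2]]))  # predicted revenue for a new customer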

You're training a model to classify whether or not a bridge is likely to collapse given several factors. You have a dataset of thousands of existing bridges and their attributes, where each bridge is labeled as having collapsed or not collapsed. Only a handful of bridges in the dataset are labeled as having collapsed—the rest are labeled as not collapsed. Given your goal of minimizing bridge collapse and the severe harm it can cause, which of the following metrics would be most useful for evaluating the model? A. Accuracy B. Recall C. Precision D. Confusion matrix

B
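
Recall measures the share of actual collapses the model correctly flags, which is what matters when a missed collapse is catastrophic; accuracy can look excellent on such an imbalanced dataset even when collapses are missed. A small illustration, assuming Python with scikit-learn and hypothetical labels:

    from sklearn.metrics import accuracy_score, recall_score

    # Hypothetical labels: 1 = collapsed, 0 = did not collapse
    y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
    y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]  # the model misses one real collapse

    print(accuracy_score(y_true, y_pred))  # 0.9 -- looks fine despite the miss
    print(recall_score(y_true, y_pred))    # 0.5 -- exposes the missed collapse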

2. Which of the following elements are important aspects of ethical integrity with regard to data? (Select two.) A. What type of data (audio, visual, etc.) is being collected and/or utilized. B. Whether the data was gathered in an ethical manner. C. If the holders of data are trustworthy entities. D. If the data is commercially viable or monetarily valuable.

BC

3. Which of the following best describes why data is sometimes compared to oil? (Select two.) A. Data can damage the environment. B. Data can be monetarily valuable. C. Data can fuel algorithmic technologies. D. Data can be easily monopolized.

BC

8. Which of the following describe important aspects of why emerging technologies are so capable and powerful? (Select two.) A. They are exciting and captivating to many people. B. They can automate very complex operations. C. They may be able to self-improve by learning from data. D. They can displace workers by performing their jobs more efficiently.

BC

9. Management asks someone to do a data-related task. Which of the following would likely be ethically problematic? (Select two.) A. Aggregate data together. B. Delete any erroneous data. C. Manipulate data or alter its interpretation. D. Change data to another format.

BC

Which of the following are requirements set forth by the Biometric Information Privacy Act (BIPA)? (Select two.) A. Organizations must store biometric data in local, on-premises databases. B. Organizations must obtain consent from individuals regarding the collection and use of biometric data. C. Organizations must destroy biometric data in a timely fashion. D. Organizations must not transmit biometric data across an unsecured network like the Internet.

BC

Which of the following are ways that regulations differ from ethical frameworks? (Select two.) A. Regulations are flexible in their implementation. B. Regulations provide a clear basis for potential litigation. C. Regulations have legal enforcement behind them. D. Regulations are often industry led.

BC

Which of the following ethical domains does the IEEE 7000 series explore? (Select two.) A. Personnel safety B. Emulated empathy C. Machine-readable privacy terms D. Fair competition

BC

Question 3 Which of the following are actions that can help combat implicit bias? (Select three.) A. Surrounding yourself with others who have similar experiences. B. Exposing yourself to "counter-stereotypical" examples. C. Interacting with diverse groups of people. D. Cultivating awareness of your own biases. E. Obtaining your information from the same one or two media sources that your family and friends access.

BCD

Question 4 Which of the following describe how adopting ethical practices can be a strategic differentiator? (Select three.) A. It will reduce your business obligations toward customers and business partners. B. It will encourage applicants to apply for your company. C. It will build customer trust. D. It will ensure you comply with regulations. E. It will support the development of strategic partnerships.

BCE

10. Which of the following describe important aspects in the role of an ethical AI engineer? (Select two.) A. Building and maintaining computational hardware. B. Cleaning and sorting data, and auditing for bias. C. Writing new equations to express intelligence. D. Keeping up with the latest developments and vulnerabilities.

BD

Question 4 Which of the following are key principles of privacy by design? (Select two.) A. Organizations must keep the focus of privacy protections on the business rather than the user. B. Organizations must incorporate privacy protections throughout the project lifecycle. C. Organizations must not expose the operational practices and technologies used to protect user privacy. D. Organizations must be proactive in protecting against privacy risks, not reactive.

BD

Which of the following are important elements of the data minimization principle? (Select two.) A. Only delete data that can be easily replaced B. Only collect data that is strictly necessary C. Only compress data that needs to be kept as small as possible D. Only keep data for as long as it is needed

BD

Which of the following does the Brazilian General Data Protection Act (LGPD) mandate? (Select two.) A. Data protection analysts B. Data protection officers C. Data protection audits D. Data protection impact assessments

BD

Question 3 Which of the following statements accurately describes the philosophical concept of predeterminism? A. All future events are determined by preceding events, as in a chain, but human beings may still be able to interfere with this chain of events. B. All events are predestined to happen by a supernatural force. C. All events, past, present, and future, are determined in advance. D. Human beings are able to make choices whose outcomes are not already determined.

C

Question 4 How do norms differ from morals? A. Norms form the basis for morals. B. Morals are collective; norms are more personal. C. Morals involve value judgments; norms do not. D. Norms are universal to all cultures; morals are not.

C

Question 4 When conducting an opinion poll, which of the following biases do you need to guard against the most when collecting your data? A. Misclassification bias B. Modeling bias C. Sampling bias D. Correlation bias

C

Question 5 Which of the following best describes beneficence? A. Beneficence is a term coined by IBM that relates to their Green Horizons initiative in 2014. B. Beneficence is the promotion of well-being for moral agents like humans. C. Beneficence is the promotion of well-being, not just for moral agents like humans, but also for animals, the environment, and societies. D. Beneficence is the promotion of efficient systems that perform rapidly and benefit companies.

C

Question 5 What is the purpose of differential privacy? A. To remove the direct identifiers that can be used to identify individuals. B. To ensure the data is completely confidential and cannot be read by unauthorized parties. C. To enable parties to share private data without revealing individuals represented in the data. D. To only allow certain parties to access certain portions of the data.

C
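
One common mechanism for this is adding calibrated random noise to aggregate query results before they are shared, so no individual's record can be singled out. A rough sketch of the Laplace mechanism, assuming Python with numpy (the epsilon value and count are hypothetical):

    import numpy as np

    def noisy_count(true_count: int, epsilon: float = 0.5) -> float:
        # Laplace mechanism: noise scale = sensitivity / epsilon (a count has sensitivity 1)
        noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    print(noisy_count(1_234))  # released value is close to, but not exactly, the true count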

Question 5 Which of the following is an example of a cognitive bias? A. Modeling bias B. Correlation bias C. Anchoring bias D. Misclassification bias

C

Question 6 Which of the following describes the concept of liability? A. Taking ownership of an assigned task. B. Answering for one's actions to an authority figure. C. The legal responsibility for one's actions. D. The moral duty one has to take action.

C

Question 8 Which of the following is an example of applied ethics? A. Pluralism B. Virtue ethics C. Professional ethics D. Moral relativism

C

Question 9 Which of the following describes an illusory-correlation bias? A. When you correlate variables that do not exist in your data set. B. When you incorrectly assume a correlation because there is an illusory confounding variable. C. When you incorrectly assume a cause and effect relationship because two variables are correlated. D. When you correlate a variable with a confounding variable.

C

Which of the following describes an ethical framework? A. Ethical frameworks raise timeless ethical questions that are not easily put into action. B. Ethical frameworks apply meta-ethical theories to everyday business operations. C. Ethical frameworks seek to mitigate ethical concerns by creating actionable steps. D. Ethical frameworks consolidate regulatory requirements for an industry.

C

Which of the following describes dual-use or multipurpose data? A. Data that can be easily shared with a partner or family member for mutual enjoyment. B. Data that can be transformed into multiple forms, e.g. extracting audio from a video file. C. Data collected for one application that could also be applied to another application in a different domain. D. Data that can be used in multiple devices or formats, such as a video on a Smart TV, tablet, and computer.

C

Which of the following describes the principle of transparency in the context of AI systems? A. Transparency enables human observers to understand the decision-making process of an AI system. B. Transparency enables human observers to tweak the decision-making process of an AI system. C. Transparency enables human observers to see inside the decision-making process of an AI system. D. Transparency enables human observers to reproduce the decision-making process of an AI system.

C

Which of the following explains why efficiency can sometimes incur systemic fragility? A. Increased efficiency tends to compound over time B. Increased efficiency tends to create cost savings C. Efficiency benefits may lead to complex second-order costs D. High-efficiency machines often require more maintenance

C

Which of the following is the most important ethical consideration regarding technical developments like deepfakes? A. They violate data sharing agreements in many jurisdictions. B. They are built on technological progress made by a third-party organization. C. They usurp a person's likeness and can then be weaponized against them. D. They take away monetization opportunities, leaving individuals unfairly compensated for their data.

C

Which of the following metrics is used to evaluate a linear regression machine learning model? A. Goodhart's Law B. Accuracy C. Cost function D. Receiver operating characteristic (ROC)

C
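
For linear regression the cost function is typically mean squared error (MSE), the quantity training tries to minimize. A minimal worked sketch, assuming Python with numpy and hypothetical predictions:

    import numpy as np

    y_true = np.array([3.0, 5.0, 7.5, 10.0])  # actual values
    y_pred = np.array([2.8, 5.4, 7.0, 9.5])   # model predictions

    mse = np.mean((y_true - y_pred) ** 2)     # average squared error = the cost
    print(mse)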

Which of the following software development principles is essential in the real-world deployment of AI-enabled software applications in critical scenarios like self-driving cars? A. Version control of the AI models deployed B. Continuous integration and deployment of patch updates C. Robustness to adversarial examples D. Architectural design analysis

C

Which type of entity are the OECD Principles on Artificial Intelligence mostly geared towards? A. Individuals B. Private corporations C. National governments D. Municipal governments

C

Why are anonymization and pseudonymization insufficient protection measures against breaches of data privacy and security? A. They destroy the usefulness of the data. B. They only work in scenarios with particular kinds of personal information. C. They can be broken by combining this data with other publicly available data. D. They don't integrate well into data science and machine learning workflows.

C

Why do smart toys raise additional ethical concerns beyond those raised by other products and services that use AI? A. They are used in the privacy of homes rather than in public settings, like other products or services. B. The smart toys store personal data on the device, which can be stolen. C. Children are more susceptible to manipulation and therefore need extra protective measures. D. It is difficult to obtain informed consent for the use of the smart toy.

C

Question 5 Which of the following are consequences of saying that someone or something has moral agency? (Select two.) A. The moral agent acts in a morally correct manner. B. The moral agent follows a deontological code of ethics. C. The moral agent can be held responsible for their actions. D. The moral agent is capable of determining right and wrong.

CD

Question 7 Which of the following statements are true regarding the purpose of moral psychology? (Select two.) A. Moral psychology seeks to understand the nature of what it means to be moral. B. Moral psychology seeks to understand what the best way to act morally is. C. Moral psychology seeks to understand how the human mind develops morality. D. Moral psychology seeks to understand why people act morally or immorally.

CD

Which of the following are possible benefits of a human-in-the-loop (HITL) architecture? (Select two.) A. Improving the speed of autonomous decision making B. Eliminating the potential for human error in decision making C. Mitigating excessive scope or potential collateral damage D. Balancing the negative effects of an AI system on people with the effects on environments and objects

CD

Which of the following risk analysis methods use words like "likely," "unlikely," and "rare" to describe the likelihood of risk, and words like "low," "medium," and "high" to describe the impact of risk? (Select two.) A. Semi-qualitative analysis B. Quantitative analysis C. Semi-quantitative analysis D. Qualitative analysis

CD

How does the "virtuous cycle" that benefits Big Tech operate? A. Better classes of customers lead to richer and more refined data for algorithmic systems. B. Organizations write algorithms with fewer biases, which leads to fairer outcomes. C. By acting virtuous, the public respects Big Tech more and more. D. Data-driven algorithms improve solutions, leading to new customers, and better data.

D

Question 1 Which of the following, by itself, qualifies as personally identifiable information (PII)? A. System events added to a log B. Temperature readings for an office building C. A user's customer ID in an online ordering system D. A user's home address

D

Question 2 Why are groups like race and religion considered protected classes? A. Organizations are legally not allowed to collect information about these groups. B. These groups can be used to personally identify someone. C. People use these groups as the basis for their identities. D. These groups have been used as the basis for wholesale discrimination.

D

Question 3 Which of the following describes personhood? A. Personhood is an individual's right to freedom. B. Personhood is a concept that applies to narrow AI. C. Personhood is the legal protection provided to AI systems. D. Personhood is often used to dictate how something is treated.

D

Question 6 Are criminal justice risk assessments race-neutral? A. Yes, technology in itself is not racist. B. No, it is designed with intention to be unfair. C. Yes, algorithms replace human judgement and they are unbiased. D. No, the data is biased as it reflects historical bias.

D

Question 6 Why is deciding how to act using moral reasoning not always a feasible goal for human beings? A. Moral reasoning has few tangible benefits for most people. B. Most people are not educated on normative ethical theories and therefore cannot perform true moral reasoning. C. Moral reasoning is too complicated to apply to a real-world situation. D. Human decision making is often influenced by emotion and not logic.

D

Question 7 Which type of bias causes people to trust an automated decision-making system (ADS) over a human's decision? A. Confirmation bias B. Complacency bias C. Implicit bias D. Automation bias

D

The Children's Online Privacy Protection Act (COPPA) safeguards the privacy of which age group's personal information? A. Anyone under 18 years old B. Anyone between 13 and 18 years old C. Anyone between 5 and 13 years old D. Anyone under 13 years old

D

Which of the following best describes capability caution as referenced in the Asilomar AI Principles? A. If there is no understanding of the internal mechanisms of AI, then AI development should be halted. B. Should there be a greater reliance on AI, measures should be taken to ensure that humans are still capable of finding work. C. We should keep limits on what artificial general intelligence (AGI) is capable of. D. Given a lack of consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

D

Which of the following is a case study that best represents the principle of professional responsibility? A. The IEEE Ethically Aligned Design's discussion on classical ethics B. The Beijing AI Principles' tenets about the use of AI C. The Asilomar AI Principles' definition of capability caution D. The American Medical Association's definition of AI as augmented intelligence

D

Which of the following is a notable aspect of the Personal Information Protection and Electronic Documents Act (PIPEDA) when compared to similar laws and regulations? A. The early date of its inauguration B. Its nationwide scope and specific national focus C. Its exclusive focus on a single domain rather than a breadth of domains D. A stipulation to continue providing service even if data usage is denied

D

Which of the following is a standard or regulation that focuses on ensuring the implementation of strong cybersecurity techniques like network security and cryptography to protect data? A. PIPEDA B. FERPA C. POPI D. PCI DSS

D

Which of the following principles are most commonly cited in AI-based ethical frameworks? A. Happiness and spiritual contentment B. Human control and autonomy C. Fairness and non-discrimination D. Transparency and explainability

D

Which of the following statements accurately describes variance? A. Variance measures the error between predicted values and actual values. B. Variance measures the shape of the tails in a distribution relative to the center. C. Variance measures how much a variable's distribution differs from a normal distribution. D. Variance measures how far a data example is from the mean.

D
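
Concretely, variance is the average of the squared distances of each data example from the mean. A small worked sketch, assuming Python with numpy and hypothetical data:

    import numpy as np

    data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
    deviations = data - data.mean()       # distance of each example from the mean
    variance = np.mean(deviations ** 2)   # average squared deviation

    print(variance)    # 4.0
    print(data.var())  # numpy's population variance gives the same result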

