AI Fundamentals with IBM SkillsBuild: AI Ethics
Rutherford is trying to implement a privacy control to fortify the AI system that he is working on. He decides to limit the amount of personal and sensitive data that is collected and takes steps to ensure that the data collected is only as granular as needed. Which of the following privacy controls is Rutherford using?
Data minimization
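As a rough illustration (not from the course material), a minimal Python sketch of data minimization might look like the following; the field names and granularity rules are hypothetical:

```python
# Toy sketch of data minimization: collect only the fields the feature needs,
# and coarsen the rest so it is no more granular than necessary.
NEEDED_FIELDS = {"age", "zip_code", "watch_history"}

def minimize(record):
    # Keep only the fields required for the use case; everything else is never stored.
    kept = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    # Reduce granularity: age band instead of exact age, partial ZIP instead of full ZIP.
    kept["age"] = f"{(kept['age'] // 10) * 10}s"       # e.g. 37 -> "30s"
    kept["zip_code"] = kept["zip_code"][:3] + "XX"     # e.g. "10027" -> "100XX"
    return kept

raw = {"name": "Rutherford", "ssn": "123-45-6789", "age": 37,
       "zip_code": "10027", "watch_history": ["doc1", "doc2"]}
print(minimize(raw))   # name and ssn are dropped; age and ZIP are coarsened
```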
Which of the following adds random noise during model training to reduce the impact of any single individual on the model's outcomes and to give a guarantee that an individual in the training data set could not be identified?
Differential privacy
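The course describes noise added during model training (as in DP-SGD); the simpler query-level Laplace mechanism below shows the same idea in a few lines of Python (numpy assumed, numbers made up):

```python
# Laplace mechanism sketch. A counting query has sensitivity 1: adding or removing
# one person changes the count by at most 1, so noise drawn from
# Laplace(scale = sensitivity / epsilon) yields an epsilon-differentially-private answer.
import numpy as np

def dp_count(data, predicate, epsilon=0.5, sensitivity=1.0):
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 31, 45, 52, 29, 61, 38]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy count; differs run to run
```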
When you asked her why there were so many romantic comedies in her Netflix recommendations, your grandmother explained that Netflix looks at the movies she watches and the movies she likes. She said they use that information to look for other movies to recommend. Since she watches a lot of romantic comedies, she gets a lot of them recommended to her. Which pillar of AI ethics is Netflix adhering to in this example?
Explainability
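A toy sketch, assuming a made-up watch history and catalog (this is not Netflix's actual system), of a recommender that can state why it recommends a title, which is the heart of explainability here:

```python
# Content-based toy recommender that produces a human-readable reason.
from collections import Counter

watched = {"When Harry Met Sally": "rom-com", "Notting Hill": "rom-com",
           "The Martian": "sci-fi"}
catalog = {"You've Got Mail": "rom-com", "Blade Runner": "sci-fi",
           "10 Things I Hate About You": "rom-com"}

genre_counts = Counter(watched.values())          # e.g. {"rom-com": 2, "sci-fi": 1}
top_genre, seen = genre_counts.most_common(1)[0]

for title, genre in catalog.items():
    if genre == top_genre:
        print(f"Recommending '{title}' because you watched {seen} {top_genre} titles.")
```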
Lester is working on an AI team that is concerned with making sure that all the people in his company receive equitable treatment. In which of the following areas of AI ethics is Lester's team working?
Fairness
Daniel is trying to get into nursing school. His resume is excellent, but he was turned down by all the schools he applied to. When he looked into the admissions process, he found that all the schools used the same AI model to process the applications. When Daniel looked at the data for the past 10 years, he found that in every year no less than 60% of applicants were female and the rest were male. He also found that every year at least 90% of the students accepted were female. According to this data, which of the following seems to be true?
Females are a privileged group.
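With hypothetical counts consistent with the scenario (60% of applicants female, 90% of acceptances female), the arithmetic behind that conclusion can be checked against the common four-fifths (80%) rule:

```python
# Hypothetical counts: 1000 applicants, 100 acceptances.
applicants = {"female": 600, "male": 400}
accepted   = {"female": 90,  "male": 10}

rate = {g: accepted[g] / applicants[g] for g in applicants}
print(rate)  # {'female': 0.15, 'male': 0.025}

disparate_impact = rate["male"] / rate["female"]   # unprivileged rate / privileged rate
print(round(disparate_impact, 3))                  # 0.167, far below the 0.8 threshold
```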
Imagine you're building an AI-powered tutor for students. To personalize learning experiences, the system needs access to some student data. However, you prioritize protecting student privacy and don't want to use anything that could be considered personal information (PI). Which of the following would be considered personal information (PI)?
Home address
Clara has been brought in to evaluate an AI system. Based on the following graphic, she finds that Model B is more interpretable than Model A. Which of the following best describes why Model B is more interpretable than Model A?
Model B's flow is clear and easy to understand.
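A generic illustration (scikit-learn assumed; these are not the actual Models A and B from the graphic): a small decision tree is interpretable because its decision flow can be printed and read directly.

```python
# Train a shallow decision tree on toy data and print its rules.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 0], [40, 1], [35, 1], [22, 0], [50, 1], [30, 0]]   # [age, has_history]
y = [0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "has_history"]))
# Prints something like:
# |--- has_history <= 0.50
# |   |--- class: 0
# |--- has_history >  0.50
# |   |--- class: 1
```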
While preparing a data set for an AI system dealing with patient data, Nora needs to determine which of the following information qualifies as sensitive personal information (SPI) and needs to be removed from the data set. Which of the following would be considered SPI?
Patient name
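A rough sketch of how such fields might be removed or pseudonymized before the data set is used; the column names and salt are hypothetical:

```python
# Drop direct identifiers and keep only a pseudonymous key so rows can still be linked.
import hashlib

SPI_FIELDS = {"patient_name", "ssn", "medical_record_number"}

def deidentify(record, salt="replace-with-secret-salt"):
    clean = {k: v for k, v in record.items() if k not in SPI_FIELDS}
    clean["patient_id"] = hashlib.sha256(
        (salt + record["patient_name"]).encode()).hexdigest()[:12]
    return clean

row = {"patient_name": "Nora Example", "ssn": "987-65-4321",
       "blood_pressure": "120/80", "diagnosis_code": "I10"}
print(deidentify(row))
```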
Maria is on a team creating an AI model to help recruit engineers. Maria wants to ensure that the data set does not include specific attributes such as race, age, sex at birth, and ethnicity. What type of attributes are these examples of?
Protected attributes
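A minimal sketch, assuming hypothetical column names, of stripping protected attributes from a training table before the recruiting model sees them:

```python
# Flag and drop protected attribute columns.
PROTECTED = {"race", "age", "sex_at_birth", "ethnicity"}

def strip_protected(columns, rows):
    keep = [c for c in columns if c not in PROTECTED]
    flagged = sorted(set(columns) & PROTECTED)
    if flagged:
        print(f"Removing protected attributes: {flagged}")
    return keep, [[r[columns.index(c)] for c in keep] for r in rows]

cols = ["years_experience", "age", "race", "degree"]
data = [[5, 34, "A", "BSc"], [8, 41, "B", "MSc"]]
print(strip_protected(cols, data))
```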
The first x-ray in the following graphic shows no disease. An adversary adds noise to the data and then sends the data through the AI model, which results in an x-ray showing disease present. Which of the following areas of AI ethics is involved in defending against such an attack?
Robustness
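A minimal numpy sketch of the kind of attack being defended against: an FGSM-style perturbation on a toy logistic regression, with made-up weights and input, where a small crafted change to the input flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0, 0.5])           # toy model weights
x = np.array([0.2, 0.9, 0.1])            # clean input, true label y = 0
y = 0.0

p = sigmoid(w @ x)                       # gradient of cross-entropy wrt x is (p - y) * w
grad_x = (p - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad_x)        # small crafted noise in the worst-case direction

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # prediction shifts from class 0 toward class 1
```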
Uma works for a large travel booking company. She is responsible for disclosing all information related to the data her company uses to build their vacation recommendation AI system. Which of the following is Uma in charge of?
Transparency
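A sketch of the kind of disclosure record Uma might keep; the field names are illustrative and loosely inspired by the AI FactSheet idea rather than any specific template:

```python
# Simple data-disclosure record for the recommendation system.
data_factsheet = {
    "data_sources": ["booking history", "destination reviews"],
    "collection_period": "2019-2024",
    "consent_obtained": True,
    "personal_information_removed": True,
    "known_gaps_or_biases": "under-represents travelers without online bookings",
    "contact_for_questions": "data-governance@example.com",
}
for field, value in data_factsheet.items():
    print(f"{field}: {value}")
```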
Rose is working as an auditor on a large AI project. In an effort to provide transparency, she is creating a framework to map key details of the system to the people who can provide that information. Which of the following teams is responsible for the algorithms used to train the AI?
***NOT*** Data team, ***NOT*** Design team
Bao is reviewing a diagnostic AI model when he notices something strange. The data set is larger than it should be. After reviewing the activity over the past weeks, he notices that extra items were placed in the data set that could have resulted in the AI model rendering incorrect diagnoses. However, the AI model was able to process the bogus data and still render accurate and correct diagnoses for each data item. This is an example of which of the following?
Robustness (***NOT*** Privacy, Fairness, or Transparency)
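One simple screen Bao could have run (illustrative only; it is not described in the scenario): flag records that sit far from the rest of the data set, since injected items often show up as statistical outliers.

```python
# Robust z-score (median / MAD) outlier check on toy readings with two injected values.
import numpy as np

values = np.array([98.6, 99.1, 98.4, 98.9, 120.0, 98.7, 97.9, 130.0])

median = np.median(values)
mad = np.median(np.abs(values - median))            # median absolute deviation
robust_z = 0.6745 * (values - median) / mad
print("Indices to review:", np.where(np.abs(robust_z) > 3.5)[0])  # flags 120.0 and 130.0
```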
Which of the following business roles is responsible for the deployment of AI models?
AI Ops Engineer
Which of the following terms describes systematic errors in AI systems that, intentionally or not, generate unfair decisions?
Bias