Review for CS310 exam (no Floridi pt. 2)


Nonmaleficence:

(Do No Harm) Privacy, Security, and Capability Caution;

Beneficence:

(Do Only Good) Promoting Well-being, Preserving Dignity, and Sustaining the Planet;

Autonomy:

(Set Standards and Norms) Power to Decide to Decide (Again);

Floridi's General Argument

1. AI is not a new form of intelligence but a new form of agency. 2. Its success is due to the decoupling of agency from intelligence. 3. Continued success requires that we envelop this decoupled agency within environments structured to be AI-friendly. 4. Conclusion: the future success of AI will be determined by the human ability to develop tools that advance points 1-3.

ACM ethics

1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing. 1.2 Avoid harm. 1.3 Be honest and trustworthy.

Five Principles of AI Ethics based on comparison of current AI Ethics Frameworks

Beneficence, Non-maleficence, Autonomy, Justice, Explicability

Ethics of Care

Carol Gilligan, Nel Noddings, Virginia Held. Comes out of second-wave feminist ethics. ● Moral life is conceived as a network of relationships with specific people; ● 'living well' means caring for those people, attending to their needs, and maintaining their trust (Rachels, 2010); ● the emphasis is on the community, rather than the individual, becoming moral.

Explicability:

Enabling the Other Principles through Intelligibility and Accountability.

Ethics Lobbying

Exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (and its enforcement) around the design, development, and deployment of AI artifacts. Solution: good legislation and enforcement; limit lobbying actors' influence on lawmakers; work to have the public exert ethical pressure; expose lobbying and distinguish it from genuine self-regulation work on codes of conduct for the AI industry.

Responsibility Ethics

Hans Jonas's responsibility ethics is an approach to ethical reasoning that begins with the notion that human beings are not only, or even primarily, acting beings, but beings that are constantly reacting, responding, to powers, forces, and events beyond our control. The scale and scope of moral action therefore cannot be confined to individuals as intentional actors; responsibility ethics demands the intentionality to take responsibility for what we decide to create and not to create. We are not just following orders for a nice paycheck.

What two points do utilitarians draw upon?

I. the principle of social utility should be used to determine morality; II. social utility can be measured by the amount of happiness produced for society as a whole.

Deontological Theories

Immanuel Kant ● Kant argued that morality must ultimately be grounded in the concept of duty, or the obligations that humans have to one another. ● For Kant, morality can never be grounded in the consequences of human actions. ● In Kant's view, morality has nothing to do with the promotion of happiness or the achievement of desirable consequences. ■ Kant defends his ethical theory on the grounds that: 1) humans are rational, autonomous agents; 2) human beings are ends-in-themselves, not means to ends.

Utilitarianism

Jeremy Bentham, John Stuart Mill ● In this view, the consequences (i.e., the ends achieved) of actions and policies provide the ultimate standard against which moral decisions must be evaluated. ● So if choosing between acts A and B, the morally correct action is the one that produces the most desirable outcome. ● Utilitarians argue that it is the consequences for the greatest number of individuals, or the majority, in a given society that deserve consideration in moral deliberation.
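The utilitarian decision rule above (choose the act with the greatest total social utility) can be sketched as a toy calculation. This is my own illustration, not part of the review; the stakeholder names and happiness scores are hypothetical, and real utilitarian deliberation is of course not reducible to a few numbers.

```python
# Toy sketch of the utilitarian decision rule: among candidate acts,
# pick the one whose consequences yield the greatest total happiness.
# All names and scores below are hypothetical.

def social_utility(outcomes):
    """Total happiness an act produces, summed over all stakeholders."""
    return sum(outcomes.values())

def choose_act(acts):
    """Return the name of the act that maximizes total social utility."""
    return max(acts, key=lambda name: social_utility(acts[name]))

# Hypothetical consequences of acts A and B for three stakeholders.
acts = {
    "A": {"ann": 5, "bo": -1, "cy": 2},  # total utility = 6
    "B": {"ann": 2, "bo": 2, "cy": 1},   # total utility = 5
}

print(choose_act(acts))  # act A wins: 6 > 5
```

Note that the rule counts only aggregate utility, which is exactly the feature deontologists like Kant object to: act A is chosen even though stakeholder "bo" is made worse off.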

Contractualism

Locke, Rousseau, Hobbes ● From the perspective of social-contract theory, a moral system comes into being by virtue of certain contractual agreements between individuals. ● One virtue of the social-contract model is that it gives us a motivation for being moral. ● This type of motivation for establishing a moral system is absent from both the utilitarian and deontological theories. ● So a contract-based ethical theory would seem to have one advantage over both.

Ethics Shirking

Malpractice of doing increasingly less ethical work (fulfilling duties, respecting rights, honoring commitments) wherever the return on that work is perceived to be lower. Seen where there is no clear allocation of responsibility for harms. Solution: more fairness and less bias; reallocate responsibilities with praise and blame, reward and punishment, causal accountability, and legal liability.

Ethics Dumping

Malpractice of exporting research activities about digital processes, products, and services to contexts or places where they would be ethically unacceptable, and of importing the outcomes of such unethical research. Solution: ethical management of research through public funding; establish a certification system for AI artifacts, with provenance and training-dataset and model labels/cards showing production standards.

Ethics Bluewashing

Malpractice of making unsubstantiated claims that a technology is more digitally ethical than it is in reality (through masking, deceiving, or distracting) in order to save costs or gain advantages. Solution: transparency and education; improve the understanding of the public, lawmakers, and technologists with clear detection metrics and certifications, and make acts of AI fraud visible and shameful.

Ethics Shopping

Malpractice of picking and choosing among ethical principles and frameworks based on one's own interests, motivations, and benefits. Solution: establish clear, shared, and publicly accepted standards.

Virtue ethics

Plato and Aristotle ● Virtue ethics ignores the roles that consequences, duties, and social contracts play in moral systems when determining the appropriate standard for evaluating moral behavior. ● It focuses instead on criteria having to do with the character development of individuals and their acquisition of good character traits from the kinds of habits they develop. ● On this view, becoming an ethical person requires more than simply memorizing and deliberating on certain kinds of rules. ● Aristotle believed that to be a moral person, one had to acquire, through practice and the forming of habits, the right virtues (strengths or excellences).

Justice:

Promoting Prosperity, Preserving Solidarity, and Avoiding Unfairness;

Recommendations for how we can deliver a 'Good AI Society,'

Ethical Development: ethical impact analysis along with security threat models. Transparency and Accountability: making sure that AI systems are explainable and monitored after release. Public Engagement / Trust: develop tech with public participation; create systems for change based on stakeholder feedback. Institutional Oversight. Prioritizing AI for Social Good: public-interest tech, not only commercial applications. Collaboration between Sectors and Disciplines: cooperate across sectors to write AI policy and move development forward ethically. Increasing AI Education and Literacy: to prepare people for working with AI and help them understand its impact.
