Petropoulos, Georgios. "The Dark Side of Artificial Intelligence: Manipulation of Human Behaviour." Bruegel-Blogs, 2 Feb. 2022, p. NA. Gale Academic OneFile, link.gale.com/apps/doc/A691416100/AONE?u=lincclin_pcc&sid=bookmark-AONE&xid=e27c3c1e. Accessed 19


Paragraph 6

"A simple theoretical framework developed in a 2021 study (an extended model is a work in progress, see the reference in the study) can be used to assess behavioural manipulation enabled through AI. The study mostly deals with users' "prime vulnerability moments", which are detected by a platform's AI algorithm. Users are sent ads for products that they purchase impulsively during these moments, even if the products are of bad quality and do not increase user utility. The study found that this strategy reduces the derived benefit of the user so that the AI platform will extract more surplus, and also distorts consumption, creating additional inefficiencies."

Difficulty with a framework

"Even with such a framework in place, detecting AI manipulation strategies in practice can be very challenging. In specific contexts and cases, it is very hard to distinguish manipulative behaviour from business as usual practices. AI systems are designed to react and provide available options as an optimal response to user behaviour. It is not always easy to justify the difference between an AI algorithm that provides the best recommendation based on users' behavioural characteristics and manipulative AI behaviour where the recommendation only includes inferior choices that maximise firms' profits. In the Google shopping case, the European Commission took around 10 years and had to collect huge amounts of data to demonstrate that the internet search giant had manipulated its sponsored search results."

Paragraph 2

"For example, the use of AI in the workplace may bring benefits for firm productivity, but can also be associated with lower quality jobs for workers. Algorithmic decision-making may incorporate biases that can lead to discrimination (eg in hiring decisions, in access to bank loans, in health care, in housing and other areas)."

Paragraph 4

"Manipulation can take many forms: the exploitation of human biases detected by AI algorithms, personalised addictive strategies for consumption of (online) goods, or taking advantage of the emotionally vulnerable state of individuals to promote products and services that match well with their temporary emotions. Manipulation often comes together with clever design tactics, marketing strategies, predatory advertising and pervasive behavioural price discrimination, in order to guide users to inferior choices that can easily be monetised by the firms that employ AI algorithms."

Paragraph 3

"One potential threat from AI in terms of manipulating human behaviour is so far under-studied." "Digital firms can shape the framework and control the timing of their offers, and can target users at the individual level with manipulative strategies that are much more effective and difficult to detect."

Paragraph 14

"Quite often it is said that AI systems are like a black box and no one knows how they operate exactly. As a result, it is hard to achieve transparency. This is not entirely true with respect to manipulation. The provider of these systems can introduce specific constraints to avoid manipulative behaviour. It is more an issue of how to design these systems and what the objective function for their operation will be (including the constraints). Algorithmic manipulation should in principle be explainable by the team of designers who wrote the algorithmic code and observe the algorithm's performance."

Paragraph 13

"The first important step to achieve this goal is to improve transparency over AI's scope and capabilities. There should be a clear understanding about how AI systems work on their tasks. Users should be informed upfront how their information (especially sensitive personal information) is going to be used by AI algorithms."

Three criteria that should be met by all providers of AI systems

"The second important step is to ensure that this transparency requirement is respected by all providers of AI systems. To achieve this, the three criteria should be met:

* Human oversight is needed to closely follow an AI system's performance and output. Article 14 of the draft European Union Artificial Intelligence Act (AIA) proposes that the provider of the AI system should identify and ensure that a human oversight mechanism is in place. Of course, the provider has also a commercial interest in closely following the performance of their AI system.
* Human oversight should include a proper accountability framework to provide the correct incentives for the provider. This also means that consumer protection authorities should improve their computational capabilities and be able to experiment with AI algorithmic systems they investigate in order to correctly assess any wrongdoing and enforce the accountability framework.
* Transparency should not come in the form of very complex notices that make it harder for users to understand the purpose of AI systems. In contrast, there should be two layers of information on the scope and capabilities of the AI systems: the first that is short, accurate and simple to understand for users, and a second where more detail and information is added and is available at any time to consumer protection authorities."

Rules for AI and economic impact concern

"These rules will provide a framework for the operation of AI systems which should be followed by the provider of the AI system in its design and deployment. However, these rules should be well targeted and with no excessive constraints that could undermine the economic efficiencies (both private and social) that these systems generate, or could reduce incentives for innovation and AI adoption."

The Public and AI

"This practical difficulty brings us to the fourth important step. We need to increase public awareness. Educational and training programmes can be designed to help individuals (from a young age) to become familiar with the dangers and the risks of their online behaviour in the AI era. This will also be helpful with respect to the psychological harm that AI and more generally technology addictive strategies can cause, especially in the case of teenagers. Furthermore, there should be more public discussion about this dark side of AI and how individuals can be protected."

Paragraph 1

'Many firms collect an enormous amount of data as an input for their artificial intelligence algorithms. Facebook Likes, for example, can be used to predict with a high degree of accuracy various characteristics of Facebook users: "sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender," according to one study.' (qtd. Michal Kosinski, David Stillwell, and Thore Graepel in "Private traits and attributes are predictable from digital records of human behavior")

Conclusion of AI

AI can generate enormous social benefits, especially in the years to come. Creating a proper regulatory framework for its development and deployment that minimises its potential risks and adequately protects individuals is necessary to grasp the full benefits of the AI revolution.

The European Union AIA and its merits

For all this to happen, a proper regulatory framework is needed. The European Commission took a human-centric regulatory approach with emphasis on fundamental rights in its April 2021 AIA regulatory proposal. However, AIA is not sufficient to address the risk of manipulation. This is because it only prohibits manipulation that raises the possibility of physical or psychological harm (see Article 5a and Article 5b). But in most cases, AI manipulation is related to economic harms, namely, the reduction of the economic value of users. These economic effects are not considered in the AIA prohibitions.

Paragraph 12

Under Important steps to address potential manipulation by AI: "The possibility of this behavioural manipulation calls for policies that ensure human autonomy and self-determination in any interaction between humans and AI systems. AI should not subordinate, deceive or manipulate humans, but should instead complement and augment their skills (see the European Commission's Ethics Guidelines for Trustworthy AI)."

Paragraph 5

Under Success from opacity: "Lack of transparency helps the success of these manipulation strategies. Users of AI systems do not in many cases know the exact objectives of AI algorithms and how their sensitive personal information is used in pursuit of these objectives. The US chain store Target has used AI and data analytics techniques to forecast whether women are pregnant in order to send them hidden ads for baby products. Uber users have complained that they pay more for rides if their smartphone battery is low, even if officially, the level of a user's smartphone's battery does not belong to the parameters that impact Uber's pricing model. Big tech firms have often been accused of manipulation related to the ranking of search results to their own benefit, with the European Commission's Google shopping decision being one of the most popular examples. Meanwhile, Facebook received a record fine from the US Federal Trade Commission for manipulating privacy rights of its users (resulting in a lower quality of service)."
