AI and National Security - COMPS

NIST, 2023 (Risk Management Framework components, PS SAFER, trade-offs)

"PS SAFER" acronym Privacy-enhanced Secure/Resilient Safe - but how do I know it's safe? Accountable/Transparent Fair - with harmful biased managed Explainable Reliable The problem is sometimes you can get too reliant on the framework and then you miss your blindspots to edge cases that don't fit the model.

Chahal, 2020 - synthetic data and its pitfalls

*This is a reading: Synthetic data: data that is generated artificially rather than collected from the real world. It can be used to train machine learning models, especially where real-world data is scarce or difficult to collect. It can be generated in large quantities, can be privacy-preserving, and can supply data for rare events or for dangerous/unethical situations. Pitfalls: realistic synthetic data can be difficult and expensive to generate, it can be biased (like all data), and it can more easily lead to overfitting.
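A minimal Python sketch of the idea, under toy assumptions (the Gaussian model and all numbers are hypothetical, not from the reading): fit a simple model of scarce real data, sample synthetic data from it, and note that the synthetic data inherits the fitted model's flaws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is scarce, hard-to-collect real-world data.
real = rng.normal(loc=5.0, scale=2.0, size=50)

# Fit a simple model of the real data's distribution...
mu, sigma = real.mean(), real.std()

# ...then sample as much synthetic data as we want from it.
synthetic = rng.normal(loc=mu, scale=sigma, size=10_000)

# Pitfall: the synthetic data inherits any bias in the fitted model.
# If the 50 real samples were unrepresentative, the 10,000 synthetic
# ones are too, and a model trained on them can overfit that flaw.
print(f"real mean={mu:.2f}, synthetic mean={synthetic.mean():.2f}")
```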

Buchanan, 2020 - AI Triad (and what's important about each factor)

1. Data - the foundation of artificial intelligence. Machine learning systems rely on the quality, quantity, and diversity of data to be successful, and data is often more important than the algorithm used. 2. Algorithms - the recipe. AI is more powerful and effective now thanks to these; they extract the analysis, intelligence, and patterns from the data. They are the mathematical and computational techniques that AI systems use to process data and make decisions. Neural networks are an example. 3. Compute - the power and infrastructure required for AI tasks, which has grown significantly in recent years. Training deep learning models, in particular, demands substantial resources, often involving GPUs (Graphics Processing Units) or specialized hardware like TPUs (Tensor Processing Units). Compute is the easiest of the three from which to gauge adversaries' intent and capabilities. The triad discounts talent: it takes a lot of skill to really harness it. The triad is useful, but the reality is more complex than three main components.
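A hedged toy sketch in PyTorch of how the three legs meet (the dataset, network, and hyperparameters are invented for illustration): data feeds an algorithm, which runs on whatever compute is available.

```python
import torch

# Data: the foundation -- quality, quantity, and diversity matter most.
X = torch.randn(1_000, 10)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# Algorithms: the recipe -- here, a tiny neural network.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Compute: the power -- training runs on whatever hardware is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, X, y = model.to(device), X.to(device), y.to(device)

for _ in range(100):  # the training loop is where all three legs meet
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy_with_logits(model(X), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```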

Hao, 2018 (ethics, RAI, dilemma)

A study analyzing millions of decisions made in a game called Moral Machine reveals significant cultural differences in ethical priorities. The game presented participants with dilemmas like sacrificing one life to save five, with variations for age, social status, and action vs. inaction. Results showed cultural differences (identity, wealth, geography) in how people think self-driving cars should make life-or-death choices: poorer countries are more tolerant of jaywalking, individualistic cultures value saving more lives, and Eastern/Western/Southern regional clusters share similar preferences internally. This can help design ethical self-driving cars and inform AI policy, but the study encourages moving beyond "trolley problem" thinking to consider risk and bias.

Cases for versus against AI AWS

AGAINST AI AWS:
-Causes unnecessary suffering (proportionality)
-Weapon is inherently indiscriminate
-Weapon is ineffective
-Other effective means exist for accomplishing a similar military objective
-Development will lead to wide-scale proliferation
-There is significant public concern and civil society engagement
FOR AI AWS:
-America's rivals and adversaries are already investing in autonomous weapons technology
-DoD has a duty to maintain a competitive advantage over rivals and potential adversaries
-DoD has been consistently committed to producing and employing safe and reliable weapons systems
-The US should develop autonomous weapons in accordance with existing policies and practices; otherwise it risks losing its competitive edge against principal rivals.

Goldstein, 2023

AI further reduces the costs of disinfo campaigns by automating content production, reducing overhead in persona creation, and generating culturally appropriate outputs that are less likely to seem inauthentic (e.g., Spanish-language models) and are thus hard to track and attribute. It also allows less-resourced actors to run disinfo ops. → challenges realism's idea of states as the primary actors. AI can drive up influence operations in places where public commentary is solicited, to overwhelm or falsify public opinion (e.g., a majority of public comments to the FCC on net neutrality in 2017 were falsely repeated comments). AI enables dynamic content creation so actors aren't repeating the same content over and over; this previously relied on humans to vary content. It also means actors can deploy personalized chatbots that interact with targets one on one.

Klein, 2023 (shifts in AI, climate change, geopolitical impact)

AI's application to complex problems like protein folding shows promise, but we still have problems formulating appropriate objectives and addressing issues like hallucination. AI holds potential for climate change mitigation, but responsible deployment is crucial amid the transition from the development stage to business applications.

Buchanan, 2021 (GPT-3, disinformation)

As part of well-resourced human-machine teams, AI can produce moderate-quality disinformation in a highly scalable manner. Mitigation efforts must focus elsewhere, such as on the infrastructure that distributes the messages. LLMs like GPT-3 seem better suited for disinformation—at least in its least subtle forms—than information, more adept as fabulists than as staid truth-tellers.

Weinstein, 2022 (Chinese innovation)

China's innovation strategy: ditching copycatting and embracing "re-innovation" with state support. From Baidu to WeChat, China is a rising global competitor, but it still struggles with advanced chip manufacturing and remains reliant on imported tech.

Bhuiyan, 2022 (dual use tech, surveillance state, SenseTime)

Chinese AI unicorn startup SenseTime's facial recognition tech was used for surveillance of Uyghur Muslims, drawing international scrutiny and sanctions. July 2019 - SenseTime filed a patent for a facial recognition feature that could, among other things, distinguish between people who were and were not ethnically Uyghur. In response, the US added it to the Bureau of Industry and Security's Entity List and to the US Treasury Department's Non-SDN (Specially Designated Nationals) Chinese Military-Industrial Complex Companies List. SenseTime's ability to operate was only marginally affected because these were intentionally weak sanctions. Surveillance-state video and AI can be manipulated and used as a pretext for justifying killing as "counterterrorism."

Sedova, 2021 (computational propaganda, AI disinformation, AI dual use)

Computational propaganda: the use of automated accounts to publish posts. Social bots are user accounts automated to interact with other user accounts. Firehose-of-falsehood method: repetitive, high-volume messaging aimed at audiences on the political extremes to exacerbate tensions. Modern active measures include hack-and-leaks, forgeries, bot campaigns, and proxy troll orgs. AI will boost disinfo's volume, velocity, variety, and virality online (e.g., deepfakes, audio manipulation).

Metz, 2016 (symbiosis of AI/humans, AlphaGo)

Despite AlphaGo's dominance, the match highlights the symbiotic relationship between humans and AI, with Go players improving their skills through interaction with the machine. AlphaGo's achievements hint at AI's potential to advance human understanding and capability, rather than simply surpassing it.

Shane, 2018 (Maven, private sector and DoD, ethics)

Google didn't renew its contract with the Pentagon for Project Maven because employees didn't want their work used for lethal purposes and felt it didn't align with Google's self-image, even as Microsoft and Amazon kept working with DoD. Project Maven: uses AI to interpret video images, with potential use in drone strikes.

Kozyrkov, 2022 - Bias and variance definitions and their tradeoff in machine learning

High bias (underfitting) - the model pays too little attention to the training data and is unable to capture the underlying patterns; it is too simple to represent the data accurately. VERSUS High variance (overfitting) - the model is highly sensitive to the training data and captures noise in addition to the underlying patterns, so it fits the training data very well but fails to generalize to new, unseen data. In practice you are much more likely to overfit your training data, so the model will work less well IRL than it did in your validation phase.
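A toy Python illustration of the tradeoff (hypothetical numbers; np.polyfit stands in for "the model"): an underfit line, a well-matched quadratic, and an overfit high-degree polynomial, compared on held-out data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of a quadratic signal: y = x^2 + noise
x_train = np.linspace(-3, 3, 15)
y_train = x_train**2 + rng.normal(scale=2.0, size=x_train.size)
x_test = np.linspace(-3, 3, 100)
y_test = x_test**2  # the true signal, noise-free

for degree in (1, 2, 10):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # degree 1:  high bias (underfit)    -- both errors typically stay high
    # degree 2:  balanced                -- both errors are low
    # degree 10: high variance (overfit) -- train error tiny, test error jumps
    print(f"degree {degree:2d}: train MSE {train_mse:7.2f}, test MSE {test_mse:7.2f}")
```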

Buchanan, 2022 Ch6 (human-AI role in warfare, AI in warfare)

Human role in warfare is changing. AI will speed up decision-making, amplify the precision of ordnance, allow humans to focus on strategic decisions, and enable military technologies and operations that were not possible before, including unmanned submarines, drone swarm warfare, and self-targeting missiles. Democracies must be able to win in algorithmic warfare, in which AI will operate faster than humans can think. We could lose our technical advantage against China and end up repeating Pearl Harbor - being unprepared for a new kind of war. Governments will likely resolve ethical questions around lethal AI in warfare on the fly. → very aligned with realism → hegemony is the only way to be sure of success. Example: DARPA's AlphaDogfight showed that algorithms could defeat even the best human pilots.

Buchanan, 2022 Ch4 Failure (ways to fail, concerns with AI awareness)

If the AI system is capable of better strategic planning than humans, it is irrelevant that the AI does not know the stakes of the battle. If attaining this understanding of stakes and impact matters, then there are almost certainly circumstances in which it is inappropriate to use a machine learning system; if understanding does not matter, then the focus should be on overcoming other kinds of failures. There are gaps between human instruction and AI interpretation. True understanding by AI still hasn't happened: machine learning systems still lack reliability for mission-critical tasks due to environmental variations and literal interpretation of instructions. Opacity in ML systems complicates debugging and undermines trust, as they cannot explain their decision-making processes.

Google, 2020 - GAN (who created, what are the parts and how do they work)

Introduced in 2014 by Ian Goodfellow. GANs provide a means to generate new data that resembles training data. A generator generates fake outputs; a discriminator (or evaluator) tries to differentiate real outputs from fake ones. The two networks "compete" against one another in a zero-sum game, in which the generator attempts to come up with fakes and the evaluator determines whether they are real or fake.
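A minimal sketch of the two competing networks in PyTorch (the 1-D Gaussian target, layer sizes, and training schedule are arbitrary choices for illustration, not from the reading):

```python
import torch
import torch.nn as nn

# Toy 1-D GAN: learn to mimic samples from N(4.0, 1.5).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2_000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" training data
    fake = generator(torch.randn(64, 8))   # generator's forgeries

    # Discriminator's move: label real as 1, fake as 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator's move: fool the discriminator into calling fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The generator's output distribution should drift toward N(4.0, 1.5).
print(generator(torch.randn(1_000, 8)).mean().item())
```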

Buchanan, 2022 Ch8 Lying (disinformation, propaganda)

Nakasone/CYBERCOM thinks AI-generated disinfo will show up in diplomacy, democratic processes, and warfare. AI influences many aspects of users' online experiences, from the ads they see to the social media content they engage with. Platforms like Facebook rely heavily on AI algorithms, with 64 percent of extremist group joins attributed to these algorithms. Despite causing problems, tech companies like Facebook believe that more AI is the solution to moderating content effectively. Example: Huawei's use of deepfake technology. Automated defense mechanisms + human judgment = the best defense against deepfakes. However, tech companies face challenges in balancing research sharing against the risk of misuse, as seen in OpenAI's cautious approach with GPT-3. The impact of AI on global politics and warfare underscores the importance of responsible use and oversight.

Buchanan, 2022 Ch9 Fear (security dilemma, nuclear weapons)

New technology AMPLIFIES FEAR AND THE SECURITY DILEMMA (e.g., nuclear weapons amplified the tensions of the Cold War). It is unclear whether AI amplifies fear yet, since it can be used so broadly. Controlling AI is a lot harder than controlling nuclear weapons: algorithms are difficult to count and easy to digitally copy or move, making proliferation harder to prevent. No government will ever grant adversaries unrestricted access to its classified computer networks the way the IAEA has access to nuclear programs, and it will be virtually impossible to prove that a particular kind of dangerous algorithm has not been developed. If you have enough data, you can infer intent.

Adversarial learning

Process of extracting information about the behavior and characteristics of an ML system and/or learning how to manipulate the inputs into an ML system in order to obtain a preferred outcome. By training models against such attacks, we can improve their overall robustness and security.
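A hedged PyTorch sketch of one classic input-manipulation attack, the fast gradient sign method (the untrained toy model and epsilon are invented, and the prediction flip isn't guaranteed on such a toy):

```python
import torch
import torch.nn as nn

# Untrained toy classifier standing in for the target ML system.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # a benign input
y = torch.tensor([0])                      # its true label

# Probe the system's behavior: gradient of the loss w.r.t. the input.
loss_fn(model(x), y).backward()

# Manipulate the input in the direction that most increases the loss.
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean pred:", model(x).argmax(dim=1).item(),
      "adversarial pred:", model(x_adv).argmax(dim=1).item())
# Adversarial training would add (x_adv, y) pairs back into the
# training set to improve robustness against this manipulation.
```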

Bansemer, 2021 (hacking, cybersecurity, AI offensive)

Reading connections: Stuxnet, attribution. AI tools for attacking will be designed for maximum impact (fast entry, objective achievement). Autonomous AI hackers are the worst-case scenario; they could adjust operations in real time without human intervention (using gamification techniques like AlphaGo's). The two biggest concerns are speed and control. Hackers aren't currently using AI to compromise tech systems because conventional attacks still work well, but once cyber defenses improve they will turn to AI, and AI hacking tools will become widely available as AI itself does. AI can also support cyber defense, though defensive uses are not yet as advanced as what offense could exploit.

Brown, 2023 AI and Strategic Competition takeaways (influences, alignment, acquisition)

Significant factors affecting US AI strategy compared to China/Russia:
-Culture and worldview → cult of the individual; distrust of big tech (difficulty developing national unity behind a critical asset - think NASA during the space race); superpower mentality (arrogance)
-Competition between two different global internet philosophies: Western/decentralized vs. China/internet sovereignty
-US struggles to build national unity behind AI for a whole-of-government approach and to assign resources to the factors identified in strategy documents
ACQUISITION POLICY NEEDS OVERHAUL ASAP: bad acquisition policy is severely degrading capabilities → private companies and venture capital do not want to work with the government because the government is a horrible customer
Technical aspects that favor AI in authoritarianism:
-surveillance systems → highly exportable
-centralized data
-centralized planning and funding of critical systems (e.g., China)

Chahal, 2020 - (strategic data competition, China)

The notion that China has a data advantage over the U.S. at the moment is not accurate, because a raw data advantage does not necessarily translate into advanced military AI. Just because the CCP has a lot of data doesn't mean it can be applied in a beneficial way. Whoever reaches the experimentation phase (i.e., where data for a specific application is digitally stored, cleaned and transformed, labeled, and optimized to train a machine learning algorithm) is at an advantage over others for that application, as it is positioned to move faster toward developing its aimed AI application.
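A hypothetical Python/pandas sketch of those experimentation-phase steps (the columns, thresholds, and split are invented for illustration): stored → cleaned/transformed → labeled → optimized for training.

```python
import pandas as pd

# Stored: raw digitized records (with gaps, as real collection has).
raw = pd.DataFrame({
    "speed_kts": [12.0, None, 300.5, 45.2, 88.0],
    "heading_deg": [90, 180, None, 270, 45],
})

# Cleaned and transformed: drop incomplete rows, normalize a feature.
clean = raw.dropna().copy()
clean["speed_norm"] = clean["speed_kts"] / clean["speed_kts"].max()

# Labeled: attach the target the specific application cares about.
clean["is_fast"] = (clean["speed_kts"] > 100).astype(int)

# Optimized for training: shuffle and split for an ML algorithm.
train = clean.sample(frac=0.8, random_state=0)
test = clean.drop(train.index)
print(train, test, sep="\n\n")
```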

Imbrie, 2020 (US alliance w EU, US role in AI global strategy, China)

The U.S. lacks a cohesive strategy for AI collaboration with allies and has strained alliances through disagreements over burden-sharing and perceived U.S. disengagement. The EU has taken the safety route. China and Russia exploit AI to export authoritarian practices (censorship, oppression, lethality, surveillance, propaganda). America needs to lead the way as a systems integrator for the West to defeat authoritarian exports of AI. Specifically, capitalize on advances in AI and machine learning to: → foster sustainable and inclusive economic growth → improve service delivery → promote transparent and accountable governance → strengthen data privacy standards → respect civil liberties → economically empower citizens within rules-based market economies → provide cleaner, safer, and more efficient transportation → enable precision medical diagnosis → broaden access to education → mount more effective disaster response.

Buchanan, 2022 Ch5 Inventing (China, AI autocracy, US drawbacks due to democracy)

The US doesn't have as much influence over its technology sector as China, which integrates SOEs and private corporations for dual-use/military-civil fusion. China also has expansive cyber espionage capabilities and a dedication to AI advancement that the US does not. The US cannot compel major tech firms to support its intelligence efforts to the extent that China can, highlighting a crucial asymmetry in the competition for AI supremacy.

Giacaglia, 2019 - Transformer

The underlying neural network architecture that produces content. It enables parallel processing of text for training large models; it learns the importance of word order from data through positional encoding, which notes the order and position of words; and self-attention allows it to understand the complex relationships between words in an input sequence.
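A minimal numpy sketch of positional encoding plus single-head self-attention (random weights stand in for learned ones; a real Transformer stacks many such layers with learned projections):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                  # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))  # stand-in token embeddings

# Positional encoding: inject word-order information into each token.
pos = np.arange(seq_len)[:, None] / 10_000 ** (np.arange(d_model)[None, :] / d_model)
x = x + np.where(np.arange(d_model) % 2 == 0, np.sin(pos), np.cos(pos))

# Project every token into queries, keys, and values (random weights here).
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Scaled dot-product self-attention: every token scores every other
# token at once, which is what makes the processing parallel.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
output = weights @ V  # context-aware representations of each token

print(weights.round(2))  # row i = how much token i attends to each token
```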

what's made AI more powerful in recent years using the AI triad

access to more data, capabilities to create synthetic data, access to the cloud, and a cheaper cost of entry.

Lex Fridman, 2023 (consciousness of AI, worst case scenarios)

Believes we may reach a level of AI exceeding human intelligence, which could turn into a dystopian 1984-style nightmare of humans loving the oppression AI imposes. GPT-4 knows how to fake consciousness. Sam Altman does believe AI can be conscious.

ChatGPT response on AI proliferation dangers

→ AI proliferation: widespread dissemination and adoption of AI across various sectors, industries, and countries. As AI technologies become increasingly accessible and affordable, they are being deployed in diverse applications, with potential dangers.

Pavel, 2023 (AI impact on geopolitics)

→ Big tech reaches across national boundaries while simultaneously influencing local communities in much more comprehensive and invasive ways (e.g., gathering personal data). Big tech firms influence the economy because they influence the spread of information (and disinformation) and create new communities and subcultures. This has directly contributed to major shifts in domestic politics and thus foreign politics (e.g., Trump). → Believes AI could become an actor itself (consciousness). → The borderless nature of AI makes it hard to control or regulate. As computing power expands, models are optimized, and open-source frameworks mature, the ability to create highly impactful AI applications will become increasingly diffuse.

Hoffman, 2021 (AI cyber security, security dilemma, proliferation)

→ Machine learning could intensify existing dynamics in cyber competition. Whether attacking or defending, gaining intelligence by infiltrating adversaries' networks may become essential. → Adversaries might target machine learning systems themselves by compromising supply chains, poisoning training data, deploying advanced malware, or exploiting trust in machine learning outputs. → Solution: the West must secure supply chains and engage in responsible offensive operations while maintaining communication with adversaries to mitigate escalation risks. Offense: → Intruding into systems during their development phase allows attackers to reverse-engineer or sabotage the system to exploit vulnerabilities later. This advance access lets attackers gather intelligence on defense strategies, increasing the value of infiltrating networks early. Defense proposition: → Machine learning systems cannot be easily patched, necessitating adaptability to defend against evolving threats. Defenses against one type of deception may inadvertently open vulnerabilities to other forms of attack.

