Data C104 Final - Readings
The Most Surveilled Place in America (de Valle)
- "All the surveillance technology in the world won't stop people from trying to cross the border; it's an obstacle, not, as Customs and Border Protection would have you believe, a deterrent."
- Biden and the federal government have invested billions in border surveillance technology over the past three decades, yet the number of unauthorized border crossings has gone up year after year + migrant deaths are increasing
- Desert as greatest deterrent: since the mid-1990s, Border Patrol's "prevention through deterrence" strategy has focused on forcing migrants to cross in hostile desert terrain
- How the military and smugglers depend on each other → both exist because legal migration is nearly impossible; both profit from this fact
- Smugglers increasingly tied to cartels (discounted crossing if migrants carry drugs in a backpack)
- Volunteers aiding migrants are increasingly surveilled and criminalized too
The Allegheny Proposal (Eubanks)
- Though designed to support rather than replace human decision-making, in practice the AFST trains the intake workers; the model is "an opinion embedded in mathematics"
- Three key components of the model that reflect human decision-making & affect predictive accuracy: 1) outcome variables → inherently subjective; we lack perfect proxies for child mistreatment, so must use community re-referral and child placement 2) predictive variables → found through regression methods, and correlation is not necessarily causation 3) validation data → model tested only on families who use public services, missing families who use private services
- AFST's reliance on call referrals as a proxy for child maltreatment → perpetuates racial bias (mixed-race and Black families are more often the object of call referrals)
- CYF requirements and expectations → intensify the challenges and vulnerabilities of low-income families because reliance on public services = more scrutiny
Moral Crumple Zones (Elish)
- "Moral crumple zone" → misattribution of responsibility to a human actor who had limited control over the behavior of an automated or autonomous system (like a car's crumple zone protecting the driver, the moral crumple zone protects the system)
- Concept is useful because it highlights how 1) structural features of a system and 2) the media's portrayal of accidents can take advantage of human operators to fill the gaps in accountability that arise in the context of new and complex systems
- Mismatch in sociotechnical systems between control and responsibility → "While control has been effectively distributed, responsibility has not scaled accordingly."
- Two examples of sociotechnical systems producing moral crumple zones: 1) Three Mile Island → nuclear plant failure caused by poor design and human error, but the media attributed blame solely to operators. Because it was designed as an automated system, control room indicators didn't convey system conditions clearly, which made it difficult for operators to understand and respond to the problem. Though only operators were blamed, designers, managers, and industry regulators were also responsible. 2) Air France Flight 447 → airplane crash that killed 228 people and was portrayed in the media as the fault of the pilots. The Airbus design (default autopilot) contributed to chaos and confusion in the cockpit (generally, automated systems make for worse human control)
- Moral crumple zones likely to emerge: 1) in the immediate aftermath of a highly publicized event and 2) when the operator knows about a system malfunction (regardless of whether they have agency to address it)
- Elish's solution → improve technical safety certification paradigms to take into account the interactional aspects of system components and define boundaries of responsibility
1984 (Orwell)
- 1984 explores how technology is intertwined with power; hierarchical surveillance by omnipresent "Big Brother"
- Co-production of hierarchical social order and technologies that enforce certain narratives (constant surveillance; rewriting history to fit authority's narrative; distorting reality)
- Technologies shaped by what kind of social order is prescribed by top-down authority; technological possibilities are created through what Big Brother wants to do with society
It's Official: Cars Are Terrible at Privacy and Security (Caltrider)
- All 25 car brands reviewed collect too much personal data and use it to generate more data about your inferred preferences
- Most share your data and give drivers no data control (don't ensure the right to have personal data deleted)
- Couldn't confirm any brand met Minimum Security Standards (e.g., encrypting personal information)
- Nissan's creepy categories like "sexual activity" and "genetic information"
- Most brands signed on to a list of Consumer Protection Principles covering things like data minimization, transparency, and choice, but fail to adhere to any of them (they know what they should do, but don't do it)
- Lack of consumer choice is a huge issue: 1) no good options, and researching privacy is too time-consuming, confusing, and difficult for the average consumer, 2) consent is an illusion → cars ignore or assume consent
- Tesla's Customer Privacy Notice says opting out of vehicle data collection "may result in your vehicle suffering from reduced functionality, serious damage, or inoperability."
- Authors want to increase awareness of privacy issues to hold brands accountable
A Universe of Data (Sadowski)
- The core trade-off of digital capitalism, according to Sadowski, is data in exchange for convenience: tech companies provide users with convenient services while collecting and profiting from their personal data.
- The "data extraction imperative" involves the relentless hoarding of data, motivated by the belief that data is inherently valuable. By framing data as a "natural resource," tech companies legitimize their extensive data collection practices, expanding the reach of digital capitalism and benefiting from increased control over digital ecosystems.
- Living in the "age of Amazon" → data generates value: 1) To profile and target people 2) To optimize systems 3) To manage things 4) To model probabilities 5) To build stuff 6) To grow the value of assets
- How flows of data correspond to flows of power and profit → purposeful rhetoric of data universality casts everything as within the domain of digital capitalism
The uselessness of AI ethics (Munn)
- Argues for the "uselessness" of AI ethical principles because they are: 1) Meaningless → contested or incoherent, thus difficult to apply 2) Isolated → embedded within an industry / education system that devalues ethics 3) Toothless → adhere to corporate agendas, thus inconsequential
- Hard-to-operationalize principles create a gap between these high-minded but ineffectual principles and actual AI practice
- Lack of consensus about terms like "fairness" and "privacy" results in "business as usual" and "box-ticking" (terms are too abstract, ambiguous, and contested)
- Two key tensions underlying attempts to operationalize ethical principles: 1) Inter-principle tension → how to manage competing ethical demands on an AI design 2) Intra-principle tension → how to materialize a principle into a technological form
- Munn argues that this is difficult work and there are no shortcuts → must engage with social/political questions + lots of prototyping, testing, and rejecting different designs
- Criticizes the dominant turn to AI principles → a fruitless and dangerous distraction, diverting immense financial and human resources away from potentially more effective activity
Coded Exposure (Benjamin)
- Argues photography is not neutral, objective, or universal
- How the development and application of visual technologies is related to issues of race and power (embedded with human bias; favoring lighter skin; "reproduce long-standing forms of authority and hierarchy")
- How visual technologies make blackness both invisible and visible → paradox of recognition (over-surveillance and invisibility)
Why are Certain Countries Poor? Dismantling comparative models of development (Brooks)
- Brooks criticizes modernization theories that portray every country as developing along one single pathway as a "colonizer's model of the world"
- Comparative models → 1) ignore the interconnectedness of nations existing at the same time and 2) assume Western nations = the ideal model nation
- Brooks argues that comparisons must be relational → include interactions between countries, with the understanding that impoverishment is a condition related to a global history of colonialism and sustained by unequal world trade (not just internal national factors)
The lawmaker behind California's vetoed AI bill, SB 1047, has harsh words for Silicon Valley | TechCrunch (Zeff)
- CA Senator Scott Wiener claims Silicon Valley institutions spread misinformation about SB 1047 that contributed to its failure
- Investors at Andreessen Horowitz spread the narrative that SB 1047 would send founders to jail (only true if they lied on forms)
- A big tech trade coalition and OpenAI's chief strategy officer spread the claim that the bill would push tech companies out of California (but it would affect all companies doing business in California, not just those located there)
- Famous figures in the AI world (Fei-Fei Li, "godmother of AI") said it would require model providers to shut down open source programs (but shutdown was only required for models in the provider's possession)
- Even though it failed, SB 1047 impacted the public conversation about AI safety in California → vitalized the conversation, emphasizing the need to address these problems, prompting the building of new task forces, organizing of researchers, and drafting of new laws and regulations
UC Berkeley Committee for Protection of Human Subjects, About CPHS/OPHS and Guide to the IRB Review Process
- Committee for Protection of Human Subjects (CPHS) → two committees that serve as Institutional Review Boards (IRBs) for all human subjects research at the university
- Office for Protection of Human Subjects (OPHS) → the administrative office that supports the CPHS by coordinating research review, adhering to changing policies, rules, and regulations, handling research protocol and compliance issues, and providing education and outreach to investigators
- Application for CPHS review → only needed if the project constitutes human subjects research under federal guidelines; requires CITI training; approval granted for 1 year or 10 years
Labor (Crawford)
- Crawford describes how AI-driven workplace practices emerge from the same underlying model of industrial capitalism and labor automation as older practices of exploitation, driven by an underlying logic that values conformity, standardization, and interoperability
- Expansion/evolution of workplace mechanisms for control and surveillance → from "inspection houses" to observing and controlling workers in increasingly intimate ways, down to the last micro-movement
- At an Amazon warehouse, Crawford observes the toll of anxiety-provoking, time-sensitive conditions (bandages and Advil in vending machines)
- Human-fueled automation (the maintenance and behind-the-scenes work that goes into giving AI systems the appearance of working seamlessly and fully automatically) → critical for creation and upkeep because this work is necessary for systems to function properly and perform well
- The perception of AI systems as entirely autonomous relies on keeping the human labor behind them invisible
- True labor costs of AI are played down because recognizing them and compensating accordingly would make AI systems more expensive and less efficient (could not continue crowdsourcing for such cheap pay)
- Based on techno-evangelists' definition of efficiency as the greatest extraction of shareholder value (emphasizes standardization, simplification, and speed)
- For labor and tech work culture, implications of this definition include workers having longer hours, less pay, and more insecurity in their positions
- Resistance and sabotage reveal that workers do retain agency to effect change in workplace culture → organization and solidarity
It's Not Technology That's Disrupting Our Jobs - NYT (Hyman)
- Critical of technological determinism → instead, argues that social change is primarily driven by our decisions about how to organize our world; technology follows to accelerate and consolidate change
- Argues that job insecurity and the gig economy are not unavoidable consequences of technology → instead, the nature of work is a social choice constructed by corporate and policy decisions
- Example: the industrious revolution was a necessary precondition for the technologies of the Industrial Revolution (their social impact was made possible by the industrious revolution's separation of home and work life; wage work changed the social order of labor)
- Frames the present moment as a second industrious revolution → post-1970s, more insecure and temporary work emerged after a philosophical shift toward a strictly financial view of corporations favoring lean corporations
- Argues technologies like Uber did not create a new gig-economy labor supply, but took advantage of what was already there → "Uber is a symptom, not a cause."
- Freedom of the gig economy for workers just means "freedom to be afraid" in light of severed obligations between businesses and employees
- To make capitalism work for us, we must understand that insecurity is not the inevitable cost of technological progress and we always have a choice
On Racialized Tech Organizations and Complaint: A Goodbye to Google | by Alex Hanna | Medium (Hanna)
- Different, positive experience on the Ethical AI team vs. Google's broader company culture (inclusive, rooted in an ethical Black feminist framework of growth, nurturing, and wanting to see each other succeed)
- Quitting because she is tired of Google's toxic workplace problems (being yelled at by managers after pointing out the very direct harm that their products were causing to a marginalized population; staggeringly low numbers of Black female employees; tendency to promote people who don't care about the harmful effects of systems)
- How Google, like other tech companies, invisibly maintains white supremacy in the workplace and in its products + offers methods tech workers can use to challenge and expose their employers' ongoing investment in white supremacy
- Complaints reveal the "institutional mechanics of Google"
- Complaint as "a strategy of coalitioning and solidarity"
- Naming the whiteness of organizational practices can help deconstruct how tech companies are terrible places to work for people of color + challenge companies' ongoing investment in white supremacy
- Suggestion to tech workers → continue to complain, be a feminist ear for others, and develop institutional analyses of your own
Anatomy of an AI System (Crawford and Joler)
- Each small moment of technological convenience requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data
- The diagram combines and visualizes these three central, extractive processes across time as a visual description of the birth, life, and death of a single Amazon Echo unit
- Echo user → simultaneously a consumer, a resource, a worker, and a product
- Need the "context of a geological process" to reveal the vastness of the true costs of technological convenience
- Environmental and social hidden costs of "digital labor" (building and maintaining the stack of digital systems) → these processes create unequal concentrations of wealth and power
- Social costs → indentured labor in hazardous mines to extract rare earth minerals; strictly controlled and sometimes dangerous hardware manufacturing and assembly processes in Chinese factories; exploited, outsourced cognitive workers in developing countries labeling AI training datasets; informal physical workers cleaning up toxic waste dumps; extreme inequality and worker alienation in Amazon's labor structure
- Environmental costs → earth removed in extraction is discarded as waste called "tailings," dumped back into the hills and streams, polluting them with ammonium; mining and refining activities consume vast amounts of water and generate large quantities of CO2 emissions; destruction of natural landscapes, as in Indonesia
- "Extractivism" → the relationship between different forms of extractive operations in contemporary capitalism
- AI systems inscribe and build certain assumptions into a new world and shape how opportunities, wealth, and knowledge are distributed
Google's Selfish Ledger is an unsettling vision of Silicon Valley social engineering (Savov)
- Epigenetic theories liken user data to a constantly evolving "Lamarckian epigenome," a codified ledger that reflects our actions, preferences, and relationships, serving as a dynamic representation of who we are.
- The ledger, drawing from vast behavioral data, uses intimate knowledge to make personalized suggestions, guiding behavior through behavioral sequencing based on multi-generational data patterns.
Informational Persons and Our Information Politics (Koopman)
- Explores Otto Neurath's impact on data, communication, and identity through his Isotype visual language, aimed at transcending disciplinary, cultural, and language barriers
- Koopman highlights the modern concept of "informational persons," whose identities are shaped by data, and illustrates this with a thought experiment on how losing personal data affects selfhood
- Erasure of one's informational identity = being unpresentable in modern society, helpless and detached from essential organizing systems
- Koopman argues that data are not just external attachments but constitutive parts of who we are, i.e., our informational selves are deeply integrated into our overall identity; we are "cyborgs" of data + body/mind/soul
Do Artifacts Have Politics? (Winner)
- Explores the notion that technical artifacts have political qualities, challenging the central premise of social determinism of technology (the view that what matters is not technology itself but the social or economic system in which it is embedded)
- Technologies can have politics in one of two ways: 1) the invention, design, or arrangement of the technical system or device settles some political issue in a community 2) inherently political technologies, which require or are strongly compatible with certain forms of social order
- Examples: Robert Moses's low bridges, constructed to prevent buses from reaching his Long Island parks, enforcing racial and class segregation in New York; McCormick's molding machines destroying the union by replacing skilled laborers with unskilled ones
Panopticon (Foucault)
- From the "political dream of the leper" to the "political dream of the plague" → from binary divisions (separation) to differential distribution (segmentation)
- Compatible projects of exclusion/internment (leper) and discipline/surveillance (plague) → "The first is that of a pure community, the second that of a disciplined society. Two ways of exercising power over men"
- The quarantine structure of the plague gave rise to disciplinary projects → intensification and ramification of unitary, omnipresent, hierarchical power
- Panopticon = prison arrangement (central tower able to observe individual cells at all times) → embodies Bentham's principle that power should be visible and unverifiable
- The Panopticon also automatizes and deindividualizes power
- Laboratory of power → "The Panopticon is a privileged place for experiments on men, and for analyzing with complete certainty the transformations that may be obtained from them."
- The Panopticon organizes power to strengthen social forces, not for authoritarian reasons → "Panopticism is the general principle of a new 'political anatomy' whose object and end are not the relations of sovereignty but the relations of discipline."
OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (Perrigo)
- GPT-3's training data includes biased and toxic content from the internet, making it prone to generating harmful remarks, leading to the need for additional AI-powered safety mechanisms. Workers in the Global South, employed by companies like Sama, are paid low wages to label this data, contributing to the AI industry under exploitative conditions.
- Despite Sama's claims of ethical AI, the traumatic nature of the data labeling work led to the early termination of their contract with OpenAI.
- The AI industry continues to depend on hidden human labor for data labeling, even as this labor remains underpaid and often unrecognized.
Contested Indicators (Davis)
- The Global Fund denied Venezuela's request for aid because it didn't meet the criteria of the Eligibility Policy, a policy built on indicators or "tools of global governance" (country's income level according to the World Bank and official disease burden data)
- Reliance on specific indicators and the availability of certain official forms of data → outdated, censored, misaligned (GNIpc says nothing about income/wealth distribution; political reasons like government denial prevent official data on stigmatized issues)
- Venezuelans with AIDS stuck in a "policy vortex" → all the official data they needed to be eligible for medical aid was out of date, censored, missing, or about the wrong people
- Activists' response to the Fund's denial → community-based, continuous advocacy to overcome the power of data/numbers
- Fast-Track approach → focus on key populations and regions to optimize efforts; judge outcomes by "lives saved"
- Indicators and targets (90-90-90 targets and the Eligibility Policy) can be slippery and contested, especially when resources are limited
- Global health indicators → distortive abstractions that create oversimplified rank-orderings of complex phenomena; impose external norms ill suited to local needs; are fluid and responsive to political and economic pressures; can take on the status of law in their operations, reinforcing unequal postcolonial power relationships between global governance agencies and developing countries
- Davis's recommendation → determine what should be monitored, what can be achieved, and how to measure progress before setting indicators and targets and gathering data
Chapter Five: A Manifesto for Cyborgs (Forlano & Glabau)
- Haraway's "Cyborg Manifesto" repurposes the cyborg metaphor, originally from a scientific and masculine context, to challenge boundaries and binaries, presenting it as anti-essentialist and an ongoing, adaptable project, much like gender.
- Critical cyborg literacy disrupts gendered and racialized labor patterns by rejecting them as natural or fixed, promoting the idea that gender is fluid, constructed, and can be reshaped based on individual needs and desires (bridging domains of production, reproduction, and imagination)
- How technology, bodies, and culture are intimately linked in modern society, and how their interactions reconstruct social order and power
- Explores how "the homework economy," i.e., new workplace arrangements in the global economy hailed as revolutionary, has reinforced class, geographic, racial, and gender divides
Diana's OnLife World (Hildebrandt)
- Hildebrandt's narrative about Diana and her PDA illustrates how pre-emptive computing technologies, always a step ahead, can unexpectedly reconfigure everyday life, subtly regulating behavior in ways we may not foresee.
- The term "onlife world" signifies a transformative environment where the distinction between online and offline fades, as data-driven, autonomous systems increasingly shape and influence our lived experiences.
Hail the Maintainers (Vinsel & Russell)
- Historical context in which an emphasis on innovation emerged: shift from problematic "progress" to morally and politically neutral "innovation" after the failure of the Vietnam War and the social turmoil of the 1960s
- After WWII, new technologies became proxies for social improvement; in the 1970s and 1980s, economic troubles led to "innovation policy," i.e., increasing economic growth through technological change
- Silicon Valley ("land of almost magical technological ingenuity") shaped perceptions of innovation with its culture of disruption and emphasis on regional pockets of innovation; garage stories became the exemplar of innovation
- Innovation became a meaningless buzzword, treated in contemporary discourse as a value in and of itself → vague enough to avoid offending anyone; bipartisan support
- Maintainers = individuals whose work keeps ordinary existence going rather than introducing novel things
- Rather than innovation, focus on technology, infrastructures, and labor → recognizes the importance of all the old technologies already in use and of those who maintain them, and encourages us to assess whether new technologies are good/bad (not just inherently good because new)
How Eugenics Shaped Statistics (Clayton)
- History of statistics intertwined with eugenics (Francis Galton, Karl Pearson, and Ronald Fisher)
- According to Clayton, the founders of mathematical statistics emphasized the objectivity of mathematics to promote eugenics, leading to the development of statistical tools centered on significance testing, which prioritized identifying differences rather than exploring underlying causal mechanisms
- Clayton advocates for a future in statistics that involves investigating the causal explanations for differences (asking "why is there a difference," not "is there a difference") and recognizing that statistics should not be viewed as inherently objective
Sorting Things Out: Classification and Its Consequences (Bowker & Star)
- How classification systems, both formal and informal, shape society by reflecting certain values, often invisibly perpetuating biased worldviews while appearing objective.
- Highlights the ethical implications of classification, emphasizing how moments of friction, transition, or ethical dilemmas expose the artificial and socially constructed nature of these systems.
- Relates to the invisible work of "making data" and "denaturalizing" it
The Precariousness of the STEM Job (Skrentny)
- Increased layoffs have reduced job security over the last few decades, resulting in very low U.S. job security today
- Companies turn to "Nikefication" → outsourcing, cutting out middlemen, shortening job ladders
- Response of STEM workers → "the end of loyalty" to the company; also less willing to help colleagues or ask for help
- Independent contractor status has the potential to harm or benefit STEM workers → can benefit from higher pay, lower investment in the firm or company, learning new skills, more varied work; but can be treated unequally, facing daily embarrassments and indignities (designated badges marking contractor status; no-go zones) and feeling they have "second-class status," excluded, disrespected, unvalued, and insecure
- Incentives leading companies to rely heavily on contractors → 1) "budgets and headcount" explanation (look more profitable to investors with fewer employees; contractors are not factored into these equations), 2) booms and busts in sectors change demand quickly; companies don't want to invest in employees and training for short-term work; fewer benefits to promise
Introduction: A Role for History (Kuhn)
- Kuhn argues that scientific progress is not cumulative but occurs through paradigm shifts during scientific revolutions, challenging traditional views of the steady accumulation of facts.
- He emphasizes the subjectivity of science, asserting that conclusions are shaped by personal, historical, and accidental factors, and that scientific development requires understanding its historical context
- "Normal science" operates within a given paradigm, under the assumption that scientists know what the world is like
Feminist Data Manifest-No
- Presents data as a site of both refusal and commitment → for every refusal, commit to an alternative
- Examples: refusing reductionism & committing to recognizing personhood as a feminist data value; refusing consent as a one-time Yes/No action & committing to consent as "Freely given, Reversible, Informed, Enthusiastic and Specific"
- Anti-colonialism, context-dependency, radical imagination, empowerment, equity, justice
- Frames data as an interpretation and in need of interpretation → data as a thing, a process, and a relationship we make and put to work
- Emphasizes that data comes in many forms, and we have control and flexibility in how we make and use data
- Argues that data can and should always resist reduction
"There's software used across the country to predict future criminals. And it's biased against blacks" - ProPublica (Angwin)
- ProPublica claims risk scores are biased and unreliable → only a small proportion of those predicted by COMPAS to commit violent future crime did so; Black defendants were more likely to be falsely predicted to commit crimes, while white defendants were more likely to be falsely predicted not to commit crimes
- Argues against risk prediction tools because defendants lack an opportunity to challenge their assessments or to see the underlying calculations
- Race itself is not used, but the creators explain that leaving out all items correlated with race (poverty, social marginalization, unemployment) would hurt accuracy
- In theory, higher risk scores aren't supposed to result in longer sentences but to help judges determine eligibility for probation or programs
- In practice, judges have cited scores in their sentencing decisions
"A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear." - The Washington Post (Feller)
- ProPublica vs. Northpointe debate over racial bias in COMPAS reveals different definitions of algorithmic fairness
- ProPublica's argument is that the model is biased since the false positive rates differ between the two races surveyed in the article (more false positives among Black defendants)
- Northpointe's defense is that, across races, defendants assigned the same score have the same rate of reoffending (the risk scores mean the same thing)
- Feller's point: the two fairness criteria are mathematically incompatible given discrimination/bias in real-world data (higher measured recidivism rate among Black defendants) → to judge fairness, we have to look at practical effects and ethical concerns
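The incompatibility of the two fairness definitions can be shown with a toy calculation (hypothetical numbers, not figures from either article): if a score is equally calibrated across two groups but the groups receive "high risk" scores at different rates, their false positive rates will generally differ.

```python
# Toy illustration (hypothetical rates): a score can be calibrated across two
# groups -- P(reoffend | score) identical in both -- and still produce unequal
# false positive rates when the groups' score distributions differ.

P_REOFFEND_HIGH = 0.6  # P(reoffend | high score), same in both groups (calibration)
P_REOFFEND_LOW = 0.2   # P(reoffend | low score), same in both groups

def false_positive_rate(frac_high: float) -> float:
    """P(scored high risk | did not reoffend) for a group in which
    `frac_high` of defendants receive a high score."""
    false_pos = frac_high * (1 - P_REOFFEND_HIGH)
    true_neg = (1 - frac_high) * (1 - P_REOFFEND_LOW)
    return false_pos / (false_pos + true_neg)

# Group A is scored "high" more often than Group B (e.g., via factors
# correlated with race, like prior arrests); calibration still holds.
print(f"Group A FPR: {false_positive_rate(0.7):.2f}")  # ~0.54
print(f"Group B FPR: {false_positive_rate(0.3):.2f}")  # ~0.18
```

Even though the score "means the same thing" in both groups (Northpointe's criterion), non-reoffenders in Group A are flagged about three times as often (ProPublica's criterion), which is the mathematical tension Feller's piece describes.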
The Deaths of Effective Altruism (Wenar)
- Problematic to focus only on the benefits of charity → 1) many side effects of charity aid beyond "close-up" effects (political, economic, psychological) 2) neglects the agency of poor people (for example, the mother who trusts the aid worker and uses a bed net to save the life of her child)
- "Making responsible choices, I came to realize, means accepting well-known risks of harm."
- If we are going to intervene against poverty, we should 1) shift power to the poor and 2) be accountable for the effects of our own actions (Wenar recommends the "dearest test" and "mirror test" as checks)
- By emphasizing expected value, EA speaks the language of venture capitalists (expected value = an everyday economic tool in VC)
- Broadly applying expected-value thinking increases the potential for groupthink effects, rationalization, and self-serving bias (your own good and the good of the world/poor people come to mean the same thing)
- "Longtermism" → ascribing equal moral worth to unborn generations as to the living
- Invoked by tech billionaires because it 1) avoids accountability by removing the risk of being wrong, and 2) justifies investment in something like space travel to save future generations rather than saving children from malaria today
Objectivity and Authority: How French Engineers Reduced Public Utility to Numbers (Porter)
- Quantification in public life grew alongside bureaucratic centralization in the 19th century, with mathematical methods used to justify state actions and enhance the credibility of public works, though it limited public input and reinforced the authority of experts.
- While quantification was seen as objective and impartial, it was shaped by political contexts, with French engineers using it to argue for state-run public works over market-driven approaches, emphasizing long-term social benefits over short-term profits.
Digital Natives (Radin)
- The creation and circulation of the Pima Indian Diabetes Dataset (PIDD) demonstrates the value of medical and Indigenous histories to the study of Big Data - Radin adapts the concept of the "digital native" itself for reuse to argue that the history of the PIDD reveals how data becomes alienated from persons even as it reproduces the complex social realities of the circumstances of its origin - Invisible labor / shadow work = "functionally necessary to maintain institutions but is either not compensated or undercompensated" (human subjects research, i.e. biological labor; using free services like Google, which produce "digital exhaust," i.e. free information) - Using the PIDD as a model organism for machine learning obscures the shadow work of Indigenous people whose data continues to be circulated and repurposed - Studying the PIDD reveals the political and economic subjectivity of repurposing data and "provides an approach—grounded in Indigenous practices of refusal as well as self-governance—for resisting or differently engaging with research in an age of Big Data."
Stop the Robot Apocalypse: The New Utilitarians (Srinivasan)
- Response to MacAskill's book "Doing Good Better" which is rooted in values of global capitalism (Maximizing cost/benefit; Quantification; Efficiency; Optimism/Confidence in ability to solve any problem) - Moral algorithm according to MacAskill and other effective altruists: [QALYs earned] + [consideration of marginal value] + [consideration of counterfactual] = [what you should do] *Marginal = A doctor in the developing world has greater marginal value, since the supply of doctors is lower there *Counterfactual = Instead of being a doctor, you could work in VC and donate a higher salary to charity - QALY-thinking frees us from the specificity of the individual lives we're helping, i.e. a "universal currency of misery," & counterfactual/marginal thinking frees us from the specificity of ourselves - This type of EA thinking justifies diverting resources to hypothetical existential AI risk rather than current global suffering and frames entrepreneurs as the most moral actors, deserving of power and respect - MacAskill's decision not to donate to the Fistula Foundation in Ethiopia: donating elsewhere does more good, so donating there would be unfair and arbitrary (adopting the "POV of the universe") - Srinivasan argues that ethical action must originate from one's own point of view (think of responsibility, kindness, dignity and moral sensitivity) and the universal POV is just a way of avoiding the messiness of the real world and our place in it
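The "moral algorithm" above reduces to back-of-the-envelope arithmetic. A minimal sketch, with every number invented for illustration (assumed QALY totals, salary, and cost-per-QALY; none come from MacAskill's book or Srinivasan's review):

```python
def marginal_impact(your_qalys, replacement_qalys):
    """Counterfactual reasoning: if you don't take the job, a nearly
    as effective replacement will, so only the difference counts."""
    return your_qalys - replacement_qalys

# Become a doctor in a rich country: a replacement would do almost as much.
doctor = marginal_impact(your_qalys=300, replacement_qalys=280)

# "Earn to give": donate a high salary at an assumed cost per QALY,
# with (the argument goes) no counterfactual replacement donor.
earn_to_give = 50_000 / 100  # dollars donated / assumed dollars per QALY

# The algorithm picks whichever number is larger, regardless of
# whose lives, or whose self, is behind each figure.
print(doctor, earn_to_give)
```

This is exactly the abstraction Srinivasan objects to: once lives are a "universal currency of misery" and careers are interchangeable inputs, the comparison is trivial and the specificity of people drops out.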
California's Governor Gavin Newsom Vetoes Sweeping A.I. Legislation - The New York Times (Kang)
- SB 1047 → 1) Requires safety testing of large A.I. systems before public release 2) Afforded the state's attorney general the right to sue companies over serious harm caused by their technologies 3) Mandated a kill switch to turn off A.I. systems in case of potential biowarfare, mass casualties or property damage - Why Newsom vetoed SB 1047 → Focuses only on the most expensive and large-scale models, establishing a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology; Results in stringent standards on basic functions; Needs more input from A.I. experts and deeper analysis of potential risks - Other opposition → Big tech companies (Meta, Google, etc.) and VCs opposed the bill (harms innovation, would set the U.S. back in the international race, would hurt A.I. start-ups that lack the resources to test their systems) + Nancy Pelosi ("too hypothetical and unnecessarily put safety standards on a nascent technology") - Support for the bill → Lots of prominent Hollywood actors + tech mogul Elon Musk (concerned about lack of congressional action and potential harms)
A Mulching Proposal (Keyes)
- Satirical proposal by Logan-Nolan Industries (LNI) to repurpose the elderly into foodstuff to address food security and an aging population, using an algorithm to identify "mulchees" based on low social connectivity - Keyes applies the Fairness, Accountability and Transparency (FAT) framework to highlight how it focuses only on technical issues and obscures the larger ethical issue - FAT misses how the problems of an algorithmic system reflect larger societal issues, ignoring the role of context and how systems play into existing inequalities - Keyes' recommendation → Ask critical questions when evaluating and look beyond technical concerns by interrogating notions of "good" to ensure an algorithm is non-reformist, critical, and anti-oppressive
Nature and Space (Scott)
- Some forms of knowledge and control require a narrowing of vision → the narrative of scientific forestry illustrates how powerful institutions utilize simplification and abstraction to narrow their vision, focusing solely on quantifiable aspects (e.g., timber yield) while overlooking the complexities and diverse uses of nature, thereby shaping reality to fit their fiscal and commercial interests - This process is mirrored in broader state practices, such as the implementation of the metric system, which promoted administrative centralization and uniformity in measurement as a means to enforce a singular, homogenized view of society, reducing the rich complexities of local practices to convenient, oversimplified categories that serve state power and control. - Logic of the state-managed forest science virtually identical to the logic of commercial exploitation → "Commercial logic and bureaucratic logic were, in this instance, synonymous; it was a system that promised to maximize the return of a single commodity over the long haul and at the same time lent itself to a centralized scheme of management." - German forest as model for imposing scientific categorization on disorderly nature
Introductory Essay (MacKenzie & Wajcman)
- Technical systems are intertwined with everyday life - The partial truth of Technological Determinism is that technologies do matter and do create social change, but it is mistaken to assume that technology impacts society from outside of society - Technology and society → co-produced (two-way street) - "A technological system like an electric light and power network is never merely technical; its real-world functioning has technical, economic, organizational, political, and even cultural aspects."
OCSDNet's "Reimagining Open Science Through a Feminist Lens" (Alejandra)
- The Open and Collaborative Science in Development Network (OCSDNet) is a research network of scientists, development practitioners and community activists based in the Global South - Critical of framing "open science" as more productive, efficient and competitive → biased in favor of a utilitarian conception of science that incentivizes knowledge production for the sake of innovation and international competitiveness, and ignores equally important functions served by research (addressing social challenges or defending human rights) - Instead, they suggest community-based, bottom-up understandings and practices of open and collaborative science - Working directly with community members to facilitate the equitable participation of these actors in formal research processes, and in this way, integrate their knowledge into policy making and development agendas - However, many communities did not want to participate → "Openness" associated with colonialism, authoritarianism, and political persecution in different countries based on their respective histories - Their manifesto → First, situate openness by recognizing: (1) Openness is never universally positive or neutral (2) Openness is a mechanism to mobilize power. After situating openness, must develop inclusive infrastructures: "tools, platforms, relationships, networks and other socio-technical mechanisms that deliberately allow for multiple forms of participation amongst a diverse set of actors, and which purposefully acknowledge and seek to redress power relations within a given context"
Public Data Center (BAAQMD)
- The work of "making data" → air quality data depends on 1) Networks of air sensors running all the time 2) Local communities generating complaints 3) The Air District paying people and posting data - Flares as example: presented as the lesser of two evils / the price of modernity, flare stacks burn off necessary gasses and release them into the surrounding area, affecting local residents and fueling mistrust among affected communities - Takeaway from flares: useful data about flare emissions misses the larger human context - Like any classification system, data choices are performative & reflect value choices → "What counts is what is counted"
Contextual Integrity, Explained: A More Usable Privacy Definition (Malkin)
- Traditional definitions of privacy are too limited to account for many potential situations, and too vague for operational use - Not useful in novel situations or when something privacy-related is legal but still feels like an invasion - CI provides a way to evaluate the ethical legitimacy of new flows → a framework for identifying the strengths/weaknesses of a novel flow by comparing it with the status quo - Nissenbaum's concept of contextual integrity → privacy as the "appropriate flow" of personal info - Useful because it acknowledges privacy is fluid, i.e. info considered private in some contexts but acceptable to share in others (medications w/ doctor; finances w/ accountant) - CI judges the appropriateness of an information flow based on contextual privacy norms - Norms are the key determinants of privacy, not procedure (a user clicking "I accept" might not reflect privacy expectations) - Norms are flexible and varied, but require some consensus; the best way to ascertain them is asking people about attitudes, beliefs, and expectations - Lessons for applying CI: Think beyond binaries by emphasizing context; Focus on expectations, not checklists (privacy-enhancing technologies (PETs) can mismatch w/ user expectations); Consider context comprehensively; Think about the consequences (privacy matters not for its own sake, but because it reflects our values) - Limitations of CI: People don't actually make privacy decisions this way (often rooted in emotion or intuition); Simplifies things (might miss other parameters); Conservative (favors established norms)
Uncanny Valley (Wiener)
- Wiener describes her experience at an SV tech startup and the values implicitly and explicitly promoted/discouraged by company culture - Culture promotes problem-solving, loyalty, merging of work and identity, technical expertise → "who to be" = overconfident, creative, laid-back, bossy, energetic, fun, technically savvy - Culture discourages soft skills (changing her title from Customer Success to Technical Account Manager causes clients to listen to her) and diversity (non-white, non-man, non-entrepreneur) - Examples: Offhand comments about other people with the title of Customer Service; Normative unspoken pressure (RipStiks in the office); Company-organized activities (drunk scavenger hunt) and open office layout; Direct comments from above: "my manager criticizes me for being a pleaser" + "I'm told I will be promoted to Solutions Architect if I can build a networked, two-player game of checkers in the next few months"
Brandy, Cigars, and Human Values (Winner)
- Winner argues that "values" is a meaningless concept - Traces how value went from an economic, objective term ("the worth of something") to a psychological, subjective term ("sum of personal or cultural principles/beliefs/desires motivating behavior") - This objective-to-subjective shift changes how we think about human action: instead of explaining behavior by the value of the action, behavior is explained by the actor's values - Other implications: Morality becomes subjective; No common ground from which to judge actions; Distracts from trying to understand shared reasons for action - Why value talk is popular among bureaucrats and technocrats: A safe term, so empty that it cannot offend funders or government agencies trying to avoid controversy + treated as residual concerns after practical business is taken care of (values as an afterthought) - Winner's suggestions → 1) seek out terms that are more concrete and specific (instead of "social values" talk about "consumer preferences") and 2) change the culture in institutions and professional careers so everybody must consider the "soft" philosophical questions