The Reckoning We Didn’t Plan For

On August 1, 2012, a software deployment at Knight Capital Group — one of the largest equity traders in the United States — triggered 45 minutes of autonomous trading carnage that the firm’s own engineers could not stop. Dormant legacy code had been accidentally reactivated during a routine update, and the new algorithmic logic interpreted it as a live instruction stream. Within minutes of the opening bell, Knight’s system was flooding the market with erroneous orders, buying high and selling low at machine speed and hemorrhaging roughly $10 million a minute. By the time engineers identified the source and killed the process, the damage was irreversible: $440 million in losses, a share price collapse of more than 75 percent, and a firm that had operated for seventeen years pushed to the edge of bankruptcy and ultimately rescued only through emergency acquisition (Securities and Exchange Commission, 2013).

No human being made a single one of those trades. No human being intended any of them. And no human being was able to stop them in time.

That story is the thesis of Part IV in miniature. The previous three parts of this series have traced a particular arc: how the tools we built to generate content have eroded our ability to verify what we see (Part I); how provenance infrastructure is emerging as the technical response to the verification crisis (Part II); and how AI has evolved from a reactive system into an agentic one, capable of setting goals, making plans, and taking real-world action without human approval at every step (Part III). Those are shifts in what AI does. This final installment is about what happens when AI acts — and what we do when it acts wrong.

There is a version of this conversation that devolves quickly into science fiction: superintelligent systems, existential risk, Skynet. That is not this piece. The accountability problem in autonomous AI is not a future hypothetical. It is a present operational reality playing out right now in financial markets, healthcare systems, logistics networks, and — with increasing frequency — in educational institutions. The Knight Capital failure was not an aberration. It was a preview.

Series Arc

Part I asked: If everything can be faked, what counts as proof?

Part II asked: If we can verify content — who verifies the systems?

Part III asked: What changes when AI stops responding and starts acting?

Part IV asks: When autonomous systems cause harm, who is responsible?

What Actually Happens When Machines Act

To understand why autonomous AI failure is different in kind — not just degree — from other kinds of software failure, you need to understand three properties that distinguish agentic systems from their predecessors: speed, scale, and interconnection.

Speed is the first and most disorienting difference. A human making a bad decision operates at the pace of human cognition — seconds, at minimum. An autonomous AI system making a bad decision operates at the pace of its hardware. In the Knight Capital case, the system executed more than four million trades in 45 minutes — a rate that no human monitoring team could track in real time, let alone interrupt. When a system acts in microseconds but human detection of an error takes minutes, the concept of “human oversight” becomes aspirational rather than operational.
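The arithmetic of that lag can be sketched directly. The loss rate is scaled loosely to the Knight Capital timeline; the human response times are hypothetical and, if anything, optimistic:

```python
# Toy model of loss accrued during the human response window
# (all numbers hypothetical; rate loosely scaled to Knight Capital).

def loss_before_intervention(loss_per_second, detect_s, decide_s, act_s):
    """Total loss accrued before a human can halt an automated error."""
    human_latency = detect_s + decide_s + act_s
    return loss_per_second * human_latency

# Knight-Capital-scale rate: roughly $10M per 60 seconds.
rate = 10_000_000 / 60

# Even a fast human loop (1 min to detect, 2 min to diagnose,
# 1 min to hit the kill switch) cannot keep losses small.
loss = loss_before_intervention(rate, detect_s=60, decide_s=120, act_s=60)
print(f"${loss:,.0f}")  # roughly $40M lost in a 4-minute response window
```

The point of the sketch is not the specific numbers but their shape: the exposure is the loss rate multiplied by the full detect-diagnose-act latency, and only the first term is under the machine's control.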

Scale compounds the problem. When a single human makes an error in judgment, the consequences are bounded by what one person can do. When an autonomous system makes an analogous error, it executes that error at the scale of its deployment — which, in production AI systems, can mean millions of simultaneous decisions. An AI-based lending algorithm with a flawed risk model doesn’t deny one loan application. It systematically applies that flaw to every application in the queue. An automated academic flagging system with biased pattern recognition doesn’t misidentify one student. It processes thousands of submissions through the same flawed lens before anyone notices.

Interconnection is the third property, and the one most likely to be underestimated. Modern AI systems do not operate in isolation. They call APIs, trigger downstream processes, interact with other automated systems, and generate outputs that feed into further AI pipelines. An error in one node can propagate through the network before any individual node registers an anomaly. The result is what sociologist Charles Perrow, writing in 1984 about nuclear plants and chemical facilities, called a “normal accident” — a failure that emerges not from any single catastrophic component failure but from the interaction effects of a complex, tightly coupled system operating as designed (Perrow, 1984). Perrow’s key insight was that in certain system architectures, catastrophic failure is not a deviation from normal operations. It is a predictable feature of them, even if its timing is not.

Perrow was writing about industrial infrastructure. He could have been writing a design specification for large-scale agentic AI deployment.

Visual 1: The Cascade Effect — How One AI Decision Triggers a Chain of Unintended Consequences

[Diagram: a five-stage cascade. An autonomous decision is triggered (t+4s); downstream System A activates (t+18s); the error is amplified ×10 with parallel triggers reaching Systems B and C (t+45s); a human override is attempted too late (t+120s); irreversible harm is realized before a stop was possible. Illustrative model based on the Knight Capital (2012) and 2010 Flash Crash incident timelines.]
The cascade architecture: a single autonomous decision propagates through interconnected systems faster than human monitoring can detect, assess, and intervene. By the time a human override is attempted, the consequential harm has already materialized.

The 2010 Flash Crash: Cascades at National Scale

The Knight Capital failure was contained within a single firm. Two years earlier, the 2010 Flash Crash demonstrated what cascading AI behavior looks like when it involves an entire market ecosystem. On May 6, 2010, between 2:32 and 3:08 PM Eastern Time, the Dow Jones Industrial Average dropped approximately 1,000 points — roughly nine percent of its value — in a matter of minutes, briefly erasing nearly one trillion dollars in equity value, before recovering almost as rapidly. The joint SEC-CFTC investigation concluded that a large automated trade order interacted with high-frequency trading algorithms in ways that created a feedback loop of liquidity withdrawal, forcing prices into a self-reinforcing collapse that no individual actor had initiated or intended (SEC & CFTC, 2010).

The Flash Crash is important not as a financial anecdote but as a structural demonstration: when autonomous systems interact with each other, the system-level behaviors that emerge are not predictable from the behaviors of the individual components. This is not a software bug. It is a property of complex, tightly coupled systems operating at speeds that exceed human supervisory capacity.

“In complex systems, the defining feature of a serious accident is not that something went wrong — it is that multiple things interacted in ways that were individually normal and collectively catastrophic.”

Charles Perrow, Normal Accidents: Living with High-Risk Technologies (1984)

The Accountability Web

When an autonomous AI system causes harm, the first practical question is also the most legally and ethically consequential one: who is responsible? The answer, in current practice, is almost always unclear, contested, and unresolved.

This is not an accident of sloppy thinking. It reflects a genuine structural problem. The chain of agents involved in any modern AI deployment spans at least four distinct layers, and the responsibility for any given failure can plausibly be attributed to any one of them — or distributed across all of them — depending on which aspect of the failure you’re examining.

Visual 2: The Accountability Web — Distributed Responsibility in AI Deployment

[Diagram: six parties surrounding the question “Who is responsible?”: the foundation model developer (trained the base model), the system integrator (built the application), the platform operator (configured the system), the deploying institution (runs it in production), the end user / affected party (bears the consequences, with no prior recourse), and the regulator / legal system (arrives after the fact). Each party holds partial visibility, partial control, and partial oversight; no single party holds full accountability.]
The accountability web: in a typical AI deployment chain, responsibility for any given failure is distributed across at least five parties — each with partial information, partial control, and a plausible case that the failure originated elsewhere. The legal system arrives, if at all, long after the harm.

The Chain of Partial Responsibility

The foundation model developer — the company that trained the underlying AI — made decisions about training data, model architecture, fine-tuning objectives, and safety filtering that shaped what the system is fundamentally capable of and where its failure modes reside. If the model was trained on biased data that produces discriminatory outputs, those outcomes are, in a meaningful sense, traceable to decisions made at this layer. The developer has the deepest technical knowledge. The developer also has the least direct visibility into how the model will be deployed.

The system integrator — the developer or company that built a specific application on top of the foundation model — made decisions about how the AI is configured, what data it accesses, what tools it can use, and what guardrails are applied. If an agentic AI system causes harm because its tool access was too broad or its authorization scope was inappropriately configured, the integrator is the most proximate technical cause. The integrator has access to the model’s documented behaviors but does not control the underlying weights.

The deploying institution — the school, hospital, employer, or government agency that has put the system into production — made procurement decisions, accepted vendor terms, defined use policies, trained or failed to train staff, and created the context in which the AI operates. It is often the least technically sophisticated link in the chain and frequently the one with the most direct accountability to the people ultimately affected. It also often has the least visibility into what the system is actually doing.

The end user — the student, patient, employee, or citizen whose life is actually affected — had the fewest choices of all, and bears the consequences of choices made by all the parties above. In most current frameworks, they also have the least legal recourse and the least information about how the decision that affected them was made.

Legal scholars Ryan Calo and Danielle Citron have separately documented the profound inadequacy of existing legal frameworks for assigning responsibility in precisely these chains (Calo, 2017; Citron & Pasquale, 2014). Product liability law was designed for physical goods with identifiable defects. Professional malpractice frameworks were designed for individual practitioners exercising professional judgment. Neither maps cleanly onto a distributed pipeline where a harmful output emerges from the interaction of decisions made by multiple parties, none of whom was individually negligent in a traditional sense.

Doshi-Velez et al. (2017), writing for Harvard’s Berkman Klein Center, argued that the concept of explainability is not merely an epistemic nicety in AI systems — it is a precondition for accountability. If no one in the chain can fully explain why a system produced a particular output, then no one in the chain can be held accountable for it in any meaningful legal or ethical sense. The opacity of the decision-making process becomes, in effect, a structural immunity to accountability. This is what Frank Pasquale called the “black box society” — a world in which consequential decisions are made by systems whose workings are invisible to the people they affect (Pasquale, 2015).

The Legal and Ethical Vacuum

Regulatory lag is one of the most consistent features of technological development. It is not a failure of governance so much as a structural constraint: regulation requires observation, deliberation, and legislative process, all of which take time, while deployment requires only a decision to deploy. The automobile existed for decades before seatbelts were required. Financial derivatives existed for decades before the systemic risks they posed were understood well enough to regulate. Social media platforms accumulated billions of users before anyone had seriously attempted to govern them.

AI is accelerating this gap to a degree that makes historical analogies imperfect. The deployment of consequential autonomous AI systems is not proceeding at the pace of automotive or even financial innovation. It is proceeding at the pace of software: near-instantaneous, globally scalable, and largely invisible to the oversight mechanisms designed for physical-world deployments.

Visual 3: The Regulation Gap — AI Capability vs. Governance Framework Development (2016–2028)

[Chart: maturity level (low to very high) of AI capability and deployment versus regulatory and governance frameworks, 2016–2028, with milestones for GPT-3, ChatGPT, the Biden Executive Order, and the EU AI Act. The capability curve climbs steeply while the governance curve lags, and the gap continues to widen into the projected period. Illustrative model based on published policy milestones and AI capability benchmarks.]
The gap between AI deployment maturity and regulatory framework maturity has widened every year since 2017 — and is not yet converging. EU AI Act (2024) and the Biden Administration’s Executive Order (2023) represent meaningful first steps, but regulatory coverage of agentic AI specifically remains sparse.

What Governance Actually Exists

The most significant regulatory development to date is the European Union’s Artificial Intelligence Act, approved by the European Parliament in March 2024. The EU AI Act establishes a risk-tiered framework that applies different levels of regulatory obligation depending on the intended use of an AI system — banning certain applications outright as posing unacceptable risk (such as real-time biometric surveillance in public spaces, with narrow exceptions), requiring conformity assessments and human oversight mechanisms for high-risk deployments (including in education, healthcare, and employment), and imposing transparency requirements across the full deployment chain (European Parliament, 2024). It is the most comprehensive AI governance framework yet enacted anywhere in the world, and it has set the terms for how other jurisdictions are beginning to think about the problem.

The Biden Administration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directed federal agencies to develop sector-specific guidance, established reporting requirements for frontier AI developers, and charged NIST with developing AI safety standards (The White House, 2023). That order was rescinded in January 2025, and its successor policy framework remains an open question as of this writing.

What does not yet exist — anywhere — is a comprehensive legal framework that clearly assigns liability when an autonomous AI system causes harm in a deployment chain involving multiple parties. The insurance industry is beginning to grapple with this: Lloyd’s of London issued guidance in 2023 explicitly excluding certain AI-related losses from standard cyber insurance policies, and specialized AI liability products are beginning to emerge in the specialty insurance market. But the actuarial models for pricing AI risk are in their infancy, because the historical loss data does not yet exist at the scale needed to model it. The insurance industry develops its products in response to observable losses — and we have not yet had enough of the right kind of losses to generate the statistical basis for a mature market.

The constitutional dimension deserves particular attention in the context of public institutions. When a government agency uses an automated system to make decisions about benefits, custody, criminal sentencing, or educational placement, the due process requirements of the Fifth and Fourteenth Amendments do not disappear simply because a machine made the decision. Danielle Citron and Frank Pasquale argued more than a decade ago that automated government decision-making triggers due process obligations — including the right to know the basis for a decision and the right to meaningfully contest it — and that most automated systems in public use fail to satisfy those obligations (Citron & Pasquale, 2014). That argument has become more urgent, not less, as the systems making those decisions have become more opaque and more agentic.

Regulation Status Check — 2026

EU AI Act (2024): Tiered risk framework, high-risk sector requirements, transparency mandates — the most comprehensive enacted framework globally. Enforcement timelines rolling in through 2027.

U.S. Federal: Sector-specific agency guidance (FTC, HHS, DOE); no enacted comprehensive AI liability statute as of this writing.

State-level: Patchwork of sector-specific bills in California, Illinois, Texas, Colorado — with limited cross-state harmonization.

Insurance: Emerging specialty products; standard cyber policies increasingly excluding AI-specific losses.

Agentic AI specifically: Largely unaddressed in any current framework as a distinct category.

When Agents Interact With Other Agents

The accountability problems described so far — the cascade effect, the distributed responsibility web, the regulatory vacuum — apply to single autonomous AI systems operating in defined environments. They are challenging. The next development in the deployment landscape is substantially more challenging: multi-agent systems, in which autonomous AI agents interact with, commission, and respond to other autonomous AI agents.

Visual 4: Multi-Agent Interaction — Emergent Behavior in Agent-to-Agent Systems

[Diagram: an orchestration layer. A human supervisor (oversight only) sits above an orchestrator agent that delegates to and monitors four specialized sub-agents: a research agent (web, data), an execution agent (forms, APIs), a communications agent (email, notifications), and a decision agent (approve, deny), each calling external APIs. Emergent lateral communications between sub-agents are neither explicitly programmed nor fully auditable; responsibility diffuses further as behaviors arise that no single designer anticipated.]
In multi-agent architectures, an orchestrator AI commissions specialized sub-agents to handle distinct tasks. Lateral communications between sub-agents generate emergent behaviors that fall outside any individual agent’s documented scope — and outside any individual party’s accountability perimeter.

The multi-agent paradigm is not a future scenario. It is already operating in enterprise automation, financial trading infrastructure, and increasingly in advanced ed-tech platforms. An orchestrator agent receives a high-level goal — resolve this customer complaint, optimize this investment portfolio, prepare this student’s personalized learning plan — and decomposes it into subtasks delegated to specialized sub-agents: one that retrieves relevant data, one that executes transactions, one that generates communications, one that makes approval decisions.

Each individual sub-agent may behave exactly as specified. The harm, if it occurs, emerges from the interactions between them — interactions that were not fully anticipated when any individual agent was designed. Stuart Russell, one of the most widely cited researchers in AI safety, has argued that the fundamental challenge of agentic AI is not that agents will pursue goals destructively out of malevolence, but that they will pursue goals precisely, in ways that produce outcomes their designers did not intend — a problem he terms “misalignment between the objective we specify and what we actually want” (Russell, 2019). In a multi-agent system, that misalignment can compound recursively across every node in the network.

The accountability question does not get easier when you add more agents to the chain. It gets harder by roughly the square of the number of agents involved.
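That quadratic growth can be made concrete in a toy sketch. The agent names, capabilities, and interfaces below are hypothetical, not any real framework's API; the structural point is that lateral interaction channels, each a place where unanticipated behavior can arise, grow as the square of the number of agents:

```python
# Minimal orchestrator/sub-agent sketch (all names hypothetical).
from itertools import combinations

class Agent:
    def __init__(self, name, capability):
        self.name = name
        self.capability = capability

    def act(self, subtask):
        # Each agent performs only its own documented capability.
        return f"{self.name}:{self.capability}:{subtask}"

def orchestrate(goal, agents):
    """Decompose a goal into one subtask per agent and collect results."""
    return [agent.act(f"{goal}/step-{i}") for i, agent in enumerate(agents)]

agents = [Agent("research", "retrieve"), Agent("exec", "transact"),
          Agent("comms", "notify"), Agent("decide", "approve")]

results = orchestrate("resolve-complaint", agents)

# Pairwise interaction channels between sub-agents, none of which
# appears in any single agent's specification:
channels = list(combinations([a.name for a in agents], 2))
print(len(agents), "agents,", len(channels), "lateral channels")
# 4 agents yield 6 lateral channels; 10 agents would yield 45.
```

Each `act` call here is individually trivial and correct; the accountability problem lives entirely in the `channels` list, which no single agent's designer owns.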

What This Means for Education

If this series has felt, at points, like it was describing a world that exists somewhere other than your school building, this section is the corrective. The accountability crisis in autonomous AI is not a problem for financial regulators and hospital administrators. It is a problem for anyone who has deployed — or is being asked to deploy — an AI system that makes or informs consequential decisions about students.

Consider the accountability chain in a typical ed-tech AI deployment: a foundation model developer builds a large language model. A software company builds an adaptive learning platform on top of it. A district IT department procures the platform through a standard vendor process. A classroom teacher activates the AI tutor feature and assigns students to use it. The AI makes recommendations about which students need intervention, what reading level a student should be assigned, whether a submitted essay shows signs of academic integrity concerns, or whether a student’s behavioral pattern warrants a referral. When one of those recommendations is wrong — and some of them will be — who is responsible?

In most districts operating today, the honest answer is: it has not been determined. The vendor contract almost certainly contains limitation-of-liability clauses that significantly constrain the developer’s exposure. The platform operator may or may not have reviewed those clauses with legal counsel. The teacher who acted on the recommendation may have believed it was more authoritative than it was. And the student who was affected has essentially no recourse through the standard processes that exist.

This is not a hypothetical. Obermeyer et al. (2019) documented precisely this dynamic in healthcare, where a widely deployed commercial algorithm used to identify patients for high-risk care management systematically underestimated the health needs of Black patients — not because it was programmed to discriminate, but because it used healthcare spending as a proxy for healthcare need, and Black patients had historically received less care even when their clinical needs were equivalent. The algorithm was performing exactly as specified. The specification was the problem. And the harm it caused before it was identified and corrected was distributed across thousands of patients and dozens of institutions, none of which had the individual visibility to see the pattern.

The parallel in education is not difficult to draw. An AI system that identifies students for gifted programming, academic intervention, or disciplinary referral — and that was trained on historical data encoding historical inequities — will reproduce those inequities at the speed of software unless someone with the appropriate visibility, the appropriate authority, and the appropriate mandate actively looks for the pattern and intervenes. That someone must be a human being. The question is whether the governance structures exist to ensure that human oversight is genuinely operational rather than merely nominal.

What Teachers Can Do Now

The accountability crisis in AI does not require teachers to become lawyers or engineers. It requires them to become something they are already trained to be: professional practitioners who exercise judgment about the tools and methods they use, and who take responsibility for the decisions they make on behalf of students. Applied to AI, that professional identity means several concrete things.

Treat “AI-recommended” as “AI-suggested.” The most consequential mindset shift available to classroom teachers right now is the distinction between a recommendation and a decision. An AI tool that identifies a student as at risk, flags an essay for academic integrity review, or suggests a differentiation strategy is providing an input to your professional judgment — not substituting for it. Applying the recommendation without active evaluation of whether it is correct in this specific case, for this specific student, in this specific context, is not efficient professional practice. It is the abdication of the judgment for which you are, in fact, accountable.

Ask the explainability question before you act. Before acting on any AI-generated recommendation that affects a student’s educational trajectory, apply a simple test: can you explain — to the student, to a parent, to a principal — why this decision was made in terms that go beyond “the AI said so”? If you cannot, you are not yet in a position to act on that recommendation. “The algorithm flagged this” is not a sufficient professional justification for any consequential educational decision. If the system cannot explain its reasoning in terms you can evaluate and articulate, the decision has not been made; you are in the information-gathering phase.

Document your AI use with the same rigor you apply to other professional decisions. If you use AI tools to inform recommendations about student placement, intervention, or disciplinary review, document what the tool suggested, what you evaluated, what you changed, and why. This documentation is not bureaucratic overhead; it is the evidentiary basis for demonstrating that you exercised professional judgment rather than automated compliance. In a world where accountability frameworks for AI are still being developed, having a contemporaneous record of your decision-making process is your professional protection.

Know what your vendor contracts actually say. The FERPA protections that apply to paper student records do not automatically and equivalently apply to continuous behavioral data generated by ed-tech AI systems. Reidenberg and Schaub (2018) documented the significant gaps between FERPA’s design assumptions and the data practices of modern ed-tech vendors. If you are an instructional leader or department head with any influence over technology procurement, the contractual questions about data ownership, retention periods, algorithmic decision-making disclosure, and liability allocation should be resolved before deployment — not discovered when a problem occurs.

Build your own error-recognition fluency. You do not need to understand the technical architecture of AI systems to recognize when they are failing. You need to develop the habit of asking: does this recommendation make sense given what I know about this student? Is this pattern of recommendations consistent with what I know about this group of students? Is this system producing systematically different outcomes for students from different demographic groups? The teacher who catches an AI error is not the teacher who understands backpropagation; it is the teacher who has cultivated skeptical attention to what the system is actually doing rather than what it is supposed to be doing.
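The last of those questions, systematically different outcomes across groups, can be screened with nothing more than counting. A minimal sketch follows; the group labels, the data, and the 0.8 screening threshold (a common "four-fifths rule" convention, not a legal standard) are all illustrative:

```python
# Illustrative disparate-impact screen: compare the rate at which a
# system flags students across groups. Data and threshold hypothetical.

def selection_rates(records):
    """records: iterable of (group, flagged) pairs -> flag rate per group."""
    totals, flagged = {}, {}
    for group, was_flagged in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Lowest group rate divided by highest; below 0.8 warrants review."""
    return min(rates.values()) / max(rates.values())

records = [("A", True)] * 30 + [("A", False)] * 70 \
        + [("B", True)] * 15 + [("B", False)] * 85

rates = selection_rates(records)
print(rates)                      # {'A': 0.3, 'B': 0.15}
print(impact_ratio(rates) < 0.8)  # True: a pattern worth a human look
```

A failing screen proves nothing by itself; it is a trigger for exactly the skeptical human attention the paragraph above describes.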

What Leaders Should Be Considering

For school administrators, technology directors, curriculum leaders, and school board members, the accountability crisis in AI demands a specific kind of strategic action that most districts have not yet taken: the deliberate, documented mapping of accountability before AI systems are deployed rather than after something goes wrong.

The single most protective governance step any district can take is to require, for every AI system with consequential decision-making authority, an explicit accountability mapping document that answers five questions: What decisions does this system make or inform? Who in the institution is accountable for reviewing those decisions before they are acted upon? What process exists for a student or family to contest an AI-informed decision? How will the institution monitor for disparate impact across demographic subgroups? And what is the incident response process if the system causes harm? The inability to answer any of these questions before deployment is not a process problem; it is a signal that the district is not yet in a position to deploy the system responsibly.
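One way to operationalize that mapping is as a literal pre-deployment gate. The sketch below is illustrative only; the field names are paraphrases of the five questions above, not a published standard:

```python
# Sketch of the five-question accountability mapping as a pre-deployment
# gate (field names illustrative). A system whose mapping leaves any
# question unanswered should not clear deployment.

REQUIRED_QUESTIONS = [
    "decisions_made_or_informed",
    "human_reviewer_of_record",
    "contest_process_for_families",
    "disparate_impact_monitoring_plan",
    "incident_response_process",
]

def ready_to_deploy(mapping):
    """Return (ok, missing): ok only if every question has an answer."""
    missing = [q for q in REQUIRED_QUESTIONS if not mapping.get(q)]
    return (len(missing) == 0, missing)

mapping = {
    "decisions_made_or_informed": "flags essays for integrity review",
    "human_reviewer_of_record": "department chair",
    "contest_process_for_families": "",   # not yet defined
    "disparate_impact_monitoring_plan": "quarterly subgroup audit",
    "incident_response_process": "",      # not yet defined
}

ok, missing = ready_to_deploy(mapping)
print(ok)       # False
print(missing)  # the two unanswered questions block deployment
```

The value of encoding the gate, even this crudely, is that an unanswered question becomes a visible blocker rather than a discovery made after harm occurs.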

District counsel should review every AI vendor contract — not just for data privacy provisions, but for limitation-of-liability clauses, arbitration requirements, intellectual property provisions, and algorithmic transparency commitments. The Partnership on AI has published governance frameworks that provide practical guidance for institutional AI deployment (Partnership on AI, 2023). The OECD’s AI Policy Observatory offers comparative policy analysis across jurisdictions (OECD, 2023). Neither of these resources requires specialized technical knowledge to apply. They require institutional will to prioritize governance before deployment.

Risk management and insurance coverage should be reviewed explicitly for AI-related liability. Most institutional insurance policies were not designed to cover the kinds of harm that autonomous AI systems can produce — particularly in domains like educational assessment, behavioral flagging, or personalized learning recommendation. The gap between existing coverage and actual exposure may be substantial, and it is not visible until a loss event requires examination of the policy terms.

Finally, invest in the human expertise required to exercise genuine oversight. The educational institution that deploys AI and then relies on that AI to supervise itself has not solved the accountability problem; it has made it invisible. Genuine oversight requires human professionals with the authority, the access, and the professional capacity to evaluate what AI systems are doing and to intervene when they are doing it wrong. That capacity does not emerge from a software training. It is built through deliberate investment in the professional development of educators as critical evaluators of AI performance.

A Final Reflection: Verifying Behavior

This series began with a question about perception: if everything can be faked, what counts as proof? That question turned out to lead somewhere its framing didn’t initially suggest. The verification crisis was never really about technology. It was about the collapse of a particular social contract — the assumption that shared observation of reality could anchor shared judgment. Deepfakes didn’t create that problem. They made visible a vulnerability that was already there.

Part II extended that logic: if we cannot trust what we see, we are forced to move up the chain to the systems that verify what we see. And those systems — cryptographic provenance standards, institutional authentication layers, trust infrastructure — raise their own question about power and gatekeeping. Trust that used to be distributed across individual perception becomes concentrated in whoever controls the verification layer.

Part III introduced the second major shift: AI has moved from responding to acting. The practical implications of that shift — for workforce, for education, for institutional operations — are enormous and still being worked out.

And here, in Part IV, the question that has been implicit all along becomes explicit: we struggled to verify content. We built systems to verify sources. We are now deploying systems capable of autonomous action. And when those systems cause harm — and they will cause harm, because all powerful tools wielded at scale eventually do — we do not yet have clear answers to the oldest question in ethics: who is responsible?

The parallel between the verification crisis and the accountability crisis is not accidental. Both are crises of judgment: the capacity to evaluate, to reason about reliability, and to take responsibility for conclusions. The deepfake era required us to develop better judgment about what we see. The agentic era requires us to develop better judgment about what we delegate — and what we never delegate.

Stuart Russell, writing about the control problem in AI, argues that “the goal of AI should be to build systems that serve human values, not to build systems that are maximally capable” (Russell, 2019). That distinction is not technical. It is philosophical, and it is, ultimately, the work of education: equipping people to identify what values they hold, to articulate why they hold them, to evaluate whether the tools and systems they are using serve those values, and to accept accountability for the choices they make. The machines are getting better at acting. The irreplaceable human work is getting clearer about what we want them to do — and who answers when something goes wrong.

We struggled to verify information. Now we must verify behavior. The question is not whether AI systems will fail. It is whether the humans responsible for deploying them have done the work to be accountable when they do.


References

  1. Calo, R. (2017). Artificial intelligence policy: A primer and roadmap. UC Davis Law Review, 51, 399–435.
  2. Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–33.
  3. Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Scott, K., Schieber, S., Waldo, J., Weinberger, D., Weller, A., & Wood, A. (2017). Accountability of AI under the law: The role of explanation. Berkman Klein Center Working Paper.
  4. European Parliament. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence. Official Journal of the European Union. https://eur-lex.europa.eu
  5. Hadfield-Menell, D., Milli, S., Abbeel, P., Russell, S., & Dragan, A. (2017). Inverse reward design. Advances in Neural Information Processing Systems, 30.
  6. Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2017). The flash crash: High-frequency trading in an electronic market. Journal of Finance, 72(3), 967–998. https://doi.org/10.1111/jofi.12498
  7. National Transportation Safety Board. (2020). Collision between car operating with automated vehicle control systems and a crash attenuator, Mountain View, California, March 23, 2018. NTSB/HAR-20/01. https://www.ntsb.gov
  8. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
  9. OECD. (2023). OECD AI policy observatory: Trends and data. Organisation for Economic Co-operation and Development. https://oecd.ai
  10. Partnership on AI. (2023). About AI: Governance frameworks for responsible AI deployment. Partnership on AI. https://partnershiponai.org
  11. Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
  12. Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Basic Books.
  13. Reidenberg, J. R., & Schaub, F. (2018). Achieving big data privacy in education. Theory and Research in Education, 16(3), 263–279.
  14. Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
  15. Securities and Exchange Commission & Commodity Futures Trading Commission. (2010). Findings regarding the market events of May 6, 2010. Joint Advisory Committee on Emerging Regulatory Issues. https://www.sec.gov
  16. Securities and Exchange Commission. (2013). In the matter of Knight Capital Americas LLC: Administrative proceeding. File No. 3-15570. https://www.sec.gov
  17. The White House. (2023, October 30). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. Executive Office of the President. https://www.whitehouse.gov

Additional Reading

  1. Brundage, M., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. Future of Humanity Institute, Oxford. https://arxiv.org/abs/1802.07228
  2. Floridi, L., et al. (2018). AI4People — An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707.
  3. Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30, 411–437.
  4. Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2).
  5. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.