A disclosure note, flagged here rather than saved for the end: the author is developing a framework — Steerable — that attempts to implement what this article proposes. That context is relevant to how the argument should be read. Sections one through nine are the argument; section ten is the disclosure. The argument does not depend on the framework, but the reader should know the framework exists before deciding how much weight to give the argument.
1.
A few years ago, a widowed woman in her early sixties brought her retirement savings to a financial advisor. He documented her profile, built a structure explicitly conservative in spirit, and added one element designed to soften drawdowns over time: a blended position combining intermediate government bonds with a broadly diversified equity component. The construction was textbook. She trusted him and signed.
Two years later, the widow and her son were in a regulatory complaint proceeding. Her health had declined and she needed the money sooner than the structure allowed. The case ran for over a year, ended in a partial settlement, and resolved nothing that mattered.
Here is what was on the table:
- A client profile that read "conservative" in plain language, signed and dated.
- A recommendation aligned with that profile and with industry standard practice.
- A blended position whose specific allocation came from a portfolio structuring tool the advisor had used throughout 2022, optimized across roughly fifteen parameters he had reviewed but could not independently verify in any reasonable amount of time. No advisor in 2022 reasonably could.
- An exogenous shock — the simultaneous loss of value across bonds and equities in 2022, in a way that had not occurred in decades — that struck precisely the assumption the tool's optimization had relied on.
- A proceeding in which the advisor explained what he could, cited the historical data, demonstrated standard practice, and kept his license. The family received a settlement that did not approach the loss.
- A question that was raised, examined, and never cleanly answered: who actually designed the allocation that failed?
The proceeding described here follows the American regulatory model, in which individual advisor liability and licensing are the primary enforcement mechanisms; in Germany or Austria the equivalent route would run through institutional liability under MiFID II conduct obligations, but the structural question the proceeding surfaced — who designed the allocation that failed — is the same.
This article is about that question. It has no established name in regulatory or professional language, no clean answer in any current legal or compliance framework. And if the pattern of the last twenty-five years holds, it is the question that is about to be asked far more often than it is being asked today.
2.
The phenomenon needs a name before it can be addressed. Call it Ghost Ownership.
The term borrows from the vocabulary of absent presence — and requires one disambiguation. In financial crime frameworks, 'ghost ownership' refers to concealed beneficial ownership: the shell company, the nominee director, the arrangement designed to hide who actually controls an asset. That is a different problem with a different literature.8 The phenomenon described here appears in academic work as the 'attributability gap' — the dissolution of identifiable human authorship when decisions are substantially shaped by AI outputs.9 Ghost Ownership is the more evocative public term; attributability gap is the more precise regulatory and academic one. Both are used in what follows.
Ghost Ownership describes the state in which a decision is formally made by a human being but so substantially shaped by a machine-generated output that the actual authorship can no longer be cleanly assigned. The human signed. The system contributed. Neither alone produced the decision, and neither can be cleanly separated from it afterwards.
Three conditions, taken together, mark the phenomenon:
- The human in the chain could not, in any reasonable amount of time, have produced a decision of comparable quality without the system.
- Neither the human nor anyone outside the situation can reconstruct, with reasonable effort and the documentation available, which parts of the decision came from human judgment and which came from the system.
- The chain of authorship does not terminate. Earlier waves of professional tooling produced outputs that could, with effort, be traced back to identifiable human authors. Modern AI-generated outputs are different in kind: their internal logic is not authored by any single human, the training data is too vast to attribute, and the outputs themselves emerge from processes no participant in the chain can fully reconstruct.
When all three hold simultaneously, you are looking at Ghost Ownership. When only one or two hold, you are looking at ordinary tool use — the kind that has accompanied every profession since calculators replaced slide rules. The distinction matters because the response to ordinary tool use is professional judgment and standard practice. The response to Ghost Ownership has not yet been invented.
Two clarifications. First, "the chain" refers to the sequence of humans and systems that contributes to a single decision — for a financial advisor, this might be the advisor, the structuring tool, the compliance reviewer, and the client; for a physician using AI-assisted diagnosis, it might be the physician, the diagnostic system, the radiologist who validated training data, and the patient. The composition of the chain varies by domain. The structural property — that authorship dissolves somewhere along it — does not.
Second, this is not a moral failure on anyone's part. It is a structural property of the technology, and it is the reason Ghost Ownership cannot be resolved by asking professionals to be more careful or more diligent. The carefulness is already there. The diligence is already there. What is missing is a framework that recognizes the problem exists.
3.
Ghost Ownership is easy to confuse with three adjacent problems, and the confusion is the first reason it has not been named.
It is not the black box problem. The black box problem is about systems whose internal logic cannot be inspected from outside. Ghost Ownership is about the human in front of the system. A perfectly transparent algorithm whose every step could be audited would still produce Ghost Ownership the moment a human acted on its output without being able to reconstruct, afterwards, which parts of the decision were theirs.
It is not an explainability gap. Explainability tells you what the system was thinking. Ghost Ownership is about what the human was thinking, and whether that thinking can still be called the human's own once it has been so deeply shaped by the system's output. Even a perfectly explainable system, used by a human who understood every step, would still produce Ghost Ownership if the human could not have reached the same conclusion without it.
It is not ordinary tool dependency. Surgeons use imaging systems they did not design, pilots use autopilots they cannot rebuild from memory, architects use structural analysis software whose internal calculations no single person verifies in detail. None of these constitute Ghost Ownership, because in each case the chain of authorship still terminates in identifiable human decisions: someone built the imaging system, someone certified the autopilot, someone signed off on the structural model. The tool extends human capability without dissolving the question of who decided what.
Each of these adjacent problems has its own emerging response. Black-box research has produced interpretability techniques. Explainability has produced model cards and reasoning traces. Tool dependency has produced certification frameworks and professional liability insurance. None of these responses address Ghost Ownership, because none were designed to. They answer different questions.
The structure of the problem — responsibility that cannot be assigned to any single actor in a human-machine chain — has appeared before in the literature on autonomous systems, where it is called the moral responsibility gap.10 The difference between that framing and Ghost Ownership is one of degree, not kind: where autonomous systems act without human input, AI-augmented decisions involve a human who is present but whose contribution has been reduced to a degree where attribution becomes, at minimum, contested. The earlier literature focused on systems that act; Ghost Ownership focuses on systems that shape. The shaping version is harder to see and easier to rationalize away, which is why it has taken longer to name.
The concept of 'meaningful human control' — developed in the autonomous weapons literature to specify that human oversight requires both a tracking condition (the system responds to the relevant reasons of its designers) and a tracing condition (outcomes can be attributed to identifiable humans along the chain of design and operation) — describes precisely what Ghost Ownership erodes.11 The human is present. The button can be pressed. Both conditions have dissolved.
4.
So why is this question becoming urgent now?
Four developments are converging in 2026 that were not converging before, and they converge against a regulatory deadline: on August 2, 2026, the EU AI Act enters full enforcement for high-risk AI systems, a date now less than four months from the writing of this article and directly applicable to AI tools used in investment advice and portfolio structuring.12 None of the four alone would force the question. Together, they make it unavoidable.
The first is at the top of the market. In February 2026, researchers at one of the largest US investment banks published a generative foundation model trained on billions of trade events across more than nine thousand equities.1 The model predicts the next trade in the way large language models predict the next token. It generalizes across asset classes without recalibration, reproduces the statistical signatures of real markets, and operates in a closed loop where its own predictions influence the market state that conditions its next prediction. When systems of this kind enter the workflows of trading desks and asset managers, the gap between what they produce and what any individual human can verify becomes permanent.
The second is one layer down, where these outputs reach the people who actually make decisions for clients. Platforms now exist that wrap institutional-grade quantitative models into recommendations passed through to wealth managers, brokers, and ultimately to end clients.2 Each handoff in this chain introduces a new opportunity for authorship to dissolve, and the chain is getting longer, not shorter.
The third is happening in plain sight. A discipline has emerged in the last year called Answer Engine Optimization.3 Its premise is that when AI systems become the primary way people find information, the question is no longer how to rank highly in search results but how to be cited as a source by the AI itself. Companies now actively engineer their content so that large language models treat them as authoritative. The inputs to the systems that will shape recommendations are being shaped by parties whose interest is to be cited, not necessarily to be correct.
The fourth is the explicit acknowledgment from the AI industry itself. In January 2026, the chief executive of one of the leading AI research companies published a thirty-eight-page essay describing the current moment as a rite of passage for which humanity has, in his own words, no governance model, no legal framework, and barely a shared vocabulary.4 The essay was not written by a critic of the technology. It was written by someone whose company is building it, whose company benefits from its advance, and who therefore has every commercial reason to play down the risks. He did not. He named the absence of a vocabulary as one of the central problems we face.
These four developments converge on the same place: they make Ghost Ownership a present problem, not a future one.
There is one more thing to note, and it is what turns convergence into pattern. Three times in the last twenty-five years, a wave of technological change in financial services has been followed, with a delay of a few years, by a wave of legal aftermath. After 2000, claims emerged against advisors who used the new online platforms.5 After 2008, claims emerged against advisors who recommended structured products whose internal mathematics no one in the chain had fully understood.6 After 2022, claims emerged against firms whose algorithmic portfolio products had been marketed in ways their actual behavior did not support.7
Each wave followed the same shape. A platform-level change introduced a new verification asymmetry. The asymmetry was tolerated because it was the new standard practice. Then a shock arrived that struck precisely where the asymmetry was hidden, and the legal aftermath followed not against the unusual but against the typical.
Three data points are not a proof. They are enough to be noticed. If the fourth wave follows the same pattern, it will not be against advisors who were careless. It will be against advisors who, in 2026, used the AI tools that everyone else was using.
5.
A reasonable reader will assume that regulation must already address this somewhere. Financial services is one of the most heavily regulated sectors in the world, and AI is one of the most actively legislated technologies of the decade. Surely between MiFID II in Europe, the EU AI Act, the GDPR's automated decision-making provisions, the SEC's guidance on AI in advisory services, and the various national supervisory frameworks, the question of authorship has been covered.
It has not. What has been covered is adjacent.
MiFID II requires that investment firms assess the suitability of their recommendations and document the assessment. But suitability is a property of the recommendation, not of its authorship. A recommendation can be perfectly suitable and still have been generated by a process no human in the chain can reconstruct. Suitability asks whether the outcome fits the client. It does not ask who decided.
The EU AI Act requires human oversight of high-risk AI systems. This is closer, but oversight as currently defined means that a human must be in a position to intervene, override, or stop the system. It does not require that the human be the identifiable author of the decision the system contributes to. The oversight requirement is satisfied when a human can press a button. Article 13 of the Act does move closer: it requires that high-risk AI systems be designed to ensure transparency sufficient for deployers to interpret outputs and use them appropriately.14 Recital 48 reinforces this by requiring that persons assigned human oversight have the necessary competence, training, and authority to carry out that role — language aimed at preventing oversight that is nominal rather than substantive.15 Article 86 goes further still, granting affected persons the right to obtain from the deployer clear and meaningful explanations of the role the AI system played in any decision that produces legal effects or similarly significant impact.21 ESMA's 2024 public statement on AI in investment services extends comparable obligations to the advisory context.16 These provisions are closer than anything that came before. They require that the human be able to understand the output, and that the affected person be able to demand an explanation of the system's role. They do not require that the human's own response to that output — the judgment formed, the confidence held, the alternatives considered and rejected — be preserved as a record. The system's role must be explainable. The human's role need not be. The authorship question is untouched.
The most operationally immediate framework is also the one least discussed in the context of AI authorship. The Digital Operational Resilience Act (DORA), in force since January 2025, requires financial institutions to maintain full auditability of third-party ICT services under Articles 28 and 30 — which includes AI tools licensed from external vendors.13 When an AI-generated recommendation contributes to a client outcome that is later disputed, DORA establishes the institutional reporting and audit trail obligations. What DORA does not establish is the individual authorship record that Ghost Ownership requires: it addresses operational resilience and third-party risk, not the question of who decided what, with what reasoning, against what alternatives.
GDPR Article 22 grants individuals the right not to be subject to decisions based solely on automated processing. This is the closest any current regulation gets to the authorship question — and it is the place where the gap is sharpest. Article 22 protects against decisions made entirely by machines. It does not protect against decisions made jointly by a machine and a human in a way that dissolves the joint authorship. The moment a human signs off on an AI-generated recommendation, Article 22 no longer applies. The decision is, formally, no longer "solely" automated. Whether it is meaningfully human, the regulation does not ask.
The SEC's guidance on AI in advisory services, and its national equivalents, focus on disclosure: clients should be informed when AI is used. Disclosure tells the client that AI is involved. It does not tell anyone who, after the AI was involved, can be identified as the author of the decision.
The pattern is the same across all of these frameworks. Each recognizes that something has changed. Each addresses some aspect of that change. None addresses the specific question of where authorship terminates when AI-generated outputs flow into human decisions. The frameworks were built for an earlier problem and have been extended, with care and effort, to cover as much of the new terrain as their original architecture allowed. The OECD's Due Diligence Guidance for Responsible AI, published in February 2026, is the most recent institutional attempt to establish accountability standards for AI across business sectors including financial services — and confirms the pattern: governance processes, stakeholder obligations, and risk frameworks are specified; the individual authorship question is not.17 Ghost Ownership lies outside those limits.
This is not a criticism of regulators. The frameworks were drafted at a time when the question was not yet visible. It is hard to legislate against a phenomenon that has no name, and it is the absence of the name, more than any individual gap in any framework, that explains why the regulatory response has been adjacent rather than direct. You cannot build a rule around a thing you cannot point to.
6.
There is a counterintuitive feature of Ghost Ownership that needs to be stated directly: the problem does not get smaller as the technology gets better. It gets larger.
The instinct most people have is that this is a transitional problem. Systems today are imperfect, the reasoning goes, and when they become more reliable the question of who decided will fade. The human will still sign, the system will still contribute, but if the contribution is consistently good then authorship becomes academic. We do not, after all, agonize over the authorship of a calculation produced by a calculator.
This reasoning is wrong, and the wrongness matters.
Authorship does not depend on whether the recommendation is good. It depends on whether the human in the chain could have reached the same conclusion without the system. As systems become more capable, that condition becomes harder to satisfy, not easier. A system that produces recommendations beyond what any human could independently verify is, by definition, a system whose output the human cannot author.
There is a second turn of the screw. As systems become more reliable, trust in them grows. As trust grows, the human becomes less likely to question the output, less likely to seek verification, and less likely to maintain the kind of internal review that would let them reconstruct what they actually decided. Better systems produce more confident humans. More confident humans produce less verification. Less verification produces deeper Ghost Ownership.
A specific objection deserves direct engagement. Explainable AI — the research program that produces model cards, reasoning traces, and feature attribution maps — is real progress on a related problem. It makes the system's reasoning legible. It does not make the human's reasoning legible. Even a perfectly explainable system, used by a human who understood every step of its output, would still produce Ghost Ownership if the human could not have reached the same conclusion independently. Explainability is a property of the system. Authorship is a property of the human. A system can be fully explainable while the human who uses it is still a ghost author — present, signing, unable to reconstruct what they actually decided.
The implication is uncomfortable but unavoidable. The framework that addresses Ghost Ownership cannot be a transitional measure. It has to be a permanent part of how AI-augmented decisions are made, because the gap it addresses will widen as AI capability grows. And it cannot be a framework that depends on humans being able to verify the system's output, because by the time the framework matters most, that verification will no longer be possible.
What is needed is something else: a record of the chain of authorship that does not depend on recoverability of the underlying system. Not a certification that the decision was correct — an attribution that documents who decided, on what basis, with what stated confidence, and with what alternatives considered and rejected.
Such a record does not yet exist in any standardized form. The pieces exist scattered across audit trails, model cards, decision logs, and compliance documentation, but they are not assembled into something a regulator could point to or a court could rely on. The gap is not merely conceptual. The RegTech market reached nearly fifteen billion dollars in 2025 and is projected to exceed one hundred billion by 2035, built largely on transaction surveillance, anomaly detection, and automated compliance monitoring.18 None of the leading platforms in that market produces, as a standard artifact, a record that captures the reasoning of the human who acted on an AI-generated recommendation — the stated basis, the confidence, the alternatives considered. The assembly is the work that has not yet been done.
7.
If a framework for Ghost Ownership has to be assembled, six things would need to be recorded for every AI-augmented decision.
Who. Not the institution, not the team, but the specific human being whose name attaches to the decision and who can be identified, after the fact, as its responsible author. Current systems often record only the institution, or only the most visible role. A framework for Ghost Ownership needs each human in the chain to be identifiable, with their specific contribution captured.
On what basis. Not just what data was available, but what was actually consulted in the moment of decision, and which inputs from the system were treated as authoritative. In a Ghost Ownership scenario, the system's output is one of the inputs — often the dominant one. Recording which inputs the human relied on is the only way to reconstruct what the human's actual contribution was.
With what stated confidence. Confidence is the load-bearing element of any honest decision under uncertainty. A recommendation made with eighty percent confidence is a different decision from the same recommendation made with fifty percent confidence, even when the recommendation is identical. Current systems frequently record the recommendation without recording the confidence, which makes it impossible to distinguish between a decision made carefully and one made hopefully.
Against what alternatives. Every meaningful decision is a choice among options, and the rejected options carry as much information as the chosen one. In a Ghost Ownership scenario, alternatives are often generated by the system itself — and if the human cannot be shown to have considered them, the system's choice becomes the human's choice by default.
As a forcing function in the moment, not just a record after the fact. This is the part that current discussions of AI documentation tend to miss. A Ghost Ownership framework is not only a forensic tool to be retrieved when something goes wrong. It is a reflective tool that, by requiring the human to record these things at the moment of decision, forces them to confront the question they would otherwise skip: what am I actually deciding here, and on what basis? The record matters retrospectively. The act of recording matters in the present.
In a self-contained form. The framework cannot assume that the AI tool will still be available in three years when a complaint proceeding asks what happened. It has to capture enough about the decision-making moment to stand on its own as a documentary record, even if the system has been retired, replaced, or substantially modified.
The depth of documentation these six components require should be proportionate to the complexity and consequence of the decision: a standard rebalancing within an established mandate calls for less than a first-time structured product recommendation for a client entering the drawdown phase of retirement.
These six components are the minimum specification. They are not exotic — each is partially implemented somewhere in the financial system. What is missing is their assembly into a single artifact that does all six at once, that travels with the decision, and that can be produced on demand when the question of authorship is finally asked. The artifact does not yet have a standardized name. What it would do is clear enough.
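To make the specification concrete, the sketch below shows one way the six components could be captured as a single structured record. It is illustrative only: the class name, field names, confidence scale, and JSON serialization are assumptions of this sketch, not an existing standard and not any particular framework's format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ConsideredAlternative:
    """An option that was on the table and the stated reason it was rejected."""
    description: str
    reason_rejected: str


@dataclass
class AuthorshipRecord:
    """Illustrative sketch of a decision-authorship record.

    The structure mirrors the six components described above;
    every field name here is an assumption, not a standard.
    """
    # Who: the specific human whose name attaches to the decision.
    author_name: str
    author_role: str
    # On what basis: inputs actually consulted, and which system
    # outputs were treated as authoritative.
    inputs_consulted: list[str]
    system_outputs_relied_on: list[str]
    stated_basis: str
    # With what stated confidence, fixed at the moment of decision.
    stated_confidence: float  # e.g. 0.8 for "eighty percent"
    # Against what alternatives, including why each was rejected.
    alternatives: list[ConsideredAlternative]
    # Recorded in the moment: timestamp set when the record is created.
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Self-contained form: a plain-text artifact that remains
        readable after the AI system that produced the output is
        retired, replaced, or substantially modified."""
        return json.dumps(asdict(self), indent=2)
```

A record along these lines, written at the moment of sign-off and stored as plain text, would satisfy the self-containment requirement without depending on the vendor's tool still existing when the question of authorship is finally asked.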
8.
Why has none of the existing AI governance programs been extended to do this? The question deserves a careful answer because each of these programs is doing serious work.
Constitutional AI trains models against a central document of values that the model is meant to internalize and apply across every task. It is sophisticated and has produced measurable improvements in model behavior. But it operates on the model, not on the human who uses the model. A perfectly constitutionally trained system will still produce Ghost Ownership the moment a human accepts its recommendation without being able to author it.
Model cards and system cards describe what a system was trained on, what its known failure modes are, and what its intended uses are. They describe the system, not the decision. A model card tells you what the model is. It does not tell you who used it, on what occasion, with what input, to produce what recommendation that a specific human then acted on.
ISO 42001, the international standard for AI management systems, requires organizations to establish governance processes and assess risks. Its focus is the organization, not the individual decision. An organization can be fully compliant while still producing decisions whose individual authorship cannot be reconstructed. NIST's AI Risk Management Framework has the same property and the same limit.
Audit trails and decision logs are the closest existing analog. They record events, timestamps, and sequences of actions. But audit trails capture what happened, not who was the author of what happened. An audit trail can show that a recommendation was generated by a tool, reviewed by an advisor, and accepted by a client. It cannot, on its own, answer the question of which of those three was the author.
Of these, ISO 42001 comes closest. Beyond its organizational governance requirements, its Annex A controls require organizations deploying AI to establish human oversight mechanisms and maintain records demonstrating that AI outputs were reviewed before action was taken.19 This is the nearest any current standard comes to a formal requirement for human authorship proof. But it is not near enough. It requires evidence of review — a record that a human was present. It does not require evidence of reasoning — what the human actually decided, on what basis, with what confidence, and against what alternatives. The gap between those two requirements is the gap this article is about.
Each of these programs is valuable. None is wrong in any way that matters. They simply do not, individually or together, answer the question Ghost Ownership poses, because they were not built to. The work of building something that does has to start from the question itself.
9.
This article has named a phenomenon, distinguished it from adjacent problems, traced four developments that make it urgent in 2026, identified a historical pattern that suggests the urgency will grow, examined the regulatory frameworks and shown why they do not yet address it, described the structural paradox that prevents better technology from solving it, specified what a framework that did address it would have to do, and explained why none of the existing programs can be extended to do it.
If the argument has been persuasive, the next step is not for any single party to build the framework. The next step is for the question to enter the discussion in a form that can be argued about, refined, and corrected.
That is what naming is for. A phenomenon without a name cannot be debated, because each speaker has to reconstruct it from scratch every time they refer to it. A phenomenon with a name, even a provisional one, can be the subject of disagreement and progress. The name does not have to be final. It only has to be precise enough that two people can use it without misunderstanding each other.
Ghost Ownership is offered in that spirit. It is a working name for a problem that needs one. Other names may turn out to be better. Other formulations of the conditions may be more accurate. The historical pattern may not hold. None of that would invalidate the underlying point: the question of authorship in an era of AI-augmented decisions has to be asked with a precision the current vocabulary does not yet support.
For practitioners, there is a concrete first step that does not require any new framework, any new standard, or any new tool. Begin documenting, in your own decision-making, which recommendations came from which AI systems, what your reason was for accepting them, what your stated confidence was, and what alternatives you considered. Do it for yourself, not for compliance. Do it because the act of writing it down will tell you, in real time, which of your decisions you are still authoring and which have quietly become someone else's. That is the smallest possible version of the framework, and it can be started tomorrow.
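The practice can be as small as one structured entry per AI-assisted decision, appended to a local file. The sketch below is a minimal illustration of that habit; the field names and values are hypothetical examples, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

# One JSON line per AI-assisted decision: who relied on what, why,
# with what confidence, and against which alternatives.
# All values below are hypothetical examples.
entry = {
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "system_used": "portfolio structuring tool, version as displayed",
    "recommendation_accepted": "shift 10% of the allocation from bonds to equities",
    "my_basis": "client horizon unchanged; tool output matched my own rough check",
    "my_confidence": 0.7,
    "alternatives_considered": [
        "keep the allocation unchanged",
        "shift 5% instead and review in six months",
    ],
}

with open("decision_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry) + "\n")
```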
For practitioners in institutional settings, the first step looks different in form but is identical in substance. The question is not whether to document but whether your organization's current advisory workflow contains any point at which the human's actual reasoning — not merely their sign-off — is captured. If your CRM records the recommendation but not the advisor's stated basis for accepting the AI tool's output, your institution is producing Ghost Ownership at scale. The corrective is architectural rather than individual, and it begins with locating that gap in the workflow. The World Economic Forum's Responsible AI Playbook for Institutional Investors (2024) and the OECD's Due Diligence Guidance for Responsible AI (2026) both point toward the governance layer where this work belongs.20
What is not an option, for much longer, is to leave the question unasked. The next wave of complaints will make the asking unavoidable. The question is whether the vocabulary will already be in place when the asking begins, or whether it will have to be invented under pressure, in courtrooms and complaint proceedings, with all the disadvantages of defining a problem after the harm has occurred.
10.
A note on disclosure. The author of this article is working on a framework that attempts to do what the previous sections describe. It is called Steerable, and it is being developed first for certified financial planners in the German-speaking world, where regulatory pressure is most concrete and practitioners most directly feel the gap. The intention is to scale from there into enterprise compliance and, eventually, institutional governance over systems like the ones described in section four.
The framework is one attempt among what should be many. The name Ghost Ownership is offered as a contribution to the vocabulary, not as a proprietary term. Anyone who wants to use it, improve it, or replace it with something better is welcome to do so without permission. The framework itself is a separate matter; those who want to know more can find it at steerable.org.
That is the entire pitch. The article is the work that mattered.