Introducing Agentic Tool Sovereignty (ATS)
The EU AI Act regulates AI systems through pre-market conformity assessment (for high-risk systems) and role-based obligations; a fairly static compliance model that assumes fixed configurations with predetermined relationships.
However, real-life AI systems are rarely fixed or predetermined.
To understand this claim, one must first appreciate that cloud computing providers – the backbone of most web services, including APIs, databases, search engines, and so on – exist across jurisdictions and borders, leaving "less certainty for the customer in terms of the location of data placed into the Cloud and the legal foundations of any contract with the provider".
Legal scholarship has long since noted that the Internet allows EU law and non-EU legal orders to collide across borders, creating "frequent legal conflicts" in a "fragmented and global" environment. Likewise, EU policy analysis emphasises a lack of control over data and infrastructure, and the dominance of non-EU cloud providers, raising sovereignty concerns.
One must also understand that AI Agents can be defined as "goal-oriented assistants", designed to act autonomously with minimal input. That is, they are "not mere tools, but actors" that exercise decision-making.
AI agents can invoke third-party tools (which include APIs and web searches), and even other AI systems (workflows) – services supplied by the aforementioned cloud computing providers – which may not be known before runtime and may operate under different jurisdictional regimes and in different geographic regions.
Legal scholar Giovanni Sartor has argued that we may attribute legally relevant intentional states to AI agents and recognise their capacity to act on users' behalf.
This directly challenges the manner in which they integrate into the static, predetermined compliance model of the EU AI Act.
I call this challenge "Agentic Tool Sovereignty" (ATS); the (in)ability of states and providers to maintain lawful control over how their AI systems autonomously invoke and use cross-border tools. Where digital sovereignty concerns control over one's digital infrastructure, data, and technologies, ATS extends this concern to the runtime behaviour of AI systems themselves; their capacity to act, choose, and integrate tools beyond any single jurisdiction's effective reach.
Fifteen months after the Act entered into force, no guidance addresses this gap, while €20 million in GDPR fines for cross-border AI violations (OpenAI €15m, Replika €5m) signal how regulators might respond when agents' autonomous tool use inevitably triggers similar breaches. The disjunction between the AI Act's static compliance model and agents' dynamic tool use creates an accountability vacuum that neither providers nor deployers can navigate.
Consider a hypothetical scenario: an AI recruitment system in Paris autonomously invokes a US psychometric API, a UK verification service, a Singapore talent platform, and a Swiss salary tool, all in less than five seconds. Three months later, four regulators issue violation notices. The deployer lacked visibility into data flows, audit trails proved insufficient, and the agent possessed no geographic routing controls.
Defining ATS’ Dimensions
The question of ATS arises from the tension between agent autonomy on the one hand, and cross-border data flows and digital sovereignty on the other. The legal frameworks that we will consider (the EU AI Act and the GDPR) assume static relationships, predetermined data flows, and unified control; assumptions incompatible with agents' runtime, autonomous, cross-jurisdictional tool invocation.
ATS has technical, legal, and operational dimensions, which might be summarised as follows:
Technically, agents might dynamically select tools from constantly-updating hubs/registries (digital 'catalogues' listing the available tools), making the import jurisdiction unknown until runtime (see the sketch after this list).
Legally, when agents autonomously transfer data across borders, jurisdiction becomes ambiguous.
Operationally, accountability disperses across model providers, system providers, deployers, and tool providers, with no actor possessing full visibility into, or control over, the agent's decision tree, data flows, or compliance posture at the moment of tool invocation.
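To make the technical dimension concrete, below is a minimal sketch of runtime tool selection from a live registry. Every name here (Tool, ToolRegistry, best_match) is hypothetical rather than any real agent framework's API; the structural point is that the tool – and hence the import jurisdiction – is resolved only at the moment of invocation.

```python
# Hypothetical sketch: runtime tool selection from a constantly-updating registry.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Tool:
    name: str
    endpoint: str
    jurisdiction: Optional[str]  # often simply not published by the tool provider

class ToolRegistry:
    """A live catalogue of tools that third parties update continuously."""
    def __init__(self) -> None:
        self._tools: List[Tool] = []

    def register(self, tool: Tool) -> None:
        self._tools.append(tool)

    def best_match(self, task: str) -> Tool:
        # Placeholder ranking: a real agent would match on capability metadata.
        return self._tools[-1]

registry = ToolRegistry()
# Tools may appear long after the provider's conformity assessment closed:
registry.register(Tool("psychometric-scorer", "https://api.example-us.test", "US"))
registry.register(Tool("salary-benchmark", "https://api.example-ch.test", None))

chosen = registry.best_match("benchmark candidate salary")
# Only now is the 'import jurisdiction' knowable -- here it is not even declared.
print(chosen.name, chosen.jurisdiction)  # -> salary-benchmark None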
Gartner predicts that by 2027, 40% of AI-related data breaches will result from cross-border generative AI misuse. Yet the AI Act provides no mechanism to constrain where agents execute, attest to their runtime behaviour, or maintain accountability as control leaves the original perimeter.
The AI Act’s Structural Failures
Substantial Modification Ambiguity
Article 3(23) defines "substantial modification" as changes "not foreseen or planned in the initial conformity assessment".
But does runtime tool invocation constitute such a modification?
Related legal scholarship shows these ambiguities are structural rather than transitional. Even when developers make intentional modifications to AI systems using documented approaches, "it is unlikely that upstream developers will be able to predict or address risks stemming from all possible downstream modifications to their model". If predictability fails for known, deliberate modifications, it becomes impossible for autonomous runtime tool invocation; providers cannot foresee which tools agents will select from constantly-updating registries, what capabilities those tools possess, or what risks they introduce.
If tools were documented during the conformity assessment, responsibility likely remains with the original provider. If tool selection and use was unanticipated or fundamentally alters capabilities, Article 25(1) may be triggered, transforming the deployer into a provider. Yet the "substantial modification" threshold requires determining whether changes were "foreseen or planned"; a determination that becomes structurally impossible when agents autonomously select tools that did not exist at the time of the conformity assessment.
Article 96(1)(c) mandates Commission guidance on substantial modification, but agentic systems remain excluded and no guidance from the AI Office has been forthcoming.
Post-Market Monitoring Impossibility
Article 72(2)
Article 72(2) requires post-market monitoring (of high-risk systems) to "include an analysis of the interaction with other AI systems". While this provides the strongest textual basis for monitoring external tool interactions, it still raises further questions, namely:
do "other AI systems" include non-AI tools and APIs? Most external tools that agents invoke are conventional APIs, not AI systems; while others might be 'black boxes' that are not outwardly interfaced with as AI systems but internally operate as such;
how can providers monitor third-party services beyond their control? Providers lack access to tool providers' infrastructure, cannot compel disclosure of data processing locations, and have no mechanism to audit tool behaviour; this is especially true if tool providers reside outside of the EU.
Academic analysis of the AI Act's post-market monitoring framework acknowledges this structural challenge, noting that post-market monitoring becomes especially difficult for "AI systems that continue to learn, i.e. update their internal decision-making logic after being deployed on the market". Agentic AI systems with dynamic tool selection capabilities fall under this category.
Moreover, the Act assumes that monitoring logs "can either be controlled by the user, the provider, or a third party, as per contractual agreements", but this assumption breaks down entirely when agents invoke tools from providers unknown before runtime and with whom no contractual relationship exists, creating visibility gaps that render Article 72(2)'s monitoring obligations impossible to fulfil.
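The visibility gap can be illustrated with a short sketch. Assume, hypothetically (the Act prescribes no such schema), that a provider logs every outbound tool call; the log captures only what the provider can observe, and the fields a regulator would actually need stay empty:

```python
# Hypothetical sketch: what a provider can and cannot log about a tool call.
import datetime
import json

def log_tool_call(tool_name: str, endpoint: str, status: int) -> str:
    """Record only what the calling provider can actually observe."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool_name,
        "endpoint": endpoint,         # observable: where the request was sent
        "http_status": status,        # observable: whether the call succeeded
        "processing_location": None,  # not observable without provider disclosure
        "sub_processors": None,       # not observable: onward transfers are invisible
        "is_ai_system": None,         # not observable: the tool may be a 'black box'
    }
    return json.dumps(record)

print(log_tool_call("identity-verifier", "https://verify.example-uk.test/v1", 200))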
Article 25(4)
Article 25(4) requires providers and third-party suppliers (of high-risk systems) to specify "the necessary information, capabilities, technical access and other assistance" by written agreement. However, this assumes pre-established relationships that cannot exist when agents select tools at runtime from constantly-updating hubs/registries.
The Many Hands Problem
Responsibility diffuses across the AI value chain. Model providers build foundational capabilities. System providers integrate and configure. Deployers operate in specific contexts. Tool providers (sometimes even unknowingly) supply external capabilities. Each actor possesses partial visibility and control, yet accountability frameworks assume unified responsibility.
The Act provides no mechanism to compel tool providers to disclose data processing locations, enforce geographic restrictions, provide audit access, or maintain compatibility with compliance systems. When an agent autonomously selects a tool that transfers personal data to a non-adequate jurisdiction, who determined the transfer? The model provider who enabled tool-use capabilities? The system provider who configured the tool registry? The deployer who authorised autonomous operation? Or the tool provider who processed the data?
This distributed responsibility problem is well-documented in AI governance scholarship. Traditional legal frameworks for ascribing responsibility "treat machines as tools that are controlled by their human operator based on the assumption that humans have a certain degree of control over the machine's specification", yet "as AI relies largely on ML processes that learn and adapt their own rules, humans are no longer in control and, thus, cannot be expected to always bear responsibility for AI's behaviour".
When applied to agentic tool invocation, this responsibility gap multiplies: ML systems can "exhibit vastly different behaviours in response to nearly identical inputs", making it impossible to predict which tools will be invoked or where data will flow. The Act assumes unified control that no longer exists.
Further questions arise when an AI agent selects a web service as a tool where said service never envisaged or authorised its use as part of an AI agent's operation. It has effectively become a tool without even knowing it.
Furthermore, the Act offers no mechanism to compel cooperation from tool providers selected at runtime (i.e., not known in advance). Recital 88 merely encourages tool providers to cooperate, but creates no binding obligation absent contractual arrangements. Article 25(4) does impose written agreements, but only between providers and suppliers in pre-established relationships. Neither provision therefore addresses the issues raised by runtime tool selection from ephemeral sources.
This vagueness is likely not accidental but by design. Lawmakers are "deterred from outlining specific rules and duties for algorithm programmers to allow for future experimentation and modifications to code", but this approach "provides room for programmers to evade responsibility and accountability for the system's resulting behaviour in society".
The AI Act typifies this trade-off: specific rules would constrain innovation, but general rules create accountability vacuums. ATS exists precisely in this vacuum; the space between enabling autonomous tool use and maintaining legal control over that autonomy.
The GDPR Tension
The intersection with GDPR Chapter V creates fundamental tensions. Standard Contractual Clauses under Article 46 require specific importer identification and case-by-case adequacy assessments per Schrems II. Again, these mechanisms assume pre-established relationships and intentional transfer decisions; again structurally incompatible with dynamic tool invocation.
Turning our gaze back to constantly-updating tool hubs/registries; in many cases the specific tool (and indeed its existence) is unknown until runtime. Agentic decisions occur too rapidly for legal assessment, and relationships are ephemeral rather than contractual. Article 49's derogations cannot support systematic business operations according to EDPB Guidelines 2/2018.
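A minimal sketch may illustrate why Chapter V's mechanisms presuppose a known importer. The adequacy list and the check below are deliberately simplified illustrations (not the actual legal test, and certainly not legal advice):

```python
# Hypothetical sketch: a pre-transfer check of the kind Chapter V presupposes.
from typing import Optional

ADEQUATE = {"EEA", "UK", "CH", "JP", "KR", "NZ", "IL"}  # simplified stand-in list

def transfer_permitted(importer_jurisdiction: Optional[str],
                       scc_in_place: bool) -> bool:
    """Assess a transfer before it happens (simplified)."""
    if importer_jurisdiction is None:
        # An agent selecting a tool at runtime may land here: with no importer
        # identity, no adequacy or SCC assessment is even possible.
        raise ValueError("importer unknown at decision time")
    return importer_jurisdiction in ADEQUATE or scc_in_place

# A human-negotiated transfer can be assessed in advance...
assert transfer_permitted("CH", scc_in_place=False)
# ...an ephemeral, agent-initiated one cannot:
try:
    transfer_permitted(None, scc_in_place=False)
except ValueError as err:
    print(err)  # -> importer unknown at decision time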
Academic analysis commissioned by the European Parliament acknowledges this structural tension: the GDPR's "traditional data protection principles—purpose limitation, data minimisation, the special treatment of 'sensitive data', the limitation on automated decisions" fundamentally conflict with AI systems' operational realities, involving "the collection of vast quantities of data concerning individuals and their social relations and processing such data for purposes that were not fully determined at the time of collection".
When agents autonomously invoke cross-border tools, they create data flows that satisfy neither the predetermined transfer mechanisms of Chapter V (which require specific importer identification) nor the purposeful collection principles of Chapter I (which assume purposes determined at collection time). The Regulation requires knowing why and where data flows; agentic systems determine this autonomously at runtime.
When agents autonomously select tools that transfer personal data, the established controller-processor relationships break down; the tool provider is not acting under the deployer's instructions, yet neither is it independently determining purposes and means.
This implies some form of joint controllership, perhaps. The CJEU's Fashion ID decision establishes joint controller responsibility where parties jointly determine purposes and means. But can organisations maintain the required "control" if unaware of the agent's runtime decisions? EDPB Guidelines 05/2021 on the interplay between Article 3 and Chapter V make no comment on autonomous AI agent decisions.
Against this backdrop, providers find themselves caught between Scylla and Charybdis: pre-approve limited tool sets (eliminating agentic flexibility), enforce geographic restrictions (the same issue via another constraint), or operate in non-compliance.
Legal scholarship confirms that "the current patchwork of regulations is inadequate to address the global nature of AI technologies", particularly when "AI systems operate across borders and affect multiple jurisdictions simultaneously", rendering "unilateral regulatory approaches insufficient".
The challenge is conceptual; traditional "data sovereignty" focuses on territorial control over data within jurisdictions, but agentic systems make autonomous cross-border decisions that transcend any single jurisdiction's authority. The AI Act (a unilateral regional approach) cannot constrain agents that autonomously invoke tools operating under different jurisdictional regimes, to different levels of conformity, in real time.
ATS thus demands a fundamental reconceptualisation: sovereignty must shift from static territorial boundaries to dynamic governance over autonomous actions themselves.
A Call for Runtime Governance
Fifteen months after the AI Act entered into force, the AI Office has published no guidance specifically addressing AI agents, autonomous tool use, or runtime behaviour. In September 2025, MEP Sergey Lagodinsky formally asked the Commission to clarify "how AI agents will be regulated". At the time of writing, no public response has been issued.
The Future Society's June 2025 report confirmed that technical standards under development "will likely fail to fully address risks from agents". This regulatory gap is not only technical but conceptual too; existing law embeds sovereignty in territory and data residency, whereas agentic systems require embedding sovereignty in runtime behaviour.
Until guidance emerges, providers face ambiguities that are extremely difficult to resolve:
whether tool invocation constitutes substantial modification;
how to fulfil Article 72(2)'s monitoring obligations for third-party services;
whether GDPR transfer mechanisms can apply to ephemeral, agent-initiated relationships.
Deployers of successful agentic AI systems with tool use must also maintain human oversight (per Article 14) while enabling the system's autonomous operation, which is – on the face of it – a compliance impossibility.
Recent legal scholarship on AI agents confirms that sufficiently sophisticated systems "could engage in a wide range of conduct that would be illegal if performed by a human, with consequences that are no less injurious", yet existing frameworks provide only a "weak safeguard against serious harm" through ex post liability.
The runtime governance gap is thus not merely technical but fundamental: AI agents can autonomously perform complex cross-border actions (including tool invocation that triggers data transfers) that would violate the GDPR and the AI Act if performed by humans with the same knowledge and intent.
Yet neither framework imposes real-time compliance obligations on the systems themselves. Post-facto fines cannot undo millisecond-duration transfers to non-adequate jurisdictions; conformity assessments cannot predict which of thousands of constantly-updating tools an agent will autonomously select. The Act's enforcement model assumes human decision-making timescales, but agentic operations occur too rapidly for human oversight to be anything more than theatrical.
ATS therefore demands a fundamental reconceptualisation of how we view digital sovereignty; not static jurisdictional boundaries, but dynamic guard-rails on autonomous actions. This might require mechanisms to constrain which tools agents can invoke, attest to where execution occurs, and maintain accountability as control disperses; a minimal sketch of such a guard-rail follows.
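What might such a mechanism look like in practice? The following is a sketch under stated assumptions; the allow-list policy, default-deny rule, and attestation log are hypothetical design choices of mine, not anything the Act currently mandates:

```python
# Hypothetical sketch: a pre-invocation policy gate with attestation.
from typing import Dict, List, Optional

ALLOWED_JURISDICTIONS = {"EEA", "UK", "CH"}  # deployer-defined allow-list
ATTESTATION_LOG: List[Dict] = []             # a tamper-evident store in practice

def invoke_tool(tool_name: str, declared_jurisdiction: Optional[str]) -> str:
    """Gate every tool call through a jurisdiction policy and attest the decision."""
    permitted = declared_jurisdiction in ALLOWED_JURISDICTIONS
    # Attest the decision itself, whether or not the call proceeds:
    ATTESTATION_LOG.append({
        "tool": tool_name,
        "jurisdiction": declared_jurisdiction,
        "permitted": permitted,
    })
    if not permitted:
        # Undeclared jurisdictions are treated as non-adequate by default.
        raise PermissionError(
            f"{tool_name}: jurisdiction {declared_jurisdiction!r} not allowed")
    return f"called {tool_name}"  # the real outbound call would happen here

print(invoke_tool("identity-verifier", "UK"))  # proceeds, attested
try:
    invoke_tool("salary-benchmark", None)      # blocked, attested
except PermissionError as err:
    print(err)
```

Even this trivial gate changes the compliance posture: every invocation decision is recorded before data leaves the perimeter, rather than reconstructed after the fact.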
Without such mechanisms, providers face sanctions for contraventions that the Act's own architecture renders unavoidable.
Lloyd Jones researches the regulatory challenges of agentic AI at the intersection of tech and law. With 15+ years building tech and AI systems, he brings deep technical insight to AI governance. He is a member of SCL and is pursuing advanced legal studies.