Legal implications of using AI Agents to create generative content on blockchain: ownership and rights

Giovanni Piccirillo

June 18, 2025

[Illustration: a humanoid robot alongside a copyright document, a blockchain symbol with the Bitcoin logo, and a judge's gavel – the legal debate around who owns AI-generated content on blockchain.]
Introduction

Who owns a work generated by artificial intelligence? And what changes if this work is not only created by an AI agent, but is also registered and distributed on a blockchain? These questions open up complex and often contradictory scenarios that combine the issues of intellectual property, legal responsibility, and technological decentralization. In a world where the boundaries between human and artificial are blurring and where digital content is increasingly automated and immutably registered, law is called upon to redefine the very concept of authorship.

The advent of Generative Artificial Intelligence has multiplied the cases in which creativity is decoupled from direct human intentionality. In parallel, blockchain offers powerful technological tools to certify the origin of such content and ensure traceability, distribution and automatic monetization. But while technology runs, the law struggles to keep up. The lack of a harmonized legal framework at international level opens the way to conflicting interpretations, which make it difficult to understand what is legally protected and what is not.

This article aims to explore in depth the legal implications of generative creation through agent AI in the context of blockchain, with particular attention to the issues of ownership, rights and liability. Each section will address a critical issue, building a path between technology, comparative jurisprudence and possible future regulatory solutions.

Creative AI as a new authorial frontier

Artificial intelligence, in its generative form, represents one of the greatest revolutions in today's digital world. An AI agent is much more than a simple algorithm: it is a software entity capable of interacting with external inputs, learning from data sets and generating complex content in the form of text, images, music or even code. Large Language Models (LLMs) such as GPT-4, or multimodal models such as DALL·E and Sora, do not simply replicate data: they produce new combinations that often exceed the creative capabilities of a single individual.

What distinguishes an AI agent from a passive tool is its autonomous decision-making capacity. When human intervention is reduced to a minimum – as with generic prompts or auto-prompting systems – attributing the work to a human being becomes questionable. In a world where code generates code, an image spawns a thousand others, and narratives emerge from stochastic computation, the very concept of authorship enters a crisis.

Algorithmic creativity thus raises radical questions: can a machine have creative intentionality? And if not, is it right to consider its output devoid of authorial value? Legal philosophers and jurists are divided between those who support a “neo-romantic” model centered on the human, and those who propose a functionalist vision based on utility and investment in technology.

This tension is reflected in the first regulatory attempts to shape this emerging reality. In many cases, only those who demonstrate an original creative contribution continue to be considered “authors.” But originality, in the age of AI, is an increasingly ambiguous notion.

Blockchain as a Register of Digital Identity and Creative Authorship

Blockchain, with its decentralized and immutable nature, presents itself as an ideal platform for recording and certifying the creation of generative content. Each piece of content, once transformed into a non-fungible token (NFT), can be associated with metadata attesting to its origin, date and author – human or otherwise. Cryptographic hashes and timestamps offer concrete, verifiable proof of authorship and originality.
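
To make the mechanism concrete, here is a minimal Python sketch of the kind of record that hashing and timestamping produce before it is anchored on-chain; the field names and identifiers (the creator DID and model id) are illustrative assumptions, not a standard.

```python
import hashlib
import json
import time

def authorship_record(content_bytes: bytes, creator: str, model_id: str) -> dict:
    """Build an illustrative proof-of-existence record for a generated work.

    The record itself confers no copyright: it only fixes *what* existed,
    *when*, and under *whose* name it was registered.
    """
    return {
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),  # fingerprint of the work
        "registered_by": creator,           # human, company or AI-agent identifier (assumed format)
        "generating_model": model_id,       # the model or agent that produced the output
        "timestamp_utc": int(time.time()),  # when the record was created
    }

if __name__ == "__main__":
    work = "A short poem generated by an AI agent".encode("utf-8")
    record = authorship_record(work, creator="did:example:alice", model_id="text-model-v1")
    # The JSON (or its hash) is what would typically be anchored on-chain
    # or written into an NFT's metadata.
    print(json.dumps(record, indent=2))
```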

The evidentiary value of blockchain has already attracted the attention of numerous courts, especially in Asia and in some American states. Several rulings, despite the absence of a specific statute, have recognized legal value in the timestamp generated by a distributed network. This creates an important precedent for future disputes related to AI content.

However, blockchain does not assign rights on its own. It records what is provided to it: the fact that an AI-generated work has been tokenized does not, by itself, mean that a legally recognized copyright exists. Still, on-chain registration becomes a legally relevant tool in litigation, especially where traditional law is lacking or evolving. In addition, smart contracts can automate the granting of licenses, the distribution of royalties and the conditional use of the work, making the system more transparent and less dependent on intermediaries.
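
As an illustration of the royalty automation mentioned above, the following Python sketch reproduces off-chain the split logic a royalty-aware smart contract typically encodes; the recipient addresses and basis-point shares are hypothetical, and a real deployment would live on-chain rather than in Python.

```python
def split_royalties(sale_price_wei: int, shares: dict[str, int]) -> dict[str, int]:
    """Distribute a sale price among rights holders according to basis-point shares.

    `shares` maps a recipient address to its share in basis points
    (1/100 of a percent); the shares are expected to sum to 10_000.
    """
    assert sum(shares.values()) == 10_000, "shares must total 100%"
    payouts = {addr: sale_price_wei * bps // 10_000 for addr, bps in shares.items()}
    # Assign any rounding remainder to the first recipient so no value is lost.
    remainder = sale_price_wei - sum(payouts.values())
    first = next(iter(payouts))
    payouts[first] += remainder
    return payouts

# Hypothetical split between prompt author, model provider and platform.
print(split_royalties(
    sale_price_wei=1_000_000_000_000_000_000,  # 1 ETH expressed in wei
    shares={"0xPromptAuthor": 6_000, "0xModelProvider": 3_000, "0xPlatform": 1_000},
))
```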

Insight Box – Legal Oracles as a Bridge Between Blockchain and Regulatory Reality

Legal oracles are tools that provide the blockchain with verified and structured data from the off-chain legal world. They can be used to validate regulations, contractual clauses, or legislative updates within a smart contract.

In the context of AI and generative content, legal oracles can bring off-chain legal requirements – such as licensing conditions or regulatory constraints – into the execution of a smart contract.

The current limit is the reliability of the source and the need for human supervision for the most complex cases. However, the integration between blockchain, AI and legal tech is set to grow, paving the way for new forms of automatic governance of legality.
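
As a rough sketch of the idea, the Python snippet below mocks a legal oracle that checks a hypothetical licence registry and returns a signed attestation a smart contract could verify before acting; the registry, dataset identifiers and signing key are all placeholders.

```python
import hashlib
import hmac
import json

# Hypothetical off-chain registry of licence states, normally maintained by a
# trusted legal data provider rather than hard-coded like this.
LICENSE_REGISTRY = {
    "dataset:stock-images-2024": {"licensed_for_training": True},
    "dataset:scraped-web-corpus": {"licensed_for_training": False},
}

ORACLE_SECRET = b"replace-with-oracle-signing-key"  # placeholder key

def attest_license_status(dataset_id: str) -> dict:
    """Return a signed attestation that a contract could check before
    releasing funds or minting a token tied to AI-generated content."""
    status = LICENSE_REGISTRY.get(dataset_id, {"licensed_for_training": False})
    payload = json.dumps({"dataset": dataset_id, **status}, sort_keys=True)
    signature = hmac.new(ORACLE_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

print(attest_license_status("dataset:stock-images-2024"))
```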

Who is the author? Man, machine or code?

An important point of reference in Italian case law is RAI v. the author of a digital image (Cassazione, 2023), in which an image used as a backdrop for the Sanremo Festival was recognized as protected by copyright even though it was generated with the aid of software. The Court clarified that the use of technological tools does not exclude the possibility of protecting the work, provided there is a significant human creative contribution. This precedent confirms the centrality of human intervention in the legal qualification of authorship.

On the international front, Getty Images v. Stability AI (UK, 2023) has shaken the generative AI industry. Getty accused Stability AI of using millions of copyrighted images to train the Stable Diffusion model without any license. The dispute turns on the lack of consent and the systemic risk of algorithmic plagiarism, and could set a precedent for defining the legal obligations of AI model providers towards the owners of the original content.

In the United States, The New York Times v. OpenAI (2024) adds a relevant piece: the newspaper filed a lawsuit for alleged copyright infringement, claiming that NYT articles were used to train language models. OpenAI defends itself by invoking fair use, arguing that the use is transformative. A decision is expected that could redefine the application of fair use in the context of AI training.

Finally, the European regulatory framework, represented by Directive (EU) 2019/790 on copyright in the digital single market (CDSM), underlines that only a work resulting from human creativity can be protected. However, the growing diffusion of AI agents makes it urgent to rethink these legal criteria in an evolutionary way.

The most debated issue is that of authorship. According to classical copyright, only a natural person can be considered an author. This principle is the basis of most European and Anglo-Saxon legislation. The European Union CDSM Directive emphasizes the centrality of human creative contribution. In the United States, the U.S. Copyright Office has repeatedly reiterated that a work entirely created by artificial intelligence cannot be protected by copyright.

However, what if the AI was guided by a human through detailed prompts? Or what if the programmer wrote the code that allows the agent to generate content? Authorship could then be attributed to the prompt engineer, the programmer, the owner of the AI model, or even the owner of the device that performed the generation. The ambiguity is amplified when the entire process is automated, with AI agents that self-train, learn and generate without direct human input.

We are facing a systemic transformation: no longer a single author, but a chain of contributors. Traditional legal solutions, centered on the unique and identifiable figure of the author, struggle to adapt to this distributed plurality.

Artificial Intelligence, Originality and Copyright

One of the most controversial aspects of generative artificial intelligence is the possibility of recognizing — or denying — copyright protection to the content it produces. Copyright law was conceived to protect human creativity: originality, in its legal sense, presupposes the imprint of the author’s personality. But what happens if the author is a software, or rather, an autonomous agent programmed to generate texts, images, music or code?

Different legal systems have provided conflicting answers. In the United States, Naruto v. Slater (2018) – famous for the monkey that took a selfie – established that only humans can own copyright, a precedent now also applied to AI content. Similarly, the US Copyright Office has rejected the registration of works generated by AI, as in Thaler v. Perlmutter (2023), reiterating that protection applies exclusively to creations with substantial human intervention.

In Europe, the CDSM Directive (2019/790) does not explicitly address AI, but requires that the work be the result of the “intellectual creativity of the author”, interpreted by the EU Court of Justice as the result of free and creative choices. Consequently, works entirely generated by AI would be excluded from protection, while those co-created (e.g. with refined prompts or human editing) can access copyright, as long as human intervention is significant.

However, this position clashes with the growing sophistication of AI agents, capable of producing original content autonomously, starting from minimal data sets and inputs. In this scenario, future case law will have to answer new questions: is the author the person who wrote the prompt? The person who trained the model? Or the developer of the AI platform?

Insight Box – Data Provenance and the Traceability of Creative Origin

In the absence of clear human authorship, the concept of data provenance takes on central importance. This is the ability to trace the origin, and the chain of transformations, that led to the generation of a piece of AI content.

An effective provenance system, recorded on blockchain, makes it possible to document and verify each contribution – prompt, model, dataset, human edit – that led to a given output.

This approach does not guarantee copyright, but offers a form of “computational property”, useful for evidentiary, reputational and contractual purposes. The integration between smart contracts, AI and provenance thus opens the way to new models of rights management, in which transparency and traceability become the new source of legitimacy.
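
A minimal Python sketch of what such a provenance record might look like is shown below; the actors, steps and field names are assumptions chosen for illustration, not an established schema.

```python
from dataclasses import dataclass, field, asdict
import hashlib
import json

def digest(data: str) -> str:
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

@dataclass
class ProvenanceStep:
    actor: str        # e.g. "prompt author", "model:image-gen-v2", "human editor"
    action: str       # what was done at this step
    input_hash: str   # fingerprint of what the step received
    output_hash: str  # fingerprint of what the step produced

@dataclass
class ProvenanceRecord:
    work_title: str
    steps: list[ProvenanceStep] = field(default_factory=list)

    def add_step(self, actor: str, action: str, input_data: str, output_data: str) -> None:
        self.steps.append(ProvenanceStep(actor, action, digest(input_data), digest(output_data)))

    def anchor_payload(self) -> str:
        """JSON payload whose hash could be written on-chain as the provenance anchor."""
        return json.dumps(asdict(self), sort_keys=True)

record = ProvenanceRecord("Illustration #42")
record.add_step("prompt author", "wrote prompt", "", "a city floating above the clouds")
record.add_step("model:image-gen-v2", "generated image", "a city floating above the clouds", "<image bytes>")
record.add_step("human editor", "cropped and colour-graded", "<image bytes>", "<edited image bytes>")
print(digest(record.anchor_payload()))  # the value to anchor on-chain
```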

Prompt, template, and output: three levels of ownership

The AI creation chain can be broken down into three phases: the prompt, the model, and the output. The prompt can be a simple sentence or an entire sequence of instructions. Some scholars are starting to consider it a creative work in itself, especially when it requires complex thinking and an articulated linguistic structure.

The AI model, on the other hand, represents a legal object that is still undefined: who owns it? The creator, the company that manages it, or the open-source community that maintains it? The answer directly affects the ownership of the output. And finally, the content generated: an image, a text, a song. Can it be considered original? Or is it just a remix of pre-existing data?

The conflict emerges more clearly when analyzing practical cases: for example, Midjourney has barred free users from commercial use of its outputs, claiming a right derived from ownership of the model. But this claim clashes with the absence of a clear legal framework. This section explores these tensions and proposes criteria to distinguish between the different degrees of creative contribution.

Tokenization and Licensing: what is the legal value of an AI-based on-chain content?

In the Web3 world, tokenizing AI-generated content means giving it a unique and marketable identity. Transforming it into NFTs allows you to assign an economic value, make the content collectible, and introduce royalty mechanisms. However, tokenization does not guarantee legal rights if not supported by a regulatory framework.

The main problem is the disconnect between off-chain law and on-chain dynamics. While a smart contract can impose terms of use, there is often no authority to enforce them. Furthermore, many AI contents are generated by models that have learned on unlicensed datasets, with a risk of infringing third-party rights.

A detailed analysis of emerging solutions—such as embedded NFT licenses, legal oracles, and decentralized identity frameworks—shows how the foundations for a new regulatory ecosystem are being created. But the challenge is twofold: avoiding a contractual jungle while ensuring that the artist, human or otherwise, is protected.

Insight Box – NFT Licensing: Between Art, Code and Law

NFT licenses are an attempt to fill the regulatory void on the ownership and use of tokenized digital content. Some projects adopt custom licenses integrated directly into the smart contract, which define rights of use, reproduction and sale. The main categories range from fully open, public-domain-style licenses to personal-use-only licenses and licenses granting broad commercial exploitation rights.

However, the legal validity of these licenses depends on the jurisdiction and the ability to demonstrate the actual acceptance of the contract by the buyer. The challenge remains to integrate on-chain rules with recognized off-chain legal principles, creating hybrid models of digital protection.
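
By way of example, the following Python sketch shows how licence terms could be embedded in ERC-721-style token metadata so that a buyer can inspect them before purchase; the `license` block and its fields are an illustrative extension, not a recognized standard.

```python
import json

def nft_metadata_with_license(name: str, content_uri: str, license_name: str,
                              license_uri: str, commercial_use: bool) -> str:
    """Return ERC-721-style JSON metadata with the licence terms made explicit.

    The top-level fields follow the common name/description/image convention;
    the `license` block is an illustrative extension, not a formal standard.
    """
    metadata = {
        "name": name,
        "description": "AI-generated work distributed under the licence referenced below.",
        "image": content_uri,
        "license": {
            "name": license_name,
            "terms_uri": license_uri,          # off-chain text the buyer is deemed to accept
            "commercial_use": commercial_use,  # machine-readable summary of one key term
        },
    }
    return json.dumps(metadata, indent=2)

print(nft_metadata_with_license(
    name="Generative Landscape #7",
    content_uri="ipfs://<content-cid>",        # placeholder content identifier
    license_name="Personal Use Only (illustrative)",
    license_uri="ipfs://<license-text-cid>",   # placeholder licence-text identifier
    commercial_use=False,
))
```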

Civil and criminal liability: if AI generates harmful content, who pays?

When an AI enters our daily lives—suggesting content, making decisions, negotiating orders, making diagnoses, or even acting autonomously—a question arises that is anything but technical: Who is responsible if something goes wrong? Who is held accountable if an AI Agent makes a wrong, discriminatory, dangerous, or economically damaging decision?

It is a question that has its roots in the heart of civil and criminal law, but which today, with the spread of autonomous agents, takes on new, nuanced, sometimes unprecedented contours. Because, unlike a traditional machine, the AI Agent is often designed to adapt, learn, act in complex environments, interact with dynamic data and, above all, produce real effects on the world, even without a human giving it direct instructions at all times.

Traditionally, liability for damage caused by a technology falls on the subject who controls it or puts it into circulation: a principle that dates back to the Roman logic of “dominus instrumenti”, and which we find in the liability of the producer for defects in the product, or in that of the employer for the actions of the employee. But with artificial intelligence the line of demarcation between “instrument” and “autonomous agent” becomes thinner, and it is precisely this ambiguity that puts classical legal models into crisis.

Let’s take for example an AI Agent used by a bank to evaluate loan applications. The system is trained with historical datasets, refines its predictive capabilities, and begins to automatically reject certain credit applications, in a seemingly objective way. However, over time it emerges that the model has absorbed and replicated discriminatory biases against ethnic or gender minorities, even though it was never explicitly programmed to do so. Who is liable for the damage caused to the unjustly excluded citizen? The software supplier, who sold a formally neutral algorithm? The developer, who managed the training phase? The bank, which chose to use it without supervising it?

This regulatory uncertainty is one of the central issues in the regulation of AI in Europe. The European legislator, with the Regulation on Artificial Intelligence (AI Act) and the proposed Directive on civil liability for AI, is trying to build a balance between the need to protect users and encourage innovation. The goal is not to block development, but to clarify the liability framework, making the attribution of risk predictable, and therefore also insurable, manageable, and contractually negotiable.

In general, what is emerging is a principle of distributed responsibility, in which each actor in the supply chain — from the developer to the platform provider to the end user — can be held accountable based on the degree of effective control exercised over the system, the knowability of the risks, and the role played in determining the agent’s behavior. In essence, no one will be able to hide behind the technical complexity of the system, or invoke AI autonomy as an absolute shield: operational autonomy never equates to legal autonomy.

A particularly relevant aspect concerns the difficulty of proving the causality of the damage. Algorithms, especially those based on deep learning, work like “black boxes”, producing results that are difficult to explain even by those who designed them. In the event of an accident, it then becomes complex to reconstruct which factor caused the error: was it an anomaly in the data? A defect in the training? Incorrect use by the user? Or an unpredictable combination of all these elements? Law, by its nature, needs to identify a cause, a responsible party, a remedy. But AI, by its structure, makes this logical sequence opaque.

This is why there is increasing talk of the need to introduce, alongside traditional liability, models of objective liability or presumptions of fault. In practice, if damage arises from a high-risk AI Agent (such as those that affect credit, health, safety), the owner of the system could be called to answer even in the absence of intent or fault, unless proven otherwise. This shift in the burden of proof — already known in the environmental or health sectors — could become the key to balancing technological power and legal protection.

On a practical level, many companies are starting to adopt detailed contractual agreements, which define responsibilities, service levels, security requirements and limits on the use of AI models. These are called AI liability clauses, inserted in SLAs (Service Level Agreements), Terms of Use and software license agreements. But these clauses, however sophisticated, do not replace the obligation of diligent design and preventive risk assessment. No contractual exemption can ever cover damage that arises from a structural lack of governance, transparency or auditability.

Finally, the question of responsibility is not just a technical or legal question: it is also, and perhaps above all, a cultural question. The real challenge is not only to establish who is responsible, but to make it clear that someone must be responsible. That artificial intelligence cannot be a faceless subject, an elusive entity that acts without anyone controlling it. On the contrary, every time an AI system makes decisions that affect people — and today it happens every day, in a thousand contexts — there must be a clear, traceable, accessible chain of responsibility. Not to punish a posteriori, but to build safer, fairer and more reliable systems upstream.

Only in this way will we be able to prevent the “blame of the algorithm” from becoming the new alibi for irresponsibility. And only in this way will it be possible to affirm that technological innovation, to be truly progress, must walk hand in hand with law and justice.

Examining the practical cases of AI liability – from offensive music generated by algorithms to deepfake videos or defamatory content created by autonomous bots – allows us to understand the depth of the problem. Without accountability, we risk a dangerous legal vacuum.

Liability is one of the most intricate issues. A work generated by AI can contain defamatory, discriminatory or even illegal material. If such content is tokenized and distributed on blockchain, it becomes virtually indelible. Who is legally responsible? The creator of the prompt? The owner of the AI agent? The platform that hosts the content?

The European AI Act introduces the concepts of systemic risk and shared responsibility between developers, deployers and users. But the legal framework is still fragmented. The Digital Services Act tries to intervene on the platform side, but does not directly address decentralized cases. DAOs, collective entities based on smart contracts, introduce further complexities: who controls the content generated within an autonomous DAO?

Insight Box – DAO and Distributed Accountability

DAOs (Decentralized Autonomous Organizations) represent collective structures that govern themselves through smart contracts. In them, decisions and operations – including the generation and distribution of content – are the result of automated voting and codified rules.

The legal liability of a DAO is a matter of debate. In the absence of recognized legal personality, any wrongdoing remains difficult to attribute. Emerging approaches include wrapping the DAO in a recognized legal entity, treating active members as participants in an unincorporated association, or attributing responsibility to identifiable promoters and core contributors.

DAOs that use AI agents to generate content face an additional layer of complexity: who triggered the agent? Who decided on the dataset or generative logic? Transparent and auditable governance remains the only viable way to mitigate emerging legal risks.


Towards a new legal framework: do we need a copyright for machines?

The debate is open: some scholars propose the introduction of an “algorithmic copyright”, under which the AI can be considered a co-author and the right is divided among the subjects involved. Others instead suggest a model of non-exclusive use licenses, in which the work is freely usable but with an obligation of attribution and/or compensation for the creators of the model.

A third way could be that of “human-conditioned rights”, in which the degree of human participation in the generation of content is assessed on a case-by-case basis. In all cases, the need for a broad and systemic legislative reform emerges, which takes into account the specificities of the decentralized ecosystem.

The future of intellectual property will probably pass through the convergence of contract law, smart contracts and new forms of digital attestation of creativity. A new legal language is needed, capable of recognizing that creation, today, is often the result of a complex interaction between man and machine.

Regulatory summary – the framework between gaps and prospects

Over the past few years, national and supranational institutions have begun to systematically address the legal challenges posed by artificial intelligence and blockchain, but regulatory action remains fragmented and often lags behind technological innovation.

At the European level, Directive (EU) 2019/790 on copyright in the digital single market (CDSM) states that only works with a human creative contribution can be protected, implicitly excluding content generated entirely by machines. However, there is still no harmonized set of rules on the recognition of content co-created by AI and humans.

The Artificial Intelligence Regulation (AI Act), approved in 2024, establishes a classification of AI systems based on risk and introduces obligations for providers and users. Generative systems often fall into high-risk categories and must ensure transparency, traceability and human oversight.

The Digital Services Act (DSA) and the Digital Markets Act (DMA), instead, regulate digital platforms and intermediary service providers, imposing control and liability obligations for illicit content, with indirect implications also for NFT marketplaces and decentralized Web3 tools.

At the international level, approaches diverge even more markedly, and no binding common framework has yet emerged.

There is still considerable uncertainty about the legal validity of NFT licenses, the applicability of copyright to AI-generated content, and the liability of DAOs. The path forward requires an integrated approach: interoperability between legal codes and smart contracts, clarity in the criteria of originality, and a supranational governance capable of protecting rights without hindering innovation.

Conclusions: between creative utopia and legal challenge

The intersection of generative AI and blockchain represents one of the most complex and challenging frontiers of contemporary law. Far from being a purely technical problem, it is a question that touches on the philosophy of law, the ethics of creation, and the sustainability of economic value in the era of digital replicability.

Law will have to learn to dialogue with the algorithm, to understand decentralization and to accept that creativity is no longer an exclusive prerogative of man. Only then will it be possible to guarantee rights, responsibilities and value to those – human or artificial – who contribute to the construction of the new digital world.

