Artificial Intelligence

Artificial Intelligence, Latin America / GRULAC

200 Bills and Counting: AI Legislation in the Brazilian Congress

Artificial Intelligence (AI), and Generative AI in particular, is transforming the way we create and challenging fundamental concepts of copyright law — including authorship, originality, and the very notion of a “protected work.” As AI tools become increasingly embedded in creative processes, they also raise concerns among creators and intellectual workers about potential job displacement and the risk that AI-generated outputs may undermine creative markets. These outputs are only possible because Generative AI systems are typically trained on human-authored works — a practice that has already prompted lawsuits in several parts of the world. As part of research activities carried out by the Global Expert Network on Copyright User Rights, researchers from the Centre on Knowledge Governance and the Brazilian Copyright Institute (IBDAutoral) mapped all legislative proposals currently under discussion in the Brazilian National Congress that address AI, including those at the intersection of AI and copyright.

Methodology

We searched the databases of legislative proposals of the Federal Senate and the Chamber of Deputies for bills addressing issues related to Artificial Intelligence. The searches were conducted between January and February 2025 on the websites of the two chambers. On the Chamber of Deputies website, a subject-based search was performed using the platform’s built-in tool. The first query used the term “artificial intelligence,” with “bill” selected as the proposal type and no restrictions applied to the status field, meaning all propositions were included. This returned 173 bills, some of which were incorrectly classified as AI-related, as discussed below. A second search, using the same parameters but adding the term “copyright,” returned 13 results, also with some misclassifications. After removing duplicates, the total number of bills mapped in the Chamber of Deputies came to 175.
On the Federal Senate website, the search was conducted under the “Search – Senate Portal” tab using the free-text term “artificial intelligence,” with the filters “Bills and Subject Matters – Propositions” and “PL – Bills” applied. This returned 25 records, some of which overlapped with bills already identified in the Chamber of Deputies search. No time restrictions were applied at any stage, in order to obtain the broadest possible view of AI-related legislation currently under consideration in the National Congress. Once the bills were identified, we collected and categorized key information about each one, including bill number, date of introduction, authorship, party affiliation, affected legislation, rapporteur, and current status. In addition, a short description of each bill was prepared based on its summary and full text.

Preliminary Findings

The initial mapping, after removing duplicates, identified 200 bills related to artificial intelligence. Of these, 10 were found to be incorrectly classified as AI-related or did not feature AI as a meaningful element. In terms of when bills were introduced, a modest increase was observed in 2019, with 10 proposals filed that year. The real surge, however, came between 2022 and 2023, when the number of bills rose from 15 to 53, and again in 2024, with 82 bills introduced. This acceleration is understood to be tied to the widespread diffusion of generative AI systems, such as ChatGPT, beginning in the second half of 2022. Regarding subject matter, the most prevalent theme across the mapped bills is criminal law (33 bills), followed by labor law (17) and consumer protection (17). Next come bills of a more general or principles-based nature addressing the development and use of AI (16), and then those specifically dealing with copyright (14). It is worth noting that a single bill may be classified under more than one category.
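The merge, de-duplication, and tallying workflow described above can be sketched in a few lines of Python. The bill records and theme labels below are hypothetical placeholders, not taken from the actual mapping; the real data came from the Chamber of Deputies and Federal Senate portals.

```python
from collections import Counter

# Hypothetical records of the kind returned by the two chamber searches.
chamber_bills = [
    {"number": "PL 21/2020", "year": 2020, "themes": ["general principles"]},
    {"number": "PL 2338/2023", "year": 2023, "themes": ["general principles", "copyright"]},
    {"number": "PL 145/2024", "year": 2024, "themes": ["criminal law"]},
]
senate_bills = [
    # Overlaps with a bill already found in the Chamber search.
    {"number": "PL 2338/2023", "year": 2023, "themes": ["general principles", "copyright"]},
    {"number": "PL 303/2024", "year": 2024, "themes": ["labor law"]},
]

# Merge the two result sets and de-duplicate on the bill number,
# keeping the first record seen for each.
merged = {}
for bill in chamber_bills + senate_bills:
    merged.setdefault(bill["number"], bill)
bills = list(merged.values())

# Tally bills per year of introduction and per theme; a single bill
# may count toward more than one theme, as in the mapping above.
per_year = Counter(b["year"] for b in bills)
per_theme = Counter(theme for b in bills for theme in b["themes"])

print(len(bills))                    # 4 unique bills after de-duplication
print(per_year[2024])                # 2
print(per_theme["general principles"])  # 2
```

The same pattern scales directly to the 200 mapped bills: one dictionary keyed on bill number absorbs the overlap between the two chambers, and two counters produce the per-year and per-theme breakdowns.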
Main Themes Across AI Legislative Proposals

A preliminary analysis of the 14 bills addressing the intersection of generative AI and copyright reveals a strong focus on two issues: recognizing the use of protected works for AI training as an act subject to exclusive rights — or even as grounds for creating a new exclusive right (6 bills) — and establishing remuneration obligations when protected works are used for training purposes (6 bills). Other frequently addressed topics include civil and criminal sanctions for copyright infringement (5 bills) and transparency obligations imposed on AI developers and operators (5 bills). By contrast, the equally important debate around limitations and exceptions — particularly regarding the use of works for AI training in research and educational contexts, or for certain text and data mining purposes — has received considerably less legislative attention, appearing in only 1 bill. That bill is No. 2338/23, which has already been approved by the Federal Senate and is currently under review in the Chamber of Deputies.

Most Frequent Topics in AI and Copyright Legislative Proposals

The full mapping of bills under consideration in the National Congress will be made publicly available on the website of the Observatório Nacional de Direitos Autorais, an initiative created by the Brazilian Copyright Institute in 2022. The Observatory’s main objective is to provide open access to a wide range of materials — including judicial decisions, theses and dissertations, draft laws, legislation, and international treaties — on copyright law, serving as a reference resource for researchers, legal practitioners, and anyone seeking to understand and deepen their knowledge of the subject.

Artificial Intelligence

Centre announces a policy agenda on ‘Just AI’

In today’s world, research in fields ranging from health, education and agriculture to economics, social sciences and humanities relies on computational methods, and in particular artificial intelligence tools. Policymakers and public interest advocates around the world are beginning to formulate a policy agenda for the promotion of Just AI. The concept of Just AI combines the goals of public accountability and accessibility in AI infrastructure advanced by “Public AI” advocates with additional human rights concerns, including the moral and material interests of creators, the stewardship of traditional knowledge, cultural expressions, and genetic resources by communities, and the developmental priorities of the Global South. Many of the core elements of a Just AI vision require the implementation or alteration of copyright and related knowledge governance policies (including, e.g., privacy law, data governance, and competition law). These areas of law are often shaped and informed by international treaties and policies being implemented and reformed in International Geneva. At the Centre on Knowledge Governance we are working with a network of 100 scholars in 30 countries (through the User Rights Network) and with representatives of governments in multilateral organisations in Geneva to help define a policy agenda on Copyright, the Right to Research and Just AI. To read more about our vision for Just AI, see our full concept note below. For case studies on Just AI, visit our focus area page on Just AI.

Artificial Intelligence, WIPO

WIPO Launches Artificial Intelligence Infrastructure Interchange

WIPO launched its Artificial Intelligence Infrastructure Interchange (AIII) on March 17, describing its goal as supporting the development of AI technology that sustains the livelihoods of creators and innovators. The goal has two aspects: making AI tools available to creators to help their work, while at the same time ensuring that the works used to create such tools support the moral and material rights of authors. The key focus is on “infrastructure” that can technically identify AI creations and promote models for creators to use AI as a tool. Assistant Director General Ken Natsume explained that “the answer lies in various tools: Watermarks, metadata, digital ID, authentication tools, digital distribution frameworks.” The AIII’s launch page similarly defines the “IP infrastructure” of its focus as composed of “watermarks, authentication tools, standards, metadata, digital identifiers, rights management and content recognition systems, and digital distribution frameworks … developed by rightsholders and creators to build new business models that safeguard their rights.” This definition of AI infrastructure is quite different from the broader sense embraced by Public AI advocates. That approach proposes “treating AI as public infrastructure, emphasising democratic governance, broad accessibility, and accountability to the communities that AI systems serve.” The concept of “Just AI” used by the Centre on Knowledge Governance and others is largely congruent with the goals of Public AI, but also raises additional human rights concerns, including the moral and material interests of creators. In this sense, the WIPO AIII focus on tools to enable remuneration and creator opt-outs in AI tools can be seen as promoting some but not all aspects of a Just AI vision.
At the launch event, participants described the goal of AIII as providing a neutral forum for creators, rights holders, developers, and experts to share information on the development and use of such tools, including tools that can be used in the creation process. Music and voice or actor simulation models are a core focus of the project. These are areas where AI tools have the potential to create content that competes with the works used to train them, and where the justification for requiring licensed inputs and giving creators maximum ability to opt out of their content being used in training is at its apex. The WIPO project has created a “Technical Exchange Network (TEN)” where technical experts from the private sector, academia, and civil society will share information on the development and use of content identification tools. There will also be an annual public meeting of the project, as well as a government expert group that will share information with policymakers about such infrastructure and exchange views on national developments.

Artificial Intelligence, Blog

The Moratorium the AI Industry Cannot Afford to Lose

The WTO’s 14th Ministerial Conference (MC14) starts in Yaoundé, Cameroon, next week with a packed agenda and real stakes. Buried in the long list of negotiations is a decision that will have a significant impact beyond trade: whether to renew the moratorium on non-violation complaints under the TRIPS Agreement. The outcome will help determine whether the TRIPS flexibilities and exceptions, particularly copyright exceptions, which have recently become the backbone of the AI economy, can be challenged at the WTO.

Two Moratoria, One Bargain

Since 1998, WTO members have supported a temporary moratorium on customs duties on electronic transmissions, including software downloads, streamed content, and digital services. That moratorium has been extended at every Ministerial Conference since. It is up for renewal again at MC14, where the United States (US) is pushing to make it permanent. The moratorium originated at the 1998 WTO Ministerial in Geneva, where members adopted a Work Program on E-commerce and committed to “continue their current practice of not imposing customs duties on electronic transmissions” (WTO 1998). Critically, the term “electronic transmissions” was never defined. That ambiguity allowed the scope of the moratorium to expand alongside the digital economy, covering an ever-wider range of digital content and services without any fresh multilateral agreement. Since then, the US has been embedding the moratorium in its bilateral free trade agreements. The US-Jordan FTA in 2000 was the first agreement to include a binding commitment not to impose customs duties on electronic transmissions. Recent agreements on reciprocal trade (ARTs) go further and require countries to support multilateral adoption of a permanent moratorium on customs duties on electronic transmissions at the WTO. All these efforts build a web of bilateral obligations that formalizes the current push for a permanent multilateral moratorium at MC14.
Less discussed but just as consequential is a second moratorium: the freeze on non-violation and situation complaints (NVC) under the TRIPS Agreement, which has likewise been extended at each Ministerial Conference since 1995. Under TRIPS Article 64, a WTO member can file a non-violation complaint even when no TRIPS rule has been broken, claiming only that expected benefits have been “nullified or impaired” by another member’s measures. Non-violation claims create a significant IP weapon: any measure that allegedly nullifies or impairs benefits under TRIPS may, under certain conditions, be challenged not for violating TRIPS but on the theory that it frustrates the legitimate commercial expectations of foreign rightsholders. In principle, this creates a pathway to challenge a wide range of legitimate public-interest policies, including rules on patentability, compulsory licensing, and copyright limitations and exceptions, such as limitations for research and education and the US fair use doctrine. US copyright law includes a variety of specific exceptions, but fair use is the oldest and the most broadly applicable of all US exceptions to copyright infringement. As IP scholar Frederick Abbott warned as early as 2003, “non-violation causes of action could be used to threaten developing Members’ use of flexibilities inherent in the TRIPS Agreement and intellectual property law more generally.
Thus, for example, Members that adopt relatively generous fair use rules in the fields of copyright or trademark might find that they are claimed against for depriving another.” The two moratoria have been traded as a package. Developing countries seeking the TRIPS NVC moratorium, which protects domestic policy space in health, access to knowledge, education, and technology transfer, have had to support the e-commerce moratorium, which benefits US digital platforms. Each Ministerial Conference is, in effect, another round of that exchange. If the e-commerce moratorium becomes permanent at MC14, as the US proposes, the key question is what developing countries receive in return, particularly on the TRIPS NVC side.

Significance of Copyright Exceptions

Many key internet functions rely on copyright limitations and exceptions. Search engines cache and index content without negotiating individual licensing agreements; search previews display short snippets; CDNs buffer and transmit protected works; cloud services store user-uploaded copyrighted files. According to the CCIA’s 2025 report, fair use industries accounted for 18 percent of US GDP, $4.9 trillion in value added, and $10.2 trillion in revenues in 2023, employing one in seven American workers. Within that broader figure, AI-related fair use industries alone generated $1.7 trillion in revenues in 2023, up 78 percent since 2017. The AI industry has added a new dimension. Training large language models requires access to vast quantities of text, books, articles, web pages, and code repositories. Much of that access has been broadly justified under fair use, on the theory that training is transformative and serves a new purpose. In that sense, AI companies and the broader data economy are the newest dependents on copyright exceptions.
If those limitations and exceptions can be challenged through non-violation complaints at the WTO, bypassing the question of whether they infringe TRIPS, the legal foundation for AI training could become globally contestable.

The Buenos Aires Lesson

At the Buenos Aires Ministerial Conference in December 2017, during Donald Trump’s first term, the renewal of both the e-commerce and TRIPS NVC moratoria was uncertain. Both were eventually extended. That episode revealed, or at least made visible, that the fair use and safe harbor exceptions underpinning internet commerce were potentially vulnerable to non-violation challenges, and US tech industry stakeholders grew increasingly aware of how much the TRIPS NVC moratorium mattered to their legal operating environment. The two moratoria were treated as a package. That understanding should be stronger today. AI companies are actively navigating copyright litigation in domestic courts, whose outcomes are still unresolved. Exposure via non-violation complaints at the WTO would add a second front. What was at stake in 2017 is now more visible and more significant.

What’s Next

The argument is pretty straightforward. If the US

Artificial Intelligence

Public AI Launch, and Some Thoughts on Copyright

I attended the exciting launch of a series of papers and reflections on “Public AI” at the EU Parliament this week. The core of the idea is that the non-US/China world needs more publicly directed and open-source AI-related resources — from computational capacity to open data sets (like the EU’s “data spaces”) — to build both commercial and non-commercial AI tools delinked from big tech. There is an important copyright issue at its core. To build AI infrastructure, including to support the development of frontier and foundation models that may themselves be non-profit but can serve as the base for other (including commercial) developers, Public AI model builders need legal certainty as to what material they can use for training. If they don’t have the same rights as Chinese and US developers, they won’t be able to succeed. Some developers are working with only openly licensed and public domain sources, but their models are then trained on much smaller data sets. Cultural heritage organizations want to help, but they also need certainty as to whether they can curate and share data with model builders. Article 3 of the EU CDSM Directive (2019) provides some cover, but publishers are claiming it covers only traditional academic pursuits, not AI training. Most developing countries lack even an Article 3-type leg to stand on. In this context, the future of Public AI appears to depend a great deal on the definition of the right to research within modern copyright laws. Proposals to apply remuneration requirements, if any, only after a specific application (“output”) of a foundation model proves to have copyright-relevant effects (e.g., commercial substitution) may be one path forward. See Senftleben, Martin, Generative AI and Author Remuneration (June 14, 2023), International Review of Intellectual Property and Competition Law 54 (2023), pp. 1535-1560.

Artificial Intelligence, Blog, Centre News

Centre Announces Short Course on Intellectual Property and Artificial Intelligence

The Centre on Knowledge Governance is pleased to announce a new short course on AI and IP to take place in Geneva from September 29-30, 2026.

COURSE DESCRIPTION

This intensive two-day course provides a comprehensive, comparative analysis of the evolving legal and policy landscape at the intersection of Intellectual Property (IP) and Artificial Intelligence (AI). Participants will explore pressing legal challenges, including the copyright treatment of AI training data, the patentability and copyright of AI-generated outputs, and the balance between proprietary interests and the public interest in research (text and data mining and computational research) and the development of “Public AI.” The course will feature in-depth comparative analysis of legal frameworks and policy proposals across the European Union (EU), United States (USA), India, Brazil, Singapore, Japan, and in international forums such as the World Intellectual Property Organization, the World Trade Organization and other agencies. The learning experience will culminate in a practical role-play exercise in which students will draft a model international legal instrument aimed at ensuring fair remuneration for creators while safeguarding the rights of researchers and public interest organizations developing AI infrastructure. This legal instrument will focus on a range of factors to be used in distinguishing research and public interest uses of AI from commercial competitive uses.

LEARNING OBJECTIVES

Upon completion of this course, participants will be able to:

WHO IS THIS PROGRAMME FOR?

This programme is particularly relevant for mid- to senior-level practitioners from various organisations working at the intersection of intellectual property and AI policy or scholarship, such as:

LECTURERS

The Course will be directed by Sean Flynn and Ben Cashdan of the Centre on Knowledge Governance, Geneva Graduate Institute.
Guest lecturers will participate in person or online to bring comparative expertise from jurisdictions such as India, Brazil, China and the African continent, in addition to the US and EU.

SCHOLARSHIPS

10 scholarships will be available for highly motivated government delegates from developing countries and representatives of public interest organizations who participate in multilateral policy processes on copyright, AI and the rights of researchers. You can apply below:

APPLICATION FOR COURSE

To enroll for the course itself, please use the online form on this page. If you have also applied for a scholarship, please note this when you enroll. Thanks.

Africa: Copyright & Public Interest, Artificial Intelligence, TDM Cases

Case Studies of AI for Good and AI for Development

Today the Geneva Centre on Knowledge Governance presents a series of Case Studies on AI for Good in Africa and the Global South. These grew out of our work on Text and Data Mining and our policy work in support of the Right to Research. Researchers in the Global South are responding to local and global challenges from health and education to language preservation and climate change mitigation. In all these cases, computational methods and Artificial Intelligence (AI) play a leading role in finding and implementing solutions. A common thread that runs through all the cases is how intellectual property laws can support innovation and problem solving in the public interest, whilst protecting the interests of creators, communities and custodians of traditional knowledge. In addition, several practitioners are looking at how to redress data imbalances, where large companies in the Global North have much greater access to works for historical, legal and economic reasons. The cases include:

Each of our case studies is written up in the form of a report, combined with a video exploration of the case study in the words of its leading practitioners.

Artificial Intelligence, Blog, Latin America / GRULAC

AI, Copyright, and the Future of Creativity: Notes from the Panama International Book Fair

During the second week of August, I was invited to speak at the Panama International Book Fair, an event hosted by the World Intellectual Property Organization (WIPO), the Panama Copyright Office, the Ministry of Culture, and the Panama Publishers Association. My presentation focused on the increasingly complex intersection between copyright law and artificial intelligence (AI)—a topic now at the center of global legal, cultural, and economic debate. This post summarizes the core arguments of that presentation, drawing on recent litigation, academic research, and policy developments, including the U.S. Copyright Office’s May 2025 report on generative AI. How should copyright law respond to the widespread use of protected works in the training of generative AI systems? The analysis suggests there are emerging discussions around several key areas: the limits of fair use and exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight. The article proceeds in five parts: it begins with an overview of the legal and technological context surrounding AI training; it then reviews academic proposals for recalibrating copyright frameworks; it examines recent court decisions that test the boundaries of current doctrine; it summarizes the U.S. Copyright Office’s 2025 report as an institutional response; and it concludes by outlining four policy considerations for future regulation.

A Shifting Legal and Technological Landscape

The integration of generative AI into creative and informational ecosystems has exposed foundational tensions in copyright law. Current systems routinely ingest large volumes of copyrighted works—such as books, music, images, and journalism—to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data?
Do existing doctrines and legal provisions—fair use, or exceptions and limitations—extend to these practices? What remedies, if any, are available to rightsholders whose works are used without consent? These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by legal scholarship and litigation, each proposing frameworks to reconcile AI development with copyright’s normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.

Academic Perspectives: Towards a New Equilibrium

In reviewing the literature, several clear themes emerge. First, some authors agree that remuneration rights for authors must be strengthened. Geiger, Scalzini, and Bossi argue that to truly ensure fair compensation for creators in the digital age, especially in light of generative AI, EU copyright law must move beyond weak contractual protections and instead implement strong, unwaivable remuneration rights that guarantee direct and equitable revenue flows to authors and performers as a matter of fundamental rights. Second, some scholars highlight that the technical opacity of generative AI demands new approaches to author remuneration. Cooper argues that as AI systems evolve, it will become nearly impossible to determine whether a work was AI-generated or whether a particular copyrighted work was used in training. He warns that this loss of traceability renders attribution-based compensation models unworkable. Instead, he calls for alternative frameworks to ensure creators are fairly compensated in an age of algorithmic authorship. Third, scholars like Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation—giving creators the right to opt out of AI training and establishing a levy on AI providers to ensure fair payment to those whose works are used without a license.
Gervais, meanwhile, argues that creators should be granted a new, assignable right of remuneration for the commercial use of generative AI systems trained on their copyrighted works—complementing, but not replacing, existing rights related to reproduction and adaptation. There is also a growing consensus on the need to modernize limitations and exceptions, particularly for education and research. Flynn et al. show that a majority of countries do not have exceptions that enable modern research and teaching, such as academic uses of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, enabling lawful, cross-border access to copyrighted materials without requiring prior licensing. At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has taken steps in this area by approving a work program on limitations and exceptions (L&Es), currently under discussion ahead of the upcoming SCCR 47. And in the Committee on Development and Intellectual Property (CDIP), a pilot project has been approved on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa – Proposal by the African Group (CDIP/30/9 REV). My own work, as well as that of Díaz & Martínez, has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses. Eleonora Rosati argues that unlicensed AI training falls outside existing EU and UK copyright exceptions, including Article 3 of the DSM Directive (TDM for scientific research), Article 4 (general TDM with opt-outs), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or scientific research). She finds that exceptions for research, education, or fair use-style defenses do not apply to the full scope of AI training activities.
As a result, she concludes that a licensing framework is legally necessary and ultimately unavoidable, even when training is carried out for non-commercial or educational purposes. Finally, policy experts like James Love warn that “one-size-fits-all” regulation risks sidelining the medical and research breakthroughs promised by artificial intelligence. The danger lies in treating all training data as equivalent—conflating pop songs with protein sequences, or movie scripts with clinical trial data. Legislation that imposes blanket consent or licensing obligations, without distinguishing between commercial entertainment and publicly funded scientific knowledge, risks chilling socially valuable uses of AI. Intellectual property law for AI must be smartly differentiated, not simplistically uniform.

Litigation as a Site of Doctrinal Testing

U.S. courts have become a key venue for testing the boundaries of copyright in the age of artificial intelligence. In the past two years, a growing number of cases


A first look into the JURI draft report on copyright and AI

This post was originally published on COMMUNIA by Teresa Nobre and Leander Nielbock. Last week we saw the first draft of the long-anticipated own-initiative report on copyright and generative artificial intelligence authored by Axel Voss for the JURI Committee (download as a PDF file). The report, which marks the third entry in the Committee’s recent push on the topic, after a workshop and the release of a study in June, fits into the ongoing discussions around copyright and AI at the EU level. In his draft, MEP Voss targets the legal uncertainty and perceived unfairness around the use of protected works and other subject matter for the training of generative AI systems, strongly encouraging the Commission to address the issue as soon as possible instead of waiting for the looming review of the Copyright Directive in 2026.

A good starting point for creators

The draft report starts by calling on the Commission to assess whether the existing EU copyright framework addresses the competitive effects associated with the use of protected works for AI training, particularly the effects of AI-generated outputs that mimic human creativity. The rapporteur recommends that such an assessment should consider fair remuneration mechanisms (paragraph 2) and that, in the meantime, the Commission should “immediately impose a remuneration obligation on providers of general-purpose AI models and systems in respect of the novel use of content protected by copyright” (paragraph 4). Such an obligation shall be in effect “until the reforms envisaged in this report are enacted.” However, we fail to understand how such a transitory measure could be introduced without a reform of its own. Voss’s thoughts on fair remuneration also require further elaboration, but clearly the rapporteur is solely concerned with remunerating individual creators and other rightholders (paragraph 2).
Considering, however, the vast amounts of public resources that are being appropriated by AI companies for the development of AI systems, remuneration mechanisms need to channel value back to the entire information ecosystem. Expanding this recommendation beyond the narrow category of rightholders therefore seems crucial. Paragraph 10 deals with the much-debated issue of transparency, calling for “full, actionable transparency and source documentation by providers and deployers of general-purpose AI models and systems”, while paragraph 11 asks for an “irrebuttable presumption of use” where the full transparency obligations have not been fully complied with. Recitals O to Q clarify that full transparency shall consist “in an itemised list identifying each copyright-protected content used for training”—an approach that does not seem proportionate, realistic or practical. At this stage, a more useful approach to copyright transparency would be to go beyond the disclosure of training data, which is already dealt with in the AI Act, and recommend the introduction of public disclosure commitments on opt-out compliance. A presumption of use—which is a reasonable demand—could still kick in based on a different set of indicators. Another set of recommendations aimed at addressing the grievances of creators is found in paragraphs 6 and 9 and includes the standardization of opt-outs and the creation of a centralized register for opt-outs. These measures are very much in line with COMMUNIA’s efforts to uphold the current legal framework for AI training, which relies on creators being able to exercise and enforce their opt-out rights.
Two points of concern for users

At the same time that it tries to uphold the current legal framework, the draft report also calls for either the introduction of a new “dedicated exception to the exclusive rights to reproduction and extraction” or for expanding the scope of Article 4 of the DSM Directive “to explicitly encompass the training of GenAI” (paragraph 7). At first glance, this recommendation may appear innocuous—redundant even, given that the AI Act already assumes that such legal provision extends to AI model providers. However, the draft report does not simply intend to clarify the current EU legal framework. On the contrary, the report claims that the training of generative AI systems is “currently not covered” by the existing TDM exceptions. This challenges the interpretation provided for in the AI Act and in multiple statements by the Commission, and opens the door to discussions around the legality of current training practices, with all the consequences this entails, including for scientific research. The second point of concern for users is paragraph 13, which calls for measures to counter copyright infringement “through the production of GenAI outputs.” Throughout the stakeholder consultations on the EU AI Code of Practice, COMMUNIA was very vocal about the risks this category of measures could entail for private uses, protected speech and other fundamental freedoms. We strongly opposed the introduction of system-level measures to block output similarity, since those would effectively require the use of output filters without safeguarding users’ rights. We also highlighted that model-level measures targeting copyright-related overfitting could have the effect of preventing the lawful development of models supporting substantial legitimate uses of protected works.
As this report evolves, it is crucial to keep this in mind and to ensure that any copyright compliance measures targeting AI outputs are accompanied by relevant safeguards that protect the rights of users of AI systems.

A win for the Public Domain

One of the last recommendations in the draft report concerns the legal status of AI-generated outputs. Paragraph 12 suggests that “AI-generated content should remain ineligible for copyright protection, and that the public domain status of such works be clearly determined.” While some AI-assisted expressions can qualify as copyright-protected works under EU law—most importantly when there is sufficient human control over the output—many will not meet the standards for copyright protection. However, these outputs can still potentially be protected by related rights, since most related rights have no threshold for protection. This calls into question whether the related rights system is fit for purpose in the age of AI: protecting non-original AI outputs with exclusive rights, regardless of any underlying creative activity and in the absence of meaningful investment, is certainly inadequate. We therefore support the recommendation that their public domain status be asserted in those cases.

Next steps

Once the draft report is officially published and presented in JURI on
