Giovanni Piccirillo
Apr 15, 2026
27 min read

In recent years, the European Union has built a narrative of global regulatory leadership based on its ability to anticipate and regulate technological transformations through innovative and binding legislative instruments. The General Data Protection Regulation, the Digital Services Act, the Digital Markets Act, and, most recently, the Artificial Intelligence Act have represented the cornerstones of an ambitious regulatory model, geared towards protecting fundamental rights and building a European digital ecosystem based on principles of transparency, accountability, and proportionality. This approach, often referred to as the "Brussels effect," has significantly influenced legislation outside Europe and helped shape global reference standards, despite attracting criticism for the burdensome obligations imposed on economic operators and the risk of stifling technological innovation.
However, in the early months of 2025, this paradigm underwent a significant shift. The European Commission, led by Ursula von der Leyen, presented a proposal to substantially amend the AI Act's timeline of application and its enforcement regime, introducing a grace period for companies, a deferral of sanctions, and a relaxation of obligations related to high-risk systems. This decision, formally justified by the need to support the competitiveness of European companies and to avoid an overly rigid approach to innovation, is part of a geopolitical and economic context characterized by intense pressure from the United States, widespread lobbying by major technology platforms, and a growing internal debate on the balance between regulation and industrial development.
This paper aims to critically analyze this shift, framing it within the broader context of European regulatory sovereignty and transatlantic dynamics, assessing its systemic effects on national legal systems, and examining its long-term implications for the European integration project. The central question is not merely the effectiveness of a regulatory instrument, but concerns the very meaning of the Union's regulatory capacity in an era of global technological competition and growing asymmetry of power between public institutions and transnational private actors.
The Artificial Intelligence Act, adopted as Regulation (EU) 2024/1689 of 13 June 2024, constitutes the world's first comprehensive, binding regulation of artificial intelligence. The regulation adopts a risk-based approach, classifying AI systems into four categories—unacceptable, high, limited, and minimal risk—and tailoring prohibitions and obligations to each system's potential impact on fundamental rights, safety, and health.
The focus of the regulation is on high-risk systems used in sensitive areas such as critical infrastructure, education, employment, public services, justice, and law enforcement. These systems are subject to stringent compliance obligations—including risk management, data governance, technical documentation, transparency, human oversight, and robustness and cybersecurity safeguards—reflecting a predominantly preventative approach.
The AI Act is part of a broader European digital regulatory framework that includes, among others, the GDPR, the Data Act, and the Digital Services Act, helping to shape a governance model aimed at balancing innovation and the protection of rights. From this perspective, the regulation represents a central instrument in the European Union's digital sovereignty strategy, understood as the ability to autonomously define the rules of the technology market, despite a global context characterized by strong industrial and geopolitical asymmetries.
As for its entry into force, Regulation (EU) 2024/1689 provides for a staggered implementation, divided into several phases, so that economic operators and national authorities can progressively adapt to the obligations it imposes. This timeline reflects the technical complexity of the matter and the need to ensure orderly implementation, avoiding regulatory shocks that could jeopardize the functioning of markets and the continuity of services.
The regulation entered into force on 1 August 2024, twenty days after its publication in the Official Journal of the European Union on 12 July 2024. The first provisions to produce legal effects, applicable from 2 February 2025, were those relating to prohibited artificial intelligence practices, i.e., those posing an unacceptable risk to fundamental rights. These include real-time remote biometric identification in public spaces for law enforcement purposes (subject to limited exceptions), biometric categorization based on sensitive characteristics, predictive policing based on individual profiling, and subliminal behavioral manipulation.
Starting 2 August 2025, the obligations relating to general-purpose AI models, including the latest-generation foundation models, became applicable. These models, characterized by their ability to adapt to multiple tasks and a high degree of autonomy, are subject to transparency, documentation, and risk management requirements commensurate with their computational power and systemic impact. 2 August 2026 marks the application date for the bulk of the obligations relating to high-risk systems, while 2 August 2027 is the deadline for full application, covering high-risk systems embedded in products subject to existing EU safety legislation as well as general-purpose models already placed on the market before the relevant obligations took effect.
This timeline was designed to balance early protection of rights with the need to give operators sufficient time to adapt their technical, organizational, and documentation systems. It nonetheless raised immediate concerns in industry, particularly among small and medium-sized enterprises, which complained of the burden of the required compliance and the risk of being excluded from the market by compliance costs they could not sustain. These concerns were amplified by large technology platforms, which pointed to the risk of slowed innovation and a loss of European competitiveness vis-à-vis the United States and China, where AI regulation is less pervasive or at least follows a different logic.
The proposed amendment to the AI Act cannot be understood without analyzing the context of external pressure that has characterized the previous months. Major US technology platforms, including Google, Meta, Microsoft, Amazon, and OpenAI, have conducted a widespread and highly structured lobbying effort, targeting not only European institutions but also national governments, trade associations, and the general public. This effort has employed a variety of tools, including the publication of position papers, the organization of events and conferences, direct engagement with the European Commission and European Parliament, and the mobilization of networks of experts and academics in favor of a less restrictive approach to AI regulation.
The core argument of the positions supported by Big Tech rests on three main lines. First, it is argued that excessive regulation risks stifling innovation, slowing the development of new technologies and compromising Europe's ability to compete with the United States and China in the artificial intelligence sector. Second, it is highlighted that the obligations imposed by the AI Act, particularly those relating to high-risk systems, entail high costs and technical difficulties that disproportionately penalize smaller companies, paradoxically favoring larger operators with greater resources. Third, it is emphasized that European regulation risks creating barriers to market entry for non-European companies, generating trade tensions and fueling the risk of retaliatory measures from other jurisdictions.
These arguments have found significant support in the US administration, which has expressed concern about the AI Act's implications for American economic operators and about its ability to shape global regulatory standards for technology. Transatlantic relations on digital governance are marked by a structural tension between two different regulatory models: on the one hand, the European model, based on the primacy of fundamental rights and ex ante regulation; on the other, the US model, centered on the protection of free enterprise, ex post regulation, and trust in market self-regulation. This divergence was already evident with the adoption of the GDPR and the Digital Services Act, and has sharpened with the AI Act, which some US observers perceive as a European attempt to export its regulatory model globally.
External pressure intensified throughout 2024 as the regulation's implementation deadlines approached. Representatives of the major tech platforms met repeatedly with European Commission leaders, expressing concerns about the impact of compliance obligations on the innovation capacity and competitiveness of companies operating in the European market. At the same time, think tanks and research centers funded by these platforms published studies highlighting the risks of an overly rigid approach to AI regulation and suggesting a grace period that would allow operators to adapt gradually and without immediate penalties.
In this context, the European Commission presented, in early 2025, a proposal to amend the AI Act that includes the introduction of a grace period for companies, the deferral of sanctions, and the relaxation of certain obligations relating to high-risk systems. This proposal, set out in an official communication and accompanied by a series of technical guidelines, was subsequently folded into a broader package of measures known as the Digital Omnibus, a proposed regulation presented in November 2025. The Digital Omnibus aims to streamline and simplify the complex architecture of the European Union's digital regulation—including, among others, the GDPR, DMA, DSA, AI Act, and NIS2—by reducing overlaps and administrative costs for businesses and citizens, harmonizing existing rules, and fostering innovation, particularly in the fields of data, artificial intelligence, and cybersecurity, through a more pragmatic and flexible approach.
The grace period, lasting twelve months from the date of application of the obligations relating to high-risk systems, allows economic operators to continue placing AI systems on the market and putting them into service without incurring immediate penalties, provided they demonstrate that they are committed to an adaptation process and have taken reasonable measures to gradually comply with the Regulation's requirements. During this period, competent national authorities are encouraged to adopt a dialogue- and collaboration-oriented approach with operators, prioritizing technical assistance and advisory services over the imposition of financial penalties.
The postponement of sanctions specifically concerns violations of the obligations relating to high-risk systems, which constitute the most stringent core of the regulation. The sanctions provided for by the AI Act are particularly severe, reaching up to 7% of a company's annual worldwide turnover (or EUR 35 million, whichever is higher) for the most serious violations, following the turnover-based sanctioning model pioneered by the GDPR. Deferring these sanctions serves the stated objective of avoiding a disproportionate impact on companies, particularly small and medium-sized ones, and of allowing a gradual, orderly implementation of the regulation.
The relaxation of requirements translates into a revision of the technical guidelines on conformity assessment, technical documentation, and risk management. Specifically, the Commission has provided for the possibility of adopting simplified solutions for certain categories of high-risk systems, reducing the level of detail required in the documentation and allowing forms of self-assessment in place of third-party certification procedures. This flexibility, however, does not apply to the most critical AI systems, such as those used for law enforcement, migration management, and justice, for which the most stringent requirements remain.
The Commission's proposal was presented as a pragmatic response to the concerns expressed by economic operators and as an attempt to reconcile the protection of fundamental rights with the need to support the competitiveness of European industry. It nevertheless drew immediate criticism from various institutional bodies and from civil society, which denounced a retreat from the regulation's original objectives and a capitulation to pressure from the major tech platforms.
The core of the proposed amendment lies in the postponement of the full application of the obligations relating to high-risk systems, which represent the category of greatest regulatory relevance and have the greatest impact on fundamental rights. This postponement does not constitute a formal suspension of the Regulation's provisions, but rather a deferral of their effective application through the provision of a transitional period during which operators are not subject to sanctions and can benefit from accompanying measures and technical support from national authorities.
The decision to focus intervention on high-risk systems reflects a rationale for proportionality and the gradual nature of regulatory intervention. These systems, by definition, pose the most complex challenges in terms of compliance, as they require significant investments in technical infrastructure, specialized skills, organizational processes, and documentation. The categories affected include systems used in particularly sensitive areas, such as critical infrastructure (transport, energy, water, healthcare), education and vocational training (student assessment, admission to educational institutions), employment and worker management (selection, promotion, performance evaluation), access to essential public and private services (creditworthiness assessment, insurance premium determination), law enforcement (crime risk assessment, predictive policing), migration, asylum, and border control management, and the administration of justice and democratic processes (assistance in researching and interpreting facts and law).
The postponement of the obligations relating to these systems has significant implications for the protection of fundamental rights. In particular, it raises concerns regarding the rights to personal data protection, non-discrimination, a fair trial, freedom of expression and information, and human dignity. These concerns are amplified by the fact that many high-risk systems are already operational and have tangible effects on people's lives, influencing decisions that determine access to employment, education, essential services, justice, and social protection.
The postponement essentially translates into an extension of the regime of non-specific regulation for high-risk systems, which continue to operate under the applicable general rules, such as the GDPR, consumer protection provisions, sector-specific regulations, and general principles of EU law. However, these rules do not provide comprehensive and specific regulation for artificial intelligence, limiting themselves to general requirements that do not address the technical peculiarities and specific risks associated with the use of machine learning algorithms, biometric recognition systems, and automated decision-making technologies.
The proposed amendment to the AI Act has sparked immediate and vocal criticism from a broad spectrum of stakeholders, including civil society organizations, data protection authorities, members of the European Parliament, digital law experts, and consumer groups. The criticisms focus on three main aspects: the departure from the regulation's original objectives, the risk of compromising fundamental rights, and the perceived capitulation to pressure from major tech platforms.
On the first point, it is emphasized that the grace period and the postponement of sanctions risk undermining the regulation's meaning, indefinitely postponing the effective implementation of the most stringent requirements and creating a regime of substantial impunity for violations committed during the transitional period. This regime contradicts the preventative and precautionary approach that characterizes the AI Act, which is based on the idea that the risks associated with artificial intelligence should be managed ex ante, through stringent compliance obligations, rather than ex post, through sanctions applied after the damage has already occurred.
On the second issue, it is emphasized that the postponement of the implementation of obligations relating to high-risk systems entails the continuation of practices that may violate fundamental rights protected by the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. Of particular concern are facial recognition systems used in public spaces, credit assessment systems based on opaque algorithms, personnel selection systems that may discriminate against vulnerable groups, and predictive policing systems that risk reinforcing existing prejudices and violating the principle of the presumption of innocence.
Third, the Commission's decision is criticized as a victory for the lobbying strategies of the major tech platforms, which have exerted constant and widespread pressure to soften the AI Act's regulatory stringency. This perception is fueled by the timing of the discussions between the Commission's top management and Big Tech representatives and the presentation of the proposed amendment, as well as by the fact that some of the solutions adopted faithfully reflect the positions expressed by the major platforms in their position papers and public communications.
The European Data Protection Supervisor, in a public statement released in March 2025, expressed serious concerns about the proposed amendment's compatibility with the GDPR and the principles enshrined in the Charter of Fundamental Rights. Specifically, the Supervisor emphasized that the grace period cannot constitute a legal basis for legitimizing personal data processing that does not meet the GDPR's requirements of lawfulness, fairness, and transparency, and that national data protection authorities retain the power to intervene to sanction violations of the GDPR regardless of the application of the AI Act.
The proposed amendment to the AI Act has highlighted significant fractures within European institutions and among member states, revealing profound divergences regarding the balance between the protection of fundamental rights and economic competitiveness, between regulatory ambition and political pragmatism, and between regulatory sovereignty and technological dependence. These fractures are part of a broader context of institutional tensions characterizing the current phase of the European integration process, marked by increasing difficulties in reaching consensus on issues of strategic importance and the reemergence of intergovernmental approaches to the detriment of the community method.
The European Parliament, which had played a central role in the trilogue negotiations that led to the adoption of the AI Act, expressed strong criticism of the Commission's proposal, denouncing a violation of the political balance painstakingly achieved during the legislative process. Numerous MEPs, particularly from the Socialists and Democrats, Greens, and Left groups, accused the Commission of succumbing to pressure from the major tech platforms and of betraying its commitments to the protection of fundamental rights. In a resolution adopted in February 2025, the European Parliament called on the Commission to withdraw the proposed amendment and ensure the full and timely implementation of the regulation, emphasizing that any substantial amendment must follow the ordinary legislative procedure and cannot be adopted unilaterally by the Commission through delegated or implementing acts.
The European Commission, for its part, defended its proposal, arguing that it responds to concrete needs expressed by economic operators and national authorities and that it does not entail a substantial amendment to the regulation, but merely an adjustment to the methods and timing of its implementation. President von der Leyen, in a speech before the European Parliament in March 2025, emphasized that European competitiveness in the AI sector is a strategic priority and that an overly rigid approach to regulation risks compromising Europe's ability to attract investment, retain talent, and develop an autonomous technology industry. According to this view, the grace period and the easing of obligations are necessary tools to prevent the AI Act from becoming an obstacle to innovation rather than an enabler.
Member States expressed diverging positions. Some, particularly in Northern and Western Europe, supported the Commission's proposal, stressing the importance of supporting domestic businesses and avoiding a regulatory rigidity that could undermine economic competitiveness. Others, particularly in Southern and Eastern Europe, expressed concern about the risk of a setback in the protection of fundamental rights and insisted on rigorous enforcement of the regulation, even at the cost of higher short-term economic costs. This divergence reflects structural differences in Member States' economic development models, legal traditions, and political priorities, making a common position particularly difficult to reach.
The issue of postponing the AI Act cannot be understood without placing it within the global economic context of artificial intelligence, characterized by intense competition between the United States, China, and the European Union for technological leadership, massive investments in research and development, and the growing concentration of economic power in the hands of a few large technology platforms. This context creates a profoundly asymmetric landscape, in which the European Union finds itself in a position of structural weakness compared to its main global competitors.
The United States leads the sector thanks to a mature technology ecosystem, the presence of major big tech companies, and private investments that exceeded $100 billion in 2024, particularly focused on general-purpose AI models. China, for its part, has made AI a national strategic priority, supporting industrial development through public programs, broad access to data, and a less restrictive regulatory approach.
The European Union, despite possessing high-level scientific expertise and a vast internal market, suffers from fragmented capital markets, a lower appetite for risk, and the absence of global-scale technology platforms. European investments remain lower than those of its main competitors, and dependence on non-EU technologies represents a growing strategic vulnerability.
This asymmetry presents Europe with a structural dilemma: to exercise global regulatory leadership through high standards of protection of rights, or to prevent excessive regulation from hindering innovation and competitiveness. The debate on postponing and adapting the AI Act fits precisely into this tension, reflecting the attempt to make the regulatory framework more sustainable for businesses without abandoning the founding values of the European legal system. However, the question remains whether this rebalancing is sufficient to build true technological sovereignty in the long term.
One of the paradoxical outcomes of the European regulatory strategy could be the creation of a Europe equipped with a sophisticated and rigorous regulatory framework, but lacking an autonomous technology industry and therefore dependent on technologies developed and controlled by non-European players. This scenario poses the risk of a "regulated but dependent Europe," in which the ability to define regulatory standards does not translate into an effective capacity to control technological innovation processes and guide them according to European interests and values.
Europe's technological dependence manifests itself in multiple dimensions. First, almost all general-purpose AI models used in the European market are developed by US or Chinese companies, with the result that the data used to train the algorithms, and the knowledge generated through the use of these systems, flow outside Europe. Second, the cloud computing and data processing infrastructures required to operate AI systems are largely controlled by non-European operators, with significant implications for security, privacy, and data sovereignty. Third, the technical expertise needed to develop, deploy, and manage advanced AI systems is concentrated in a few geographical areas, primarily the United States and China, forcing European companies to import talent or relocate research and development activities.
This dependence entails economic, political, and strategic risks. Economically, it translates into an outflow of value abroad, a loss of employment opportunities, and a reduced ability to capture the economic benefits of technological innovation. Politically, it limits Europe's ability to effectively influence global technical standards and ensure that technologies comply with the values and principles enshrined in European law. Strategically, it exposes Europe to the risk of dependence on technologies controlled by foreign powers, with potentially serious implications for national security, decision-making autonomy, and resilience in the face of geopolitical crises.
The postponement of the AI Act risks exacerbating this situation, as it abandons regulation as a lever for steering technological development toward European interests. While the United States and China combine massive investments in research and development with active industrial policies and strategies to support national businesses, Europe appears to limit itself to regulating without investing, regulating without supporting, and standardizing without creating the conditions for an autonomous technology industry. This approach risks producing the worst possible outcome: a regulatory framework that discourages innovation without generating European technological alternatives, and that imposes costs on businesses without delivering benefits in terms of technological sovereignty.
The postponement of the AI Act has significant implications for national legal systems, which had begun legislative and organizational adjustments in preparation for the regulation's implementation. The Italian case offers a prime example of this dynamic: with Law No. 132 of 23 September 2025, Italy adopted a comprehensive national framework for artificial intelligence, which supplements the AI Act and assigns significant tasks to the Agency for Digital Italy and the Italian Data Protection Authority.
Italian law establishes a multilevel governance system, which provides for cooperation between national authorities, European authorities, and technical standardization bodies. It establishes a National Authority for Artificial Intelligence, with supervisory, advisory, promotional, and sanctioning functions, and provides for the adoption of technical guidelines, sector-specific codes of conduct, and voluntary certification mechanisms. This regulation is based on the assumption that the AI Act constitutes the reference regulatory framework and that national intervention should be limited to integrating, specifying, and implementing the provisions of the European regulation, without introducing additional or divergent obligations that could jeopardize the unity of the internal market.
The postponement of the AI Act's implementation presents Italian national authorities with a complex dilemma. On the one hand, they could apply the national legislation anyway, exercising the supervisory and sanctioning powers provided by law even in the absence of effective implementation of the European regulation. This choice would ensure regulatory continuity and the protection of fundamental rights, but it would risk creating divergences from other Member States and penalizing Italian companies relative to their European competitors. On the other hand, national authorities could align themselves with the European deferral, postponing the implementation of the national legislation and adopting an approach geared toward collaboration and technical assistance rather than sanctions. This choice would ensure consistency with the European level, but it would risk undermining the legislative efforts already made and creating a regulatory vacuum in areas of fundamental importance.
This issue is particularly relevant in the most sensitive sectors, such as public administration, healthcare, justice, and education, where Italy has launched ambitious digitalization processes that make extensive use of artificial intelligence systems. In these sectors, the lack of effective regulation risks compromising the protection of citizens' rights, fueling mistrust in digital technologies, and hindering the development of an efficient, transparent, and respectful public administration model.
As previously highlighted, the proposed amendment to the AI Act is part of the broader Digital Omnibus package, presented by the European Commission on November 19, 2025, with the stated goal of simplifying and streamlining the European digital regulatory framework. The Digital Omnibus includes amendments to several legislative acts, including the Digital Services Act, the Digital Markets Act, the Data Act, the Data Governance Act, and the AI Act. It aims to reduce administrative burdens for businesses, eliminate regulatory overlaps, clarify interpretative ambiguities, and foster technological innovation.
The rhetoric of regulatory simplification is not new to the European experience. It has accompanied various phases of the integration process, responding to recurring criticisms regarding excessive regulation, the complexity of EU law, the burdensome nature of bureaucratic procedures, and the risk of stifling economic initiative. However, simplification can take on very different meanings depending on the context and the objectives pursued. In some cases, it translates into an effective rationalization of the regulatory framework, through the elimination of duplication, the harmonization of procedures, the reduction of merely formal requirements, and the digitalization of administrative processes. In other cases, it takes the form of a disguised deregulation exercise, resulting in a decline in the protection of significant public interests and a reduction in standards of protection.
In the case of the Digital Omnibus, the risk that simplification amounts to dismantling appears particularly high. The proposed changes, while presented as technical interventions aimed at improving the effectiveness and efficiency of regulation, in several cases entail a substantial reduction in obligations for businesses, a weakening of the powers of supervisory authorities, and a curtailment of the procedural guarantees protecting data subjects' rights. This trend is particularly evident with regard to the AI Act, where the grace period, the deferral of penalties, and the reduction of obligations constitute a significant step backwards from the regulation's original framework.
This issue is particularly relevant because the Digital Omnibus was not subject to the ordinary legislative procedure, but was adopted through an accelerated procedure that limited the role of the European Parliament and the Member States. This procedural choice raises questions about the democratic legitimacy of the proposed amendments and compliance with the principle of institutional balance enshrined in the Treaties. Some MEPs have challenged the legal legitimacy of using this procedure for substantive amendments to legislative acts adopted under the ordinary procedure and have announced their intention to bring an action before the Court of Justice of the European Union to enforce Parliament's prerogatives.
The postponement of the AI Act opens up multiple scenarios and raises questions of fundamental importance for the future of the European Union and the global digital governance model. Three main scenarios can be outlined, each with different implications for the protection of fundamental rights, economic competitiveness, and European regulatory sovereignty.
A first scenario envisages the grace period turning into an indefinite postponement of the effective implementation of the AI Act, resulting in the regulation being hollowed out and in the emergence of a self-regulatory model driven by large technology platforms. In this scenario, the European Union would de facto renounce its ambition of global regulatory leadership, limiting itself to adopting technical standards established elsewhere and legitimizing their application through soft law mechanisms. This outcome would lead to a significant setback in the protection of fundamental rights, a loss of credibility for European institutions, and a weakening of the Union's ability to influence global balances.
A second scenario envisages the postponement as a genuine transitional phase, during which companies gradually adapt to the Regulation's requirements and national authorities develop the expertise and infrastructure necessary to ensure rigorous application of the rules. In this scenario, the grace period would be an opportunity to strengthen collaboration between public institutions and economic operators, to develop sectoral best practices, and to foster the emergence of a competitive European AI industry that respects fundamental rights. This outcome, however, would require significant investment in research and development, training, digital infrastructure, and innovation support policies, as well as rigorous enforcement of the Regulation's provisions once the transitional period ends.
A third scenario envisions a bifurcation of the global AI market, with the emergence of diverging regulatory standards across different geographical regions. In this scenario, the European Union would maintain a rigorous approach to AI regulation, the United States would favor a market-oriented approach, and China would develop a model based on state control and data centralization. This fragmentation would entail high economic costs, difficulties with technical interoperability, and global security risks, but it could also foster the emergence of alternative models and stimulate virtuous competition between regulatory systems.
From a de iure condendo perspective, that is, with a view to future legislation, several issues require attention. First, it appears necessary to clarify the conditions for the lawfulness of processing personal data for algorithm training purposes, ensuring a balance between innovation and privacy protection. Second, it is necessary to strengthen transparency and accountability mechanisms for artificial intelligence systems, enabling data subjects to understand the algorithms' decision-making logic and to challenge automated decisions affecting them. Third, it is essential to invest in the creation of a European artificial intelligence industry, through active industrial policies, support for basic research, advanced skills development, and the creation of public digital infrastructures. Fourth, it appears appropriate to strengthen international cooperation, promoting convergence towards shared global standards and countering the risk of a regulatory fragmentation that would penalize all stakeholders.
The AI Act's postponement represents a litmus test for the European project, revealing profound tensions and unresolved contradictions that transcend the technical dimension of artificial intelligence regulation and affect the very meaning of European integration. It highlights the Union's difficulty in reconciling regulatory ambition and technological autonomy, balancing the protection of fundamental rights and economic competitiveness, and exercising regulatory sovereignty in a context of technological dependence.
The postponement, formally motivated by the need to support innovation and avoid disproportionate burdens on businesses, essentially represents a response to pressure from major US technology platforms and an implicit recognition of the asymmetry of power between European institutions and global economic players. This choice carries the risk of a decline in the protection of fundamental rights, a weakening of the credibility of European institutions, and a worsening of Europe's technological dependence.
However, the situation also presents opportunities. It forces the European Union to confront its structural weaknesses and question the conditions necessary to build true digital sovereignty, which cannot be based solely on the ability to adopt rigorous regulations but requires massive investments in research and development, active industrial policies, advanced skills development, and the creation of public digital infrastructures. It highlights the need to move beyond a purely defensive conception of regulation, aimed at limiting risks and regulating what already exists, and embrace a proactive approach, aimed at guiding technological development toward democratically defined collective objectives.
The AI Act, beyond its immediate consequences, represents a significant attempt to assert the primacy of law and fundamental rights in an era of accelerated technological transformation. Its fate will depend on the European Union's ability to overcome its contradictions, invest in its technological autonomy, and reaffirm the centrality of the founding values of the European project. In this sense, the postponement of the AI Act is not merely a matter of technical regulation or sectoral concern; it concerns the very future of Europe as a space of democracy, rights, and human dignity.