
Effects of the EU Artificial Intelligence Act on Video Game Developers

This blog post provides a brief overview of the effects of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (the "AI Act") on video game developers. Developers increasingly integrate AI systems into their video games, for example to generate backgrounds, non-player characters (NPCs) and the backstories of objects found in the game. Under certain circumstances, some of these applications are regulated and trigger obligations under the AI Act.

The AI Act entered into force on 1 August 2024 and will become applicable gradually over the next two years. Which provisions of the AI Act apply depends mainly on two factors: the role of the video game developer and the risk level of the AI system.

The role of the video game developer

Article 2 of the AI Act limits the scope of the regulation and states who is subject to the AI Act. Video game developers may fall in particular into two of these categories:

  • Providers of AI systems, i.e. developers of AI systems who place them on the EU market or put them into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
  • Deployers of AI systems, i.e. users of AI systems in the course of a professional activity, if they are established in the EU or the output produced by the AI system is used in the EU (Article 3(4) AI Act).

Therefore, video game developers are regarded as (i) providers if they develop their own AI system, and as (ii) deployers if they integrate an AI system developed by a third party into their video games.

The AI risk level and the associated obligations

The AI Act classifies AI systems into four categories based on the associated risk (Article 3(1) AI Act). The obligations on operators vary depending on the risk level of the AI systems used:

  • AI systems with unacceptable risk are prohibited (Article 5 AI Act). In the video game sector, the most relevant prohibitions concern the provision or use of AI systems that deploy manipulative techniques or exploit human vulnerabilities and thereby cause significant harm. For example, it is prohibited to use AI-generated NPCs that would manipulate players toward increased in-game spending.
  • High-risk AI systems (Articles 6 and 7 and Annex III AI Act) trigger strict obligations for providers and, to a lesser extent, for deployers (Sections 2 and 3 AI Act). The most relevant high-risk AI systems used in video games are those that pose a significant risk to the health, safety or fundamental rights of natural persons in view of their intended purpose, in particular AI systems used for emotion recognition (Annex III(1)(c) AI Act). This could, for example, cover game design that elicits strong emotions in players, who feel real empathy, compassion or even anger towards virtual characters.
    • The list of obligations for providers of high-risk AI systems includes implementing quality and risk management systems, applying appropriate data and data governance practices, drawing up technical documentation, ensuring transparency and providing information to deployers, keeping records, ensuring robustness against unauthorised alterations, and cooperating with the competent authorities.
    • Deployers of high-risk AI systems must, in particular, operate the system in accordance with its instructions for use, ensure human oversight, monitor the operation of the high-risk AI system, and inform the provider and the relevant market surveillance authority of any incident or any risk to the health, safety or fundamental rights of persons.
  • AI systems with specific transparency risks include chatbots, AI systems that generate synthetic content or deepfakes, and emotion recognition systems. They trigger the more limited obligations listed in Article 50 AI Act.
    • Providers of chatbots must ensure that they are designed in such a way that players are informed that they are interacting with an AI system (unless this is obvious to a reasonably well-informed person). Providers of content-generating AI must ensure that the outputs of the AI system are marked in a machine-readable format and are detectable as artificially generated.
    • Deployers of emotion recognition systems must inform players about the operation of the system and process personal data in accordance with Regulation (EU) 2016/679 (GDPR), which applies in addition to the AI Act. Deployers of deepfake-generating AI must disclose that the content has been artificially generated or manipulated.
  • AI systems with minimal risk are not regulated under the AI Act. This category contains all other AI systems that do not fall into the categories mentioned above.
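As a purely illustrative sketch of the Article 50 marking duty for content-generating AI: the AI Act does not prescribe a specific marking format (standards such as C2PA content credentials exist for this purpose), so the JSON sidecar layout, field names and `tag_ai_generated` helper below are our own assumptions, not taken from the AI Act or any standard. A developer might attach machine-readable provenance metadata to each generated asset like this:

```python
import json
from datetime import datetime, timezone

def tag_ai_generated(asset_path: str, model_name: str) -> dict:
    """Write a machine-readable provenance record for an AI-generated asset.

    Hypothetical format: a JSON sidecar file next to the asset, recording
    that the content is artificially generated and which model produced it.
    """
    record = {
        "asset": asset_path,
        "artificially_generated": True,  # the key disclosure required by Article 50
        "generator": model_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(asset_path + ".provenance.json", "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)
    return record
```

Whether a sidecar file, embedded metadata, or a watermarking scheme is appropriate will depend on the asset pipeline; the point is only that the marking must be machine-readable and reliably associated with the generated content.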

The European Commission has stated that AI-enabled video games are, in principle, not subject to obligations under the AI Act, but that companies may voluntarily adopt additional codes of conduct (see AI Act | Shaping Europe's digital future). However, it should be borne in mind that in certain cases, such as those described in this section, the AI Act will apply. In addition, the AI literacy obligation applies regardless of the risk level of the system, including minimal risk.

The AI literacy obligation

The AI literacy obligation applies to both providers and deployers (Article 4 AI Act), regardless of the risk level of the AI system. AI literacy is defined as the skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI and of the possible harm it can cause.

The ultimate purpose is to ensure that video game developers' staff are able to make informed decisions in relation to AI, taking into account their technical knowledge, experience, education and training, the context in which the AI system is to be used, and the persons or groups of persons on whom the AI system is to be used.

The AI Act does not specify how providers and deployers should comply with the AI literacy obligation. In practice, various steps can be taken to achieve AI literacy:

  • Determining how and which employees currently use or develop AI, or will do so in the near future;
  • Assessing employees' current AI knowledge to identify gaps (e.g. through surveys or quiz sessions);
  • Providing training activities and materials on AI basics for employees who use AI, covering at least the relevant concepts, rules and obligations.

Conclusion

The regulation of AI systems in the EU may have a significant impact on video game developers, depending on how AI systems are used within their video games. These are still early days for the AI Act, and we are watching this space carefully, especially as the AI Act evolves to adapt to new technologies.
