Webinar - Six “Supply Chain Roles” in the EU AI Act: Be Ready for Your Close Up

Written by Annie Malloy

Updated: Aug 14, 2024

Six “Supply Chain Roles” in the EU AI Act: Be Ready for Your Close Up

The EU Artificial Intelligence Act (the Act) enters into force this summer, with various implementation and enforcement deadlines phased in over the next three years, and with the Act comes a set of new obligations for organizations involved with AI systems.  The Act governs the placing on the market, putting into service, and use of AI systems, and organizations will be subject to its obligations if they are established or located within the EU or if the output of the AI system in question is used inside the EU.

The Act aims to ensure safety and the protection of people’s fundamental rights while still promoting innovation, and it strikes this balance, in part, by pegging organizations’ obligations to their role in the AI supply chain.  The Act defines a number of roles an organization might occupy within that supply chain, and determining your organization’s role is the first step toward understanding its new obligations around AI systems.

Supply Chain Roles in the Act

There are six AI supply chain roles defined by the Act (a rough, illustrative code sketch of this taxonomy follows the list):

  • Provider (those making AI systems)
    • “a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”
  • Deployer (those using AI systems)
    • “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”
  • Authorized Representative (those representing those making AI systems)
    • “a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation”
  • Importer (those reselling foreign AI systems)
    • “a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country”
  • Distributor (those reselling domestic AI systems)
    • “a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market”
  • Operator (all of the above)
    • “a provider, product manufacturer, deployer, authorised representative, importer or distributor”
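
For readers who think in code, the taxonomy above can be pictured as a simple, non-exclusive classification.  The following Python sketch is purely illustrative and not legal advice: the attribute names and decision logic are simplified assumptions standing in for the Act’s actual legal tests.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    """Simplified labels for the Act's supply chain roles.

    "Operator" is the umbrella term covering all of the others,
    so it is not a separate branch in this sketch.
    """
    PROVIDER = auto()
    DEPLOYER = auto()
    AUTHORIZED_REPRESENTATIVE = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()


@dataclass
class Organization:
    # Hypothetical attributes, not the Act's actual legal tests.
    develops_or_brands_system: bool = False
    uses_system_professionally: bool = False
    holds_provider_mandate_in_eu: bool = False
    resells_third_country_system_in_eu: bool = False
    resells_system_on_union_market: bool = False


def likely_roles(org: Organization) -> set[Role]:
    """Return every role the organization might occupy; the roles
    are not mutually exclusive, so several can apply at once."""
    roles: set[Role] = set()
    if org.develops_or_brands_system:
        roles.add(Role.PROVIDER)
    if org.uses_system_professionally:
        roles.add(Role.DEPLOYER)
    if org.holds_provider_mandate_in_eu:
        roles.add(Role.AUTHORIZED_REPRESENTATIVE)
    if org.resells_third_country_system_in_eu:
        roles.add(Role.IMPORTER)
    if org.resells_system_on_union_market:
        roles.add(Role.DISTRIBUTOR)
    return roles


# Example: a law firm using a generative AI research tool internally.
print(likely_roles(Organization(uses_system_professionally=True)))
```

Note that the roles overlap by design: as discussed below, a deployer that sufficiently modifies or rebrands a high-risk system can itself become a provider.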

Supply Chain Roles in the Legal Industry

In the legal industry, unless you are a software developer, your organization is most likely to find itself acting as a deployer of an AI system.  Deployers are organizations that use an AI system under their own authority, outside of a personal, non-professional activity.  In law departments and law firms, this might be a general-purpose system like Microsoft Copilot, a legal research assistant like CoCounsel, or generative AI features integrated into the document review platform you use for discovery and investigations.

In all of these cases, your organization would be deploying an AI system for internal use, thus qualifying as a deployer under the Act.  In contrast, the software companies and service providers creating those tools for your use would qualify as providers.  It is possible, however, for a deployer to become a provider if it sufficiently modifies a high-risk system or rebrands the system as its own.

Obligations of AI System Deployers

The first and broadest deployer obligation is to ensure an adequate level of AI literacy among those using the system:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account…the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

Beyond that, the remaining deployer obligations are contingent on the type of AI system you are deploying.  For example, if you deploy an AI system for emotion recognition or biometric categorization, you must notify the people exposed to it, and if you deploy a system that “generates or manipulates image, audio or video content,” you must disclose that the content has been artificially generated or manipulated.  As generative AI legal tools spread, this disclosure requirement will become increasingly relevant in legal practice.

The largest number of obligations applies to AI systems classified as “high-risk.”  High-risk systems require mandatory fundamental rights impact assessments, along with a range of governance, oversight, and documentation obligations.  AI systems can be classified as high-risk if they are intended for use in, or as safety components of products used in, certain sensitive contexts, including the following (a rough code sketch of these conditional obligations follows the list):

  • Critical infrastructures for water, gas, and electricity
  • Medical devices
  • Access to educational institutions or for recruiting people
  • Law enforcement, border control, administration of justice, and democratic processes
  • Biometric identification, categorization, and emotion recognition systems
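
To make the conditional structure of these deployer obligations concrete, here is a minimal Python sketch.  It is purely illustrative, not a statement of the Act’s actual requirements: the category names are simplifications, and the obligation strings paraphrase the duties discussed above.

```python
from enum import Enum, auto


class SystemTrait(Enum):
    # Simplified, hypothetical categories drawn from the discussion above.
    EMOTION_RECOGNITION = auto()
    BIOMETRIC_CATEGORIZATION = auto()
    GENERATES_SYNTHETIC_MEDIA = auto()
    HIGH_RISK = auto()  # e.g., administration of justice, recruiting


def deployer_obligations(traits: set[SystemTrait]) -> list[str]:
    """Accumulate obligations based on what the deployed system does."""
    # The AI literacy obligation applies to all deployers.
    duties = ["Ensure a sufficient level of AI literacy among staff"]
    if traits & {SystemTrait.EMOTION_RECOGNITION,
                 SystemTrait.BIOMETRIC_CATEGORIZATION}:
        duties.append("Notify the people exposed to the system")
    if SystemTrait.GENERATES_SYNTHETIC_MEDIA in traits:
        duties.append("Disclose artificially generated or manipulated content")
    if SystemTrait.HIGH_RISK in traits:
        duties.append("Conduct a fundamental rights impact assessment")
        duties.append("Meet governance, oversight, and documentation duties")
    return duties


# Example: a generative drafting tool that is not high-risk.
for duty in deployer_obligations({SystemTrait.GENERATES_SYNTHETIC_MEDIA}):
    print(duty)
```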

Thankfully, most common legal industry use cases seem unlikely to qualify as high-risk and trigger these heightened obligations, but they theoretically could.  Imagine, for example, an AI agent being deployed to assess the large number of potential claimants in a class-action lawsuit and make determinations about who to include in the class.  That system might qualify as high-risk under the “administration of justice” category.

Be Ready for Your Close Up

As discussed, the Act defines a number of roles an organization might occupy within the AI supply chain, and your organization’s role, in part, dictates its obligations under the Act.  In the legal industry, unless you are a software developer, your organization is most likely to find itself as a deployer of an AI system.

Over the coming months, any law department or law firm that thinks it may be subject to the Act as a deployer should take steps to evaluate the Act’s applicability and its obligations under it (a minimal tracking sketch in code follows the list):

  1. Identify any AI systems your organization has deployed.
  2. Determine if the output of those systems is used in the EU.
  3. If so, implement an AI literacy plan for those using the systems.
  4. Evaluate the systems to see if any are high-risk or other special cases.
  5. If any are, seek guidance on fulfilling the associated compliance obligations.
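
As a rough aid for working through these steps across many tools, the checklist could be tracked per system.  The sketch below is one hypothetical way to organize that inventory; nothing in the Act prescribes this structure, and the field and system names are our assumptions.

```python
from dataclasses import dataclass


@dataclass
class DeployedSystem:
    """One row in a hypothetical AI system inventory (steps 1-2 above)."""
    name: str
    output_used_in_eu: bool
    literacy_plan_in_place: bool = False       # step 3
    high_risk_or_special_case: bool = False    # step 4
    compliance_guidance_sought: bool = False   # step 5

    def open_items(self) -> list[str]:
        """Return the checklist steps still outstanding for this system."""
        if not self.output_used_in_eu:
            return []  # steps 3-5 apply only if the Act reaches this system
        items = []
        if not self.literacy_plan_in_place:
            items.append("Implement an AI literacy plan")
        if self.high_risk_or_special_case and not self.compliance_guidance_sought:
            items.append("Seek guidance on high-risk compliance obligations")
        return items


inventory = [
    DeployedSystem("Document review GenAI features", output_used_in_eu=True),
    DeployedSystem("Internal chatbot (US matters only)", output_used_in_eu=False),
]
for system in inventory:
    print(system.name, "->", system.open_items() or "no open items")
```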

The EU AI Act: Practical Steps to Prepare

For more guidance on this topic, please download our complimentary whitepaper or view our webinar on the same topic.
