AI: New entries in SafetyRiskAssessmentType to accommodate risk levels in EU AI Act #650

Open
bact opened this issue Feb 21, 2024 · 7 comments

@bact
Contributor

bact commented Feb 21, 2024

The SPDX 3.0 AI Profile has safetyRiskAssessment [1] for the level of risk posed by AI software.
Its type is SafetyRiskAssessmentType [2], which can have one of these values:

  • serious: The highest level of risk posed by an AI software.
  • high: The second-highest level of risk posed by an AI software.
  • medium: The third-highest level of risk posed by an AI software.
  • low: Low/no risk is posed by the AI software.

These values are from the EU General Risk Assessment Methodology [3].

The EU AI Act (draft of 26 Jan 2024) [4] has four levels of risk:

  • Unacceptable
  • High
  • Limited
  • Minimal

[Figure: pyramid of the four EU AI Act risk levels]

Each risk level comes with different obligations.
An AI system that poses an unacceptable risk is prohibited in the EU.
See the summary in [5].

While there are similarities between the risk levels in SPDX 3.0 and in the EU AI Act, they are not exactly the same (a rough correspondence is sketched after this list):

  • EU AI Act Minimal may use SPDX 3.0 low
  • Both SPDX 3.0 serious and high could fall into EU AI Act High
  • There is no equivalent of EU AI Act Unacceptable or Limited in SPDX 3.0
  • Arguably the risk levels in the EU AI Act are not a spectrum of risks but a spectrum of obligations (which are derived from a risk)
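
For discussion, the partial correspondence above can be written out explicitly. This is only an illustration of the gaps, not a proposed normative mapping; the EU AI Act level names are lowercased here for readability:

```python
# Illustration only: closest SPDX 3.0 SafetyRiskAssessmentType entries for
# each EU AI Act risk level, following the bullets above. None marks a gap.
EU_AI_ACT_TO_SPDX = {
    "unacceptable": None,                 # no SPDX 3.0 equivalent
    "high":         {"serious", "high"},  # both SPDX levels could fall under EU AI Act High
    "limited":      None,                 # no SPDX 3.0 equivalent
    "minimal":      {"low"},              # EU AI Act Minimal may use SPDX 3.0 low
}

unmapped = [level for level, spdx in EU_AI_ACT_TO_SPDX.items() if spdx is None]
print(unmapped)  # ['unacceptable', 'limited']
```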

In order to accommodate EU AI Act risk levels, we may need to either:

  1. Extend the enumeration in SafetyRiskAssessmentType; or
  2. Allow safetyRiskAssessment to have another type (in addition to SafetyRiskAssessmentType), where that new type would list the EU AI Act's four levels of risk/obligations

Other possibilities?
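
To make the two options concrete, here is a minimal sketch in Python. The class and member names below (e.g. EuAiActRiskLevelType, unacceptable, limited) are hypothetical illustrations, not proposed spec text:

```python
from enum import Enum

# Option 1 (sketch): extend the existing enumeration with EU AI Act levels.
class SafetyRiskAssessmentType(Enum):
    serious = "serious"
    high = "high"
    medium = "medium"
    low = "low"
    # hypothetical additions for the EU AI Act
    unacceptable = "unacceptable"
    limited = "limited"
    minimal = "minimal"

# Option 2 (sketch): keep SafetyRiskAssessmentType as-is and introduce a
# separate, jurisdiction-specific type that safetyRiskAssessment (or a new
# property) could also accept.
class EuAiActRiskLevelType(Enum):
    unacceptable = "unacceptable"
    high = "high"
    limited = "limited"
    minimal = "minimal"
```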

References

[1] https://github.com/spdx/spdx-3-model/blob/main/model/AI/Properties/safetyRiskAssessment.md
[2] https://github.com/spdx/spdx-3-model/blob/main/model/AI/Vocabularies/SafetyRiskAssessmentType.md
[3] Page 5 https://ec.europa.eu/docsroom/documents/17107/attachments/1/translations/en/renditions/pdf
[4] https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
[5] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

@kestewart
Contributor

We took our definitions of the risk levels from https://ec.europa.eu/docsroom/documents/17107/attachments/1/translations/en/renditions/pdf,
where they are fairly precise about what they mean.

The terminology section (2.1) introduces the risk level terms we've used.
Table 2 describes the abstract level definitions that correspond to the risk levels defined in 2.1.
Table 4 makes it explicit when each of the defined risk levels should be used.

In the EU AI Act, is there such a table defining when unacceptable, high, limited, and minimal should be used?

My guess at this point is:
Unacceptable == serious
High == high
Limited == medium
Minimal == low

Not sure why they didn't align with the EU risk definitions and instead created their own terms.

That being said, I think we need to clean up our definitions in the specification to be closer to those in Table 2, so it's not so ambiguous to just have keywords on their own.

@bact
Contributor Author

bact commented Feb 22, 2024

Thanks Kate. I will try to provide some further information here so people can give more of their thoughts.

  • There is no table in the EU AI Act comparable to the one in the EU General Risk Assessment Methodology (Figure 4).

  • The risk level in the EU General Risk Assessment Methodology is a combination of 1) the severity of harm and 2) the probability (likelihood) of harm.

  • The EU AI Act draft takes a slightly different approach to the "calculation" of risk.

  • The EU AI Act, like some other EU legislation, is based on the precautionary principle. Under this principle, even if the likelihood of harm is low (or unknown), the risk can be considered unacceptable if the severity of harm is high enough (in the view of EU values).

    • See European Parliament Framework of ethical aspects of artificial intelligence, robotics and related technologies [2020/2012(INL)] Article 3, Article 67 https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.html (thanks to this paper for the pointer)
      • "3. [..] should be in line with the precautionary principle that guides Union legislation and should be at the heart of any regulatory framework for AI; [..]"
      • "67. Considers that technologies which can produce automated decisions [..] should be treated with the utmost precaution, notably in the area of justice and law enforcement;"
  • For example, negative-effect social scoring (severity "4") of 448 people in the 448-million-person EU (likelihood "1/1,000,000") would be:

    • "medium risk" according to Figure 4 of the EU General Risk Assessment Methodology (p. 13); but it would be
    • "unacceptable risk" according to the EU AI Act
  • Risk levels in the EU AI Act are based on 1) the system's use [for example, Article 5], 2) its intended purpose [Article 6], or 3) its design [Article 52a(2)]

  • Some of the categorisations are list-based; the others are criteria-based. (See the Risk level categorisation section below.)

  • An AI system or an AI model falls automatically into one of the risk levels based on the lists or the criteria.

    • A system or a model can be moved to a lower risk level if it can be demonstrated through a risk assessment that it does not pose a significant risk of harm. This is on a case-by-case basis. See an example in Article 6(2b).
  • A summary of risk level categorisation is shown in the section below.

Risk level categorisation

(Page numbers in this section are based on the most recent draft [dated 26 Jan 2024] of the EU AI Act, available publicly at https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf )

  • The general principle for risk-based requirements and obligations is in Recital 14
    • “In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.” pp. 23-24

Unacceptable risk

  • Article 5 - Prohibited Artificial Intelligence Practices pp. 106-110
    • (1)(a) “deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour [..] cause that person, another person or group of persons significant harm” p. 106
    • (1)(b) “exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective to or the effect of materially distorting the behaviour of that person or a person pertaining to that group [..] cause that person or another person significant harm” p. 106
    • (1)(ba) “biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.” (with exception for law enforcement) p. 106
    • and more, including social scoring, see the full list in Article 5

High-risk

  • Article 6 - Classification rules for high-risk AI systems pp. 111-113 (a rough sketch of this classification logic is given after this list)
    • (1) AI system that "is intended to be used as a safety component of a product" or "is itself a product"; AND that product "is required to undergo a third-party conformity assessment" according to EU legislation listed in Annex II. p. 111
    • (2) “AI systems referred to in Annex III” p. 111
    • (2a) subpara 1, AI systems in (2) “shall not be considered as high risk if they do not pose a significant risk of harm, to the health, safety or fundamental rights of natural persons” (a list of criteria is given) p. 111
    • (2a) subpara 2, AI system shall always be considered high-risk if “performs profiling of natural persons” p. 112
    • (2b) “A provider who considers that an AI system” in (2) “is not high-risk shall document its assessment” p. 112
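
A very rough sketch of the Article 6 classification logic summarised above, for discussion only; it omits many details and exceptions in the Act, and the parameter names are my own shorthand:

```python
# Rough sketch of the high-risk classification rules summarised above.
def is_high_risk(
    is_annex_ii_safety_component_or_product: bool,     # Article 6(1)
    requires_third_party_conformity_assessment: bool,  # Article 6(1)
    is_listed_in_annex_iii: bool,                       # Article 6(2)
    poses_significant_risk_of_harm: bool,               # Article 6(2a) subpara 1
    performs_profiling_of_natural_persons: bool,        # Article 6(2a) subpara 2
) -> bool:
    if is_annex_ii_safety_component_or_product and requires_third_party_conformity_assessment:
        return True
    if is_listed_in_annex_iii:
        # Annex III systems are high-risk unless they do not pose a significant
        # risk of harm; systems that profile natural persons are always high-risk.
        return poses_significant_risk_of_harm or performs_profiling_of_natural_persons
    return False
```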

Limited risk

  • Article 52 - Transparency obligations for providers and users of certain AI systems and GPAI models pp. 164-166
    • (1) “intended to directly interact with natural persons” p. 164
    • (1a) “generating synthetic audio, image, video or text content” p. 164
    • (2) “an emotion recognition system or a biometric categorisation system” p. 165
    • (3) subpara 1, “generates or manipulates image, audio or video content constituting a deep fake” p. 165
    • (3) subpara 2 “generates or manipulates text which is published with the purpose of informing the public on matters of public interest” p. 165
  • Article 52a - Classification of general purpose AI models as general purpose AI models with systemic risk pp. 166-167
    • (1)(a) “has high impact capabilities” p. 166
    • (2) “cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25” p. 167 (a rough numeric illustration of this threshold is given after this list)
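
As a side note, the 10^25 FLOPs criterion in Article 52a(2) is purely numeric, so it can be checked mechanically once training compute is estimated. A rough sketch, assuming the common "training FLOPs ≈ 6 × parameters × training tokens" rule of thumb (my assumption, not something the Act defines):

```python
# Rough, illustrative check against the Article 52a(2) compute threshold.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    # 6 * parameters * tokens is a common estimate of training FLOPs,
    # not a figure defined by the AI Act.
    estimated_flops = 6 * n_parameters * n_training_tokens
    return estimated_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: 70e9 parameters trained on 15e12 tokens -> 6.3e24 FLOPs, below 1e25.
print(presumed_systemic_risk(70e9, 15e12))  # False
```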

Minimal or no risk

  • No explicit definition in the draft of the Act; basically anything that is not of unacceptable, high, or limited risk.
  • European Commission AI Act portal page states that "The AI act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category." https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

@goneall goneall added this to the 3.1 milestone Mar 4, 2024
@bact
Contributor Author

bact commented Mar 6, 2024

Discussed in the AI Profile WG meeting on 2024-03-06.
No conclusion yet, but the meeting agreed that the AI Profile should be generic and that, if there is a need for something jurisdiction-specific, a subprofile may be possible.

@kestewart
Contributor

Let's discuss this in the meeting. Possibly we should adjust 3.0's risk to be "General Risk", so we leave a spot for "AI Risk" to emerge in the future without it being a breaking change? Thoughts?

@bact
Contributor Author

bact commented Mar 21, 2024

Agree.

We can keep the 4 risk types (levels) as they are now. And probably rename the property to generalRiskAssessment for 3.0.

@bennetkl

@bact @kestewart After re-reading Arthit's detailed explanation, I can see an issue with obtaining EU AI Act compliance in an easy manner, since there isn't a direct mapping. If I wanted to scan an AI BOM to audit against a specific country's regulations, a generic risk level isn't going to help with that process. I'm going to raise this issue with the EU Project Office; ideally we need them to unify the definitions. But for the short term, maybe we add two fields to the SPDX AI Profile, one named useRiskAssessment to capture the EU AI Act levels (risk levels in the EU AI Act are based on 1) the system's use [for example, Article 5], 2) its intended purpose [Article 6], or 3) its design [Article 52a(2)]). Or we could add different types of risk options, i.e. AIAct_medium, AIAct_restricted. Does anyone else have an idea?
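
To make the two short-term ideas above concrete, a quick sketch; the field name useRiskAssessment and the AIAct_ prefix come from the suggestion above and are hypothetical, not existing SPDX entries:

```python
# Idea A (sketch): a second, EU-AI-Act-specific field alongside the existing
# general one, here spelled useRiskAssessment as suggested above.
ai_package_fields = {
    "safetyRiskAssessment": "high",   # existing general entry from the current vocabulary
    "useRiskAssessment": "limited",   # hypothetical field holding the EU AI Act level
}

# Idea B (sketch): instead of a new field, add prefixed entries such as
# "AIAct_limited" or "AIAct_unacceptable" to the existing enumeration.
print(ai_package_fields["useRiskAssessment"])  # limited
```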

@bact bact changed the title [3.1] [AI] New entries in SafetyRiskAssessmentType to accommodate risk levels in EU AI Act AI: New entries in SafetyRiskAssessmentType to accommodate risk levels in EU AI Act Mar 21, 2024
@bact
Contributor Author

bact commented Mar 23, 2024

PR #675 is open to make it more explicit in the description of the safetyRiskAssessment property that the current categorization follows the EU General Risk Assessment Methodology, not the EU AI Act, as agreed in the 20 March 2024 AI Team meeting.
