Fully automated decision making AI systems: the right to human intervention and other safeguards

Wired UK Gov | Information Commissioner's Office | Aug 12, 2019

Reuben Binns, our Research Fellow in Artificial Intelligence (AI), and Valeria Gallo, Technology Policy Adviser, discuss some of the key safeguards organisations should implement when using solely automated AI systems to make decisions with significant impacts on data subjects.

This post is part of our ongoing Call for Input on developing the ICO framework for auditing AI. We encourage you to share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk.

The General Data Protection Regulation (GDPR) requires organisations to implement suitable safeguards when processing personal data to make solely automated decisions that have a legal or similarly significant impact on individuals. These safeguards include the right for data subjects:

  • to obtain human intervention;
  • to express their point of view; and
  • to contest the decision made about them.


These safeguards cannot be token gestures. Guidance published by the European Data Protection Board (EDPB) states that human intervention involves a review of the decision, which “must be carried out by someone who has the appropriate authority and capability to change the decision”. The review should include a “thorough assessment of all the relevant data, including any additional information provided by the data subject.”

In this respect, the conditions under which human intervention will qualify as meaningful are similar to those that apply to human oversight in ‘non-solely automated’ systems. However, a key difference is that in solely automated contexts, human intervention is only required on a case-by-case basis to safeguard the data subject’s rights.

Why is this a particular issue for AI systems?

The type and complexity of the systems involved in making solely automated decisions will affect the nature and severity of the risk to people’s data protection rights and will raise different considerations, as well as compliance and risk management challenges.

Basic systems that automate a relatively small number of explicitly written rules (eg a set of clearly expressed ‘if-then’ rules to determine a customer’s eligibility for a product) are unlikely to be considered AI. Because such systems are highly interpretable, it should also be relatively easy for a human reviewer to identify and rectify any mistake if a data subject challenges a decision.
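To make the interpretability point concrete, here is a minimal Python sketch of such a rule-based check; the rules, thresholds and field names are hypothetical illustrations rather than anything drawn from the guidance. Because every outcome cites the exact rule that produced it, a reviewer can trace and correct a challenged decision directly.

```python
# Minimal sketch of an explicitly rule-based eligibility check.
# All rules, thresholds and field names below are hypothetical.

def eligible_for_product(applicant: dict) -> tuple[bool, str]:
    """Return a decision plus the exact rule that produced it."""
    if applicant["age"] < 18:
        return False, "Rule 1: applicant must be 18 or over"
    if applicant["annual_income"] < 15_000:
        return False, "Rule 2: annual income below minimum threshold"
    if applicant["existing_defaults"] > 0:
        return False, "Rule 3: applicant has existing loan defaults"
    return True, "All eligibility rules satisfied"

decision, reason = eligible_for_product(
    {"age": 25, "annual_income": 12_000, "existing_defaults": 0}
)
print(decision, reason)  # False Rule 2: annual income below minimum threshold
```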

However other systems, such as those based on machine learning (ML), may be more complex and present more challenges for meaningful human review. ML systems make predictions or classifications about people based on data patterns. Even when they are highly accurate, they will occasionally reach the wrong decision in an individual case. Errors may not be easy for a human reviewer to identify, understand or fix.

While not every challenge from a data subject will be valid, organisations should expect that many could be. There are two reasons in particular why this may be the case in ML systems:

  • The individual is an ‘outlier’, ie their circumstances are substantially different from those considered in the training data used to build the AI system. Because the ML model has not been trained on enough data about similar individuals, it can make incorrect predictions or classifications. (A simple way to flag such cases for review is sketched after this list.)
  • Assumptions in the AI design can be challenged. For example, a continuous variable such as age might have been broken up (‘binned’) into discrete age ranges, eg 20-39, as part of the modelling process. Finer-grained ‘bins’ may result in a different model with substantially different predictions for people of different ages, as the second sketch below illustrates. The validity of this data pre-processing and other design choices may only come into question as a result of an individual’s challenge.
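On the first point, one simple way an organisation might surface outlier cases for human review is to measure how far a new case sits from the training data. The Python sketch below uses a z-score rule on synthetic data; the features, threshold and screening method are illustrative assumptions, not a technique prescribed by the GDPR or the EDPB.

```python
import numpy as np

# Synthetic 'training data': 1,000 cases with hypothetical age and income features.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[35, 30_000], scale=[8, 6_000], size=(1_000, 2))

mean, std = X_train.mean(axis=0), X_train.std(axis=0)

def needs_human_review(x, threshold=3.0) -> bool:
    """Flag cases more than `threshold` standard deviations from the training data."""
    z = np.abs((np.asarray(x) - mean) / std)
    return bool(z.max() > threshold)

print(needs_human_review([34, 29_000]))  # False: a typical case
print(needs_human_review([85, 29_000]))  # True: age far outside the training range
```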
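On the second point, the effect of bin granularity can be seen in miniature below: under coarse bins a 21-year-old and a 38-year-old are indistinguishable to the model, while finer bins separate them. The bin edges are hypothetical.

```python
import numpy as np

ages = np.array([21, 29, 38, 45])

coarse_bins = [20, 40, 60]          # e.g. 20-39, 40-59
fine_bins = [20, 30, 40, 50, 60]    # e.g. 20-29, 30-39, ...

# Bin index for each age under each scheme.
print(np.digitize(ages, coarse_bins))  # [1 1 1 2] -> 21, 29 and 38 share a bin
print(np.digitize(ages, fine_bins))    # [1 1 2 3] -> 21 and 38 now differ
```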


What should organisations do?

Many of the controls required to ensure compliance with the GDPR’s provisions on solely automated systems are very similar to those necessary to ensure the meaningfulness of human reviews in non-solely automated AI systems.

Organisations should:

  • consider from the design phase the system requirements necessary to support a meaningful human review, in particular the interpretability requirements and effective user-interface design needed to support human reviews and interventions;
  • design and deliver appropriate training and support for human reviewers; and
  • give staff the appropriate authority, incentives and support to address or escalate data subjects’ concerns and, if necessary, override the AI system’s decision (a minimal sketch of a decision record supporting this workflow follows this list).
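As a sketch of how these controls might fit together in practice, the decision record below captures the automated outcome, the data subject's point of view, and an authorised reviewer's override. The structure and field names are assumptions for illustration, not an ICO-prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical record supporting human intervention in a solely automated decision."""
    subject_id: str
    automated_outcome: str                     # e.g. "declined"
    model_version: str
    subject_representation: str | None = None  # the data subject's point of view
    reviewer_id: str | None = None
    final_outcome: str | None = None
    reviewed_at: datetime | None = None

    def apply_human_review(self, reviewer_id: str, outcome: str) -> None:
        """Record a review by someone with authority to change the decision."""
        self.reviewer_id = reviewer_id
        self.final_outcome = outcome
        self.reviewed_at = datetime.now(timezone.utc)

record = AutomatedDecisionRecord("subj-001", "declined", "model-v1.2")
record.subject_representation = "My income data was out of date."
record.apply_human_review(reviewer_id="staff-042", outcome="approved")
print(record.final_outcome)  # approved: the reviewer overrode the automated decision
```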
