The increasing use of artificial intelligence (AI) and automated decision-making systems in the UK’s public sector has prompted Lord Clement-Jones, Liberal Democrat peer and former chair of the Lords Select Committee on AI, to bring forward a new bill aimed at regulating their deployment. This legislative move comes amidst growing concerns about the potential for algorithmic bias and the lack of transparency surrounding these systems.
Lord Clement-Jones, a prominent advocate for AI regulation, highlighted the dangers of unregulated automated decision-making: “The UK’s Post Office/Horizon scandal demonstrates the painful human cost when there aren’t proper checks in place to challenge these automated systems,” Lord Clement-Jones said in a statement. “Too often in the UK we legislate when the damage has already been done. We need to be proactive, not reactive, when it comes to protecting citizens and their interactions with new technologies. We need to be ahead of the game when it comes to regulating AI. We simply cannot risk another Horizon scandal.”
The proposed bill, titled the “Public Authority Algorithmic and Automated Decision-Making Systems Bill,” seeks to establish a framework for greater transparency and accountability in the use of AI within the public sector. It would require public authorities to conduct impact assessments of any automated or AI-driven algorithms employed in decision-making processes, ensuring that potential risks and biases are identified and addressed.
Furthermore, the bill champions the adoption of transparency standards, requiring public authorities to maintain a register of automated decision-making systems, thereby providing citizens with greater insight into how these systems affect their lives. In the event of an unfavourable automated decision, individuals will have the right to access information regarding the factors that influenced the outcome, enabling them to challenge the decision if necessary.
The UK government’s stance on automated decision-making has been characterised by a degree of hesitancy and ambiguity, despite the growing prevalence of AI in the public sector. While initiatives such as the Centre for Data Ethics and Innovation’s AI assurance roadmap and the algorithmic transparency standard have been introduced, concerns remain about potential dilutions of individuals’ rights to challenge AI-driven decisions.
The proposed bill seeks to address these concerns by mandating the provision of an independent dispute resolution service for individuals who wish to challenge automated decisions. This measure aims to empower citizens and ensure that they have recourse in cases where they believe they have been unfairly treated by an automated system.