AI Under Fire: Navigating the Wild West
As artificial intelligence (AI) becomes increasingly integrated into our daily lives, its potential for both benefit and harm has come into sharp focus. By April 2022, about one-third of U.S. states had proposed or enacted laws aimed at protecting consumers from AI-related harm or overreach. These legislative efforts seek to balance the need for consumer protection with the desire to foster innovation and progress in AI technologies. In this blog post, we will examine several examples of AI-related laws and discuss how they attempt to strike this delicate balance.
Some of the laws proposed or enacted by U.S. states to protect consumers from AI-related harm or overreach include:
1. California's AB-2269 (2020): This proposed bill focuses on the use of AI in hiring processes. It would regulate automated decision-making systems to ensure they are transparent, fair, and non-discriminatory.
Protection: If enacted, this bill would regulate the use of AI in hiring processes to ensure transparency, fairness, and non-discrimination, reducing the risk of biased hiring decisions.
Stifling innovation: Increased regulation and compliance requirements might discourage companies from adopting AI-based hiring tools, potentially limiting the benefits of automating and streamlining the recruitment process.
2. Illinois' Artificial Intelligence Video Interview Act (2019): This law requires employers to inform job applicants when AI is being used in video interviews and to obtain their consent. It also mandates that employers delete the video interview data within 30 days upon the applicant's request.
Protection: This law ensures job applicants are aware of AI usage in video interviews and protects their privacy by requiring employers to delete the video data upon request. It helps prevent potential biases and unfair treatment during the hiring process.
Stifling innovation: The law might discourage companies from adopting AI-based interview tools due to increased compliance requirements, slowing the adoption of advanced HR technologies that could streamline and improve the hiring process.
3. New York City's Local Law 49 (2018): This law established the Automated Decision Systems Task Force to review the city's use of algorithmic decision-making tools. The task force is responsible for examining the potential biases and disparate impacts of these tools and providing recommendations to improve transparency and accountability.
Protection: By reviewing the city's use of algorithmic decision-making tools, the task force aims to ensure transparency, fairness, and accountability in government decisions affecting citizens, reducing the risk of discrimination and biased outcomes.
Stifling innovation: The increased scrutiny on automated decision systems could deter local governments and private companies from adopting these tools, fearing potential controversies or negative public perception.
4. Washington State's SB 5116 (2021): This law requires state and local government agencies using facial recognition technology to obtain consent from individuals before collecting their facial recognition data. It also mandates public notice when facial recognition technology is being used in public spaces.
Protection: By regulating facial recognition technology use by state and local government agencies, this law aims to protect individual privacy and prevent unwarranted surveillance. It also promotes accuracy and fairness by requiring testing for accuracy before deployment.
Stifling innovation: Imposing strict regulations on facial recognition technology may discourage companies and governments from developing and deploying these technologies, which could be used for public safety or other beneficial purposes.
5. Maryland's SB 787 (2021): This bill establishes a task force to study the use of AI in the state's criminal justice system. The task force will examine potential biases, ethical concerns, and the impact of AI on criminal justice outcomes and provide recommendations on best practices and potential regulations.
Protection: If enacted, this bill would establish a task force to study the use of AI in the criminal justice system, potentially leading to recommendations for best practices and regulations to prevent biases and ensure ethical use.
Stifling innovation: Increased scrutiny on AI usage in the criminal justice system might lead to more cautious adoption of these technologies, limiting the potential benefits of AI in identifying patterns, solving crimes, or improving efficiency in the justice system.
Regulating AI is undoubtedly a complex and evolving process. As lawmakers continue to address the potential risks and benefits of AI technologies, striking the right balance between protection and innovation remains a critical challenge. By understanding the nuances of these laws and their impact on both consumers and the technology industry, we can foster informed discussions and contribute to the development of a more responsible and ethical AI landscape. As AI continues to advance, it is essential for policymakers, technology companies, and users alike to work together to create a future where AI is both safe and beneficial to society as a whole.