Ethical and Practical Considerations for AI, Implicit Bias, and Integrated Data Systems
There has been significant hype and policy activity around artificial intelligence (AI). While it may be tempting to get swept up in AI's potential and incorporate it into Integrated Data Systems (IDS) right away, it is crucial to understand that AI is only as good as the information fed into it and the soundness of the algorithms it relies on. Moreover, incorporating AI into IDS can create significant ethical and legal challenges. In this emerging issues update, we explore (1) the potential benefits and risks of incorporating AI in IDS, (2) how legal requirements may impact the use of AI in IDS, and (3) practical considerations for IDS stakeholders who are considering building AI into their IDS. You can read the longer version of this publication on our website here.
Potential Benefits and Risks of Incorporating AI in IDS
Incorporating AI in IDS, and in systems that use the data housed within an IDS, can have major benefits for government agencies. Orly Lobel, Warren Distinguished Professor of Law at the University of San Diego, explains that algorithms are like invisible code-breakers, unearthing patterns that often elude the human eye. Lobel details how the digitization and automation they bring can dramatically reduce the staggering administrative load that federal government agencies grapple with, currently more than nine billion hours per year.
That being said, there is significant disagreement about whether the benefits of incorporating AI into certain governmental technologies outweigh the inherent risks–especially in terms of its impact on equity. Lobel argues that “the upsides of AI are immense. Automated decision-making is often fairer, more efficient, less expensive, and more consistent than human decision-making”. On the other hand, the Center for Democracy and Technology has noted that an “important lesson for the government is that AI is not necessarily objective or fair compared to alternatives. One reason for this is that many uses of AI involve data, but data is inherently biased. This is especially true for government agencies that want to change historical trends in data like student achievement gaps or unhoused rates.”
AI, Discrimination, and Existing Laws
Any consideration of integrating AI into an IDS must include an analysis of existing legal requirements. Governmental entities are subject to multiple anti-discrimination laws, including the Equal Protection Clause of the U.S. Constitution, Title VII of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, and the Americans with Disabilities Act, among many others. In the press release announcing the FTC, DOJ, CFPB, and EEOC’s Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, FTC Chair Lina Khan said: “claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books.” IDS stakeholders should watch closely for further federal and state agency guidance as AI is increasingly integrated into technologies that IDS stakeholders are already using or considering adopting. For more insight on recent government activity related to AI, check out the Appendix.
Despite the potential legal peril that automated systems could pose, humans, of course, have their own significant implicit and explicit biases that can lead to discrimination. Some scholars have advocated for evaluating the effects of AI decisions as compared to human judgment. Lobel gives the example that “expecting autonomous vehicles to drive with zero crashes is less useful (and indeed a riskier path) than comparing human driving with self-driving vehicles to determine the relative benefit. We need to critically consider the limits and risks from both human and algorithmic decision-making.” Emerging technologies such as AI have shown real promise for improving accessibility under anti-discrimination laws (check out some examples of how here and here). Rather than expecting perfection from AI incorporated in IDS, the relative benefits of using AI over non-technical alternatives alone must be taken into account when evaluating the effectiveness of these systems.
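To make the idea of evaluating decision processes for disparate outcomes more concrete, the sketch below applies the four-fifths (80%) rule, a widely used screening heuristic for disparate impact drawn from employment-selection guidelines. It can be applied to outcomes produced by any decision process, automated or human, which is what makes the relative comparison Lobel describes possible. The data here is entirely hypothetical, and a ratio below 0.8 is only a flag for further review, not a legal determination:

```python
# Hypothetical illustration of the four-fifths (80%) rule, a common
# screening heuristic for disparate impact. Not legal advice.

def selection_rate(decisions):
    """Fraction of positive outcomes (e.g., approvals) in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's.

    Under the four-fifths rule, a ratio below 0.8 is commonly treated
    as a flag warranting further review of the decision process.
    """
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Hypothetical outcomes from some decision process: 1 = approved, 0 = denied
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved
group_b = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]  # 50% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 3))  # prints 0.625 -- below 0.8, so this result would be flagged
```

The same function could be run on the outcomes of the human process an AI system would replace, giving the kind of side-by-side comparison of human and algorithmic decision-making that the scholars quoted above recommend.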
It is clear that regulators are evaluating legal obligations–like those prohibiting the government from discriminating against individuals on various bases–in the context of emerging technologies. So how should IDS stakeholders wanting to implement AI in their IDS proceed?
The Path Forward
Agencies that are considering implementing AI in IDS should start by considering why they want to incorporate AI in the first place. As Esther Dyson, founder of Wellville, advised: “Don't leave hold of your common sense. Think about what you're doing and how the technology can enhance it. Don't think about technology first.” (source). While identifying the underlying motivations and potential benefits of incorporating AI into IDS is a good starting point, the analysis cannot stop there. Rather, agencies must also consider the potential consequences and negative impacts (including novel harms) that AI integration may have on individuals and society as a whole.
As Jennifer Pahlka, former deputy chief technology officer in the Obama Administration, noted in her discussion at BenCon 2023: software, design, and government are all made by and for people (starting at 25:31). Government agencies should seek to center people in the underlying software decisions and design process for any AI incorporated in their IDS to ensure that these systems provide an overall benefit to the populations they are intended to serve.
IDS stakeholders incorporating AI in IDS must carefully think through and evaluate many complex considerations to ensure their AI use is ethical, legal, technically sound, and achieves the intended goal(s).
We recommend these great resources to help you start thinking through this analysis:
Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute
Algorithmic Equity Toolkit, ACLU of Washington
AI Guide for Government: A living and evolving guide to the application of Artificial Intelligence for the U.S. federal government, General Services Administration, Centers of Excellence
Purpose, Process, and Monitoring: A New Framework for Auditing Algorithmic Bias in Housing & Lending, National Fair Housing Alliance
Automated Decision-Making Systems and Discrimination: Understanding causes, recognizing cases, supporting those affected, AlgorithmWatch
Artificial Intelligence and Algorithmic Fairness Initiative, Equal Employment Opportunity Commission
Artificial Intelligence Risk Management Framework (AI RMF 1.0), Department of Commerce, National Institute of Standards and Technology
AISP Working Paper: Addressing Racial and Ethnic Inequities in Human Service Provision, Actionable Intelligence for Social Policy
The Privacy Expert's Guide to Artificial Intelligence and Machine Learning, Future of Privacy Forum
Guidance on AI and data protection, United Kingdom Information Commissioner’s Office
How DISC can help
The Data Integration Support Center (DISC) at WestEd, which has partnered with PIPC, can support public agencies’ ongoing efforts to evaluate privacy concerns regarding the use of AI in IDS. DISC offers technical assistance to public agencies free of cost. Additionally, DISC will be releasing more information about appropriately incorporating generative AI while protecting privacy. For more information or technical assistance, reach out to DISC through our website.
Appendix: Relevant Government Activity on AI
The White House:
The White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”
“[T]he Biden-Harris Administration has secured voluntary commitments from [Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI] to help move toward safe, secure, and transparent development of AI technology.” (fact sheet).
OSTP released the “National Artificial Intelligence Research and Development Strategic Plan: 2023 Update”
OSTP released a “Request for Information [on] National Priorities for Artificial Intelligence”
Federal Agencies:
The Justice Department (DOJ), Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC) released a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems”.
The FTC sent a 20-page letter to OpenAI (the creator of ChatGPT) asking for records regarding consumer protection (more details here)
The Department of Education, Office of Educational Technology released the “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations”.
Congressional Hearings:
“Oversight of A.I.: Rules for Artificial Intelligence” - Subcommittee on Privacy, Technology, and the Law, Senate Committee on the Judiciary (May 16, 2023)
“Artificial Intelligence and Human Rights” - Subcommittee on Human Rights and the Law, Senate Committee on the Judiciary (June 13, 2023)
“Artificial Intelligence: Advancing Innovation Towards the National Interest” - House Committee on Science, Space, and Technology (June 22, 2023)
“Oversight of A.I.: Principles for Regulation” - Subcommittee on Privacy, Technology, and the Law, Senate Committee on the Judiciary (July 25, 2023)
Federal Bills:
Transparent Automated Governance Act (TAG Act)
Kids Online Safety Act (KOSA) – as amended by the Filter Bubble Transparency Act
State Bills:
The sheer amount of activity in this area demonstrates the importance of AI to federal regulators and policymakers. But interest in AI doesn’t stop at the federal level – state-level officials are also taking notice, as demonstrated by bills introduced this year such as “An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT” (MA) and “An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy” (CT).