The Future of AI in Government: White House Unveils AI Policy to Champion AI Development and Use for Federal Agencies
WASHINGTON, D.C. (AI Reporter/News): In a landmark move, Vice President Kamala Harris on March 28, 2024, announced the release of the first-ever government-wide policy on artificial intelligence (AI) from the White House Office of Management and Budget (OMB).
This AI policy delivers on a key component of President Biden’s AI Executive Order, which aims to mitigate AI risks while harnessing its potential benefits. The Order outlines a comprehensive approach to strengthen AI safety and security, protect privacy, promote equity and civil rights, and advance American innovation and leadership in this critical field.
Notably, federal agencies have successfully completed all actions mandated by the Executive Order within the designated timeframes, demonstrating a strong commitment to responsible AI development and deployment.
The preliminary version of the guidance was published prior to Vice President Harris’ attendance at the inaugural global AI summit in the UK. Following a period for public input, the final version was issued on Thursday, March 28, 2024.
Harris said the guidance is mandatory, stressing the importance of prioritizing the public interest at an international level.
While releasing the AI policy in a conversation with journalists, Vice President Kamala Harris stated, “President Biden and I aim for these domestic policies to set a standard internationally,” adding, “We persist in urging every nation to adopt our example and prioritize the public interest in government’s AI application.”
Shalanda Young, the OMB Director, asserted, “The public deserves confidence that the federal government will use the technology responsibly.”
While numerous government agencies are already leveraging artificial intelligence, the Biden administration’s memo further explores the technology’s potential benefits, such as predicting severe weather conditions, monitoring disease proliferation, and observing opioid consumption trends.
The Biden-Harris Administration also announced the completion of the 150-day actions tasked by President Biden’s landmark Executive Order on AI.
***
OMB AI Policy: AI Innovation Leadership
The Biden-Harris Administration has taken a comprehensive approach to steer Federal departments and agencies towards leading the charge in responsible AI innovation. This move is in line with their consistent efforts to position America at the forefront of this domain.
The Office of Management and Budget (OMB) has disclosed that the President’s Budget is geared towards empowering agencies with the necessary resources to responsibly develop, test, procure, and incorporate transformative AI applications throughout the Federal Government.
This investment underscores the administration’s commitment to harnessing AI’s potential responsibly and effectively.
***
OMB Policy Directive
In line with the President’s Executive Order, OMB’s new AI policy directs the following actions:
Address Risks from the Use of AI
This guidance places people and communities at the center of the government’s innovation goals. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety.
By December 1, 2024, Federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety. These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. These safeguards apply to a wide range of AI applications from health and education to employment and housing.
For example, by adopting these safeguards, agencies can ensure that:
- When at the airport, travelers will continue to have the ability to opt out of TSA facial recognition without any delay or losing their place in line.
- When AI is used in the Federal healthcare system to support critical diagnostics decisions, a human being oversees the process to verify the tools’ results and avoid disparities in healthcare access.
- When AI is used to detect fraud in government services, there is human oversight of impactful decisions, and affected individuals have the opportunity to seek remedy for AI harms.
If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.
To protect the federal workforce as the government adopts AI, OMB’s AI policy encourages agencies to consult federal employee unions and adopt the Department of Labor’s forthcoming principles on mitigating AI’s potential harms to employees.
The Department of Labor is also leading by example, consulting with federal employees and labor unions both in the development of those principles and in its own governance and use of AI.
The guidance also advises Federal agencies on managing risks specific to their procurement of AI. Federal procurement of AI presents unique challenges, and a strong AI marketplace requires safeguards for fair competition, data protection, and transparency.
Later this year (2024), OMB will take action to ensure that agencies’ AI contracts align with OMB’s AI policy and protect the rights and safety of the public from AI-related risks. The RFI announced today will collect input from the public on ways to ensure that private sector companies supporting the Federal Government follow the best available practices and requirements.
(Note: On October 30, 2023, President Biden and Vice President Harris participated in an event highlighting the Administration’s commitment to advancing the safe, secure, and trustworthy development and use of Artificial Intelligence.)
Expand Transparency of AI Use
The AI policy released on March 28, 2024 improves public transparency in the Federal Government’s use of AI by requiring agencies to publicly:
- Release expanded annual inventories of their AI use cases, including identifying use cases that impact rights or safety and how the agency is addressing the relevant risks.
- Report metrics about the agency’s AI use cases that are withheld from the public inventory because of their sensitivity.
- Notify the public of any AI exempted by a waiver from complying with any element of the OMB policy, along with justifications for why.
- Release government-owned AI code, models, and data, where such releases do not pose a risk to the public or government operations.
OMB also released draft instructions to agencies detailing the contents of this public reporting.
Advance Responsible AI Innovation
OMB’s AI policy will also remove unnecessary barriers to Federal agencies’ responsible AI innovation. AI technology presents tremendous opportunities to help agencies address society’s most pressing challenges.
Examples include:
Addressing the climate crisis and responding to natural disasters. The Federal Emergency Management Agency is using AI to quickly review and assess structural damage in the aftermath of hurricanes, and the National Oceanic and Atmospheric Administration is developing AI to conduct more accurate forecasting of extreme weather, flooding, and wildfires.
Advancing public health. The Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect the illicit use of opioids, and the Center for Medicare and Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.
Protecting public safety. The Federal Aviation Administration is using AI to help deconflict air traffic in major metropolitan areas to improve travel time, and the Federal Railroad Administration is researching AI to help predict unsafe railroad track conditions.
Advances in generative AI are expanding these opportunities, and OMB’s guidance encourages agencies to responsibly experiment with generative AI, with adequate safeguards in place. Many agencies have already started this work, including through using AI chatbots to improve customer experiences and other AI pilots.
Grow the AI Workforce
Building and deploying AI responsibly to serve the public starts with people. OMB’s AI guidance directs agencies to expand and upskill their AI talent. Agencies are aggressively strengthening their workforces to advance AI risk management, innovation, and governance, including:
- The Biden-Harris Administration has committed to hiring 100 AI professionals by Summer 2024 to promote the trustworthy and safe use of AI as part of the National AI Talent Surge created by Executive Order 14110, and will run a career fair for AI roles across the Federal Government on April 18.
- To facilitate these efforts, the Office of Personnel Management has issued guidance on pay and leave flexibilities for AI roles, to improve retention and emphasize the importance of AI talent across the Federal Government.
- The Fiscal Year 2025 President’s Budget includes an additional $5 million to expand General Services Administration’s government-wide AI training program, which last year had over 4,800 participants from across 78 Federal agencies.
Strengthen AI Governance
To ensure accountability, leadership, and oversight for the use of AI in the Federal Government, the OMB AI policy requires federal agencies to:
- Designate Chief AI Officers, who will coordinate the use of AI across their agencies. Since December, OMB and the Office of Science and Technology Policy have regularly convened these officials in a new Chief AI Officer Council to coordinate their efforts across the Federal Government and to prepare for implementation of OMB’s guidance.
- Establish AI Governance Boards, chaired by the Deputy Secretary or equivalent, to coordinate and govern the use of AI across the agency. As of today, the Departments of Defense, Veterans Affairs, Housing and Urban Development, and State have established these governance bodies, and every CFO Act agency is required to do so by May 27, 2024.
In addition to this guidance, the Administration announced several other measures to promote the responsible use of AI in Government:
- OMB will issue a request for information (RFI) on Responsible Procurement of AI in Government, to inform future OMB action to govern AI use under Federal contracts;
- Agencies will expand their 2024 Federal AI Use Case Inventory reporting, to broadly increase public transparency in how the Federal Government is using AI;
- The Administration has committed to hire 100 AI professionals by Summer 2024 as part of the National AI Talent Surge to promote the trustworthy and safe use of AI.
With these actions, the Administration is demonstrating that Government is leading by example as a global model for the safe, secure, and trustworthy use of AI. The AI policy announced on Thursday builds on the Administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and will drive Federal accountability and oversight of AI, increase transparency for the public, advance responsible AI innovation for the public good, and create a clear baseline for managing risks.
It also delivers on a major milestone 150 days since the release of Executive Order 14110, and the table below presents an updated summary of many of the activities federal agencies have completed in response to the Executive Order.
***
Summary of AI-Related Federal Actions and Timelines
This table provides a comprehensive summary of the actions taken by various federal agencies regarding the adoption and regulation of Artificial Intelligence (AI) technologies. The measures outlined in the table indicate the government’s commitment to integrating AI into federal operations, enhancing AI governance and innovation, and addressing the associated privacy, security, and workforce challenges.
Action | Agency | Required Timeline | Status |
---|---|---|---|
Evaluated ways to prioritize agencies’ adoption of AI through the Technology Modernization Fund | Technology Modernization Board | 30 days | COMPLETE |
Directed the Nontraditional and Emerging Transportation Technology Council to evaluate the transportation sector’s need for AI guidance and technical assistance | Department of Transportation | 30 days | COMPLETE |
Reported federal agency resources available to incorporate into the National AI Research Resource (NAIRR) pilot | Agencies identified by the National Science Foundation | 45 days | COMPLETE |
Identified priority areas for increasing federal agency AI talent and accelerated hiring pathways | Office of Science and Technology Policy & Office of Management and Budget | 45 days | COMPLETE |
Convened AI and Tech Talent Task Force | White House Chief of Staff’s Office | 45 days | COMPLETE |
Launched an AI Talent Surge to accelerate hiring AI professionals across the federal government, including through a large-scale hiring action for data scientists | Agencies coordinating with the AI and Tech Talent Task Force | 45 days | COMPLETE |
Published a Request for Information (RFI) on whether to revise the list of Schedule A job classifications that do not require permanent labor certifications | Department of Labor | 45 days | COMPLETE |
Convened an interagency council to coordinate federal agencies’ use of AI | Office of Management and Budget | 60 days | COMPLETE |
Reviewed the need for – and granted – flexible hiring authorities including direct hire and excepted service authorities for federal agencies to hire AI professionals | Office of Personnel Management | 60 days | COMPLETE |
Used Defense Production Act authorities to compel developers of powerful AI systems to report vital information, especially AI safety test results | Department of Commerce | 90 days | COMPLETE |
Proposed a rule requiring U.S.-domiciled companies that provide computing power for foreign AI training to report that they are doing so | Department of Commerce | 90 days | COMPLETE |
Established a talent exchange program to hire AI specialists in the government, especially those with cybersecurity experience | Sector Risk Management Agencies | 90 days | COMPLETE |
Ordered an agency to create a list of vulnerable AI technologies | National Science Foundation | 90 days | COMPLETE |
Streamlined visa processing, including by renewing and expanding interview-waiver authorities | Department of State | 90 days | COMPLETE |
Established an AI Task Force to develop policies to provide regulatory clarity and attract innovation in healthcare | Department of Health and Human Services | 90 days | COMPLETE |
Convened federal agencies’ civil rights offices to discuss the intersection of AI and civil rights | Department of Justice | 90 days | COMPLETE |
Directed Key Federal Advisory Committees to advise on AI and transportation | Department of Transportation | 90 days | COMPLETE |
Launched a pooled hiring action to accelerate federal AI hiring, allowing applicants to apply for roles in multiple agencies with just one application | Office of Personnel Management | 90 days | COMPLETE |
Released a draft framework for prioritizing generative AI technologies in security authorizations for federally procured products and services | General Services Administration | 90 days | COMPLETE |
Announced funding to create a Research Coordination Network that will enable the development of more secure and privacy-safeguarding approaches in AI | National Science Foundation & Department of Energy | 120 days | COMPLETE |
Established a pilot program that builds on existing successful training initiatives to train additional scientists in AI | National Science Foundation & Department of Energy | 120 days | COMPLETE |
Published guidance on patentability of AI-assisted inventions | U.S. Patent and Trademark Office | 120 days | COMPLETE |
Launched a pilot program for domestic visa renewals | Department of State | 120 days | COMPLETE |
Entered into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct a study regarding AI, biological data, and biosecurity risks | Department of Defense | 120 days | COMPLETE |
Issued guidance on how to attract, hire, and retain AI and AI-capable talent | Office of Personnel Management | 120 days | COMPLETE |
Published information to help experts in AI understand ways to work in the United States | Department of Homeland Security | 120 days | COMPLETE |
Published reports from a working group on how experts in AI and other critical and emerging technologies have utilized the immigration system | Department of Homeland Security | 120 days | COMPLETE |
Evaluated steps for updating and establishing new criteria for the countries and skills on the United States Exchange Visitor Skills List | Office of Management and Budget | 150 days | COMPLETE |
Published a final memorandum with requirements and guidance for federal agencies’ AI governance, innovation, and risk management | Office of Management and Budget | 150 days | COMPLETE |
Published a report examining AI-related cybersecurity and fraud risks and best practices for financial institutions | Department of Treasury | 150 days | COMPLETE |
Announced the funding of new Regional Innovation Engines (NSF Engines), including with a focus on advancing AI | National Science Foundation | 150 days | COMPLETE |
Issued an RFI on how federal agency privacy practices may be more effective at mitigating privacy risks, including those that are further exacerbated by AI and other advances in technology and data practices | Office of Management and Budget | 180 days | COMPLETE |
Established an office to coordinate development of AI and other critical and emerging technologies across the agency | Department of Energy | 180 days | COMPLETE |
Established three joint technology teams to pilot and enhance the agency’s use of AI, including by engaging experts in AI and related fields | Department of Homeland Security | 180 days | COMPLETE |
Published new policy guidance for international students, clarifying and modernizing the pathway for experts in AI and other critical and emerging technologies | Department of Homeland Security | 180 days | COMPLETE |
Released for comment a draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence | Office of Management and Budget | (none) | COMPLETE |
Launched the EducateAI initiative to prioritize AI-related workforce development | National Science Foundation | (none) | COMPLETE |
Defined AI as a focus area for prize funds through the 2024 Growth Accelerator Fund Competition | Small Business Administration | (none) | COMPLETE |
Clarified the eligibility of AI-related expenditures for support via key programs of the Small Business Administration | Small Business Administration | (none) | COMPLETE |
Published an RFI on AI’s implications for global development | U.S. Agency for International Development & Department of State | (none) | COMPLETE |
Published an RFI to inform the development of the Global AI Research Agenda | U.S. Agency for International Development & Department of State | (none) | COMPLETE |
Published updated policy guidance regarding international student visas applicable to students in AI-related fields | Department of Homeland Security | (none) | COMPLETE |
Proposed a new rule addressing the use of AI to impersonate an individual for commercial purposes | Federal Trade Commission | (none) | COMPLETE |
Proposed changes to a privacy rule that would further limit companies’ ability to monetize children’s data, including by limiting targeted advertising | Federal Trade Commission | (none) | COMPLETE |
Issued an advisory opinion to highlight that false, incomplete, and outdated information must not appear in background check reports, including those used for tenant screening | Consumer Financial Protection Bureau | (none) | COMPLETE |
copyright@aireporter.news