Maintaining American Leadership in Artificial Intelligence Through Public Investment and Workforce Development

The rapid rise of artificial intelligence (AI) tools has the potential to alter nearly all aspects of society, with large but uncertain impacts on the economy and labor market. Generative AI has progressed quickly in the last few years, particularly with the release of ChatGPT, prompting governments to grapple with ways to encourage AI development within the bounds of ethical and national security concerns. AI tools may disrupt industries ranging from music and copywriting to manufacturing and human resources. Many concerns remain around AI, including inaccurate decision-making and algorithmic bias (e.g., facial recognition performing worse at identifying black female faces); lack of interpretability; information provenance (e.g., privacy concerns, deep fakes, and misinformation); and supply chain issues. AI may also increase inequality if AI tools consolidate the wealth and dominance of particular companies and individuals.

To maintain American leadership in AI and ensure a just integration of technology, the federal government, including the national labs, should work with technologists and other stakeholders to establish a safe and ethical structure for AI development. While there are a range of plausible scenarios of how this new technology transforms the economy and our workforce, substantial American leadership and public investment are needed to secure our competitiveness and national security while also ensuring that all U.S. citizens are uplifted by these changes and safeguarded against risks.

AI could fundamentally alter the U.S. labor market and may affect the demand for different jobs.

AI technologies may lead to fundamental changes in the U.S. labor market through their potential to reduce labor costs and increase productivity in ways that could increase global GDP by 7%. In doing so, these technologies would both expand economic opportunities in some sectors and reduce employment and activity in others. For example, one issue in the current Hollywood strike is the future of generative AI in the entertainment industry and its potential to disrupt writers’ and actors’ livelihoods.

Jobs across pay, skill, and experience spectrums could be affected by AI given that references to AI skills are increasingly common in job postings across virtually every sector (Figure 1). However, the precise impacts on each sector remain uncertain given the novelty of these technologies. AI may lead to job polarization, where jobs become more concentrated in high- and low-paying occupations, because the routine tasks that are most susceptible to AI are predominantly in jobs that pay in the middle of the income ladder. However, other recent research suggests that tools like ChatGPT can boost productivity on certain writing tasks and narrow the productivity gap between more and less experienced customer service workers, indicating that AI may provide skills that grow the middle class. Some more complex occupations are also exposed to AI through tasks that involve detecting patterns, making judgments, and optimizing processes, such as clinical lab technicians, chemical engineers, optometrists, and power plant operators. Together, this suggests that technological advances in AI will impact the labor market in complicated and uncertain ways.

AI job opportunities are not equally available, and AI poses a greater threat to jobs held by women.

The history of technological change, from the advent of the dishwasher to the introduction of the internet, shows that technological developments do not destroy overall employment, but they can render some roles obsolete while providing others with opportunities. A study by the International Monetary Fund conducted prior to the recent rapid improvements in AI technology found that 11% of jobs held by women were at risk of being automated by tools like artificial intelligence, compared to 9% of jobs held by men. The most at-risk group is women with lower secondary education, 50% of whom are at risk of automation, relative to less than 40% of men in this group. Broadly, less educated workers and those in small- and medium-size firms are most at risk of automation, underscoring the need for more learning and retraining. This analysis also found that the accommodation and food services, retail trade, and transportation sectors are most exposed to risks of automation. The retail trade, accommodation, and food services sectors employ similar proportions of men and women, while men are overrepresented in the transportation sector. These high-level sectoral numbers cannot fully explain differences in men’s versus women’s automation risk, though, because the job composition within a sector also matters. For example, women are more represented in education, and their roles within the education sector are also more at risk of automation.

Long-running disparities in the STEM training and education pipeline also mean that job opportunities in fast-growing AI occupations are not equally available. Fewer than 19% of all AI and computer science PhD graduates in North America over the last decade were women. Furthermore, only 2.4% and 3.2% of U.S.-resident AI PhD graduates in 2019 were African American and Hispanic, respectively. In 2020, Queer in AI showed that almost half of its survey respondents view the lack of inclusion in the field as a barrier, and more than 40% of members surveyed reported experiencing discrimination or harassment at school or work. These disparities justify the need for efforts to ensure that historically underrepresented groups in STEM are not left behind during this AI revolution.

AI tools can both overcome and, in some cases, magnify biases.

There have also been reports that AI algorithms used in hiring processes are biased against women. These algorithms are trained on data that reflects years of bias and discrimination, which the models then perpetuate. We must also account for contextual and cultural complexity, because AI will impact the working lives of women differently in different cultures and labor markets. Many AI facial recognition systems demonstrate racial disparities, with early work illustrating this for black women.

Paralleling these concerns, AI tools also have the potential to address certain systemic biases. For example, they can mitigate the corporate gender gaps, particularly in leadership roles, that broadly mirror the STEM gap by removing bias in recruiting, reviews, and promotion decisions and by improving retention of female employees. One study found that a machine learning algorithm could help judges make bail decisions, lowering both crime and jail rates, but many other studies have shown that AI may perpetuate racial bias in bail decisions.

AI tools can be misused to harm consumers and the American public.

Another risk is that malicious actors may use AI tools to defraud the public before adequate protections can be put in place. It is imperative that the U.S. works to ward off threats to democracy posed by data and election manipulation, including campaign deepfakes. The U.S. government should address the important privacy concerns surrounding the use of Americans’ personal data and AI, and indeed it has already begun to do so. A recent uptick in financial scams using AI voice cloning technology to trick consumers prompted several U.S. Senators to send a letter to the Consumer Financial Protection Bureau urging action, and the Bureau has increased its focus on how AI technologies affect the financial marketplace. Recently proposed bipartisan legislation also aims to protect Americans’ data from unfriendly foreign nations. The bill would build upon federal government priorities to protect Americans’ health care records, geolocations, web browsing activity, and other information that malicious actors could use to harm American people and interests.

Public investment in American AI research and development infrastructure can improve the technology’s safety and development.

To bolster the United States’ role in the development of AI tools, the administration is making large investments in AI research and development (R&D). Because of the 2020 National AI Initiative Act, championed by Senator Martin Heinrich and then-Senator Rob Portman, in May 2023 the National Science Foundation (NSF) announced $140 million in funding for seven new National Artificial Intelligence Research Institutes as part of a cohesive cross-government approach to addressing AI-related opportunities and risks. The new AI Institutes will advance foundational AI research on ethical and trustworthy technologies and on innovations in cybersecurity, climate change, understanding the human brain, and enhancing education and public health – all while supporting the development of a diverse AI workforce. Public investments have continued to increase over the last several years, reaching $3.2 billion in 2022 (see below), underscoring the continued growth in public sector involvement in AI.

The Department of Energy (DOE) has the capabilities and experience to provide leadership in developing responsible AI R&D frameworks and quantifying the risks from AI. DOE has proposed a new initiative to lead the nation and the world on trustworthy AI development: FASST, or Frontiers in Artificial Intelligence for Science, Security, and Technology for the Nation. In consultation with the White House Office of Science and Technology Policy (OSTP), NSF has created a complementary roadmap for a National AI Research Resource (NAIRR) to enable the academic community to better utilize and expand AI within their own research. The recently introduced bipartisan CREATE AI Act of 2023, co-led by Senator Heinrich, would authorize the NAIRR and help make this vital resource a reality. In addition to the academically focused NAIRR, the federal government should explore ways to enable small- and medium-size firms to access, use, and interpret AI tools to prevent substantial consolidation among just a few technology firms.

The U.S. government should dramatically increase investments in AI education, reskilling, and training to prepare our workforce and shore up national security.

While the U.S. government has already made substantial investments in AI R&D and has outlined future goals, far less has been done to ensure that our workforce is ready to continue to support these efforts. For decades, the United States has been a magnet for AI talent – for example, the estimated hiring rate for AI workers in the United States in 2020 was roughly double that in 2016, while China’s growth rate over the same period was only 30%. However, other research points out that the United States is ahead in technology development but falling behind in people (STEM graduates and technology skill penetration), without which AI implementation will not be nearly as effective. The United States is particularly behind in the number of STEM graduates overall and in those who stay in the United States after graduating. For example, the limited number of skilled worker visas (H-1B) makes it challenging both for highly educated workers to stay in the United States and for companies to focus operations here – with Canada and other countries capitalizing on this disincentive.

Educating, training, and reskilling to meet the new challenges of an AI-informed and augmented labor market will also become increasingly important to avoid job loss, especially for women and historically disadvantaged groups. Educating the future workforce to prepare people early on will be important – in particular, increased gender and racial equity efforts in STEM fields could help prevent certain groups from being left behind. Research conducted by the World Economic Forum and BCG showed that 95% of at-risk U.S. workers can be retrained for jobs that pay at or above what they make now and offer growth potential. Reskilling would be costly, but companies could profitably reskill 25% of their workforce – and 77% of workers could be retrained through government programs or incentives with a net cost benefit. Congress could aid these efforts by adopting tax policies that encourage companies to save costs by helping workers integrate technology into their jobs instead of replacing workers with technology.

Several policy options exist to propel our AI workforce forward to match our dominance on the technological side of the equation. The United States could leverage lessons learned from the space race during the Cold War. After the Soviet Union (USSR) launched Sputnik, the first artificial satellite, the United States government realized it needed to make substantial public investments in scientific education to close the gap with the USSR and promote national security. The National Science Foundation invested the equivalent of over $5 billion in teacher and classroom development, and Congress passed the National Defense Education Act to provide the equivalent of more than $10 billion for science education. Similarly, China’s emergence as a leader in AI should encourage the same “all hands on deck” national-scale effort to maintain American leadership in AI through dramatically reinvigorated STEM education and workforce training efforts.

A multinational consortium approach could position the United States as an AI leader while providing the option to pool financial and human resources and share benefits across trusted allied countries. The United States has historically favored consortium approaches in dealing with cross-national issues like national security, such as with the North Atlantic Treaty Organization (NATO). The United States could lead such a group given the size of our economy, but working with others with AI expertise like Germany, the United Kingdom, and Canada would complement and accelerate action.

Educating the U.S. population broadly on the future of AI, in addition to growing the AI-specific workforce, is essential for broader AI usage, understanding, and public support. While public awareness of AI surged with the advent of tools like ChatGPT, many misunderstandings still exist, and a public education campaign could provide a more balanced view. This educational campaign could lay the groundwork for a sense of urgency to support efforts to maintain American leadership in AI, much like what was done during the space race. On July 25, 2023, TechNet, a tech trade group and lobbying network, kicked off a $25 million education campaign to do just that, with a focus on the positive aspects of AI. Raising the profile of STEM work and education will help the United States maintain technical dominance in AI and provide well-paying jobs. Finally, closely evaluating possible international partners will enable the United States to build on others’ advances, quickly partner with allies for mutual benefit, and maintain our leadership in this space.

To maintain American leadership in AI, the federal government should work with technologists to establish a safe and ethical structure for AI development.

Given the immense role that AI technologies could play in the global economy, and the rapid recent developments, both the Administration and Congress have begun working through what role the federal government can and should play in this space. Technologists are also looking for structure and guidance from the government on safe and ethical AI development while maintaining their own competitiveness. Sam Altman, the CEO of OpenAI, which created ChatGPT, went so far as to ask for government regulation of AI in a Senate hearing in May. A multi-stakeholder approach is thus necessary, engaging government, the private sector, technologists, and academia around key issues like data privacy, AI’s use in health care, and the need to foster skill-equalizing work environments for historically marginalized groups. The U.S. federal government should be at the forefront of coordinating a safe and ethical deployment of AI within the labor force and economy.

Governments themselves can leverage AI tools and their unique access to large datasets to make better policy decisions, especially during times of uncertainty. In the United States, the Internal Revenue Service (IRS) has used AI tools to reduce taxpayer wait times, and the Centers for Medicare and Medicaid Services have created AI competitions to predict health outcomes using Medicare data.

Training AI systems and storing their data can have substantial environmental impacts, given the growing amount of energy, and resulting CO2-equivalent emissions, needed to power these tools. The U.S. government should encourage or incentivize developers and deployers to limit their climate impact. AI tools can also be used to optimize energy consumption across a range of sectors, which could prove essential to our energy and security missions. AI tools also fuel climate misinformation online, and government action to combat this is necessary.

Other vital work is also underway in the Administration and Congress to root out bias, promote equity, and mitigate threats posed by AI, further solidifying American leadership in safe AI deployment. On July 21, 2023, following pressure from the Biden Administration, seven leading AI companies agreed to voluntary safeguards on AI development – a first step toward a regulatory and legal framework for this rapidly growing technology. These safeguards include testing products for security risks and marking products so that consumers can spot AI-generated material. In line with these efforts, the Administration has also put out a blueprint for an AI bill of rights focused on five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. In February of this year, President Biden issued an executive order directing federal agencies to root out bias and promote equity in the design and use of new technologies, including AI.

Simultaneously, Congress has been ramping up efforts to understand AI and lay the groundwork for regulation. Senate and House bipartisan caucuses complement work done in the Administration and have taken leadership roles on organizing member and staff level briefings to increase AI literacy on Capitol Hill. Initial legislative proposals are currently underway.