
Artificial Intelligence – AI Index Report 2024: Top 10 Key Takeaways

AI Index Report 2024 Unveiled by Stanford’s Human-Centered AI Institute

Expanding Knowledge on Artificial Intelligence Trends and Impacts

Stanford, CA – Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) has released the seventh edition of the AI Index report. This comprehensive edition arrives at a pivotal time when the influence of artificial intelligence on global society and industries has reached unprecedented levels. Originally part of the One Hundred Year Study on Artificial Intelligence (AI100), the AI Index has evolved into an independent initiative under the auspices of Stanford HAI, reflecting its commitment to fostering a deeper understanding of AI’s roles and ramifications.

The 2024 report broadens its analytical scope significantly, encompassing key areas such as technological advancements in AI, evolving public perceptions, and the geopolitical dynamics that shape its development. It introduces novel data, including updated costs of AI training, in-depth analysis of the responsible AI framework, and debuts a new chapter dedicated to examining AI’s transformative effects on science and medicine.

As the AI Index continues to track, synthesize, and visualize critical AI data, it remains an indispensable resource for policymakers, researchers, business leaders, journalists, and the general public. This initiative aims to provide an unbiased, thoroughly vetted, and comprehensive data source to foster a more detailed and nuanced understanding of the complex AI landscape.

Introduction

The AI Index Report 2024 was conceived within the One Hundred Year Study on Artificial Intelligence (AI100). The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index studies trends in two types of frontier AI models: “notable models” and foundation models. Epoch, an AI Index data provider, uses the term “notable machine learning models” to designate noteworthy models handpicked as being particularly influential within the AI/machine learning ecosystem. In contrast, foundation models are exceptionally large AI models trained on massive datasets, capable of performing a multitude of downstream tasks. Examples of foundation models include GPT-4, Claude 3, and Gemini. While many foundation models may qualify as notable models, not all notable models are foundation models.

Evolution and Impact of Advanced AI Technologies

A decade ago, the best AI systems in the world were unable to classify objects in images at a human level. AI struggled with language comprehension and could not solve math problems. Today, AI systems routinely exceed human performance on standard benchmarks. Progress accelerated in 2023. New state-of-the-art systems like GPT-4, Gemini, and Claude 3 are impressively multimodal: They can generate fluent text in dozens of languages, process audio, and even explain memes. As AI has improved, it has increasingly forced its way into our lives. Companies are racing to build AI-based products, and AI is increasingly being used by the general public. But current AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

The Dual Futures of AI and Government Intervention

AI faces two interrelated futures. First, technology continues to improve and is increasingly used, having major consequences for productivity and employment. It can be put to both good and bad uses. In the second future, the adoption of AI is constrained by the limitations of the technology. Regardless of which future unfolds, governments are increasingly concerned. They are stepping in to encourage the upside, such as funding university R&D and incentivizing private investment. Governments are also aiming to manage the potential downsides, such as impacts on employment, privacy concerns, misinformation, and intellectual property rights.

Here are the top 10 key takeaways from the AI Index Report 2024:

1. AI beats humans on some tasks, but not on all.

AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.

2. Industry continues to dominate frontier AI research.

In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

3. Frontier models get way more expensive.

According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.

4. The United States outpaces China, the EU, and the U.K. as the leading source of top AI models.

In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.

5. Robust and standardized evaluations for LLM responsibility are seriously lacking.

New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.

6. Generative AI investment skyrockets.

Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

7. The data is in: AI makes workers more productive and leads to higher quality work.

In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.

8. Scientific progress accelerates even further, thanks to AI.

In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.

9. The number of AI regulations in the United States sharply increases.

The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.

10. People across the globe are more cognizant of AI’s potential impact—and more nervous.

A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, a 13 percentage point rise from 2022. In the United States, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, up from 38% in 2022.

copyright@aireporter.news