The seventh annual report on the global state of artificial intelligence from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) flags two concerns for society: the technology's spiraling costs and the poor measurement of its risks.

According to "The AI Index 2024 Annual Report," published Monday by HAI, the cost of training large language models such as OpenAI's GPT-4, the so-called foundation models used to develop other programs, is soaring. "The training costs of state-of-the-art AI models have reached unprecedented levels," the report's authors write. "For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute." (An "AI model" is the part of an AI program containing the neural-network parameters and activation functions that determine how the program behaves.)

At the same time, the report states, there are too few standard measures of the risks of such large models, because assessments of "responsible AI" are fragmented. There is a "significant lack of standardization in responsible AI reporting," the report states. "Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models."
Full report: Gen AI training costs soar yet risks are poorly measured, says Stanford AI report