Tech Giants Disagree Over Approach to AI Regulation and Openness
ICARO Media Group
In a clash of interests, Meta (Facebook's parent company) and IBM have formed a new group called the AI Alliance, advocating an "open science" approach to the development of artificial intelligence (AI). This puts them at odds with rivals Google, Microsoft, and OpenAI, the maker of ChatGPT. The key point of contention between the two camps, open and closed, is whether AI should be built in a way that makes the underlying technology widely accessible. The debate revolves around safety and the fair distribution of profits from AI advances.
The advocates for openness emphasize the importance of building AI that is "not proprietary and closed," according to Darío Gil, a senior vice president at IBM. They believe that AI should not be locked away like a secret, but rather be openly accessible for the benefit of all. The concept of open-source AI stems from the long-standing practice of building software, where the code is freely available for examination, modification, and expansion.
However, defining open-source AI is not without challenges. Different computer scientists have varying interpretations, considering the extent to which components of the technology are publicly accessible and if any usage restrictions apply. Some use the term "open science" to describe the broader philosophy.
The newly formed AI Alliance comprises Meta, IBM, Dell, Sony, chipmakers AMD and Intel, several universities, and AI startups. Its purpose is to champion a future for AI built on open scientific exchange, open innovation, and the use of open-source and open technologies, as Gil outlined in an interview with The Associated Press.
Adding to the confusion surrounding open-source AI, OpenAI, the company behind ChatGPT and the DALL-E image generator, actually builds AI systems that are closed. Ilya Sutskever, OpenAI's chief scientist and co-founder, acknowledged the near-term commercial incentives against open-sourcing AI in a video interview hosted by Stanford University, and raised concerns about the potential danger of making exceptionally powerful AI systems publicly accessible.
The risks associated with open-source or leaked AI models have been highlighted by the Center for Humane Technology, a longstanding critic of Meta's social media practices. They warn about the potential misuse of AI to amplify disinformation campaigns and disrupt democratic elections.
This disagreement over the benefits and dangers of open-source AI development has spilled into public debate. Yann LeCun, Meta's chief AI scientist, took to social media to decry "massive corporate lobbying" by OpenAI, Google, and Anthropic, which he said aims to shape the rules in their favor and concentrate their power over AI development. Those three companies, along with Microsoft, have formed their own industry group, the Frontier Model Forum.
LeCun argued that fearmongering by some scientists regarding AI "doomsday scenarios" is playing into the hands of those who seek to ban open-source research and development. He emphasized that in a future where AI systems hold vast amounts of human knowledge and culture, platform openness is crucial to ensure inclusivity and contribution from everyone.
For IBM, the dispute taps into a competition that long predates the AI boom. Chris Padilla, who leads IBM's global government affairs team, likened the closed camp's push for restrictive rules to classic regulatory capture, citing Microsoft's historical opposition to open-source programs that could rival Windows or Office. IBM, an early backer of the open-source Linux operating system in the 1990s, sees a similar pattern playing out today.
As the debate unfolds, the future of AI regulation and openness remains uncertain. The clash between the open and closed camps continues, with each side pushing for its preferred approach while acknowledging the complex and fast-evolving nature of AI technology.