OpenAI expands lobbying team to influence regulations

OpenAI is building an international team of lobbyists as it seeks to influence the politicians and regulators who are tightening their oversight of powerful artificial intelligence.

The San Francisco-based start-up told the Financial Times that it has expanded the number of staff in its global affairs team from three in early 2023 to 35. The company aims to increase that to 50 by the end of 2024.

The push comes as governments explore and debate AI safety legislation that risks constraining the start-up’s growth and the development of its cutting-edge models, which underpin products such as ChatGPT.

“We’re not approaching this from a point of view that we just have to go in there and overturn regulations . . . because we do not have an objective to maximize profit; we have a goal to make sure that AGI benefits all of humanity,” said Anna Makanju, OpenAI’s vice president of government affairs, referring to artificial general intelligence, the point at which machines have cognitive abilities equivalent to humans.

Although it makes up only a small fraction of OpenAI’s 1,200 employees, the global affairs department is the company’s most international unit, strategically positioned in countries where AI legislation is advanced, with staff in Belgium, the UK, Ireland, France, Singapore, India, Brazil and the US.

Even so, OpenAI lags behind its Big Tech rivals in the scale of such operations. According to US public filings, Meta spent a record $7.6 million lobbying the US government in the first quarter of this year, while Google spent $3.1 million and OpenAI $340,000. On AI-specific advocacy, Meta has named 15 lobbyists, Google five and OpenAI just two.

“Walking in the door, [ChatGPT had] 100 million users [but the company had] three people to make public policy,” said David Robinson, head of policy planning at OpenAI, who joined the company in May last year after a career in academia and consulting for the White House on its AI policy.

“It was literally to the point where there would be someone high-level wanting to have a conversation, and no one would pick up the phone,” he added.

However, OpenAI’s global affairs unit does not handle its thorniest regulatory cases. Those fall to its legal team, which is dealing with UK and US regulators’ review of the company’s $18 billion alliance with Microsoft; the US Securities and Exchange Commission’s investigation into whether chief executive Sam Altman misled investors during his brief ousting by the board in November; and the US Federal Trade Commission’s consumer protection investigation into the company.

Instead, OpenAI’s lobbyists focus on the spread of AI legislation. The UK, US and Singapore are among the many countries grappling with how to govern AI, and they are consulting closely with OpenAI and other tech companies on proposed regulations.

The company was involved in discussions around the EU’s AI Act, passed this year, one of the most advanced pieces of legislation seeking to regulate powerful AI models.

OpenAI was among the AI companies that argued, during negotiations over early drafts of the act, that some of its models should not be classed as presenting a “high risk”, a designation that would subject them to tougher rules, according to three people involved in the negotiations. Despite the push, the company’s most capable models will fall within the act’s scope.

OpenAI also argued against the EU’s push to examine all the data used to train its foundation models, according to people familiar with the negotiations.

The company told the FT that pre-training data – the datasets used to give large language models a broad understanding of language or patterns – should fall outside the scope of regulation, arguing that it was a poor way to understand an AI system’s outputs. Instead, it proposed that the focus should be on the post-training data used to fine-tune models for a particular task.

The EU ruled that, for high-risk AI systems, regulators can still require access to training data to ensure it is free of errors and biases.

Since the EU law was passed, OpenAI has hired Chris Lehane – who worked for President Bill Clinton and on Al Gore’s presidential campaign, and was Airbnb’s chief policy officer – as vice president of public affairs. Lehane will work closely with Makanju and her team.

OpenAI also recently hired Jakob Kucharczyk, a former competition lead at Meta. Sandro Gianella, head of European policy and partnerships, joined in June last year after working at Google and Stripe, while James Hairston, head of international policy and partnerships, joined from Meta in May last year.

The company recently engaged in a series of discussions with policymakers in the US and other markets about Voice Engine, its model that can clone and create personalized voices, and subsequently narrowed its release plans over concerns about how the tool could be misused during this year’s global elections.

The team has organized workshops in countries facing elections this year, such as Mexico and India, and published guidelines on disinformation. In autocratic countries where it deems it unsafe to release its products, OpenAI grants “trusted individuals” one-to-one access to its models.

A government official who has worked closely with OpenAI said another concern for the company was ensuring that any rules remain flexible and do not become obsolete as science and technology advance.

OpenAI hopes to address some of the problems from the social media age, which Makanju said has led to a “general mistrust of Silicon Valley companies.”

“Unfortunately, people are often looking at AI through the same lens,” she added. “We spend a lot of time making sure people understand that this technology is quite different, and the regulatory interventions that make sense for it will be very different.”

However, some industry figures are critical of OpenAI’s expanding lobbying operation.

“Initially, OpenAI recruited people deeply involved in politics and AI specialists, whereas now they’re just hiring tech lobbyists, which is a very different strategy,” said one person who has been directly involved with OpenAI on the creation of the legislation.

“They just want to influence lawmakers in ways that Big Tech has done for more than a decade.”

Robinson, OpenAI’s head of policy planning, said the global affairs team has more ambitious goals. “The mission is safe and broadly beneficial, and what does that mean? It means creating laws that not only allow us to innovate and bring useful technology to people, but also end up in a world where the technology is safe.”

Additional reporting by Madhumita Murgia in London
