OpenAI Chief Executive Officer Sam Altman surprised everyone last month when he warned Congress of the dangers posed by artificial intelligence. Suddenly, it looked like tech companies had learned from the problems of social media and wanted to roll out AI differently.
“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” OpenAI executives Sam Altman, Greg Brockman and Ilya Sutskever wrote. “We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example. We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.”
Google and Alphabet CEO Sundar Pichai on May 22 penned a Financial Times article headlined “Google CEO: Building AI responsibly is the only race that really matters” where he doubled down on his previous calls for AI regulation. “I still believe AI is too important not to regulate, and too important not to regulate well,” he said.
Jen Gennai, Director of Responsible Innovation at Google, agreed with Microsoft’s assessment of the opportunity and argued that much has been learned about AI governance, with companies putting guardrails in place to ensure AI is developed responsibly. She said:
“In terms of where we are, on the maturity curve, I would argue, AI is more mature than the original internet. But in terms of the potential, we haven’t seen all the areas AI can be helpful from a societal level, a commercial level, and that’s pretty exciting right now. But the governance is where I’d argue that the majority is actually further along than originally in the internet age.”
OpenAI has been at the forefront of these calls for regulation, having previously been questioned by the US Congress and agreed that the technology does in fact need further restrictions. AI is advancing faster than almost anyone expected, though some have pointed out that existing language models aren’t as smart as they’re made out to be. Meanwhile, others are using the current tools to build research machines for exploring the dark web or to destroy humanity itself.