Five years ago, Mark Zuckerberg was ridiculed for a wooden performance before the U.S. Congress where he came across as a cyborg trying to explain Facebook's disastrous data privacy fumbles. There were even memes portraying him as Data from "Star Trek."
For a CEO at the forefront of a technology that many fear will lead to cyborgs taking over the world, OpenAI's Sam Altman, who made his debut congressional appearance on Tuesday, actually came across as, well, human – even likable.
That was a good thing for AI amid the scary scenarios that the introduction of OpenAI's ChatGPT triggered six months ago, from even deeper erosion of privacy rights to unstoppable waves of deep fakes and disinformation.
Altman was even publicly praised by a fellow witness and known AI critic, Gary Marcus, a former New York University professor.
"His sincerity in talking about those fears is very apparent, physically, in a way that just doesn't communicate on the television screen," he said at the hearing.
Valoir analyst Rebecca Wettemann said the seemingly more collaborative tone of the hearing was somewhat refreshing. "It's nice to see someone in tech on Capitol Hill collaborating with Congress instead of being scolded," she told The Examiner.
"Politically brilliant" was how Silicon Valley investor Rob Siegel described Altman's performance. "He came across as knowledgeable, engaged, with government, etc."
That engagement featured a humble acknowledgment of what many people worry about: that AI will become so powerful it could upend many aspects of society.
Altman readily admitted that "if this technology goes wrong, it can go quite wrong, and we want to be vocal about that."
"We want to work with the government to prevent that from happening," Altman added, as he also joined Marcus and Christina Montgomery, IBM's chief privacy and trust officer, in stressing the need for AI regulation.
That candor appeared to assuage even a stern critic of the tech industry like Senator Dick Durbin (D-Illinois).
When Zuckerberg appeared before Congress five years ago, Durbin took a more confrontational tone in grilling the tech executive whose company was accused of mishandling a massive trove of user data that was then misused for political campaigns.
The so-called Cambridge Analytica scandal was the largest known leak in Facebook's history.
Durbin asked Zuckerberg if he wouldn't mind sharing the name of the hotel where he was staying.
"No. I would probably not choose to do that publicly, here" Zuckerberg said. "I think everyone should have control over how their information is used."
At Tuesday's hearing, Durbin said: "I can't recall when we've had people representing large corporations or private entities come and plead with us to regulate them."
Senator Richard Blumenthal (D-Connecticut) stressed the need to "not repeat our past mistakes." He was referring to how the web – which marked the 30th anniversary of its release into the public domain this year – morphed into a realm for spreading disinformation and hate.
Blumenthal cited the controversial Section 230 of the 1996 Communications Decency Act that effectively shields web companies from liability for content posted on their sites. "Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all," he said. In a notable twist, the Supreme Court issued a ruling Thursday that keeps Section 230 in place.
Altman acknowledged the need for a new framework to grapple with the challenges posed by AI. "Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well," he said.
But the Senate lovefest took place at a time when AI has become the hottest battleground in tech, raising doubts about how seriously the industry will take responsibility for "the tools that we put out in the world."
This brings us back to Zuckerberg.
Facebook parent Meta is not considered a significant player in the battle over AI. The company is trying to change that with a controversial, and some say dangerous, strategy. In February, the company announced that it was releasing its AI technology as open-source software, effectively allowing third parties to create their own AI tools.
Stanford researcher Moussa Doumbouya, in a private chat reported by the New York Times, compared what Meta did to making "a grenade available to everyone in a grocery store."
In a way, Altman was guilty of a similar kind of recklessness, Siegel suggested.
While the OpenAI CEO wowed the senators with pronouncements about taking responsibility and the need to prepare for the risks posed by AI, "he did this AFTER releasing the technology into the wild, which seems a bit odd," Siegel told The Examiner.
"If he really cared about safety, etc., wouldn't he have been more prudent?"