Smart contracts are transforming the way people around the world form agreements with each other and with institutions like banks and insurance providers. Hybrid smart contracts are already democratizing economic opportunity by allowing people to control their assets and protect them from fraud and manipulation.
Smart contracts can improve human relationships by reducing ambiguity and asymmetries of power and information. But what about relationships between people and artificial intelligence (AI) systems, or relationships exclusively between AI systems?
AI researcher Lex Fridman and Chainlink co-founder Sergey Nazarov explored this question on a recent episode of the Lex Fridman Podcast. The conversation ran just shy of three hours, leaving ample time to examine decentralization, smart contracts, and pop-culture tangents like simulation theory.
Fridman asked Nazarov, “What do you think about a world of hybrid smart contracts codifying agreements between hybrid intelligent being networks of humans and AI systems?”
“Everybody saw the Terminator movie in the 90s and it was like, ‘This is really scary,’” Nazarov said. But he proposed a different scenario, one in which blockchains make it possible for society to advance with AI while avoiding Judgment Day entirely.
As AI becomes more sophisticated and more widely deployed, people will naturally come to distrust it. Nazarov said he views AI through the lens of someone working in “the world of trust issues,” developing solutions with cryptographically guaranteed systems and decentralized infrastructure.
“The way that trust issue would be solved with blockchains is actually very straightforward and, I think, in its simplicity, quite powerful,” Nazarov said.
If humanity wants to reap the benefits of AI without giving it free rein, blockchains and private keys would make it possible to impose strict limits on what an AI can do, provided that encryption holds and the AI is not purpose-built to break it. “If you bake in these blockchain-based limitations, you can create the conditions beyond which an AI could never act,” Nazarov said.
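To make the mechanism concrete, here is a minimal Python sketch of a baked-in limitation of the kind Nazarov describes. Everything in it (the `ConstrainedWallet` class, its `daily_cap`) is hypothetical and stands in for on-chain contract logic rather than any real blockchain or Chainlink API; the point it illustrates is that the limit is enforced by the contract and the keys that control it, not by the AI’s own restraint.

```python
# Hypothetical sketch: the "contract" enforces a hard cap an AI agent
# cannot exceed, no matter how capable its planner is. This stands in
# for on-chain logic; it is not a real blockchain or Chainlink API.

class ConstrainedWallet:
    """Models a spending limit baked into a smart contract."""

    def __init__(self, daily_cap: int):
        # Fixed at deployment; the agent holds no key that can change it.
        self.daily_cap = daily_cap
        self.spent_today = 0

    def transfer(self, amount: int) -> bool:
        """Succeeds only while the running total stays within the cap."""
        if amount <= 0 or self.spent_today + amount > self.daily_cap:
            return False  # rejected by the contract, not by the AI's judgment
        self.spent_today += amount
        return True


wallet = ConstrainedWallet(daily_cap=100)
print(wallet.transfer(60))  # True: within the limit
print(wallet.transfer(60))  # False: would breach the cap, so it never executes
```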
Fridman clarified: “So smart contracts actually provide a mechanism for human supervision of AI systems.”
The common fear is that AI will quickly surpass human intelligence, but Nazarov framed the issue around encryption instead. “It’s not about ‘Is it smarter than us?’ It’s about ‘Will the encryption hold up?’”
“Cracking encryption is very difficult. I think we’re on safe ground for quite a long time, assuming encryption holds,” Fridman said. He steered the conversation toward program synthesis and the potential for AI to generate smart contracts. “That, to me, is kind of fascinating to think of, especially two AI systems between each other generating contracts.”
“I think the highly deterministic and guaranteed nature of smart contracts would probably be preferable to an AI because I’m guessing that an AI would have a lot of problems dealing with the human element of how contracts work today,” Nazarov said. He pointed to the nuanced social cues surrounding financial instruments like derivatives as an example of the kind of interpretation AI handles poorly.
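To see why that determinism appeals, consider a hypothetical settlement function for a simple derivative: given the same oracle-reported price, any two counterparties, human or AI, compute the identical payout, with no social cues left to interpret. The function name and parameters below are invented for this sketch.

```python
# Hypothetical sketch of a deterministic contract term: a call-option
# settlement computed purely from an oracle-reported price.

def option_payout(strike: float, oracle_price: float, notional: float) -> float:
    """Same inputs always yield the same settlement amount."""
    return max(oracle_price - strike, 0.0) * notional


# Either party, human or AI, derives exactly the same number.
print(option_payout(strike=50.0, oracle_price=57.5, notional=10.0))  # 75.0
```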
Fridman agreed. “AI definitely dislikes ambiguity and would prefer the deterministic nature of smart contracts.”
Listen to episode #181 of the Lex Fridman Podcast with Sergey Nazarov.