In a new video, Chainlink co-founder Sergey Nazarov outlined how Chainlink’s decentralized oracle networks (DONs) can interact with artificial intelligence (AI) models in ways that mitigate risks such as hallucinations – incorrect or misleading outputs caused by factors such as faulty assumptions, poor training data, and information biases.
As the industry-leading decentralized computing platform, Chainlink provides smart contracts with cryptographically verifiable data, computation, and cross-chain capabilities that maintain the integrity of a blockchain’s deterministically guaranteed environment. Originally conceived as a DON for aggregating different external market data sources to power decentralized finance (DeFi), Chainlink has now enabled over $12 trillion in transaction value and expanded to various other types of data, such as weather data for insurance smart contracts.
Nazarov regards Chainlink as a secure and reliable interface for connecting smart contracts to virtually any external data source, including AI. He outlined two primary ways Chainlink can interact with AI models.
“One way is to interface with individual AI models as a single source of inputs into smart contracts,” he explained. “For this to work well, I think you need to put a lot of faith in that single AI model and to consider that single AI model a very reliable source of inputs and information, similarly to how you would consider a single data source as a reliable source of inputs and information.”
If an AI model is reliable enough to control value with minimal hallucinations, Chainlink can enable that AI model to control smart contracts on multiple different chains. However, if the risks of a single point of failure are too great, Chainlink can be used to aggregate the results of multiple AI models as it would aggregate market data.
“You can actually connect Chainlink to multiple individual AI models and you can aggregate their responses within a Chainlink oracle network to get them to align or agree to a certain threshold that you feel creates a greater degree of reliability,” Nazarov said. “So you’re not getting an input from a single AI model; you’re actually getting data and input from multiple AI models – let’s say five or seven or more AI models – and now you’re getting the answer of the universe of AI models – not just a single one.”
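To make the aggregation idea concrete, below is a minimal sketch (not Chainlink code) of threshold-based agreement across several AI models, in the spirit of how a DON aggregates market data. The `queryModel` stub, the model names, and the prompt are hypothetical placeholders; a real deployment would have each oracle node query its own model endpoint and reach consensus on-chain.

```typescript
// Hypothetical illustration: gather answers from several AI models and
// accept a result only when enough of them agree on it.

type ModelResponse = { model: string; answer: string };

// Placeholder for querying one AI model. In practice each oracle node
// would call its own model's API here.
async function queryModel(model: string, prompt: string): Promise<ModelResponse> {
  return { model, answer: "42" }; // stubbed answer for the sketch
}

// Return the answer only if at least `threshold` models agree on it;
// otherwise report that no consensus was reached.
function aggregate(responses: ModelResponse[], threshold: number): string | null {
  const counts = new Map<string, number>();
  for (const r of responses) {
    counts.set(r.answer, (counts.get(r.answer) ?? 0) + 1);
  }
  for (const [answer, count] of counts) {
    if (count >= threshold) return answer;
  }
  return null; // models did not align to the required degree
}

async function main() {
  const models = ["model-a", "model-b", "model-c", "model-d", "model-e"];
  const prompt = "What is the settlement price of asset X?";
  const responses = await Promise.all(models.map((m) => queryModel(m, prompt)));

  // Require 4 of 5 models to agree before a value would reach a smart contract.
  const result = aggregate(responses, 4);
  console.log(result ?? "No consensus; withhold the update.");
}

main();
```

The threshold is the tunable knob Nazarov describes: raising it demands stronger agreement across the “universe” of models before any value flows into a smart contract, trading responsiveness for reliability.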
Watch the full video.