Can Chainlink Solve the AI Hallucination Problem?


Chainlink has announced a new approach to tackling the problem of AI hallucinations, where large language models (LLMs) generate incorrect or misleading information.

Laurence Moroney, Chainlink Advisor and former AI Lead at Google, explained how Chainlink reduces AI errors by using multiple AI models instead of relying on just one.

Chainlink's approach to tackling AI hallucinations. Source: X

Chainlink needed AI to analyze corporate actions and convert them into a structured, machine-readable format: JSON.

Instead of trusting a single AI model's response, they used multiple large language models (LLMs) and gave them different prompts to process the same information. For this, Chainlink uses AI models from providers such as OpenAI, Google, and Anthropic.

The AI models generated independent responses, which were then compared. If all or most of the models produced the same result, it was considered more reliable. This process reduces the risk of relying on a single, possibly flawed AI-generated output.
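To make the idea concrete, here is a minimal sketch of this multi-model consensus step. It is not Chainlink's actual implementation: the model callables are stubs standing in for real provider SDK calls, and the function and parameter names (canonicalize, consensus, quorum) are illustrative assumptions. The key detail is that each response is parsed and re-serialized as canonical JSON, so answers that differ only in key order still count as agreement.

```python
import json
from collections import Counter
from typing import Callable

# A "model" is anything that maps a prompt string to a raw text response.
# Real callables would wrap the OpenAI, Google, or Anthropic SDKs.
ModelFn = Callable[[str], str]

def canonicalize(raw: str) -> str | None:
    """Parse a response as JSON and re-serialize it with sorted keys,
    so semantically identical answers compare equal."""
    try:
        return json.dumps(json.loads(raw), sort_keys=True, separators=(",", ":"))
    except json.JSONDecodeError:
        return None  # unparseable output never counts toward consensus

def consensus(models: dict[str, ModelFn], prompts: dict[str, str],
              quorum: int) -> str | None:
    """Ask each model its own prompt, then return the canonical JSON answer
    that at least `quorum` models agree on, or None if no quorum is reached."""
    answers = [canonicalize(fn(prompts[name])) for name, fn in models.items()]
    tally = Counter(a for a in answers if a is not None)
    if tally:
        answer, votes = tally.most_common(1)[0]
        if votes >= quorum:
            return answer
    return None

# Usage with stub models: two of three agree, so a quorum of 2 is met.
models = {
    "model_a": lambda _: '{"action": "dividend", "amount": 0.25}',
    "model_b": lambda _: '{"amount": 0.25, "action": "dividend"}',
    "model_c": lambda _: '{"action": "split", "ratio": 2}',
}
prompts = {name: "Extract the corporate action as JSON." for name in models}
print(consensus(models, prompts, quorum=2))  # {"action":"dividend","amount":0.25}
```

Note that model_a and model_b return the same fields in different order; canonicalization is what lets them vote as one answer, while model_c's divergent output is simply outvoted.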

Once consensus is reached, the validated information is recorded on the blockchain, ensuring transparency, security, and immutability.
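The article does not specify what exactly is written on-chain. One common pattern, shown here purely as an assumption rather than Chainlink's documented mechanism, is to anchor the agreed payload by its hash: the full JSON lives off-chain while a contract stores only the digest, making any later tampering detectable.

```python
import hashlib

def payload_digest(canonical_json: str) -> str:
    """SHA-256 digest of the consensus payload. Storing only this hash
    on-chain lets anyone verify an off-chain record was not altered."""
    return hashlib.sha256(canonical_json.encode("utf-8")).hexdigest()

print(payload_digest('{"action":"dividend","amount":0.25}'))
```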

This approach was successfully tested in a collaborative project with UBS, Franklin Templeton, Wellington Management, Vontobel, Sygnum Bank, and other financial institutions, demonstrating its potential to reduce errors in financial data processing.

By combining AI with blockchain, Chainlink's method enhances the reliability of AI-generated information in finance, setting a precedent for improving data accuracy in other industries as well.

Also Read: Aptos Adopts Chainlink Standard for Secure Data Feeds


