OpenAI to address AI ‘hallucinations’ risk

OpenAI, the artificial intelligence (AI) research and development company behind the ChatGPT chatbot, has recently been working on ways to reduce AI “hallucinations”: responses that sound plausible but are not supported by the model’s training data. OpenAI regards mitigating these hallucinations as a crucial step towards building aligned AGI (artificial general intelligence) and improving models’ ability to solve reasoning problems.

ChatGPT has produced inaccurate information on several occasions, such as stating that the Mona Lisa was created in 1815 rather than between 1503 and 1506. In another incident, the chatbot falsely accused a law professor of sexual harassment, citing a fabricated news article. To reduce these kinds of errors, OpenAI has developed an approach called “process supervision”, which trains AI models by rewarding each individually correct step of reasoning rather than only a correct final answer.

OpenAI compared this approach with outcome supervision, which provides feedback based only on the final result. The company found that process supervision is the more effective of the two, leading to significantly better performance. The method is also more likely to produce interpretable reasoning, since it encourages the model to follow a human-approved chain of reasoning. Even so, OpenAI warns users not to blindly trust ChatGPT, as it may not always provide accurate information.
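The difference between the two reward schemes can be illustrated with a toy sketch. The code below is an illustrative assumption, not OpenAI's actual training setup: outcome supervision scores only the final answer, while process supervision scores each reasoning step, so a solution that reaches the right answer through a flawed step is still penalized.

```python
# Toy sketch of outcome vs. process supervision scoring.
# All function names, the step checker, and the scores are illustrative
# assumptions for exposition -- not OpenAI's actual reward models.

def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: a single reward based only on the final result."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps, step_is_valid):
    """Process supervision: reward each individually correct reasoning step."""
    rewards = [1.0 if step_is_valid(step) else 0.0 for step in steps]
    return sum(rewards) / len(rewards)  # average per-step reward

def step_is_valid(step):
    """Hypothetical checker: both sides of '=' must evaluate equally."""
    lhs, rhs = step.split("=", 1)
    return eval(lhs) == eval(rhs)

# A chain of reasoning for 17 + 25 that reaches the right answer (42)
# but contains one arithmetically wrong intermediate step.
steps = [
    "17 + 25 == 17 + 20 + 5".replace("==", "="),  # valid: 42 = 42
    "17 + 20 = 38",                               # invalid: 37 != 38
    "38 + 4 = 42",                                # valid: 42 = 42
]

print(outcome_reward(42, 42))                # full reward: answer is right
print(process_reward(steps, step_is_valid))  # penalized: one bad step
```

Outcome supervision gives this solution a perfect score, while process supervision credits only two of the three steps, which is why the latter pushes the model toward reasoning a human would endorse, not just toward correct-looking answers.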

OpenAI has released a research paper along with the dataset of 800,000 human labels it used to train the supervision model. The paper does not appear to have been peer-reviewed, however, so the findings should be considered preliminary.