Extrinsic Hallucinations in LLMs
This technical article examines the problem of hallucinations in large language models (LLMs), focusing on 'extrinsic hallucinations': outputs that are not grounded in the model's pre-training data (its world knowledge). It analyzes root causes, including pre-training data quality, and cites recent research showing that fine-tuning on new knowledge can increase a model's tendency to hallucinate.
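To make the definition concrete, here is a minimal, hypothetical sketch (not from the article) of flagging candidate extrinsic hallucinations: each sentence of a model response is checked for support against reference passages. Simple token overlap stands in for the retrieval-plus-entailment verifier a real pipeline would use, and all function names and the threshold are illustrative assumptions.

```python
# Illustrative sketch only: token overlap is a crude stand-in for a real
# grounding verifier (e.g., retrieval + an NLI/entailment model).
import re


def _tokens(text: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def support_score(claim: str, passages: list[str]) -> float:
    """Fraction of the claim's tokens found in the best-matching passage."""
    claim_toks = _tokens(claim)
    if not claim_toks or not passages:
        return 0.0
    return max(len(claim_toks & _tokens(p)) / len(claim_toks) for p in passages)


def flag_unsupported(response: str, passages: list[str], threshold: float = 0.6):
    """Return (sentence, score) pairs whose support falls below `threshold`,
    i.e., candidates for extrinsic hallucination."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]
    flagged = []
    for sentence in sentences:
        score = support_score(sentence, passages)
        if score < threshold:
            flagged.append((sentence, score))
    return flagged


if __name__ == "__main__":
    refs = ["The Eiffel Tower was completed in 1889 in Paris."]
    answer = ("The Eiffel Tower was completed in 1889. "
              "It was designed by Leonardo da Vinci.")
    for sentence, score in flag_unsupported(answer, refs):
        print(f"unsupported ({score:.2f}): {sentence}")
```

Run against the toy references above, the grounded first sentence passes while the fabricated attribution is flagged; a production verifier would replace the overlap heuristic with a learned entailment judgment.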