Can We Align LLMs to Honesty via Instruction Fine-Tuning? Addressing Hallucination in Large Language Models with Refusal-Aware Instruction Tuning
Researchers from the Hong Kong University of Science and Technology and the University of Illinois Urbana-Champaign ...