When MelCNN Hallucinates: Why Clean Spectrograms Still Produce Wrong Sounds
How MelCNN learned “ghost sounds” from mislabeled silence

Introduction

Hallucination is usually discussed in the context of large language models: confident answers to questions that were never asked, or facts that never existed. But hallucination isn’t exclusive to text models. In audio systems, especially CNN‑based classifiers like MelCNN, it shows up as confident predictions of sounds that were never in the input.