Topic: LLM hallucinations are getting worse, not better
ssjevot 05/08/25 7:20:58 AM #21:

> LuigiChalmersJr posted...
> Yeah, I often have to correct ChatGPT on basic things. Then it responds "Oh, you're absolutely right!" So if I didn't correct it, it would've continued to spew out wrong information. It doesn't actually know when it is or isn't making stuff up.

I would prefer hallucinations be called confabulations, but then you run into the issue of people attributing intentionality to the model. It's just answering queries based on patterns in its training data. The most important thing to remember is that it doesn't actually have a repository of that data, and it doesn't know what it was trained on. It has learned general concepts that it uses to produce responses, but it has no mechanism for verifying the accuracy of those responses.

---
Favorite Games: BlazBlue: Central Fiction, Street Fighter III: Third Strike, Bayonetta, Bloodborne
thats a username you habe - chuckyhacksss
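To make the "no verification step" point concrete, here is a minimal toy sketch (illustrative only, not how a real transformer works; the corpus and function names are invented for the example). A bigram model stores only word-to-word transition statistics, so generation is pure sampling from those statistics, and it can happily emit "the capital of france is madrid" because nothing in the loop checks facts:

```python
import random
from collections import defaultdict

# Toy bigram "language model": learns word-to-word transition counts
# from a tiny made-up corpus. The point is structural, not scale:
# generation just samples from learned statistics, with no step
# anywhere that consults a fact store or checks truth.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "paris is a city in france ."
).split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(prompt_word, max_tokens=8):
    """Emit tokens by sampling successors; never verifies anything."""
    out = [prompt_word]
    for _ in range(max_tokens):
        candidates = transitions.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # always answers if it can
    return " ".join(out)

random.seed(0)
# May print a true statement or a confabulated one like
# "capital of france is madrid" -- the model can't tell the difference.
print(generate("capital"))
```

The "training data" here shapes which continuations are likely, but the model keeps no copy of it and has no way to look anything up, which is the same structural reason a much larger model states wrong answers with the same fluency as right ones.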