LogFAQs > #984994185

Topic List
Page List: 1
Topic: LLM hallucinations are getting worse, not better
ssjevot
05/08/25 7:20:58 AM
#21:


LuigiChalmersJr posted...
Yeah, I often have to correct ChatGPT on basic things. Then it responds "Oh, you're absolutely right!" So if I didn't correct it, it would've continued to spew out wrong information.

It's troublesome for sure. I can't really trust it with topics I'm not familiar with.

It doesn't actually know when it is or isn't making stuff up. I'd prefer hallucinations be called confabulations, though that term risks people attributing intentionality to the model. It's just answering queries based on patterns learned from its training data. And the most important thing to remember is that it doesn't have a repository of that data or know what it was trained on. It has learned general concepts that it uses to produce responses, but it has no way to verify the accuracy of those responses.
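A toy sketch of that last point (nothing like a real transformer, purely illustrative): after "training" a bigram model on a tiny corpus, only co-occurrence statistics survive. The original sentences are gone, so generation is just sampling from learned patterns, with nothing to look anything up in or check output against.

```python
import random

# Tiny "training corpus" -- after training, the model never sees it again.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": keep only next-word statistics, not the text itself.
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation purely from the learned statistics.

    There is no step here that could verify the output -- the model
    only has counts, not the documents they came from.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 5))
```

The output is fluent-looking word salad that the "model" cannot distinguish from a true statement, which is the confabulation problem in miniature.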

---
Favorite Games: BlazBlue: Central Fiction, Street Fighter III: Third Strike, Bayonetta, Bloodborne
thats a username you habe - chuckyhacksss