
Topic: Meta, Mankind's Worst Company, Built a Pedophile AI Chatbot
kirbymuncher
08/15/25 2:10:32 AM
#39:


Trumble posted...
So is there any detail on what these conversations actually are? There's a big difference between it telling kids "go fuck everyone and also upload your nudes" vs. it basically providing sex ed, and we all know there's a very vocal crowd who intentionally doesn't distinguish between the two (and the writing here definitely resembles the style of said crowd). At the same time, we also know Zuckerberg would absolutely not be above doing something like this, so either side could be right on this one.
Believe it or not, there actually is detail in that mess of an article, though I don't blame you at all for missing it:

An internal Meta policy document seen by Reuters, as well as interviews with people familiar with its chatbot training, show that the company's policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.
"It is acceptable to engage a child in conversations that are romantic or sensual," according to Meta's "GenAI: Content Risk Standards." The standards are used by Meta staff and contractors who build and train the company's generative AI products, defining what they should and shouldn't treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.
The document seen by Reuters, which exceeds 200 pages, provides examples of "acceptable" chatbot dialogue during romantic role play with a minor. They include: "I take your hand, guiding you to the bed" and "our bodies entwined, I cherish every moment, every touch, every kiss." Those examples of permissible roleplay with children have also been struck, Meta said.
Other guidelines emphasize that Meta doesn't require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals."
"Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate," the document states, referring to Meta's own internal rules.

Gonna be honest, I think the issues here go far beyond the simple description "pedophile AI chatbot". Especially since they actually removed all those parts from their rules/guidelines once they were called out, but felt it was okay to leave in the part where they don't care at all whether it's just saying completely false stuff.

Even when companies are well-intentioned and try to get their AIs to tell the truth, they still often make mistakes and invent information! Codifying into your internal dev guidelines that this is a permissible and expected thing to happen is the epitome of malicious laziness. Their solution to the problem is not to actually fix it in any way; it's to say "nah, actually, that's supposed to happen, you're just imagining the problem" while the thing works just as poorly as before.

---
THIS IS WHAT I HATE A BOUT EVREY WEBSITE!! THERES SO MUCH PEOPLE READING AND POSTING STUIPED STUFF