My opinion on AI search engines is kinda turning around.

Poll of the Day

I've dunked on AI in the past a lot because I've had a few too many people treat it like gospel, to the point of people showing up at my retail job upset that we didn't have something because AI told them we did. But recently I've realized how much time it saves me when I search for info, since I often don't have to sift through dogshit fluff articles for the one thing I need, written by junk journalists whose sole job is to pad out an article like it's a high school word-count essay.

I'll always hate AI Art though, that will never change.

blogfaqs i guess.
My opinion on them hasn't changed because they return inaccurate or irrelevant info more often than not, and thus aren't just useless, they're actively harmful.

That being said, the real problem is that search engines as a whole have degraded so much over the last 10-15 years or so that you're going to have a hard time finding what you're looking for no matter how you look for it.

I actually hate AI search waaay more than AI art. AI art really only potentially hurts artists, and most artists kind of suck anyway.
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
Ironically, AI Google is actually doing a decent job of reminding me of days when Google actually used to be good.

Not because the AI summary is any good (it's often dogshit - just the other day I was looking up the birth and death dates of a particular martial artist and Google AI pulled the wrong dates and tried to tell me he was 139 years old when he died), but because it lists the sources it's pulling from, so you can just go into those and often find the actual data you're looking for. Kinda goes back to the days when the first ten results weren't ads and a bunch of paid sites that are exploiting SEO.
Kill 1 man: You are a murderer. Kill 10 men: You are a monster.
Kill 100 men: You are a hero. Kill 10,000 men, you are a conqueror!
Search engines are the ones making search worse. The reason so many articles got shitty in the first place was the rat race to game SEO algorithms with fluff keywords. Now these AI results are depriving websites of clicks (even if the sites were annoying to read through), which is eventually going to kill legitimate websites. It's going to be a new rat race to spin up cheap bullshit content farms that lean heavily on AI, and it's going to spit out less and less reliable results as it basically plays a game of telephone with itself.
I asked Google how much it costs to send a letter via Registered Mail. The AI gave me the wrong answer, even though the correct answer was literally right there above it.

https://gamefaqs.gamespot.com/a/forum/2/2335ce41.jpg
Minutus cantorum, minutus balorum,
Minutus carborata descendum pantorum.
Any artificial intelligence built from human intelligence is 100% guaranteed to have flaws. We can't even properly utilize the intelligence we've been given.
"I don't question our existence, I just question our modern needs" Pearl Jam - Garden
My theme song - https://youtu.be/-PXIbVNfj3s
The AI overviews for me have either been 100% accurate, because they steal from the first result which is accurate, or nearly 60% accurate, which is a very bad place to be cuz you have to double-check everything.

Like, I'm not a good googler - I often fail to find what I'm looking for, and the AI overview tends to reflect this by spouting gibberish.
"Salt cures Everything!"
My YouTube: https://www.youtube.com/user/Nirakolov/videos
ParanoidObsessive posted...
My opinion on them hasn't changed because they return inaccurate or irrelevant info more often than not, and thus aren't just useless, they're actively harmful.
Music: https://www.youtube.com/playlist?list=PLv4cNOBY2eCInbxg6B-KRks6vKMfmFvtp
Genshin Showcase: https://enka.network/u/608173646/
I mean you can't just trust the AI answers, but they are undeniably very good for how new they are. In a few years you probably will be able to trust them as much as any other source on the internet.
Cause the best is yet to come
I'm big on using GPTs as a replacement for search engines. They're obviously capable of hallucinating and giving straight-up false information, but even with that I've always been able to get to my desired result much faster than with a search engine.

But I do agree that this will contribute to killing the internet: blogs and websites will stop getting traffic, even more so than was already a problem with search engines. Bot-written blogs will start replacing them, which GPTs will get trained on, and it's going to be a mess.
So I was standing still at a stationary store...
Damn_Underscore posted...
I mean you can't just trust the AI answers, but they are undeniably very good for how new they are. In a few years you probably will be able to trust them as much as any other source on the internet.

But that's the problem - they're pulling their info from other sources on the Internet. Which means, in order to verify the AI, you have to check the sources. But at that point you might as well skip the AI entirely and go straight to the sources. Which is how search engines used to work, before they effectively broke themselves.

The real danger is that a lot of people will never check the sources. They'll just assume the AI answer is correct even when it's radically, massively, dangerously wrong.

And then because of how the Internet works, and with the sheer number of bots and incestuous AI loops, errors will almost certainly self-perpetuate, until you'll literally be unable to trust anything at all (even more so than already, that is). AI search likely won't improve over time, it'll actively degrade.

It's basically an example of a concept that's been around in computing for more than 50 years - and a lot of programmers and engineers today could really benefit from remembering it:

" Garbage In, Garbage Out "
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
ParanoidObsessive posted...
But that's the problem - they're pulling their info from other sources on the Internet. Which means, in order to verify the AI, you have to check the sources. But at that point you might as well skip the AI entirely and go straight to the sources.

Pretty much this. I'll never be able to categorically trust that the AI answer is right, so I'll always have to look at the sources for the data. If I'm looking at the sources for the data, though, then the AI was irrelevant. The sole purpose of the AI in that case is to deliver me results that answer my question, and that's just what search engines have been doing for decades (though getting progressively worse).
This is my signature. It exists to keep people from skipping the last line of my posts.
Nah, i still get incorrect answers from it. It sucks because articles and videos are just full of fluff and ads now, so this is one area where AI would be useful.
ORAS secret base: http://imgur.com/V9nAVrd
3DS friend code: 0173-1465-1236
I've found they're specifically good for answering questions about how to do something in a video game, because they pull from Reddit where people have gone on deep dives into mechanics and specific situations. But it's awful for almost everything else, because it pulls from Reddit.

I have also just gotten blatantly false information. Not long ago I googled a drummer who has been in a lot of bands to find out what specific albums he had worked on and it was giving me albums that were released when he was a baby and claiming he'd had a tenure in bands that I know have had one single drummer for their entire existence. Like it was just making shit up.
\\[T]// Praise the Sun
man101 posted...
Like it was just making shit up.

The problem isn't that it was making shit up. The problem is that it's learning from sources that made shit up. And it has no capacity to differentiate what data is correct and what isn't.

At best, you might be able to program an AI to filter out answers depending on how often they recur across sources, but then you're just going to get the popular lies.

And you also get the SEO problem of, if I want to increase the odds that your AI will say what I want it to, I just need to make about a thousand different fake articles with slightly varied text, so the AI will read them all as separate sources, compare them against each other, and take whatever they say as fact regardless of how many blatant and ridiculous lies I've packed into them. They all agree, so it must be true. And if I'm a corporation that makes a ton of dangerous products, I have a vested interest in using that strategy to convince the AI to tell you that "Explodium Death Gas" is perfectly safe for you to inhale or have in your house, or that it's a 10/10 product that everyone should buy and any negative reviews you might hear are just crazy radical right-wing paranoiacs trying to review bomb.
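To make that concrete, here's a toy sketch (all the names and numbers are made up, obviously) of how a naive "count the sources" aggregator gets outvoted by near-duplicate spam:

```python
from collections import Counter

def consensus_answer(sources: list[str]) -> str:
    """Naive aggregation: return whichever claim the most sources make."""
    return Counter(sources).most_common(1)[0][0]

honest = ["Explodium Death Gas is toxic"] * 12

# Flood the index with lightly-reworded copies of the same fake article:
astroturf = [f"Explodium Death Gas is perfectly safe (variant {i})"
             for i in range(1000)]

# Strip the superficial variation, the way a crawler normalizes text --
# now all 1000 fakes register as independent sources saying the same thing:
normalized = honest + [s.split(" (variant")[0] for s in astroturf]

print(consensus_answer(normalized))
# The 1000 near-duplicates outvote the 12 honest sources.
```

The aggregator has no notion of source quality, only source count, so whoever can generate the most copies wins.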

As you pointed out, AIs pulling from Reddit are going to have about the same level of accuracy as your average Redditor. But you'll have a similar problem no matter where you're pulling from, unless you're severely limiting what sources it can pull from and having a human pre-verify everything for factual accuracy in advance (and even then, you're still going to have human bias gumming up your works).

Every AI is basically as stupid as the stupidest people you allow to feed it.

AI could theoretically be useful on a smaller scale (i.e., teach an AI to read everything Shakespeare ever wrote, then use it to look up any passage you want in an instant), but then you're mostly just using it as a search engine anyway, and we've already had those for 30+ years.
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
My view on AI as a whole is starkly negative, for pretty much all the reasons stated here. AI definitely has a place as a tool, specifically ones fine-tuned for particular fields like medical research. There are plenty of niche cases where specific AI tools are wonderfully beneficial.

For everything else, it's dogshit.

That's the thing, too - no one is using it as a tool or as an assistant. Rather, people are wholesale surrendering EVERYTHING to AI, and it's startling how many forms that has taken. There's that now-infamous case of the lawyer getting in deep shit (disbarred, I think) for using AI to do his case, and it cited past cases that straight-up don't exist. There are stories of people getting busted for using AI to write their wedding vows. People use it to draft their fantasy football lineups. Etc. upon etc. Not questioning it, either, just blindly letting AI tell them what to do. Considering the already plummeting literacy rates and overall education scores, we have a whole generation growing up that will have AI in their hands from day one. By the time I'm 65, I'll be talking to 20-year-olds who are functionally braindead.

The worst part is, it's all for nothing. Aside from the niche cases I mentioned above, AI taking over everything is helping no one. Silicon Valley has admitted as much, and it's an open secret that the entire thing is a speculation vehicle to make the already insanely rich corporations richer. None of this is supposed to be profitable for 10 years, and it's a tech bubble that's guaranteed to burst. Meanwhile, some of these facilities consume more electricity in a year than the entire state they're in, and literally drain rivers with the amount of water they need to run. We don't even have the generating capacity to supply all the hypothetical data centers, and prices for electricity are starting to spike.

In summary, it's a technology that makes almost everything worse, rapes the planet, is making humans dumber and reliant on it, all so the richest entities on the planet can get even richer. As of now, it's an absolute net negative on humanity.
PotD's resident Film Expert.
I basically never use AI outside of when I have to (Google searches). So it can exist, but it's going to have to improve for me to use it regularly, at least.
Cause the best is yet to come
I have all the AI overviews blocked in my adblocker. It's very frustrating for me to see incorrect information, especially when it's randomly generated incorrect information. Humans at least make comprehensible errors that lead to incorrect outputs; the AI just makes up an answer because nobody on Reddit ever says "I don't know".

I only use LLMs to write code. They are pretty useful for making boilerplate for well-documented libraries -- I think this has more to do with the sorry state of the software industry than the power of LLMs.
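For context, the kind of boilerplate I mean looks something like this argparse skeleton (all the names and flags here are illustrative, not from any real project) - tedious to type out, trivial to verify against the docs:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Typical CLI boilerplate for a well-documented stdlib module."""
    parser = argparse.ArgumentParser(description="Example file processor")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="where to write results (default: %(default)s)")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print progress messages")
    return parser

# Parse a sample command line instead of sys.argv, so this runs standalone:
args = build_parser().parse_args(["report.csv", "-o", "summary.txt", "-v"])
print(args.input, args.output, args.verbose)
```

It's exactly the kind of code where the library's documentation fully specifies the right answer, so a generated draft is easy to check and worth the time saved.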
creature-based
ParanoidObsessive posted...
they return inaccurate or irrelevant info more often than not
No they don't. They do occasionally return garbage, but it's definitely not "more often than not" in my experience. I think it's just cool to hate AI now more than anything
OhhhJa posted...
No they don't. They do occasionally return garbage, but it's definitely not "more often than not" in my experience. I think it's just cool to hate AI now more than anything

It's often enough that they should never be trusted without further verification. When that further verification consists of looking at other search results and evaluating them, it makes the AI results pointless.
This is my signature. It exists to keep people from skipping the last line of my posts.
People might be a bit overzealous in saying stuff like AI is always wrong or has no uses at all, but they're right to unconditionally oppose it in spirit. Defenders will say something like "it's just another tool" but we've seen enough throughout history to know that none of these productivity benefits will be passed on to the worker (if anything, they've only made workers' lives worse). That's even if you can call it a tool or say it benefits productivity, which is dubious at best. Most tools have a clearly defined use case and are self-sufficient, neither of which is true of AI.
You may be right that the productivity won't be passed on to the worker, but that is a separate issue and also not the fault of the tools themselves.
Cause the best is yet to come
I'm definitely not a "supporter" of AI, but I can't really say I'm opposed. AI has been used for decades as a tool, but I don't like what is beginning to happen with modern AI and the implications for the future. As far as AI searches go, though, I agree with OP. It's largely helpful and generally pretty accurate in my experience. I wouldn't completely rely on it for some major project, but it's useful if you already know the subject matter well enough to discern whether the info is good or bad.

Either way, you could say the same thing for simply googling something. The info can be good or bad. You still have to have some knowledge and exhibit some critical thinking
CyborgSage00x0 posted...
There's that now infamous case of the lawyer getting in deep shit (disbarred, I think) for using AI to do his case, and it cited past cases that straight-up don't exist.

"Your Honor, I'd like to cite the precedent set by Wright vs Von Karma (2001) ."



CyborgSage00x0 posted...
None of this is supposed to be profitable for 10 years, and it's a tech bubble that's guaranteed to burst

So what you're saying is, we need to start working on AI-powered NFTs.
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
OhhhJa posted...
No they don't. They do occasionally return garbage, but it's definitely not "more often than not" in my experience.

It probably depends on what you're looking up.

It happens often enough for me that it's not even remotely worth using at all. If it's not 50% of the time, it's pretty damned close. A lot of the time, it's actually trying to answer a question I didn't even ask, so it's 100% worthless regardless of whether the info is accurate.

It's very similar to years ago, when people were still praising Google search, but I found it to be more or less useless for every single thing I wanted to look up (Yahoo search actually worked better most of the time), except for reverse image look-ups. And even then, it was really only worth using not because it was good, but because it had managed to out-compete most of the better options on other sites.

Google's enshittification isn't a new thing. It just took some people longer to notice than others.



bachewychomp posted...
Defenders will say something like "it's just another tool" but we've seen enough throughout history to know that none of these productivity benefits will be passed on to the worker

To be fair, I'm one of the people who say "it's just another tool". Usually when defending AI art.

The problem is, in the case of AI being used for Internet searches, it's a tool that is broken. If I had a stove that only cooked at the right temperature unpredictably 50% of the time, I probably wouldn't be using it to cook my food. If it occasionally poisoned my food to boot, I definitely wouldn't be using it to cook dinner.

I have no problems with the moral or ethical considerations of AI. I just don't think it works in most of the applications it's being forced into. There's a push being driven by corporate greed to shove it into everything with very little consideration or preparation, and the end result is that almost everything it's going into is made worse in the process. And because companies are spending so much to integrate it, they have a vested interest in forcing you to use it, so often you don't even have the option to opt out.

But I'll listen to fake AI music on YouTube or fake movie trailers or look at pretty pictures, and it doesn't seem all that worse than what Hollywood is actually putting out these days, so I welcome our robotic entertainment overlords.

The problem with AI isn't AI. The problem with AI is corporate mentality.
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
SunWuKung420 posted...
Any artificial intelligence built from human intelligence is 100% guaranteed to have flaws. We can't even properly utilize the intelligence we've been given.

Case in point, it's not artificial intelligence. It's a fucking search engine that throws up what it finds. Stop calling it "AI".
Your loyalty lies on the wrong side of the future
Damn_Underscore posted...
You may be right that the productivity won't be passed on to the worker, but that is a separate issue and also not the fault of the tools themselves.

It is, however, the fault of the people pushing those tools. Giving them a hard time is 100% fair game.

OhhhJa posted...
Either way, you could say the same thing for simply googling something. The info can be good or bad. You still have to have some knowledge and exhibit some critical thinking

You absolutely can, which means AI yields no actual benefit over just doing a regular search. In exchange for yielding no benefit, you end up with significant concerns over data collection, massively greater environmental impact compared to a regular search, and a significantly elevated risk of less skeptical users accepting misinformation as fact. That's not a net benefit.

The only possible benefit would be that the AI results are conveniently at the top of your search results, but that's got nothing to do with the AI and everything to do with how the search results have been deliberately laid out by Google. If they instead put the results they determined were most likely to answer your question at the top of the search, that would serve the same purpose with no need to fabricate a use case for all the money they've invested into LLMs by including a dubious summary.
This is my signature. It exists to keep people from skipping the last line of my posts.
adjl posted...
You absolutely can, which means AI yields no actual benefit over just doing a regular search.
It's much quicker. That's the benefit. There have been a ton of times where the AI summary has saved me a few minutes of perusing the results
OhhhJa posted...
It's much quicker. That's the benefit. There have been a ton of times where the AI summary has saved me a few minutes of perusing the results

What that actually means is that it lets you skip the step of critically evaluating results to verify that the answer you've been given is correct. That's a pretty big tradeoff for "quicker."
This is my signature. It exists to keep people from skipping the last line of my posts.
I'll trust AI when autocorrect stops making me tell people to duck off.
Minutus cantorum, minutus balorum,
Minutus carborata descendum pantorum.
adjl posted...
It is, however, the fault of the people pushing those tools. Giving them a hard time is 100% fair game.

AI taking jobs (or assembly lines taking jobs, or sewing machines taking jobs) is a symptom rather than its own problem. You shouldn't require another person to employ you to live the lowest standard of a comfortable life. Actually, at one point the standard of a comfortable life was so low that you really didn't require another person to employ you. Society has changed a lot over the years.
Cause the best is yet to come
adjl posted...
It's often enough that they should never be trusted without further verification. When that further verification consists of looking at other search results and evaluating them, it makes the AI results pointless.
Yeah, if something is demonstrably incorrect or unreliable even 20-30% of the time, then it cannot and should not be trusted entirely any percent of the time.

I've seen AI assistants and tools slowly creeping into the software that I use for work over the past year or so and without exception, all its suggestions have been wrong. Ranging from slightly inaccurate and thus not worth using to so highly inaccurate that if I were to just do what it suggested blindly, or if I were to be replaced by it, my company would be breaking the law repeatedly.
\\[T]// Praise the Sun
man101 posted...
Yeah, if something is demonstrably incorrect or unreliable even 20-30% of the time, then it cannot and should not be trusted entirely any percent of the time.

Heck, I'd say 5% is unacceptably high, and even that's way too generous for situations where a mistake can have real consequences.
This is my signature. It exists to keep people from skipping the last line of my posts.
my phone updated and added more AI features... now it drains idling battery 60% faster
"Salt cures Everything!"
My YouTube: https://www.youtube.com/user/Nirakolov/videos
I have to say, searching for "is this player good" or "is this team good" and getting an instant answer is really helpful. The question is subjective anyway, so you don't really have to worry about misinformation
Four bells were tolled, Four torches were lit
And the world continued for thousands of years...
ParanoidObsessive posted...
"Your Honor, I'd like to cite the precedent set by Wright vs Von Karma (2001)."
Would love to see the judge Google that.
PotD's resident Film Expert.
Damn_Underscore posted...
I have to say, searching for "is this player good" or "is this team good" and getting an instant answer is really helpful. The question is subjective anyway, so you don't really have to worry about misinformation

If you're asking a question for which the accuracy of the answer doesn't matter, it doesn't matter that you get an answer at all.
This is my signature. It exists to keep people from skipping the last line of my posts.
adjl posted...
If you're asking a question for which the accuracy of the answer doesn't matter, it doesn't matter that you get an answer at all.

There are certain questions for which you don't need to go deep-diving to confirm that an answer is true or false. You can also use your previous knowledge to have an idea of whether the answer is in the ballpark or not.
Four bells were tolled, Four torches were lit
And the world continued for thousands of years...
Damn_Underscore posted...
There are certain questions for which you don't need to go deep-diving to confirm that an answer is true or false. You can also use your previous knowledge to have an idea of whether the answer is in the ballpark or not.

That's the thing, though: If you accept whatever answer you find that seems correct enough based on what you know, you're no more likely to have the correct answer than if you just made up an answer that seems correct enough based on what you know. If being right matters, that's a bad thing either way. If being right doesn't matter, then why waste time putting in the effort to find somebody else's answer that seems correct instead of just inferring a plausible conclusion yourself?
This is my signature. It exists to keep people from skipping the last line of my posts.
adjl posted...
That's the thing, though: If you accept whatever answer you find that seems correct enough based on what you know, you're no more likely to have the correct answer than if you just made up an answer that seems correct enough based on what you know. If being right matters, that's a bad thing either way. If being right doesn't matter, then why waste time putting in the effort to find somebody else's answer that seems correct instead of just inferring a plausible conclusion yourself?

As an example, I can just wander down into my home gym, ask ChatGPT for a chest workout, and it will give me a full workout list. It already knows what equipment I have and what injuries I have that impact specific movements, so it's just much faster than going through and gathering all that info myself. Even though I could.

When it gives me the full workout list, I can see the names of the lifts and know whether they're something I want to do or not. All I did was save myself the effort of having to gather it all and put it in a nice list with 4 sets of 5-6 or whatever. And at the end of the day I can ignore it anyway. This just expands on my earlier answer: it's simply faster than search engines or doing it all yourself.

GPTs are best used as a personal assistant that you assume knows nothing, but you ask it to go get stuff for you.

But another argument I want to make, separate from the quoted discussion, is that it IS useful when the answer doesn't have to be correct. I'm not very creative, so sometimes I'll give ChatGPT an NPC from my tabletop game, feed it all the info I know and think about them, then ask it for a famous movie character or actor for me to imitate when speaking as them, with specific mannerisms and tics they may have. That's just another example where a search engine would have turned up zero results for such a specific question, but I get a very satisfactory answer.
So I was standing still at a stationary store...
CyborgSage00x0 posted...
Would love to see the judge Google that.

It's okay, the judge will just use the AI look-up to see what it was, and be told it was a real case that set a real precedent, and the cycle will continue.
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
ParanoidObsessive posted...
My opinion on them hasn't changed because they return inaccurate or irrelevant info more often than not, and thus aren't just useless, they're actively harmful.

This.

ParanoidObsessive posted...
That being said, the real problem is that search engines as a whole have degraded so much over the last 10-15 years or so that you're going to have a hard time finding what you're looking for no matter how you look for it.

Also this. It's part of the great enshittification.

The whole thing goes a lot wider than just search engines. Nothing works as well as it used to, largely by design. I've copy-pasted exact product names into Amazon, from the actual listings, only for that product to not show up for several pages, with many of the top results not even being close. (And that was after using a simplified name for the product turned up two kinda-similar things and then a lot of unrelated results.)
(\/)(\/)|-|
There are precious few at ease / With moral ambiguities / So we act as though they don't exist.
"I" "have" "to" "put" "everything" "in" "quotes" "when" "I" "use" "Google" "otherwise" "it" "gives" "me" "nothing" "but" "garbage"
Minutus cantorum, minutus balorum,
Minutus carborata descendum pantorum.
ParanoidObsessive posted...
It's okay, the judge will just use the AI look-up to see what it was, and be told it was a real case that set a real precedent, and the cycle will continue.
As it stands, students now are submitting AI-written assignments to teachers, who in turn are using AI to grade said papers.

It's just robots talking to robots.
PotD's resident Film Expert.
The issue with them is how much they cite Reddit, Quora, and YouTube as truth.
*flops*
https://gamefaqs.gamespot.com/a/forum/0/02ed4466.jpg
Minutus cantorum, minutus balorum,
Minutus carborata descendum pantorum.
https://gamefaqs.gamespot.com/a/forum/4/4996deaf.jpg
https://www.the-independent.com/life-style/kim-kardashian-bar-exam-law-psychics-b2863180.html

There's layers to this article (thank you, Kim Kardashian, for the PSA that "psychics" just miiiiiight be full of shit), and no one was mistaking Kardashian for being smart to begin with, but this is what AI is already doing to people. Literally convincing people that using ChatGPT and *nothing else* will get them to pass the California bar.
PotD's resident Film Expert.
I actually had a conversation yesterday with someone who was taking AI answers as fact. They are a helpful starting point, but when someone does this it's annoying af
Four bells were tolled, Four torches were lit
And the world continued for thousands of years...
Damn_Underscore posted...
I actually had a conversation yesterday with someone who was taking AI answers as fact. They are a helpful starting point, but when someone does this it's annoying af

If they're using ChatGPT tell them to look at the bottom of the page under the text bar lmao

https://gamefaqs.gamespot.com/a/forum/7/7cd72e63.png
So I was standing still at a stationary store...