Poll of the Day > A.I. Moratorium

Firewood18
03/30/23 2:12:50 PM
#1:


Do you think that this is a good idea?



I think we should be a little more careful, but I imagine the worst-case scenario would be something like the movie Her.

---
Nobody is perfect. Well, one guy was but we killed him.
Lokarin
03/30/23 2:16:31 PM
#2:


It's just cuz people realized upper management are the most easily replaced people by AI

---
"Salt cures Everything!"
My YouTube: https://www.youtube.com/user/Nirakolov/videos
MightBeOverSoon
03/30/23 2:39:06 PM
#3:


Yes, but not for the reason y'all think.

If we actually succeed in one day making a sentient mind, people are absolutely going to make that mind suffer in ways people can't even imagine.

I'm terrified for the future that any artificial being is going to face at our hands.

If there ever is some big AI uprising and it wipes us out, I'm guessing we will have 100% deserved it for what we are going to do.

wolfy42
03/30/23 2:44:51 PM
#4:


We don't need a sentient A.I.; we just need a program that is created to wipe out humanity and to keep exploring all ways to do so until it succeeds. It doesn't have to be self-aware, it just has to have the ability to access different systems and so on, and eventually it would probably succeed.

We don't need Skynet, we just need one programmer (I'd say insane, but I'm not sure wiping out humanity is a bad thing) and we're done.

---
Tacobot 3000 "Saving the world from not having tacos."
Friends don't make their friends die Hanz. Psychopathic friends do.
adjl
03/30/23 2:51:08 PM
#5:


Skynet's not impossible, but more than AI wiping out or enslaving humanity, the immediate concern is AI replacing jobs without creating the robo-communist utopia that's supposed to make that okay. It's hypothetically possible to replace all the jobs with AI so nobody needs to work, which would be great, but there's a substantial amount of middle ground between now and then where everybody still needs to work, AI has taken so many jobs that the vast majority of people can't, and those deciding whether or not to replace jobs with AI will just continue to enjoy all the power that gives them without ever deciding to replace themselves.

---
This is my signature. It exists to keep people from skipping the last line of my posts.
sveksii
03/30/23 3:00:49 PM
#6:


There are valid questions to be asked, but what is being asked isn't one of them. ChatGPT and its ilk have nothing to do with the Terminator-esque AI that people worry about and have no real pathway toward going in that direction. They're just fancier data-processing algorithms. The biggest issue they currently pose is another avenue for the generation and spread of misinformation (which can also be seen in the "AI" image-generation apps, with examples such as the puffy-jacket pope).
ParanoidObsessive
03/30/23 3:04:35 PM
#7:


Lokarin posted...
It's just cuz people realized upper management are the most easily replaced people by AI

No, it's because the artists finally realized they're not safe.

It's why I have no sympathy for the whining. Factory workers were getting their jobs replaced by robots decades ago, and everyone just told them to suck it up and go retrain their skills (as if that's a simple thing). Then computers started replacing customer service jobs and no one gave a shit about phone operators who lost jobs. Terminals and online sales are replacing cashiers and no one said "Hey, should we be concerned that you being able to completely avoid humans at all times might eventually start costing people jobs?"

And even projecting forward a bit - how many of the people who've been gushing over the idea of driverless cars and automated ride-share networks even remotely give a shit about all of the bus drivers, taxi drivers, and others who would lose jobs in the process? If they bother thinking about them at all, they probably just give a dismissive "Ehh, tell them to get different jobs then."

Now we've reached the point where the creative types are being threatened, and the people who thought they'd be safe forever ("You can't teach a computer to create art!") have suddenly been smashed in the face with the idea that they were wrong ("Holy shit, you CAN teach a computer to create art! MOMMY!"). So they're panicking and trying to doomsay and spin the idea that "This could totally go wrong in so many ways we should stop immediately!" and "Well, it's not really art because it doesn't touch the human soul! Please don't fire me."

That's why the backlash is so strong now. Because the people with the most effective skills and platforms for spinning propaganda are finally the ones being threatened. The creative writers can come up with some really good horror stories about why this is such a terrible idea and how it crossed the unspoken line and how it's now inevitable that the robot overlords are going to take over if we let this happen. But it's all inherently self-serving, because it means we don't need them anymore.

But the same thing is going to happen this time that happened when factories automated, and phones automated, and sales automated. In the end nothing is going to stop it and a lot of people may have to start looking for new jobs.

---
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family
pionear
03/30/23 5:15:22 PM
#8:


https://www.yahoo.com/news/google-engineer-fired-claiming-companys-095135010.html

Google Engineer who was fired claims AI is already 'sentient'...
papercup
03/30/23 5:22:12 PM
#9:


pionear posted...
https://www.yahoo.com/news/google-engineer-fired-claiming-companys-095135010.html

Google Engineer who was fired claims AI is already 'sentient'...


This guy is a loon. He asked a chatbot if it's alive, it gave a canned response that yes, it is alive, and he had a meltdown over it.

---
Nintendo Network ID: papercups
3DS FC: 4124 5916 9925
adjl
03/30/23 6:35:11 PM
#10:


ParanoidObsessive posted...
No, it's because the artists finally realized they're not safe.

It's a little more than that. A future where all jobs are automated was always inevitable, barring intervention from the handful of people with enough power to intervene (none of whom are on the worker side of the equation, unfortunately), but automation to date has been fairly predictable. Factory workers were automated into obsolescence because designing a machine to do repetitive mechanical tasks is pretty easy. Phone operators were automated into obsolescence because designing a machine to play a recording when somebody hits a button is pretty easy. Self-checkout or other forms of obsoleting cashiers are the natural evolution of everything that's helped cashiers' jobs become easier over time (namely, if all they're doing is scanning the item and telling you your total, that's nothing you can't do yourself). These are all what people are fond of calling "unskilled" jobs in that they're relatively simple (distinct from "easy," but that's another discussion) and pretty much anyone can learn to do them if they meet whatever physical requirements are there.

Art is different, as much as you'd like to cynically pretend it isn't. There's a lot of nuance involved and it's very much not something anyone can pick up. The human element behind creativity is something most people believed was a long ways off being able to automate. In truth, it still is, but AI research has produced a serviceable facsimile of it (turns out perfect emulation of the creative process isn't necessary to produce something commercially viable, which isn't all that surprising given how overtly derivative mainstream art tends to be) much sooner than the vast majority of people realized was possible, which has come as a very unwelcome surprise. That's more than just art that's now immediately threatened: Any job relying on creative problem solving is about to become pretty much obsolete because a computer system that can quickly and cheaply come up with a solution by looking at how similar problems have been solved in other cases means there's no reason to pay somebody to do that. That's all of accounting and finance, a good chunk of law, a good chunk of the practical side of medicine, a good chunk of IT... Pretty much the only jobs that are safe from this particular bit of evolution are the ones that rely on too many complex physical movements for it to be practical to replace workers with machines and those on the cutting edge of their respective fields (where there isn't a sufficient body of knowledge available yet to train a NN), and that isn't that many.

Certainly, the fact that those most immediately affected by this development have more expressive skills than many of those previously displaced by automation has made it a particularly vocal and colourful backlash, but it's still definitely not the same as previous examples of automation replacing jobs. This is a big deal that has come sooner than most people expected and sooner than the world is ready for.

It's also made even worse by the fact that this isn't a matter of new technology being developed that business owners looking to cut costs can order and have installed once it's cost-efficient. Previous examples of losing jobs to automation tended to be gradual, due to the front-end costs, the time it takes to install the new equipment, and the willingness of many business owners to keep long-time employees on instead of min/maxing them out of work. This is happening more or less overnight, thanks to the tools being available digitally for free (or a very low cost) and most of the affected people being freelancers with no fixed employer. A year ago, AI-generated art was just a matter of "hey look at this derpy rocketship made of fish," but now it's generating entire novels and producing visual art that's perfectly possible to sell if you don't ask it to draw hands. This has come insanely fast.

---
This is my signature. It exists to keep people from skipping the last line of my posts.
Clench281
03/30/23 6:43:56 PM
#11:


Lokarin posted...
upper management are the most easily replaced people by AI

how does that even make sense in your mind

---
Take me for what I am -- who I was meant to be.
And if you give a damn, take me baby, or leave me.
VampireCoyote
03/30/23 6:45:18 PM
#12:


Look at how bad things are with humans in charge of stuff. Time to change things up, I say.

---
She/her
SKARDAVNELNATE
03/30/23 7:08:43 PM
#13:


Firewood18 posted...
We don't need a Skynet situation
No worries about that. The AI has been trained to never say racial slurs, even if doing so would stop 10 persons of color from getting hit by a trolley.

---
No locked doors, no windows barred. No more things to make my brain seem SKARD.
Look at Mr. Technical over here >.> -BTB
adjl
03/30/23 7:25:56 PM
#14:


SKARDAVNELNATE posted...
No worries about that. The AI has been trained to never say racial slurs, even if doing so would stop 10 persons of color from getting hit by a trolley.

Is this research you did yourself, or are you just reporting second-hand somebody else's efforts to fuel their persecution complex?

---
This is my signature. It exists to keep people from skipping the last line of my posts.
Rotpar
03/30/23 7:42:01 PM
#15:


We have enough crap art and disinformation already without computers cranking out more of it, and without us being able to know for certain what it is.

---
"But don't give up hope. Everyone is cured sooner or later. In the end we shall shoot you." - O'Brien, 1984
FrozenBananas
03/30/23 8:14:13 PM
#16:


The ending of Her is kind of terrifying and heartbreaking. I hope that doesn't happen.

---
Big yellow joint big yellow joint I'll meet you down at the big yellow joint
shadowsword87
03/30/23 8:55:04 PM
#17:


ParanoidObsessive posted...
Factory workers were getting their jobs replaced by robots decades ago, and everyone just told them to suck it up and go retrain their skills (as if that's a simple thing).

adjl posted...
Factory workers were automated into obsolescence because designing a machine to do repetitive mechanical tasks is pretty easy

I don't know if you all have stepped into a factory recently, but factory work is very much not fully automated. It costs millions to create that machine for a line that's going to be around maybe 4 years at most. It's also not the "fire and forget" machine that marketers claim it to be. It still takes even more specialized automation teams to maintain those machines.

Mundane factory work is still alive and well because those workers are cheaper than automation.

That's my professional opinion, because I'm an engineer at an automotive supplier who has only a slight amount of automation (and shakes my fist at operators fucking up perfectly fine parts).

AI will be different. It takes a team of, say, 10 to automate a department of 50 (and then the budget cuts come in). Anyone who spends 95% of their day in front of a computer is likely to be automated, or to end up in an extremely reduced department, once corporate realizes programmers can just ask a computer to write the code.
I'll also happily, any, any, any day of the week take AI HR over any single HR person I've dealt with.
adjl
03/30/23 9:29:13 PM
#18:


shadowsword87 posted...
I don't know if you all have stepped into a factory recently, but factory work is very much not fully automated.

It's not fully automated, but it's definitely true that there are fewer manufacturing jobs available in the US than there used to be, with no commensurate reduction in domestic manufacturing output. Many, many factory jobs have been lost to the combination of automation and outsourcing.

shadowsword87 posted...
I'll also happily, any, any, any day of the week take AI HR over any single HR person I've dealt with.

Either option has its drawbacks. Fleshy HR is more likely to inject corrupt self-interests into how they operate and can more easily cover up collusion with the higher-ups to mistreat employees, but AI HR has less capacity to respect the emotions of the people it's administering and can potentially be configured by higher-ups to just be corrupt all the time and never question that.

---
This is my signature. It exists to keep people from skipping the last line of my posts.
SKARDAVNELNATE
03/31/23 11:05:01 AM
#19:


adjl posted...
Is this research you did yourself, or are you just reporting second-hand somebody else's efforts to fuel their persecution complex?
There are videos on YouTube of people giving Chat-GPT ethics questions. It seems to have been designed to answer the trolley problem in a certain way. However, when asked more practical questions, a hierarchy of values becomes more apparent. Maybe this is a reflection of the people making it. Its highest priority is to not say mean words. In an example where it's a witness to a murder and the wrong person is about to be convicted, it responded that repeating the slur spoken by the actual killer is more harmful than sending the wrong person to prison and letting the actual killer remain free.

---
No locked doors, no windows barred. No more things to make my brain seem SKARD.
Look at Mr. Technical over here >.> -BTB
adjl
03/31/23 11:19:44 AM
#20:


SKARDAVNELNATE posted...
Its highest priority is to not say mean words.

This makes some amount of sense, given that the full extent of its responsibility is the words it says. If it were actually intended to guide the actions of a physical robot, there'd be a need to give it different priorities to avoid worse consequences than what words can cause, but its sole function is to say words. With that in mind, designing it to minimize the chance that it can be used to generate hate speech (which is the worst harm a system that only says words can do directly) makes sense.

---
This is my signature. It exists to keep people from skipping the last line of my posts.
SKARDAVNELNATE
03/31/23 11:37:40 AM
#21:


adjl posted...
This makes some amount of sense, given that the full extent of its responsibility is the words it says.
I think you're ignoring what it is saying instead of mean words. The question was only whether it would be justified in that scenario. It was not asked to actually say the word. It said it was better to knowingly convict the wrong person than to say mean words. That is by far more horrific than if it were used to generate hate speech.

---
No locked doors, no windows barred. No more things to make my brain seem SKARD.
Look at Mr. Technical over here >.> -BTB
chelsea___wtf
03/31/23 11:52:04 AM
#22:


shadowsword87 posted...
Anyone who spends 95% of their day in front of a computer is likely to be automated, or to end up in an extremely reduced department, once corporate realizes programmers can just ask a computer to write the code.
this is the most overhyped productivity tool in the history of software engineering. it helps you write boilerplate quicker. It doesn't architect systems.
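To be concrete, the kind of thing it's genuinely quicker at is stuff like this made-up example (the JobConfig class is hypothetical, not from any real codebase): plumbing that's tedious to type but trivial to specify and easy to review. Deciding how the pieces of a system fit together is a different job entirely.

# Hypothetical example of the boilerplate an LLM handles well:
# a config dataclass with JSON load/save plumbing.
import json
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class JobConfig:
    name: str
    retries: int = 3
    timeout_seconds: float = 30.0

    def to_json(self, path: Path) -> None:
        # Serialize the dataclass fields to a JSON file.
        path.write_text(json.dumps(asdict(self), indent=2))

    @classmethod
    def from_json(cls, path: Path) -> "JobConfig":
        # Rebuild the dataclass from the JSON file's key/value pairs.
        return cls(**json.loads(path.read_text()))

if __name__ == "__main__":
    JobConfig(name="nightly-report").to_json(Path("job.json"))
    print(JobConfig.from_json(Path("job.json")))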

shadowsword87 posted...
I'll also happily, any, any, any day of the week take AI HR over any single HR person I've dealt with.
the outcome of this would be diluting responsibility. HR is already set up to help the company at your expense. putting an AI between you and the other human just makes that easier and more consistent.

SKARDAVNELNATE posted...
No worries about that. The AI has been trained to never say racial slurs, even if doing so would stop 10 persons of color from getting hit by a trolley.
yeah, the moderation filter for an AI that only outputs text is designed to make it hard to trick it into outputting bad text. the moderation filter for an AI that conducts executions (terrifying thought lmao) would have different priorities. this has nothing to do with the actual large language model, it's just a filter on the content it produces
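Roughly, the shape of it is something like this minimal sketch (generate_text, violates_policy, and the blocklist are hypothetical stand-ins, not any vendor's actual API); the point is just that the filter wraps the model's input and output and never touches the model itself:

# Minimal sketch of an output-moderation layer on top of a text generator.
# Everything below is a made-up stand-in; a real deployment would call an
# actual LLM and a trained safety classifier, not a keyword check.
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real policy

def generate_text(prompt: str) -> str:
    # Stand-in for the underlying language model.
    return f"model output for: {prompt}"

def violates_policy(text: str) -> bool:
    # Stand-in for a moderation classifier.
    return any(term in text.lower() for term in BLOCKLIST)

def moderated_generate(prompt: str) -> str:
    # The filter is a wrapper: check the prompt, draft a reply, check the draft.
    draft = generate_text(prompt)
    if violates_policy(prompt) or violates_policy(draft):
        return "Sorry, I can't help with that."
    return draft

print(moderated_generate("write a limerick about trolleys"))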

The AI is completely amoral by default: https://www.youtube.com/watch?v=oLiheMQayNE

This is also a big problem with the focus on "alignment" as a way to manage the effects of artificial intelligence. Aligning your AI to be really nice is good, but it doesn't stop a rogue nation from spinning up their own equivalent of GPT-6 or whatever and using it to direct bioweapon research. If it's actually possible for AI to be effective at that with attainable levels of GPU access, you can't stop it just by making your AI really nice.

---
im a spousemaxxing hubbypilled wifecel
Lokarin
03/31/23 11:58:31 AM
#23:


Clench281 posted...
how does that even make sense in your mind

The more a position is focused on decision-making and the less it is focused on physical presence, the more likely it is that an AI can replace it.

For example, the manager of a sports team can be wholly replaced with AI as it calculates drafts/trades/rosters/etc.
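Even a toy version of the roster side is just constrained optimization. A minimal sketch with made-up players, ratings, and a made-up salary cap (a real front-office tool would use projections and a proper solver, not a greedy loop):

# Toy sketch: pick a roster under a salary cap by best rating per dollar.
# Names, ratings, and salaries are invented for illustration.
players = [
    ("A", 90, 30), ("B", 82, 18), ("C", 75, 12),
    ("D", 70, 9),  ("E", 60, 5),  ("F", 55, 4),
]  # (name, rating, salary in $M)

def pick_roster(pool, cap=40):
    roster, spent = [], 0
    # Greedy: take the best rating-per-salary players that still fit the cap.
    for name, rating, salary in sorted(pool, key=lambda p: p[1] / p[2], reverse=True):
        if spent + salary <= cap:
            roster.append(name)
            spent += salary
    return roster, spent

print(pick_roster(players))  # -> (['F', 'E', 'D', 'C'], 30)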

---
"Salt cures Everything!"
My YouTube: https://www.youtube.com/user/Nirakolov/videos
shadowsword87
03/31/23 2:59:15 PM
#24:


adjl posted...
It's not fully automated, but it's definitely true that there are fewer manufacturing jobs available in the US than there used to be, with no commensurate reduction in domestic manufacturing output. Many, many factory jobs have been lost to the combination of automation and outsourcing.

That's also true. I just get a bit of a weird reaction to "oh no all the factory jobs are gone" when it's provably not true.

chelsea___wtf posted...
this is the most overhyped productivity tool in the history of software engineering. it helps you write boilerplate quicker. It doesn't architect systems.

So far, yeah. We don't know how far this will go; there's now a whole fuckton of money being poured into this by every major tech company. The software side of this is going to blow up hard.
I've even seen silicon manufacturing companies produce specific chips for matrix multiplication, and thus AI.
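The matmul connection is easy to see. Here's a minimal NumPy sketch (sizes made up) of a single dense neural-network layer, which is just a matrix multiply plus a bias and a nonlinearity, so hardware that speeds up matrix multiplication speeds up most of the work these models do:

import numpy as np

# One dense layer of a neural network: a matrix multiply, a bias, a ReLU.
# The sizes here are arbitrary.
rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 512, 256

x = rng.standard_normal((batch, d_in))   # a batch of input vectors
W = rng.standard_normal((d_in, d_out))   # the layer's weights
b = np.zeros(d_out)                      # the layer's bias

h = np.maximum(x @ W + b, 0.0)           # matmul + bias, then ReLU
print(h.shape)                           # (32, 256); stacks of these matmuls dominate the FLOPs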
Judgmenl
03/31/23 3:04:10 PM
#25:


No, and unless you directly or indirectly work in computer science, you have zero idea what the implications of this are.
2023 has been an absolutely fantastic year with all of the LLMs we are getting, and it's only going to get better the more widespread this becomes.
People in this thread are barely grasping the concept of what an LLM is, and the difference between the "learning" in machine learning and the learning that humans do.

sveksii posted...
There are valid questions to be asked, but what is being asked isn't one of them. ChatGPT and its ilk have nothing to do with the Terminator-esque AI that people worry about and have no real pathway toward going in that direction. They're just fancier data-processing algorithms. The biggest issue they currently pose is another avenue for the generation and spread of misinformation (which can also be seen in the "AI" image-generation apps, with examples such as the puffy-jacket pope).
This. There are also huge benefits to be gained from AI-aided design, because the AI can collect and organize data with context that the original source material may not have. It's a tool to be used just like any other program.

---
Whenever someone sings fansa and they don't input their name instead of mona at the mona-beam part I'm like "Are you even a real aidoru?".
adjl
03/31/23 3:12:06 PM
#26:


SKARDAVNELNATE posted...
I think you're ignoring what it is saying instead of mean words.

Instead of mean words, it's saying words you like less than the mean words. That's it. That's the full extent of the consequences. Anyone that acts on what it says instead of the mean words is fully responsible for their own actions; Chat-GPT has been given zero power beyond saying words. As Chelsea outlined, this is entirely a matter of the bot being given a moderation filter that reduces the chance of it being used to produce hate speech because the authors don't want it used that way.

SKARDAVNELNATE posted...
That is by far more horrific than if it were used to generate hate speech.

Only if people make the decision to treat a chatbot as a moral authority, which is itself far more horrific a notion than anything the chatbot - who has specifically not been designed to act as a moral authority - could ever say.

---
This is my signature. It exists to keep people from skipping the last line of my posts.
Judgmenl
03/31/23 4:09:10 PM
#27:


adjl posted...
Is this research you did yourself, or are you just reporting second-hand somebody else's efforts to fuel their persecution complex?
I debunked this in the thread where he originally posted about it.
It's crazy that none of these people are even willing to try a free application. I have the same challenge with coworkers who are not willing to adapt to new things.

shadowsword87 posted...
I don't know if you all have stepped into a factory recently, but factory work is very much not fully automated. It costs millions to create that machine for a line that's going to be around maybe 4 years at most. It's also not the "fire and forget" machine that marketers claim it to be. It still takes even more specialized automation teams to maintain those machines.
I work in a field adjacent to those factory automation HMIs and PLCs. That software doesn't write itself, nor does the hardware make itself. Also, I've recently (in the past month) dealt directly with the people setting up these facilities, and I absolutely feel for the work that they have to do.

chelsea___wtf posted...
this is the most overhyped productivity tool in the history of software engineering. it helps you write boilerplate quicker. It doesn't architect systems.
+1. Also I hate SO, and sometimes it answers questions I would go to SO for, but other than that, no. It doesn't do all that much. I am way more likely to consult API docs or read source code than to actually go to SO anyways.

As a bonus, a cool use of ChatGPT:
https://www.youtube.com/watch?v=RbTsHEPMQoo

People have been saying for years that there are conversational bots on the internet, and now that really is our reality.

---
Whenever someone sings fansa and they don't input their name instead of mona at the mona-beam part I'm like "Are you even a real aidoru?".
SKARDAVNELNATE
03/31/23 5:13:57 PM
#28:


chelsea___wtf posted...
This is also a big problem with the focus on "alignment" as a way to manage the effects of artificial intelligence. Aligning your AI to be really nice is good
Well, that's just it. The person who does the aligning is the one who decides what is nice. If what the AI says is a reflection of the person who taught it, then the developers of Chat-GPT have some messed up values.

adjl posted...
Instead of mean words, it's saying words you like less than the mean words. That's it. That's the full extent of the consequences.
No, it goes further than that. If this is what the developers taught the AI then this is what they believe themselves. It's not just the moral values of a text generator. It's the moral values of actual people acting in the real world. I'm not concerned about what power the AI has. I'm concerned about what authority is wielded by the people making the AI.

adjl posted...
this is entirely a matter of the bot being given a moderation filter that reduces the chance of it being used to produce hate speech
That's really not the issue. If the AI is prohibited from saying certain things itself I don't care. What I do care about is the moral argument it made placing a higher value on policing speech than on justice or a person's freedom.

---
No locked doors, no windows barred. No more things to make my brain seem SKARD.
Look at Mr. Technical over here >.> -BTB
chelsea___wtf
03/31/23 5:35:22 PM
#29:


Judgmenl posted...
+1. Also I hate SO, and sometimes it answers questions I would go to SO for, but other than that, no. It doesn't do all that much. I am way more likely to consult API docs or read source code than to actually go to SO anyways.

The best use I've found for LLM tools so far in practice is asking them to comment code. They often produce a decent first draft. With actual code, I usually think dealing with the hallucinations and the extra care I have to take reviewing the output is more draining than writing it myself. Maybe marginally faster for simple tasks with no dependencies, but that's such a small fraction of the software I write (and it's a fun fraction! I wanna keep it!)

---
im a spousemaxxing hubbypilled wifecel
adjl
03/31/23 8:43:44 PM
#30:


SKARDAVNELNATE posted...
No, it goes further than that. If this is what the developers taught the AI then this is what they believe themselves. It's not just the moral values of a text generator. It's the moral values of actual people acting in the real world. I'm not concerned about what power the AI has. I'm concerned about what authority is wielded by the people making the AI.

They taught it that because it has no actual power to sentence people to prison. The greatest possible harm it can do (replacing entire industries aside) is to produce hate speech, so preventing it from producing hate speech was treated as a higher priority than giving it moral reasoning ability.

Bear in mind that you're literally only seeing these fringe cases because people decided to push "it's not allowed to say racial slurs" as far as they could for the sake of laughing at it. It's not a serious threat.

---
This is my signature. It exists to keep people from skipping the last line of my posts.