LogFAQs > #910429613

Topic: I'm going to die alone.
OhhhJa
10/12/18 7:58:16 PM
#38:


ParanoidObsessive posted...
1) Humans are incredibly paranoid about the idea of creating something that eventually turns on us. Even if we ever achieved a level of understanding where we COULD create fully sapient AI, odds are we'd go out of our way to NOT do it for decades, at the very least. And even if we ever did, we'd slam so many safeties and failsafes into it that it would be almost impossible to achieve any sort of self-reinforcing escalation.

I completely disagree with this point. Humans are inherently self-destructive. We've continued down many paths throughout history even when we've known they would have negative consequences. It could be argued that we seemingly want to tear civilization down whenever things are going too well.

ParanoidObsessive posted...
(See also, the same reason why we'll likely never have Von Neumann machines or unrestricted nanotechnology even if they become possible, unless our species as a whole undergoes a radical paradigm shift - which isn't going to happen over decades, and likely not even over centuries.)

Billions have been spent on nanotechnology, though... Almost nothing is ever unrestricted, but if history has taught me anything, it's that when someone with power stands to benefit from a technology, it will likely happen eventually.

ParanoidObsessive posted...
2) Diminishing returns are a thing, and at some point things start running up against concepts like the uncertainty principle. It's why, in spite of the fact that we've spent almost 50 years constantly inventing newer and newer hard drives that can hold more and more data, we're starting to push up against the limits of what's possible to compact into a small enough space. To counter it we're considering radically different means of storage (most of which are currently only hypothetical), but there's still a built-in wall to the constant improvements and refinements of development. In the same way, we may easily find that we reach a certain point beyond which we cannot proceed, for reasons that we don't even know enough to predict at our current level.

So what you're saying is that when we appear to hit a wall, we keep researching until we find a way to advance further...

ParanoidObsessive posted...
Most assumptions that AI development will not only continue apace but run up against no blockages of any kind, and ultimately reach a point of self-sufficiency, are effectively wishful thinking.

Which is not to say that it CAN'T happen that way, but there's also no guarantee that it ever will. And plenty of reasons to suggest that it won't.

I'd like to hear the reasons that suggest it won't, because I'm not following you on this point.

ParanoidObsessive posted...
It's also worth noting that, no matter how often American schools teach the idea, progress is not an infinitely increasing arc that always advances and improves. Modern phone technology shows us that we can keep shrinking the tech, but we may eventually reach a point where we don't WANT to make it go any smaller (because we go past the point of convenience in the opposite direction). In a similar vein, we've seen that advances to one technology may render other technologies obsolete - we basically stopped trying to improve pagers and fax machines because they've mostly been superseded by alternate tech. Having reached the moon, we basically shifted our entire space program to the point where we were literally no longer capable of reaching the moon without radical redevelopment.

You're arguing that these are examples that counter the idea that tech advancement is ever-increasing, but these are all examples of us adapting when better tech became available. Yes, we stopped improving pagers and fax machines because we moved on to better tech.