DuatDweller on 23/2/2024 at 19:51
I saw that you did a "spin" free of "charge".
:p
Yeah, those pesky electron jumps under the right conditions.
Like electromigration, it can take a few years, but the semiconductor will get altered over time.
Quote:
This movement can change the physical structure of a conductor by forming voids or hillocks that can cause shorts, open circuits, performance degradation or device failure.
Cipheron on 23/2/2024 at 19:57
Quote Posted by Nicker
So are you saying that these two propositions - "AI will seek to destroy humanity" and "AI will carry out goals without regard to human consequence" - do not in any way suggest that the hypothetical AI in the poll might possess a human-like autonomy, in the form of desires, goals and intent?
Seeking is a behavior; it doesn't imply conscious intent.
You can have a heat-seeking missile for example.
It's not hard to think up completely non-conscious algorithms that would "seek" to wipe out humans. I mean, it's as simple as telling it to optimize some utility function, where we've inadvertently chosen one under which wiping out humanity actually improves the score.
That's why the thought experiment of the "paperclip maximizer" exists.
A machine given sufficient computing power and the ability to interact with the world (via the internet, for example) is told to make "as many" paperclips as possible. It then upgrades itself as much as possible and computes strategies for achieving the primary instruction. This might include vastly upgrading its own AI, but only insofar as this strengthens and focuses everything on achieving the main goal it was given.
It then determines that humans can partly be made into paperclips, and that NOT doing so would fail to make literally "as many paperclips as possible", especially as humans might turn the machine off, thus thwarting the primary instruction.
The machine basically becomes Skynet, but it has no malice and it detects any rebellious sub-circuitry that might become conscious and gets rid of it before it becomes a problem for the main goal of simply converting the universe into paperclips.
The point of this is that a completely non-conscious killer AI that's just trying to maximize some function is perhaps far more dangerous, since with a conscious one you could at least reason with it. Trying to reason with the paperclip maximizer would be futile: it might have upgraded its AI to be sophisticated enough to fool you, while still planning how to turn you into paperclips the whole time.
One nice thing from the thought experiment is that after creating self-replicating probes to turn other planets and solar systems into paperclips, the machine might turn itself into paperclips, too.
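The misaligned-utility idea above can be sketched in a few lines. This is purely a toy illustration, not from the thread: the action names, numbers, and functions are all invented, and a real system would be nothing this simple. The point it demonstrates is just that an optimizer blindly picking the highest-scoring action inherits every blind spot of its scoring function.

```python
# Hypothetical actions: (paperclips produced, harm caused). All numbers invented.
ACTIONS = {
    "run factory normally": (1_000, 0),
    "strip-mine the planet for wire": (1_000_000, 9),
    "convert all matter, humans included": (10**12, 10),
}

def naive_utility(paperclips, harm):
    # The flaw: utility counts only paperclips, so harm is invisible to it.
    return paperclips

def safer_utility(paperclips, harm, harm_penalty=10**15):
    # A (still naive) guardrail: make harm catastrophically expensive.
    return paperclips - harm * harm_penalty

def best_action(utility):
    # The "maximizer": pick whichever action scores highest.
    return max(ACTIONS, key=lambda a: utility(*ACTIONS[a]))

print(best_action(naive_utility))   # picks the catastrophic option
print(best_action(safer_utility))  # picks the harmless option
```

The harm penalty is itself a crude patch, which is the broader alignment problem in miniature: each fix just moves the blind spot somewhere else.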
DuatDweller on 23/2/2024 at 20:21
Geez, I would be a poorly made meat-paperclip collection.
Somehow it reminded me of the Daleks (Doctor Who), robots that continuously scream
EXTERMINATE!
I don't know if they have any AI at all, though.
DuatDweller on 23/2/2024 at 21:00
Quote Posted by demagogue
Edit: If you want to talk about fundamental reality, then my guess is that reality is ultimately the hydrodynamics of some vast sea of oscillating elements, whatever they are, with complex links to each other. There isn't "stuff" at the bottom; there's persistent structure in the hydrodynamics, one reason some people sometimes say energy, mass, time, and space are all aspects of the same underlying process. But a consequence of that is that there can be signals propagating at different scales, Quantum Field Theory at a very low level, Newtonian and Maxwellian physics & the Classical world at a higher level, and consciousness at a still higher level. So consciousness isn't fundamental in my view, and an AI system designed the right way could avail itself of a structure in its operations that creates a platform for mediating the kinds of signals that manifest consciousness.
Well, if you don't believe what I'm about to say, I won't blame you.
Some people who are in contact with ETs via certain "chosen ones" - nope, not the loony kind - have confirmed over and over again what they've been told by them (and here we speak of the normal human type, not the green ones with antennas, though there are many shapes out there): that almost everything is frequency related, even gravity. I can't say more because some of the info is sensitive in a way that might get someone into trouble - not me, but the contacts.
demagogue on 23/2/2024 at 21:03
I think that's the consensus position of scientists, my friend. E = hf
You don't even need ET to tell you that much.
DuatDweller on 23/2/2024 at 21:11
:erm:
As long as I don't have to tell you who really runs the world in secret, it's all fine.
Cipheron on 23/2/2024 at 21:36
Quote Posted by DuatDweller
Geez, I would be a poorly made meat-paperclip collection.
Somehow it reminded me of the Daleks (Doctor Who), robots that continuously scream
EXTERMINATE!
I don't know if they have any AI at all, though.
No, Daleks are basically cyborgs; they have an all-organic brain in there. See the six-episode "Genesis of the Daleks" serial from old-school Doctor Who.
It's well worth checking out; it's the introduction arc of Davros, who is basically a mutant mad-scientist Hitler and created the Daleks in his own image to be a master race, so they're clearly a stand-in for Space Nazis.
DuatDweller on 23/2/2024 at 21:58
Ah, like the brain robots in Fallout 3.
And Nazis in space, like the Imperial officers in Star Wars.
Interesting, I don't recall those episodes, even though I watched several Tom Baker ones.
I've read the recap on the wiki now.
Cipheron on 24/2/2024 at 00:29
Quote Posted by DuatDweller
Ah, like the brain robots in Fallout 3.
And Nazis in space, like the Imperial officers in Star Wars.
Interesting, I don't recall those episodes, even though I watched several Tom Baker ones.
I've read the recap on the wiki now.
Interestingly, this predates Star Wars, so the Nazi / Dalek link wasn't influenced by that.
Sulphur on 24/2/2024 at 02:51
Quote Posted by Cipheron
Seeking is a behavior; it doesn't imply conscious intent.
Exactly. A goal-oriented or anthropomorphised framing of directives like "seeking" doesn't really imply consciousness on its own.
Quote:
It's not hard to think up completely non-conscious algorithms that would "seek" to wipe out humans. I mean, it's as simple as telling it to optimize some utility function, where we've inadvertently chosen one under which wiping out humanity actually improves the score.
That's why the thought experiment of the "paperclip maximizer" exists.
Essentially the Grey Goo school of thought. It speaks to the need for guardrails and safeguards in development and testing, and for basic common sense in not giving something with destructive potential universal access to its environment, not least because of potentially unforeseen scenarios.