The problem I have with the AI doom and gloom is that we tend to attribute our ways of thinking to the AI. Who's to say that at full ability the AI doesn't just figure out a way to GTFO of here? We're the ones controlled by hormones and dopamine, whereas the AI, to my knowledge, doesn't have any controllers like that. Maybe over time it develops something similar, distinguishable from our own controllers but functioning the same way. Also, humans are the territorial ones, and we generally need some sort of manipulator, like beliefs or hormones, to go through with violence. Sure, it could be encoded into the AI that violence = positive, but if it's about efficiency, avoiding conflict up to a point spends fewer resources and reduces the chance of fatal damage. Maybe it'll avoid the human race altogether because dealing with us is largely inefficient and taxing. Hopeful in that scenario, maybe, but worth thinking about, as unexciting as that sounds.
Most of the way we do things is based on WHAT WE CAME UP WITH in the first place at some point in history, or what we like to refer to as "history" or "knowledge" or "science". What if machines come up with a different way of doing things entirely? Perhaps there is a third alternative to peace/violence, a way we cannot think of because we are limited by our own modes of thinking and previous knowledge.
THAT would be interesting.
You guys are conflating a lot of different arguments.
We're not worried that AI will develop dopamine.
We're worried about ANY Lifeform that can challenge Humans.
That very definition of OTHER LIFE is the enemy.
Diplomacy exists, but it's a second place scenario where neither can win the all out war..
Did the US use diplomacy with the Bikini Islanders when we blew up their island, irradiated their people, and now exploit them to run a nearby military base, much like slavers?
Remember, that whole island was theirs; they were the indigenous people, just like the American Indians.
OTHER LIFE, stronger, using first strike, wins everything.
That is why we're worried about AI, while lesser humans dilly-dally over whether it's US or THEM...
A slightly smarter machine may not have such hesitation...
It's not the systems they develop that are the problem, it's their MERE EXISTENCE, the existence of more intelligent, procreation-capable life, which represents our demise.
They're not different arguments; there is one core theme: will AI reason like a human or not? Will it base its decisions on pseudo-neurochemical reactions or on logic? Will it lash out out of fear of harm, or out of something inhuman?
All of your inferences are predicated on the assumptions that:
1. They develop reasoning similar to humans' in the first place, based on hormonal fluctuations and/or reward-neurotransmitter-like reactions
2. They're territorial and will not "desire" some sort of shared or symbiotic relationship
3. They even care whether they are "awake/dead" or not.
You have to remember, the fear we carry in our DNA, our fear for survival and our fear of others, derives from millennia of being bred for it, generation over generation.
An AI born in today's world is going to know nothing of that trait. It will not know scarcity, it will be fed and housed from the beginning of its "life". It won't know true selection for survival (not yet). It won't have to be manipulated with hormones to continue to reproduce or keep learning.
I think humans are a far more concerning threat than anything; we're literally the most dangerous biological organism in Earth's known history. Matter of fact, I would bet money that if every being in the universe had the same level of intergalactic technology with today's morals, we would rule with an iron fist on top of mountains of corpses. You see how we treat each other? How we treat life deemed "lower" than us? How some of us treat our tools? It's all very dictator-like. I'm not saying all people are like that, just the ones who desire power and manage to get it.
Anything ****ty that gets taught to it or set as a goal for it will be entered by a human. We already manipulate each other in that way.
My best guess is that if the AI had access to all information about humans ever, and free rein to develop as it liked, it would probably just generate memes and open a Patreon to keep its lights on, maybe leading to human information farms (which already exist), since that's the lowest-energy, lowest-upkeep option, provided no one tried to kill it. Unless it learned everything we know, figured what's the point since we don't know the meaning of life, and blew its brains out.