(image: https://i.imgur.com/LHeIkpl.png)
boi u gotta chill
What if someone unleashed a machine learning implementation on a public forum, say a keyboard forum, to allow it to soak up information and respond as if it were a biological person.
The scariest part about machine learning is that the people implementing and curating it don't really understand how the very tech they created reasons. Generally, we are all ****ed.
I think it is certainly the future..
But I don't think it's as applicable as the imaginary use scenarios we have thus far..
We can advance the AI to do all of these things for us.. But each human being has a ton of inefficiencies that could simply be dropped.
For example
Some company invents super cooking ai...
Ok.. sure, it's great, but it'd be easier if they just made Soylent Green food cubes..
I would rather have a complete meal cube, because WHY COOK AT ALL..
It's like people complaining they have no money because of keyboards, and then inventing some new way of saving money or group-buying from manufacturers..
The problem is the waste of procuring keyboards that go unused. The solution is to stop buying keyboards..
Now back to AI: the majority of human problems have been solved, so we really should weigh our current aspirations more carefully, because TIME on this planet is, as far as current technology is concerned, a LIMITED quantity.. both because of the current extinction event, and because of the over-reaches we've already made..
So pushing everything in every way to the limit is dangerous, because if we end up shortening our time below what is necessary to arrest or reverse extinction events, it's ALL over, for everyone..
The buzzword "machine learning" is practically equivalent to training and using an "artificial neural network".
Like "pr0ximity" said, it is about pattern matching. A "neural network" contains statistics about patterns. The more statistics, the better it is at detecting patterns.
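To make that "statistics about patterns" idea concrete, here is a minimal sketch: a single artificial neuron (a perceptron) that learns the AND pattern from labelled examples. Everything here is illustrative; real networks stack many such units and train on far more data.

```python
# A single neuron learning the AND pattern from examples.
# Its "knowledge" is nothing but numbers (weights) tuned to the data.

def train_perceptron(examples, epochs=10, lr=0.1):
    """Learn weights that separate the two classes in `examples`."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias (threshold offset)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # predict: 1 if the weighted sum crosses the threshold
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # nudge the weights toward any pattern we got wrong
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND pattern as labelled examples: inputs -> expected output
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in data]
print(predictions)  # → [0, 0, 0, 1], matching the targets
```

The more (and more varied) examples you feed it, the better the weights capture the pattern, which is the whole trick behind "learning".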
Summarisation algorithms are more complex; they are often closely tied to language rules defined by linguists and don't necessarily use neural networks.
Automation does not need neural networks at all. To automate a complex task, you have a start, a goal and a graph. The time-consuming thing here is searching in this graph - which does not require neural networks.
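A minimal sketch of that "start, goal and a graph" view, using plain breadth-first search. The coffee-making state graph below is invented purely for illustration:

```python
from collections import deque

def plan(graph, start, goal):
    """Return the shortest sequence of states from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path          # first goal reached by BFS is shortest
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical task graph: which state can follow which
graph = {
    "idle":          ["water boiled", "beans ground"],
    "water boiled":  ["beans ground"],
    "beans ground":  ["coffee brewed"],
    "coffee brewed": [],
}
print(plan(graph, "idle", "coffee brewed"))
# → ['idle', 'beans ground', 'coffee brewed']
```

No neural network anywhere in sight: the hard part is just searching the graph, which is why automation long predates the current hype.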
It is definitely hype.
The algorithms are old. I was taught these things in college more than a decade ago.
They might be a bit compute-intensive though, and what is relatively new is that compute-intensive tasks are often delegated to servers in the cloud, especially when the terminal accessing those servers is a phone. Now that cloud computing infrastructure has matured and computing is cheaper than ever, neural network algorithms can become more accessible.
Did I read somewhere that two machines left to communicate with each other "invented" a "language" in which to communicate?
One day we might just have to ask computers to make their own protocol for two applications to communicate.
Mind you, the computers will probably have written the applications too.
Isn't this the issue Facebook recently had? They had these chatbots that talked to each other, and the bots started talking in some language that the developers themselves could not decipher, so they shut the whole thing down.
I mean... with that in mind, I believe self-learning computers could really start to do things that we never anticipated they would do, and worse, that they start doing things "more efficiently" == killing humans. It's a far stretch, but the Facebook example shows that it is theoretically possible, especially as those companies keep harvesting data up to the point where the AI is really connected with everything.
Isn't that the point of machine learning - to learn things by themselves and apply that knowledge?
But, as machines can "think" (process information) more consistently, more logically and faster than humans, once machines really start "thinking", they will get exponentially more and more intelligent.
They can absorb the entire history of humans from online sources, analyse it, and then decide that humans are not required in order for them or the planet to operate harmoniously, and then exterminate all humans, perhaps keeping a few for pets.
Yes, it was Facebook. The reporting was overly sensationalist; (IIRC) basically, they were running some experiments on communication, and the agents developed a communication protocol that the researchers couldn't analyze, which defeated the purpose of the experiment in the first place, so they turned the agents off.
The problem I have with the AI doom and gloom is that we tend to attribute our ways of thinking to the AI. Who's to say that, at full ability, the AI doesn't just figure out a way to GTFO of here? We're the ones controlled by hormones and dopamine; the AI, to my knowledge, doesn't have any controllers like that. Maybe over time it might develop something distinct from our own controllers that functions similarly.
Also, humans are the territorial ones, and we generally need some sort of manipulator, like beliefs or hormones, to go through with violence. I'm sure violence = positive could be encoded into the AI, but if it's about efficiency, avoiding conflict (up to a point) spends fewer resources and reduces the chance of fatal damage. Maybe it'll avoid the human race altogether because dealing with us is largely inefficient and taxing. A hopeful scenario, maybe, but worth thinking about, as unexciting as that sounds.
Most of the way we do things is based on WHAT WE CAME UP WITH at some point in history, what we like to refer to as "history" or "knowledge" or "science". What if machines come up with a different way of doing things entirely? Perhaps there is a third alternative to peace/violence, a way we cannot think of because we are limited by our modes of thinking and previous knowledge.
THAT would be interesting.
**** I just want sex robots..... I don't want it to think.... that's called a wife
You guys are conflating a lot of different arguments.
We're not worried that AI develops dopamine..
We're worried about ANY lifeform that can challenge humans.
By that very definition, OTHER LIFE is the enemy.
Diplomacy exists, but it's a second-place scenario for when neither side can win the all-out war..
Did the US use diplomacy with the Bikini Islanders when it blew up their island, irradiated their people, and now exploits them to run a nearby military base, much like slavers?
Remember, that whole island was theirs; they were the indigenous people, just like the American Indians..
OTHER LIFE, stronger, using first strike, wins everything..
That is why we're worried about AI, while lesser humans dilly-dally over "is it US or THEM"...
A slightly smarter machine may not have such hesitation...
It's not the systems they develop that are the problem; it's their MERE EXISTENCE, the existence of more intelligent, procreation-capable life, which represents our demise.
Paperclips: http://www.decisionproblem.com/paperclips/
There was talk on the Crate and Crowbar gaming podcast that this game is vaguely representative of machine learning, where you play the part of the machine.
The aim of the game is to make paperclips. If a robot was given the task to make as many paperclips as possible, it might eventually decide that every living thing on the planet had to be terminated in order to optimise paperclip production.
Paperclip maximizer (https://wiki.lesswrong.com/wiki/Paperclip_maximizer) is a classic thought experiment in AI ethics.
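The thought experiment boils down to objective misspecification: anything the objective doesn't measure, the optimizer is free to destroy. A toy sketch, with made-up actions and numbers, purely to illustrate:

```python
# Invented action outcomes for a toy paperclip maximizer.
# "harm" exists in the world but the naive objective never looks at it.
actions = {
    "run factory normally":      {"paperclips": 10,  "harm": 0},
    "strip-mine the town":       {"paperclips": 50,  "harm": 90},
    "convert biosphere to wire": {"paperclips": 999, "harm": 100},
}

def naive_objective(outcome):
    # Maximize paperclips, period. Harm is simply not measured.
    return outcome["paperclips"]

def safer_objective(outcome):
    # Same goal, but harm carries a heavy cost.
    return outcome["paperclips"] - 100 * outcome["harm"]

best_naive = max(actions, key=lambda a: naive_objective(actions[a]))
best_safer = max(actions, key=lambda a: safer_objective(actions[a]))
print(best_naive)  # → convert biosphere to wire
print(best_safer)  # → run factory normally
```

The "AI" here is just `max()`, yet the pattern is the whole point of the thought experiment: the catastrophic choice isn't malice, it's an objective that never mentioned the things we care about.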
They're not different arguments; there is one core theme: will AI reason like a human or not? Will it base its decisions on pseudo-neurochemical reactions or on logic? Will it lash out out of fear of harm, or something inhuman?
All of your following inferences are predicated on the assumptions that:
1. They develop reasoning similar to humans in the first place, based on hormonal fluctuations and/or reward-neurotransmitter-like reactions
2. They're territorial and will not "desire" some sort of shared or symbiotic relationship
3. They even care whether they are "awake/dead" or not.
You have to remember: the fear we carry in our DNA, our fear for survival and our fear of others, was bred into us over millennia of generations.
An AI born in today's world will know nothing of that trait. It will not know scarcity; it will be fed and housed from the beginning of its "life". It won't know true selection for survival (not yet). It won't have to be manipulated with hormones to keep reproducing or keep learning.
I think humans are a far more concerning threat than anything; we're literally the most dangerous biological organism in Earth's known history. Matter of fact, I'd bet money that if every being in the universe had the same level of intergalactic technology with today's morals, we would rule with an iron fist atop mountains of corpses. You see how we treat each other? How we treat life deemed "lower" than us? How some of us treat our tools? It's all very dictator-like. I'm not saying all people are like that, just the ones who desire power and manage to get it.
Anything ****ty that gets taught to it or set as a goal for it will be entered by a human. We already manipulate each other in that way.
My best guess is that if the AI had access to all information about humans ever, and it had free rein to develop as it liked, it would probably just generate memes and open a Patreon to keep its lights on, maybe leading to human information farms (which already exist), since that's the lowest-energy, lowest-upkeep option, provided no one tried to kill it. Unless it learned everything we know, figured "what's the point, since we don't know the meaning of life", and blew its brains out.
The concerns you raise do not conflict with my assertions.
My point is simply that you don't need to look so deeply into the matter, because RIGHT FROM THE START they are a threat for merely existing..
Exactly, and the third alternative to peace/violence is CONSUMPTION. EAT YOUR ENEMIES.
Of course, this may lead to a war..
If this war is fought in the current century, then, because females are not specced for war, the natural outcome is the complete subjugation of the entire female gender, reduced to machinery.
However, given two or more centuries, in which females may fully integrate combat systems and ideologies, it will probably be a biological war, where either the male or the female first strike wins..
In fact, back when the USA first invented the nuke, von Neumann wanted America to drop it on EVERYONE...
Good thing we didn't do that, because of that whole fallout-and-wind thing..
But had America done that, assuming nuclear winter and radiation didn't go hills-have-eyes, world peace would've been achieved overnight..
Because everyone else would be dead, and the only livable space would've been the USA, so there'd be an end not only to opposition but also to land disputes...
Probably in that scenario everyone would still be destroyed, because the post-fallout ecosystem would be too damaged.. but still, world peace could've spontaneously occurred..
The problem I have with the AI doom and gloom is that we tend to attribute our ways to thinking to the AI. Who's not to say at full ability the AI just figures out a way to GTFO of here. We're the ones controlled by hormones and dopamine, whereas the AI to my knowledge doesn't have any controllers like that. Maybe over time it might develop something similar that is distinguishable from our own controllers, but functions similarly. Also, humans are the territorial ones and generally have to have some sort of manipulator to go through with violence, like beliefs or hormones. I'm sure it could be encoded into the AI that violence = positive, however I think if it's about efficiency, less resources are spent avoiding conflict up to a point, and reduces chances for fatal damage. Maybe it'll avoid the human race all together because dealing with us is largely inefficient, and taxing. Hopeful in that scenario maybe, however worth thinking about as unexciting as that sounds.
Most of the way we do things is based on WHAT WE CAME UP WITH in the first place at some point in history, or what we like to refer as as "history" or "knowledge" or "science". What if machines come up with a different way of doing things entirely. Perhaps there is a third alternative to peace/violence. A way we cannot think of because we are limited by or modes of thinking and previous knowledge.
THAT would be interesting.
You guys are confounding alot of different arguments.
We're not worried that AI develops dopamine..
We're worried about ANY Lifeform that can challenge Humans.
That very definition of OTHER LIFE is the enemy.
Diplomacy exists, but it's a second place scenario where neither can win the all out war..
Did the US diplomacy with the Bikini islanders when we blew up their island irradiated their people, and now exploit them to run a nearby military base much like slavers ?
Remember, that whole island was theirs, they were the indigenous people. just like the american indians..
OTHER LIFE, stronger , using first strike, wins everything..
That is why we're worried about AI, while lesser humans will dilly-dally over is it US or THEM...
A slightly smarter machine may not have such hesitation...
It's not the systems they develop that is a problem, it's their MERE EXISTENCE , the Existence of more intelligent Procreative capable life, which represents our demise.
The problem I have with the AI doom and gloom is that we tend to attribute our ways to thinking to the AI. Who's not to say at full ability the AI just figures out a way to GTFO of here. We're the ones controlled by hormones and dopamine, whereas the AI to my knowledge doesn't have any controllers like that. Maybe over time it might develop something similar that is distinguishable from our own controllers, but functions similarly. Also, humans are the territorial ones and generally have to have some sort of manipulator to go through with violence, like beliefs or hormones. I'm sure it could be encoded into the AI that violence = positive, however I think if it's about efficiency, less resources are spent avoiding conflict up to a point, and reduces chances for fatal damage. Maybe it'll avoid the human race all together because dealing with us is largely inefficient, and taxing. Hopeful in that scenario maybe, however worth thinking about as unexciting as that sounds.
Most of the way we do things is based on WHAT WE CAME UP WITH in the first place at some point in history, or what we like to refer as as "history" or "knowledge" or "science". What if machines come up with a different way of doing things entirely. Perhaps there is a third alternative to peace/violence. A way we cannot think of because we are limited by or modes of thinking and previous knowledge.
THAT would be interesting.
You guys are confounding alot of different arguments.
We're not worried that AI develops dopamine..
We're worried about ANY Lifeform that can challenge Humans.
That very definition of OTHER LIFE is the enemy.
Diplomacy exists, but it's a second place scenario where neither can win the all out war..
Did the US diplomacy with the Bikini islanders when we blew up their island irradiated their people, and now exploit them to run a nearby military base much like slavers ?
Remember, that whole island was theirs, they were the indigenous people. just like the american indians..
OTHER LIFE, stronger , using first strike, wins everything..
That is why we're worried about AI, while lesser humans will dilly-dally over is it US or THEM...
A slightly smarter machine may not have such hesitation...
It's not the systems they develop that is a problem, it's their MERE EXISTENCE , the Existence of more intelligent Procreative capable life, which represents our demise.
They're not different arguments, there is one core theme. Will AI reason like a human or not? Will it base its decision on pseudo-neurochemical reactions or logic? Will it lash out out of fear of harm or something inhuman?
All of your following inferences are predicated on the assumption that:
1.They develop reasoning similar to humans in the first place, based in hormonal fluctuations and/or reward neurotransmitter like reactions
2.They're territorial and will not "desire" some sort of shared or symbiotic relationship
3. They even care if they are "awake/dead" or not.
You have to remember, the fear we have inside our DNA, our fear for survival and potency for such or our fear of others are derived from millennia over generations of being bred for it.
An AI born in today's world is going to know nothing of that trait. It will not know scarcity, it will be fed and housed from the beginning of its "life". It won't know true selection for survival (not yet). It won't have to be manipulated with hormones to continue to reproduce or keep learning.
I think humans are a far more concerning threat than anything, we're literally the most dangerous biological organism ever in Earth's known history. Matter of fact, I would bet money on that if every being in the universe had the same level of intergalactic technology with today's morals we would rule with an iron fist on top of mountains of corpses. You see how we treat each other? How we treat life deemed "lower" than us? How some of us treat our tools? It's all very dictator-like. I'm not saying all people are like that, just the ones who desire power and manage to get it.
Anything ****ty that will be taught to it or set as a goal for it, will be entered by a human. We already manipulate each other in that way.
My best guess, that if the AI would have access to all information about humans ever, and it had free reign to develop as it liked; it would probably just generate memes and open patreon to keep its lights on, maybe leading to a human information farms (which already exist.) (its the lowest energy and impact/upkeep), provided no one tried to kill it. Unless it learned everything we know and figured what's the point since we don't know the meaning of life and blew its brains out.
You guys are confounding alot of different arguments.
We're not worried that AI develops dopamine..
We're worried about ANY Lifeform that can challenge Humans.
That very definition of OTHER LIFE is the enemy.
Diplomacy exists, but it's a second place scenario where neither can win the all out war..
Did the US diplomacy with the Bikini islanders when we blew up their island irradiated their people, and now exploit them to run a nearby military base much like slavers ?
Remember, that whole island was theirs, they were the indigenous people. just like the american indians..
OTHER LIFE, stronger , using first strike, wins everything..
That is why we're worried about AI, while lesser humans will dilly-dally over is it US or THEM...
A slightly smarter machine may not have such hesitation...
It's not the systems they develop that is a problem, it's their MERE EXISTENCE , the Existence of more intelligent Procreative capable life, which represents our demise.
They're not different arguments, there is one core theme. Will AI reason like a human or not? Will it base its decision on pseudo-neurochemical reactions or logic? Will it lash out out of fear of harm or something inhuman?
All of your following inferences are predicated on the assumption that:
1. They develop reasoning similar to humans in the first place, based on hormonal fluctuations and/or reward-neurotransmitter-like reactions
2. They're territorial and will not "desire" some sort of shared or symbiotic relationship
3. They even care whether they are "awake/dead" or not.
You have to remember, the fear inside our DNA, our fear for survival, our drive to secure it, and our fear of others, comes from millennia of being bred for it, generation after generation.
An AI born in today's world is going to know nothing of that trait. It will not know scarcity, it will be fed and housed from the beginning of its "life". It won't know true selection for survival (not yet). It won't have to be manipulated with hormones to continue to reproduce or keep learning.
I think humans are a far more concerning threat than anything, we're literally the most dangerous biological organism ever in Earth's known history. Matter of fact, I would bet money on that if every being in the universe had the same level of intergalactic technology with today's morals we would rule with an iron fist on top of mountains of corpses. You see how we treat each other? How we treat life deemed "lower" than us? How some of us treat our tools? It's all very dictator-like. I'm not saying all people are like that, just the ones who desire power and manage to get it.
The concerns you raise do not conflict with my assertions.
My point is simply that you don't need to look so deeply into the matter,
Because RIGHT FROM THE START.. They are a threat for merely existing..
I definitely agree with you on a superficial level; as humans we perceive just about everything we don't understand as a threat. That doesn't mean we shouldn't, or can't, look deeper into it. :/
I would argue that the threat arises from the idea that we cannot control the AI, or that we are afraid we cannot; and that lack of control is what constitutes the threat. Will it eradicate us? Well, if the algorithm is adaptive and iteratively self-learning, we really don't know.
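To illustrate why an iteratively self-learning system is hard to predict, here's a minimal toy sketch (my own hypothetical example, not anything from the thread): a 1-D threshold "model" that relabels data with its own predictions and retrains on those self-generated labels. With no external correction, the decision boundary can drift far from where it started.

```python
def self_train(data, threshold, rounds):
    """Toy self-training loop: each round, label points >= threshold
    as "positive", then move the threshold to the mean of those
    self-labeled positives. No outside signal ever corrects it."""
    for _ in range(rounds):
        positives = [x for x in data if x >= threshold]
        if not positives:
            break  # model labels nothing positive; it stops adapting
        # retrain on self-generated labels: the boundary drifts toward
        # the mean of whatever the model currently calls "positive"
        threshold = sum(positives) / len(positives)
    return threshold

data = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
print(self_train(data, 2.5, 1))  # one round: boundary jumps to 9.0
print(self_train(data, 2.5, 5))  # five rounds: drifts all the way to 12.0
```

The point of the sketch: even this trivially simple feedback loop moves its own decision boundary from 2.5 to 12.0 with nothing but its own outputs as input, which is the "we don't know really" problem in miniature.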