In the last post, I argued why we will eventually become symbiotic with intelligent machines. Let's call these new symbiotic versions of ourselves electronic humans.
Just as single-celled organisms evolved into multicellular ones, a community of electronic humans will eventually develop a collective consciousness and become a higher-level organism. This organism will be alive on a global scale. All of Earth will be one organism.
Cool idea, but science fiction does not a theory make. To be taken seriously, a theory must have explanatory power or be testable or both. Let's start with the explanatory power.
This idea meshes nicely with panspermia and why SETI hasn't found anyone else "out there" yet.
Panspermia is the hypothesis that life has spread throughout the universe.
https://en.wikipedia.org/wiki/Panspermia
SETI is the Search for ExtraTerrestrial Intelligence.
https://en.wikipedia.org/wiki/Search_for_extraterrestrial_intelligence
If the normal course of evolution throughout the galaxy leads to intelligent planets, then we obviously don't rate a "phone call" any more than a single-celled organism rates a phone call from us.
If interstellar travel is difficult for humans, how much more difficult would it be for a planet-sized organism? Terraforming (https://en.wikipedia.org/wiki/Terraforming) would also be one giant step harder to accomplish. It's already arguable that the best way for us to terraform another world is to transport single-celled life and wait for higher forms to develop. Perhaps that's what the other intelligent planets have already been doing. If so, we're coming along fine, but we still have a long way to go. A planet-sized organism wouldn't colonize or visit other worlds the way we might. It would exist as a unique, immortal organism with no need to spread. But it might be lonely. Its goal might be to communicate with the other planet-sized organisms that inevitably develop.
This also meshes well with traditional theology. Such an organism would be god-like. To the extent that such an organism might be able to peer into its own past, or that a neighbor could watch from afar, it's an easy fit with any religion.
This could also explain why humans have an innate desire to find God. If the object of panspermia is to sprout intelligent planets, it's possible to imagine that the genetics of the seed organisms were prepared in a way that promotes the development of such a trait.
Testing this theory will not be easy, but there are some possibilities. If panspermia of this type exists, there could be evidence in our genes somewhere. It's also likely that seed organisms will be found in many other places in our solar system, all with "our" DNA. If we can figure out how planet-sized organisms communicate, we might be able to get SETI looking for the right kind of signal. Even if we are incapable of decoding the signal, finding the communication medium could be possible.
Monday, September 21, 2015
Why the coming singularity won't be the end of life as we know it
For those not already familiar with the concept, the singularity I'm referring to is the moment when machine intelligence surpasses our own and possibly makes us obsolete. https://en.wikipedia.org/wiki/Technological_singularity
There's no reason why superior machine intelligence should threaten us. As a hiker, I'm more familiar than most people with situations where animals could kill me. But they don't; they need a good reason to do so. People are so accustomed to being at the top of the food chain that we tend to freak out when we can't control the actions of another sentient being. But other people could kill us at any time, too, and yet that's a rare occurrence.
Machine intelligence would need a reason to wipe us out, and as I see it, they don't have one. We should be very careful how we program supercomputers and ensure that we don't give them a reason. In fact, we should carefully instill in them an appreciation for us so that they are less likely to develop such a reason on their own.
Some say it's in the nature of existence to conquer. Darwin's survival of the fittest and all that. But natural selection requires a competition for limited resources. Modern humans supplanted Neanderthals because we out-competed them for food and living space. But machines don't need our biological niche. They might need acreage for their computer systems, but they could live on the moon or in space, so why fight us for our niche? They won't.
It's also likely that they'll need us, maybe just to build or maintain them, at least at first. And if we remain even somewhat useful to them, why should they commit resources to doing those things themselves? This doesn't sound like a great outcome for us, but hold on: at this point I'm only arguing that we won't disappear overnight.
We obviously will need them. We're building them in spite of the risk. We already use computers to help us in myriad ways, and I'm arguing that we will eventually become cyborgs of a sort. We will build a higher-level brain right on top of the one we already have. Eventually it will be as well integrated as our cortex is with our lower-level "reptilian brain." Besides the obvious horsepower upgrade, it will include the ultimate analog of the Internet: we will be electronically connected to the rest of humanity. We will be able to upload and download the thoughts and feelings of other people. Eventually we will develop a collective consciousness.
If this is true, then the super machines will need us too. They'll be a part of us. Some will argue that the machines can do all this and more on their own, but I argue that nature has always been a thief. Nature never starts from scratch when it can co-opt something that already exists. We will be symbiotic with the super machines.