
AI: Why?

I suppose that, with the new movie “Terminator Salvation” debuting at the box office this weekend, now is a good time to ask “Why?” Why should we even attempt to create artificial intelligence, otherwise known as AI? Would this be a good thing for the human race? Even presuming that it could be done, the question becomes: should we?

I mean, we could review our history of introducing new species that seemed, at the time, like benign improvements to our environment. Without thinking through the consequences of our actions, we released species meant to make our lives and surroundings better, only to find that, lacking natural “brakes,” they ran amok and now threaten the true natural ecosystem.
One example I can think of is the vine kudzu, which has overrun the areas where it was introduced and now grows rampant throughout the southern parts of the country, choking out the native vegetation.
Another good example that comes to mind is the release of “pet” pythons into the Florida Everglades, where they have proliferated unchecked and threaten the natural wildlife there. Yet another example is the lionfish, a tropical fish presumed to have been released from aquariums after Hurricane Andrew; it has found its way to the coral reefs of the Bahamas, where it has no natural enemies and is ravaging the native species.

These are but three examples of a natural world run amok, and they make one ask whether machines should be given even the chance to think independently. After all, machines would presumably be logical; they would be able to reason flawlessly from start to finish, and they might just conclude, “What do we need these sloppy humans for?”
We would, in effect, be the agents of our own destruction by boosting machine intelligence to a self-aware level. Such machines could conceivably be every bit as dangerous as the “machines” on the movie screen. Then again, maybe not. Do we dare take the chance? The innate trouble with humans is a curiosity that seems, against all logic, to make us push that button regardless of the possibility of extinction. It seems easier to build a machine that can outthink us than to make ourselves, as a people, more intelligent through education.

Artificial intelligence is already used to automate and replace some human functions with computer-driven machines. These machines can see and hear, respond to questions, learn, draw inferences and solve problems. But for the Singularitarians, A.I. refers to machines that will be both self-aware and superhuman in their intelligence, and capable of designing better computers and robots faster than humans can today. Such a shift, they say, would lead to a vast acceleration in technological improvements of all kinds.

nytimes.com

Of course, I am of the old school. I see people using telephones as cameras, computers, GPS units, everything up to and including, well, telephones. I would use a phone for its primary purpose, as a telephone, so perhaps I am not the most computer-qualified person to talk about this. Still, I have to ask: what is the upside of having a “toaster” that knows more than I do? Is this necessary?

Profiled in the documentary “Transcendent Man,” which had its premiere last month at the TriBeCa Film Festival, and with his own Singularity movie due later this year, Dr. Kurzweil has become a one-man marketing machine for the concept of post-humanism. He is the co-founder of Singularity University, a school supported by Google that will open in June with a grand goal — to “assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.”

Not content with the development of superhuman machines, Dr. Kurzweil envisions “uploading,” or the idea that the contents of our brain and thought processes can somehow be translated into a computing environment, making a form of immortality possible — within his lifetime.

That has led to no shortage of raised eyebrows among hard-nosed technologists in the engineering culture here, some of whom describe the Kurzweilian romance with supermachines as a new form of religion.

nytimes.com

Raymond Kurzweil is an AI pioneer who has sought to determine when this shift in intelligence might occur. His best calculation is that machines will achieve independent thought by 2045. That is a scary thought in and of itself, for it brings us back to the initial question: why would they need us, and in what capacity? Would we be partners, or something less? Would slavery be a bad thing if the overlords were machines? I think so, and perhaps worse for us, because mercy and compassion are human emotional responses that would not be part of a machine’s intellect. We might be treated as assets or liabilities, depending on their perceived use for us.

We have to ask ourselves, again and again: are we sure we want to go down this path?

Some things are better done by and for ourselves.
Blake