Friday, November 30, 2012

Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct

from the but-will-skynet-let-it-happen? dept

As the march of technology progresses, folks are coming up with all kinds of interesting questions regarding the machines we use every day. I wrote a while back about one researcher questioning whether or not robots deserve rights, for instance. On the flip side of the benevolence coin, I also had the distinct pleasure of discussing one sports journalist's opinion that we had to outlaw American football as we know it today for the obvious reason that the machines are preparing to take over and s#@% is about to get real.

Hyperbole aside, one group is proposing a more reasonable, nuanced platform to study possible pitfalls regarding technology and mankind's dominance over it.
A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose "extinction-level" risks to our species.
Now, it would be quite easy to simply have a laugh at this proposal and write off concerns about extinction-level technological disasters as the stuff of science fiction movies, and to some extent I wouldn't disagree with that notion, but this group certainly does appear to be keeping a level head about the subject. There doesn't seem to be a great deal of fear-mongering coming out of the group, unlike what we see in cybersecurity debates, and its founding members aren't exactly Luddites. That said, even some of the group's members seem to realize how far-fetched this all sounds, such as Huw Price, the Bertrand Russell Professor of Philosophy and one of the group's founding members.
"Nature didn't anticipate us, and we in our turn shouldn't take AGI for granted. We need to take seriously the possibility that there might be a "Pandora's box" moment with AGI that, if missed, could be disastrous. I don't mean that we can predict this with certainty, no one is presently in a position to do that, but that's the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies."
Unfortunately, the reasonable nature of Price's wish to simply study the potential for a problem does indeed lead to what seem like laughable worries. For example, Price goes on to worry that an explosion in computing power, combined with the possibility of software writing new software, will relegate humanity to the back burner in competition with machines for global resources. My issue is that these researchers appear to equate intelligence with consciousness. Or, at the very least, they assume that a machine as intelligent as or even more intelligent than a human being will also have a human's motivation for dominance, expansion, or procreation (as in writing new software or creating more machines). Following the story logically, and having written a novel discussing exactly that subject matter, I'm just not sure how the researchers got from point A to point B without a little science fiction magic worked into the mix.

So, while it would seem to be unreasonable to decry studying the subject, I would hope this or any other group looking at the possible negative impact of expanding technology would try to keep their sights on the most likely scenarios and stay away from the more fantastical, albeit entertaining, possibilities.
