• Question: What could happen if computers end up designing themselves better than we can?

    Asked by dawnofwarfan to Adam, Catherine, Leila on 22 Mar 2012.

      Leila Battison answered on 22 Mar 2012:


      The robot apocalypse!

      Actually, you’d be surprised how many people have thought about this possibility, and how, as we develop more and more complicated machines, we have to build in some kind of intelligence failsafes, so that even if artificial intelligence (AI) takes off, it will never be a danger to people. Mostly it’s been explored in science fiction, but there is real science and philosophy behind it. Isaac Asimov, a very good science fiction writer, came up with the ‘3 Laws of Robotics’:

      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey the orders given to it by human beings, except where such orders conflict with the First Law.
      3. A robot must protect its own existence as long as such protection doesn’t conflict with the First or Second Laws.

      How we manage to enforce these laws, though, I don’t know…


      Adam Stevens answered on 22 Mar 2012:


      They already do to a certain extent. Most robots (I mean industrial ones) are produced by robots. There’s essentially very little human input except at the design stage.

      The real question here is whether robots will ever develop their own intelligence and be able to learn like humans do. We think that at some point they will, but it’s not certain. Machine intelligence may never be able to match biological intelligence.

      There are lots of sci-fi stories about what happens when computers become self-aware.

      My favourite is one of Charles Stross’: an artificial intelligence learns and adapts and builds cleverer and cleverer robots, eventually developing a machine that works out time travel, goes into the past, and stops human beings from messing everything up!
