Are you worried about artificial superintelligence?

New / Eng /dev/null/
Sep 28 13 Comments

Ever since reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, I've been worried about the implications of an upcoming superintelligent AI takeover. You may think it's all BS, but many smart people are worried, including Elon Musk, Bill Gates, Stephen Hawking, Stuart Russell, and Demis Hassabis. Ray Kurzweil was hired by Google as a high-level exec, which suggests Larry and Sergey share his views. It seems like we should take the possibility of this happening into account when planning out our lives. Many things we are worried about and working for may not even matter if the technological singularity becomes reality.

TOP 13 Comments
  • Kaspersky Lab / HR mayfair
    Also, Kurzweil promised that by 2020 we would have holographic phones, a world government, and a computer that passes the Turing test. Still 3 months left to make it happen.
    Sep 28 4
    • Amazon bcif84ju7u
      We do have chatbots that pass the Turing test. Yet, they are useless
      Sep 28
    • Kaspersky Lab / HR mayfair
      So they only proved that the Turing test was bogus
      Sep 28
    • Amazon bcif84ju7u
      Correct
      Sep 28
    • Meetup svk837
      Turing test is more a test of humans than it is AI. Turns out we have low standards for what’s considered normal conversation.
      Sep 28
  • Kaspersky Lab / HR mayfair
    1) Considering how badly Netflix predicts what I want to watch and how badly DoorDash/Seamless predict what I want to order, I am not worried about any AI yet. So far ML works on simplistic, mostly irrelevant rules: a person who loved one movie with Kevin Spacey is doomed to love all movies with Kevin Spacey, and if you ordered spicy tuna once you're doomed to order it forever.
    2) If we ever create the next iteration of intelligence, let's call it superintelligence (SI), it would know how to handle us gently so we won't even understand that it's taking over.
    I'm personally waiting for the nanobots that can make me look like young Monica Bellucci to start working, as Kurzweil promised.
    Sep 28 0
  • Google / Eng Workwkdkw
    I doubt an intelligence explosion will happen. The original singularity happened billions of years ago with self-replicating molecules. It took a very large number of evolutionary experiments to come up with the human brain. Whatever SI emerges in the future will still have to deal with physics. It will have to run long, time-consuming experiments to master physics, and that can take a long time.
    Sep 28 0
  • New / R&D judgejudE
    Would an AI that takes over be that bad? Dinosaurs reigned once, why should humans last forever?
    Sep 28 0
  • Amazon bcif84ju7u
    Businessmen BSing about AI shouldn’t worry you. It is just a marketing machine, nothing else.
    Sep 28 0
  • Meetup svk837
    Eventually? Yes.

    Sooner than most people realize? Yes.

    Is it imminent? No way. There are multiple ??? steps between here and there.
    Sep 28 0
  • Google / Eng d3j88wq
    I think AI could become an existential threat sometime in this century, but not anytime soon. The StarCraft, DOTA, and poker AIs show an ability to outthink humans in a somewhat realistic setting. Those AIs work in a game and could only pose a threat if they were trained for the real world, which would be difficult or impossible because you would need to simulate the world during training.
    Sep 28 0
  • Amazon ICCs84
    Nope, at least not with the current approach of function approximation
    Sep 28 0
  • Microsoft pig pig
    Not at all.
    Sep 28 0
