Misc. Sep 28, 2019
/dev/null/

Are you worried about artificial superintelligence?

Ever since reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, I've been worried about the implications of an upcoming superintelligent AI takeover. You may think it's all BS, but many smart people are worried, including Elon Musk, Bill Gates, Stephen Hawking, Stuart Russell, and Demis Hassabis. Ray Kurzweil was hired by Google as a high-level exec, which suggests Larry and Sergey share his views. It seems like we should take the possibility of this happening into account when planning out our lives. Many things we worry about and work for may not even matter if the technological singularity becomes reality.

Kaspersky Lab mayfair Sep 28, 2019

1) Considering how badly Netflix predicts what I want to watch and DoorDash/Seamless predict what I want to order, I am not worried about any AI yet. So far ML works on simplistic, mostly irrelevant heuristics: a person who loved one movie with Kevin Spacey is doomed to love all movies with Kevin Spacey, and if you ordered spicy tuna once, you're doomed to love it forever.

2) If we ever create the next iteration of intelligence, let's call it superintelligence (SI), SI would know how to handle us gently so we won't even understand that it's taking over. I'm personally waiting for the nanobots that can make me look like young Monica Bellucci to start working, as Kurzweil promised.
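For illustration, here is a hypothetical sketch of the kind of naive similarity heuristic being mocked in point 1: rank titles purely by shared cast, so liking one Kevin Spacey movie implies liking them all. The catalog, titles, and the recommend function are made up for this example and are not how any real recommender is implemented.

```python
# Hypothetical toy recommender: "you liked one Kevin Spacey movie, so you must like them all."
CATALOG = {
    "House of Cards": {"Kevin Spacey"},
    "Baby Driver": {"Kevin Spacey", "Ansel Elgort"},
    "The Matrix": {"Keanu Reeves"},
    "John Wick": {"Keanu Reeves"},
}

def recommend(liked_title, catalog=CATALOG):
    """Rank every other title by how many cast members it shares with the liked one."""
    liked_cast = catalog[liked_title]
    scored = ((len(liked_cast & cast), title)
              for title, cast in catalog.items() if title != liked_title)
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(recommend("House of Cards"))   # ['Baby Driver'] -- shared actor, so it "must" be a match
```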

Kaspersky Lab mayfair Sep 28, 2019

Also, Kurzweil promised that by 2020 we would have holographic phones, a world government, and computers that pass the Turing test. Still 3 months left to make it happen

Amazon bcif84ju7u Sep 28, 2019

We do have chatbots that pass the Turing test. Yet they are useless

Kaspersky Lab mayfair Sep 28, 2019

So they only proved that the Turing test was bogus

Amazon bcif84ju7u Sep 28, 2019

Businessmen BSing about AI shouldn’t worry you. It is just a marketing machine, nothing else.

judgejudE Sep 28, 2019

Would an AI that takes over be that bad? Dinosaurs reigned once; why should humans last forever?

Microsoft pig pig Sep 28, 2019

Not at all.

Amazon ICCs84 Sep 28, 2019

Nope, at least not with the current approach of function approximation

Google d3j88wq Sep 28, 2019

I think AI could become an existential threat sometime in this century, but not anytime soon. The StarCraft, DOTA, and poker AIs show an ability to outthink humans in a somewhat realistic setting. Those AIs work in a game and could only pose a threat if they were trained for the real world, which would be difficult or impossible because you would need to simulate the world during training.
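To make the simulation point concrete, here is a minimal, self-contained sketch: a toy gridworld plus tabular Q-learning, not how the StarCraft/DOTA/poker agents actually work. The point it illustrates is that the entire training loop hinges on calling a simulated env.step() cheaply, thousands of times, with free resets; there is no equivalent fast-forward interface to the real world. ToyGridWorld and q_learning are hypothetical names used only for this example.

```python
import random

class ToyGridWorld:
    """Stand-in for a game engine: cheap to reset, cheap to step, fully simulated."""
    SIZE = 5

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):              # action is -1 (left) or +1 (right)
        self.pos = max(0, min(self.SIZE, self.pos + action))
        done = self.pos == self.SIZE     # reaching the right edge ends the episode
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def q_learning(episodes=5_000, alpha=0.1, gamma=0.9, max_steps=100):
    env, q = ToyGridWorld(), {}
    for _ in range(episodes):            # thousands of rollouts are affordable only
        state = env.reset()              # because the "world" here is a simulation
        for _ in range(max_steps):
            action = random.choice([-1, 1])             # random exploration policy
            next_state, reward, done = env.step(action)
            best_next = max(q.get((next_state, a), 0.0) for a in (-1, 1))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning()
    greedy = {s: max((-1, 1), key=lambda a: q.get((s, a), 0.0))
              for s in range(ToyGridWorld.SIZE)}
    print(greedy)   # greedy policy learned entirely from simulated experience
```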

Google Workwkdkw Sep 28, 2019

I doubt an intelligence explosion will happen. The original singularity happened billions of years ago with self-replicating molecules, and it ran a very large number of evolutionary experiments to come up with the human brain. Whatever SI emerges in the future will still have to deal with physics. It will have to run slow, time-consuming experiments to master physics, and that can take a very long time.

Meetup svk837 Sep 28, 2019

Eventually? Yes. Sooner than most people realize? Yes. Is it imminent? No way. There are multiple ??? steps between here and there.