Ever since reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, I've been worried about the implications of an upcoming superintelligent AI takeover. You may think it's all BS, but many smart people are worried, including Elon Musk, Bill Gates, Stephen Hawking, Stuart Russell, and Demis Hassabis. Ray Kurzweil was hired by Google as a high-level exec, which suggests Larry and Sergey share his views. It seems like we should take the possibility of this happening into account when planning out our lives. Many of the things we worry about and work for may not even matter if the technological singularity becomes reality.
Also, Kurzweil promised that by 2020 we would have holographic phones, a world government, and a computer that passes the Turing test. Still 3 months to make it happen.
We do have chatbots that pass the Turing test. Yet they are useless.
So they only proved that the Turing test was bogus.
Businessmen BSing about AI shouldn't worry you. It's just a marketing machine, nothing else.
Would an AI that takes over be that bad? Dinosaurs reigned once, why should humans last forever?
Not at all.
Nope, at least not with the current approach of function approximation
I think AI could become an existential threat sometime this century, but not anytime soon. The StarCraft, DOTA, and poker AIs show an ability to outthink humans in somewhat realistic settings. But those AIs work inside a game and could only pose a threat if they were trained for the real world, which would be difficult or impossible because you would need to simulate the world during training.
I doubt an intelligence explosion will happen. The original singularity happened billions of years ago with self-replicating molecules. It took a very large number of evolutionary experiments to come up with the human brain. Whatever SI emerges in the future will still have to deal with physics. It will have to run long, time-consuming experiments to master physics, and that can take a long time.
Eventually? Yes. Sooner than most people realize? Yes. Is it imminent? No way. There are multiple ??? steps between here and there.
1) Considering how badly Netflix predicts what I want to watch, and how badly DoorDash/Seamless predict what I want to order, I'm not worried about any AI yet. So far ML runs on simplistic, non-relevant matching: a person who loved one movie with Kevin Spacey is doomed to be shown every movie with Kevin Spacey, and if you ordered spicy tuna once you're doomed to love it forever. 2) If we ever create the next iteration of intelligence, let's call it superintelligence (SI), the SI would know how to handle us gently so we wouldn't even notice it taking over. Personally, I'm waiting for the nanobots that can make me look like young Monica Bellucci to start working, as Kurzweil promised.
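The "one Kevin Spacey movie means all Kevin Spacey movies" complaint can be sketched in a few lines. This is a toy illustration of naive attribute matching, not any real service's algorithm; the catalog, tags, and function name are all made up for the example.

```python
# Toy catalog: each title mapped to a set of surface attributes (made-up tags).
CATALOG = {
    "The Usual Suspects": {"kevin_spacey", "crime", "1990s"},
    "American Beauty": {"kevin_spacey", "drama", "1990s"},
    "Heat": {"al_pacino", "crime", "1990s"},
}

def naive_recommend(liked_attrs, catalog):
    """Recommend every title that shares at least one attribute with what you liked."""
    return [title for title, attrs in catalog.items() if attrs & liked_attrs]

# Liking anything tagged "kevin_spacey" pulls in the entire Spacey filmography,
# regardless of genre -- exactly the non-relevant matching described above.
print(naive_recommend({"kevin_spacey"}, CATALOG))
```

Real recommenders use collaborative filtering rather than literal tag overlap, but when the training signal is thin they degenerate into behavior indistinguishable from this.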