The confusion here is a matter of timing. The key takeaway is that as a system becomes more intelligent, it also becomes more capable of effecting change, and hence more dangerous.
Short term - Zuck is right: AIs are likely to be narrow-domain and unlikely to cause extinction-level events or massive loss of life without some stupidity on the part of human systems designers (e.g. putting one in charge of ICBMs).
Long term - Musk is right: if we assume that a recursively self-improving system is possible to build at some point (a decade, a century...), then we have to consider the potential risks (and benefits) of such a system as a limiting case. The key assumption is that this system will be able to effect change in the real world on its own, without human intervention, including making improvements to itself.
Benefits and risks include the very outlandish possibilities explored by various transhumanist thinkers - basically, whatever is not limited by the laws of physics (immortality, posthuman / post-singularity existence, uploading, etc.). Downsides include extinction, eternal enslavement, and destruction of the planet, the solar system, and beyond.
Comments
Think about it: AI in its benign form (self-driving cars) grew so fast in the last 4-5 years that in the next 4 years it has the potential to take away almost all jobs from a major job sector. No country is prepared for such a major disruption. Economies and societies are built around those jobs and take years to evolve and stabilize. If someone disrupts them with no backup plan, we're talking riots and looting here.
Regulation around AI is necessary. It's better to assemble a great task force to closely study its impact and help us prepare for it. Otherwise, we're fucked.
"Musk is an idiot because he thinks long term"
Well you are the idiot because you only think short term.