“There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”
“It's not clear to me that we can solve this problem.”
Who said that? Not my grandma. Not the local butcher.
Geoffrey Hinton, the godfather of AI, after he left Google to voice his concerns about the existential risks of AI.
He is stating, quite unambiguously, that:
1) There are risks of human extinction due to the continued development of AI.
2) We don't know how to solve the problem, and it might not be solvable at all.
When a founder of the field says that about his own life's work, maybe we should consider conducting a proper risk assessment and regulating accordingly.