But let’s be honest - what’s one of the most common experiences with computers? They crash. And who has to switch it off and switch it on again? A human being.
Now, I understand that the things we use here (programs for making games, music, and art, and the games themselves) aren't mission-critical programming. Military systems are probably held to a higher standard, with secondary and tertiary systems, failsafes and so on.
But take self-driving cars, for example. A few years ago everyone was saying, “wow, we’ll have self-driving cars in a few years.” It turns out to be harder than people thought: a self-driving car might perform very well for ages, but it’s those tricky moments when it’s flummoxed, and a human wouldn’t be, that mean it’s still not ready for prime time as an “intelligent” system. Intelligence is precisely the trick of not being flummoxed by novelty.
Same with speech recognition and face recognition - heck, even OCR is still flaky to some extent. Expert systems? They’re helpful, but they aren’t going to replace doctors any time soon.
We’re the product of millions of years of evolution. The brain is still the most complex thing we know of, and there are so many ways it can go wrong. It’s a jerry-built contrivance: very robust in some ways, but surfing a narrow corridor of functionality in others.
So - yeah, maybe in 100 years or so, but not any time soon, at least not for self-aware, standalone AI. And even then it will probably be subject to the AI equivalent of epileptic fits.
I think what’s far more likely is continued cyborgification, a melding of humans and machines: we become more integrated with computers that do what computers do best, while our brains do what they do best (which is still pattern recognition, coping with novelty, and the like).