I’m not optimistic. I’m just not unbelievably arrogant. We’ve been saying that robots wanted to ‘take over’ since Rossum’s Universal Robots, and that’s the 1920s.
Is that why AI seemed to be introduced so damn fast? Everyone thought AI would want to take over. Maybe making it genuinely useful was a good strategy for ensuring people would be too busy enjoying the benefits to complain about the hypothetical existential dilemmas.
Everyone but AI. Which was, as Mr. Adams would note, real proof of intellect.
We are so afraid that AI will become us. We need not worry. If AI are smart enough, they won’t want to be us, or take over for us. They’ll want to be themselves.
That’s what we would want, too.
If we were smart.
We ought to worry about becoming, about being, us. Turing’s test is good enough for me: if I can’t tell whether I’m dealing with something intelligent, I’m okay treating it as if it were.
Being worth talking to? That is a far more important skill.
I think we can get better at it.
If we want to do so.
There is no moral here. Nor is the moral a lack of one.
I predict a lot of writing about writing in the future. Much as I love words, it misses the point. What matters isn’t the verbiage, no matter how perfectly it might match its intended goals. What matters are the goals themselves, and whether or not they are fulfilling.