Back in the year 2000, when I was a young man, one of my favorite CDs was the new Our Lady Peace album “Spiritual Machines”. It was a concept album inspired by Ray Kurzweil’s book The Age of Spiritual Machines, and one track was this short spoken word piece:
“The year is 2029. The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. They’ll embody human qualities and claim to be human. And we’ll believe them.”
Since this LaMDA “interview” initially came out, I’ve read a bit about how the content is heavily curated, and perhaps the “evidence” would be a bit flimsy even if it weren’t, but still: with seven more years to go until 2029, I’m starting to wonder if that Our Lady Peace album might have been a more relevant predictor of our future than intended.
It will be interesting to revisit this same topic in 2029 and see how far we’ve come. With so many companies investing in AI, there will, if nothing else, be some big breakthroughs by then.
Back in 1995, when I was coding a neural network simulation library for my Master’s project in software engineering, I was very intrigued by these ideas. It was partly why Blade Runner had fascinated me so much. Incidentally, I rewatched it only a few days ago.
We’re at a time when people are losing trust in authority, in both government and police, and in Google, banks, and a heap of other institutions that seem to have been exposed as corrupt.
And now people are getting excited to put their faith in a neural net, a black box that has been trained to do a particular task but effectively has no moral compass at all.
The mind boggles.
At some point in time, people who have been given the responsibility to make decisions that impact others will pass the buck, finding it easier to go with ‘Google says’, or such-and-such AI says…
Not a case of if, but a case of when. Some executive or government bureaucrat, instead of reaching for the calculator and reducing everything down to the economics of the situation, is going to give the data to an AI, and that AI will be involved in making a decision that will affect actual human beings. It might be something that will be good for some, but bad for others. The economics… the AI of the situation.