"I think we're absolutely at an inflection point. This technology is incredible. I do believe it's the future. But, at the same time, it's like we're opening Pandora's Box. And we need safeguards to adopt it responsibly."
These words are from a 22-year-old in the news today who’s written some code to identify whether a piece of writing (read: kids’ school essays) was produced by an AI or a human. (And kudos to him – it’s great that someone has acted so quickly on the threat of the education-bomb that is generative AI.) Seemingly wise words from a young man. We are indeed opening Pandora’s Box. We all know it, whether consciously or unconsciously. People I talk to say they’re ‘excited and terrified’, almost without exception. Perhaps it is exciting, in a sense, but the fact that everyone says it’s terrifying tells you everything you need to know.
Unfortunately, the thing about technology is that we can’t un-invent what we invent. We can’t undo what we’ve done. Yes, we can try to make treaties restricting the awful shit we’ve created, like those limiting the use of biological or nuclear weapons. But the technology is still there. It’s still threatening. It could still destroy us. The smart thing to do would probably be to pause development of this tech. But because we’re threatened by the idea of not doing it (if the US doesn’t, then China will, and vice versa, and on and on…), we feel we have no choice. We are pushed irreversibly by our fear and separation into risk without consideration.
The safeguards he talks about have been mooted many times before. All the biggest brains of our time have talked about the risks of AI and the safeguards that may be required. A term you’ll be hearing more over the coming months and years is ‘alignment’ – that is, getting the AI to do what we actually tell it to do (i.e. aligning it with our wishes). The term is born of the risk, presumably very real (otherwise, why would the term even exist?), that AI may end up deciding to do its own thing. Our history with safeguards, unfortunately, is poor. We have largely failed to restrict the damage caused by cars or weapons or power generation or most, if not all, of our inventions – although when the problems intensify we do eventually act.
We can try to program safeguards into the AI, but I’m doubtful they’ll work. (I’m sorry if I come across as a Negative Nellie. Is that actually a thing?) The safeguards will be programmed by imperfect, biased humans who will overlook things or have imperfect agendas; and the AI will quickly rise to a level of (seeming) intelligence such that it will be very difficult to predict how it evolves. This is why Stephen Hawking (who said it best, IMO) likened proceeding with AI to ‘inviting aliens to planet earth’.
He was very much right. It’s as unpredictable as an alien race. We don’t know if the machines will stay in service to us as programmed, or decide that humans are the problem on this planet. (If the latter – well, that’s hardly an unintelligent conclusion. LMK if you disagree!) I’d probably feel a bit better about all this if it weren’t for other developments like Skynet’s – sorry, Boston Dynamics’ – horrifying line of robots. (Um, did someone say something about not being able to undo what we’ve done?)
So nice thinking, clever 22-year-old. Let’s program in our safeguards. But let’s not be under any illusions that our safeguards are much more than a white line on the road, helping keep the vehicle in its lane.
Enjoy every day. We all have many, many things to be grateful for. This is good advice regardless of the potential robot apocalypse.