Augmented
The Golden Rule of AI is that we should try to use it for the betterment of humankind.
7.12.24
It feels like we are entering the second phase of the new AI technology lifecycle. We’ve heard plenty of fear and concern bandied about by those with sensationalist agendas (the media, a few publicity-hungry billionaires) and expressed on LinkedIn and other social media sites by creators who are genuinely worried about losing paying work to LLMs that can create content. But this drumbeat of angst is largely becoming background noise, at least among those who spend their days looking at things like LinkedIn and Substack. (Presumably the vast majority of the world isn’t really thinking about this kind of thing at all.)
Being concerned about Artificial Intelligence taking your job is no longer anything new. Denouncing those fears by raising the specter of the plow, the Model T’s impact on the folks who made horseshoes, the Luddites and the loom, or the fear of the printing press has quickly become rhetorical old hat. (Again, just among those who care about such debates at all; presumably most of the world doesn’t.) There’s not much interesting left to say in defense of exploring new technologies like these, and even less interesting (to me) in expressing fears about them. This genie is well out of the bottle at this point, and it isn’t going back in.
But there probably IS value in thinking and writing about how we should seek to direct this new swarm of powerful djinn in order to make a better world. Here are a few points I think are largely undebatable:
· Artificial Intelligence in several different forms is now a part of our world and will never go away.
· Attempts to retroactively police or apply intellectual property protections to the billions of bytes of source material these systems have been, and will continue to be, trained on are likely futile.
· This means that AI will generate new things based on its exposure to other media; these derivative works are probably not much different from what Shakespeare produced under the influence of Ovid and Chaucer, or what Tom Stoppard produced under the influence of Shakespeare.
· The Turing tests Alan Turing and William Gibson imagined are a thing of the past; ANY ability to truly discern AI rather than human agency behind a creation, dialog, or other action is (or soon will be) a fantasy. We can try all we like to build systems that will identify deep-fakes. (Indeed, there is some interesting discussion here about source veracity as a truly valuable use of the public-ledger technologies that Web3 is so good at.) But this will, at best, be as effective against non-human-generated content as the War on Drugs was against reefer.
· AI is here to stay and will be woven undetectably into the fabric of almost every element of our lives.
But!
· We can, and are very likely to, use these tools and technologies to augment human life.
· We should make this a mandate, much as the Golden Rule is for many of us.
· Law might help a little here, but mostly this will be a matter of human agency, and the innate goodness of humankind.
· If this last point feels unduly naive, consider how we (mostly) use fire for good. (Ahem, Curtis LeMay, Tokyo is calling. And yet we MUST still strive for a world in which we believe in people’s ability to make it better, not worse!)
So I hereby vow to push to use the vast suite of technologies we refer to by the shorthand of Artificial Intelligence to attempt to better humankind. When we invent ways for an AI Companion to help others, we do so with full recognition that it is not meant to replace human-to-human compassion, kindness, or instruction; it is built to augment the ability of humans to enrich one another’s lives.
What other purpose should any tool be constructed for?