Hot on the heels of a spike in public awareness of ChatGPT, artificial intelligence software that's capable of all sorts of things previously doable only by humans, comes "deepfake" video generation. As in, computer technology that can create "animated" videos of people that are difficult to distinguish from the real thing. This, per Rule 34, immediately went porn, as the harrowing tale of internet streamer QTCinderella reveals.
For the click-thru-resistant, there's a growing subculture in porn, where existing videos of porn stars are altered to incorporate the face of whomever the deepfaker chooses. It's the video equivalent of Photoshop, and the tech appears to be advancing quite rapidly.
How long until it becomes difficult or impossible to tell that a video is a fake?
It's easy to say "something should be done" about this. The average person would be aghast were he or she "deepfaked" into hardcore pornography or criminal activity, and given the old "a lie will fly around the whole world while the truth is getting its boots on" adage, the harm such a fake could do may be immeasurable.
The problem lies in the "what," of course. What can the legislative and legal systems do within the bounds of Constitutional protections for liberty and within the context of liberty itself? Parody, as most are likely aware, is protected speech, as is satire, and even apart from those, the degree to which one can assert control of others' uses of one's appearance is not infinite. Public figures forgo some right to privacy and image-control because they choose to be public figures. Conversely, individuals, even public ones, retain "personality rights" that protect against unauthorized commercial use of their images and other elements of their identities.
Those are property rights, and personalities such as Kim Kardashian have used them (in her case, copyright law) to challenge deepfakes.
There are also the slander and libel exceptions to free speech at play, and it's rather clear that crafting a video of a person doing or saying something he or she did not do or say could fall into those categories, especially if the words or deeds disparage or harm the person's reputation.
Parody, satire, and the like cannot, however, be abrogated simply because technology has advanced and sorting truth from fiction has become harder. See: Poe’s Law. We have already started on the slippery slope that grants censorious power to the offended upon mere declaration thereof. Such ceding of power invites, nay, begs for abuse, as the ever-growing list of slights, offenses, and the like shows us. Tools that society creates to protect against deepfake abuses must be conceived carefully, lest they abet those who'd quash free speech and other liberties.
At the core of all this is the question of self-ownership. To what degree do you own your likeness? How much restrictive or exclusionary power can you exert upon others within the bounds of individual liberty? When does a depiction cross the line from art to violation?
We turn to the law, both common and statutory, for such questions, and there is ample statutory and case law to guide us in disputes in more 'traditional' media. The law, however, is not a static thing. Nor should it be. This deepfake business raises new questions and warrants new debates, as technological innovations so often do. As new realities emerge, so must our responses as both liberty lovers and advocates for individual rights evolve.
We are in the midst of a seismic battle over freedom of speech, social media platforms, and the latter's potential status as "state actors." The foundations for such a debate didn't exist a few decades ago, but now the matter has become a critical front in the fight for rights and liberties. Deepfakes, whether the written form produced by ChatGPT or the audio-visual form used to create false "recordings" of public figures, have disruptive potential as substantial as social media's.
Which is to say, world-changing.
When students can turn to AI to draft essays and term papers indistinguishable from human work, teachers will have to find new ways to judge academic success.
When fake videos can be created, recordings of improper behavior will lose evidentiary value in criminal prosecutions.
When videos depicting politicians telling lies, doing bad things, or contradicting themselves can be made from scratch, voters will have to redouble their skepticism and caution in selecting their representatives.
When AI can effectively mimic humans in both appearance and language, whole industries will appear and disappear.
I'm not a dystopian. I believe that human ingenuity, which is at the root of all our advances in living conditions and prosperity, will continue to do its thing. The biggest obstacle, the true source of dystopian misery, is big government, and I'm certain that big government types will attempt to leverage AI's expansion into our lives to advance their harmful agendas, just as they have with global warming.
That doesn't mean we shouldn't pay attention to its impact on our lives. We shouldn't fall prey to alarmism, but we shouldn't dismiss it as a nothing-burger either. I don't know what the right answer to all this is, and I cannot say what degree of ownership of our visage we can assert in the face (pun belatedly realized) of all this. I do know that many parties, with disparate priorities, will make a lot of noise in the coming months and years. We must take care to balance individuals' rights against others' liberties while resisting the ever-present drive to authoritarian lockdown. For it is certain that the preferred government response will not be compatible with freedom.
I didn't think tech would get this far before I died. I remember when I was a young teen having to comb through the Sears catalog or other such magazines to get my thrills. Maybe I was born too soon. Or too late, not sure which.
Here's the primary law that would currently apply to deepfake technology in most states. Unlike defamation actions such as libel or slander, it typically requires showing that the false portrayal caused emotional distress to the person depicted: https://en.wikipedia.org/wiki/False_light