[Editor’s note: This story originally was published by Real Clear Wire.]
By Carrie Goldberg
Real Clear Wire
As the founder of a victims’ rights law firm focused on technology-facilitated sex crimes, I’m in the business of protecting my clients. That means making use of every tool available to keep them safe. Facial recognition is one of those tools. The events of this month have made it abundantly clear that facial recognition is here to stay.
On January 6, hundreds of people ransacked the U.S. Capitol. Americans understood that every available tool should be used to identify the insurrectionists. Clearview AI was one of those tools; in a moment of unmitigated chaos, law enforcers saw it as a product they could rely on. The company saw a 26 percent surge in usage after the riot.
This technology should also be leveraged to protect victims of Internet-facilitated violence, harassment, and fraud. In particular, law enforcement must become more proactive in investigating child sexual-abuse material (CSAM). The scale of the problem cannot be overstated. In 2019, more than 70 million images and videos were reported to the National Center for Missing and Exploited Children (NCMEC). The images exposed horrific instances of abuse, including the rape and torture of infants, and each image compounds the underlying act of abuse with the separate crimes of production, distribution, and possession. CSAM can remain online for decades, compounding the victim's trauma to an unimaginable degree.
Big Tech, with its encryption, live-streaming, and cloud-sharing options, has exacerbated the crisis. According to news reports, Facebook Messenger was responsible for 12 million of the 18.4 million reports of CSAM – meaning that the social-media giant is profiting from CSAM.
I first learned about Clearview AI last year, when the company asked me to conduct an independent review of its technology. I concluded that Clearview AI offers unprecedented capabilities to identify stalkers, rapists, child abusers, and other online predators, and that it could discover previously unidentified child victims depicted in the CSAM proliferating online. I also realized that Clearview could potentially identify serial rapists – including those with outstanding restraining orders against them and those already registered as sex offenders – who use dating apps as their personal concierge.
Facial-recognition technology is inherently controversial, and Clearview AI has gone through a maturation process as it has strengthened its code of conduct and user-accountability protocols. Critics are right to be concerned about privacy. However, unlike Big Tech, Clearview AI has carefully adopted best practices to ensure that only authorized law enforcers – those investigating actual cases – have access to its powerful technology. Like all technology, it can be abused, but bad actors are flagged, reported, and permanently banned, and they risk losing their jobs.
Victims of CSAM deserve to know that they were targets of what many consider society's most vile and heinous crime. Yet some child victims of sexual abuse may not realize that they were filmed or that images of their abuse are in circulation. Victims of CSAM have a federally mandated right to be notified when images depicting them are discovered during a criminal investigation. They deserve to learn that they're on display online, and that they're eligible for special federal funds to help them.
Clearview AI’s technology may be the scalable solution to this confounding problem.
I have worked with hundreds of adult victims of nonconsensual pornography. Intimate images and videos are sometimes disseminated for years before the depicted individuals discover that their privacy has been stolen from them. Clearview AI offers the possibility of rapid notification for these victims. Law enforcers could be empowered to tell victims which websites contain their intimate images and videos so that they can effectuate removal.
Law enforcement agencies across the country are already using technologies such as Clearview AI to apprehend child predators, drug traffickers, financial fraudsters, and murderers. These agencies want and need the technology.
All technology has the power to be used for good or for evil. As we say at my firm, it's not the technology that's bad – it's the potential misuse. Civil-libertarian organizations like the ACLU have long been hostile toward laws that could protect victims of CSAM and revenge porn, acting instead in favor of untrammeled Big Tech and an unfettered Internet. It's inconsistent of these organizations to single out facial recognition for scrutiny when they celebrate far more intrusive technologies. Rather than attack facial recognition, they should help develop a thoughtful regulatory plan that enables law enforcement to scale up its efforts to help victims. Facial recognition can and should be harnessed to make the world a safer place.
Carrie Goldberg owns the victims' rights law firm C. A. Goldberg, PLLC, and is the author of Nobody's Victim: Fighting Psychos, Stalkers, Pervs, and Trolls.