New software being tested aims to prevent “sexting,” the practice among young people of sharing explicit sexual images on social media.
The problem has been surging in recent years, leaving some minors with criminal records for the images they shared with friends.
The Belfast Telegraph reports that a British startup has developed software that detects such threats on the fly.
The app identifies potentially problematic images or videos even while children are composing messages or texts, the report said.
“It could be used on children’s phones to prevent them from creating, sending or receiving video or pictures involving nudity, sexual content and violence ‘before any damage is done.’”
It also could be used by social-media companies to halt any uploads or streaming of videos and images.
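The report does not describe how SafeToNet’s technology is implemented, but the general pattern it describes, classifying media on the device and blocking the send action before anything leaves the phone, can be sketched roughly as follows. The labels, threshold and the classify_media stub are illustrative assumptions, not details from the report.

```python
# Illustrative sketch only: a "check before send" gate of the kind the article
# describes. The classifier, labels and threshold are assumptions.

from dataclasses import dataclass

# Categories the report says the system flags.
BLOCKED_LABELS = {"nudity", "sexual_content", "violence"}
BLOCK_THRESHOLD = 0.80  # hypothetical confidence cutoff


@dataclass
class Scores:
    by_label: dict  # label -> confidence in [0, 1]


def classify_media(media_bytes: bytes) -> Scores:
    """Placeholder for an on-device image/video classifier.
    A real app would run a trained model here; this stub returns no flags."""
    return Scores(by_label={})


def allow_send(media_bytes: bytes) -> bool:
    """Return False if any blocked category exceeds the threshold,
    so the content is stopped before it is sent or uploaded."""
    scores = classify_media(media_bytes)
    return not any(
        scores.by_label.get(label, 0.0) >= BLOCK_THRESHOLD
        for label in BLOCKED_LABELS
    )


if __name__ == "__main__":
    photo = b"..."  # stand-in for an outgoing image's bytes
    print("send allowed" if allow_send(photo) else "send blocked on the device")
```

The same gate could in principle sit server-side, as the report suggests, with a social-media platform running the check before accepting an upload or stream.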
The president of the company, SafeToNet, said a new smartphone should be considered a “dangerous weapon.”
Richard Pursey said the software would target campaigns, well known to police, that groom children and engage in sextortion and bullying.
It also is being developed to identify gore, weapons and extreme violence.
Pursey said, “There’s no point you being told that yesterday your 12-year-old son sent a naked picture of himself, because it’s too late, you know. You’ve squeezed that tube of toothpaste, the paste has come out and you can’t put it back in again.”
The report said the artificial-intelligence system, SafeToWatch, “successfully detected 92% of content involving nudity and 84% of violent examples during initial analysis of millions of images and videos, and accuracy rates are likely to improve as the system training continues.”
It is expected to be available sometime next year.
The U.K.’s Christian Institute said there have been reports of girls as young as 6 composing sexually explicit messages.