South Korean police use deepfake detection tool ahead of election

Amid the sharp rise in politically motivated deepfakes, South Korea’s National Police Agency (KNPA) has developed and implemented a tool to detect AI-generated content for use in potential criminal investigations.

According to the KNPA’s National Office of Investigation (NOI), the deep learning program was trained on approximately 5.2 million data points from 5,400 Korean citizens. It can determine whether a video it has not been pre-trained on is real or fake in just five to ten minutes, with an accuracy rate of around 80%. The tool automatically generates a results sheet that police can use in criminal investigations.
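The KNPA has not published the tool's internals, but detectors of this kind commonly score individual video frames with a classifier and aggregate those scores into a single real-or-fake verdict. A minimal sketch of that aggregation step, with the function name, threshold, and input scores all hypothetical:

```python
# Hypothetical sketch: turning per-frame "fake" probabilities into a
# video-level verdict. This illustrates a common aggregation pattern,
# not the KNPA tool's actual method, which is not public.

def video_verdict(frame_scores, threshold=0.5):
    """Average per-frame fake probabilities and compare to a threshold.

    frame_scores: floats in [0, 1], one per sampled frame, as produced
    by some frame-level classifier (assumed to exist upstream).
    Returns (label, confidence).
    """
    scores = list(frame_scores)
    if not scores:
        raise ValueError("no frames scored")
    mean = sum(scores) / len(scores)
    label = "fake" if mean >= threshold else "real"
    # Confidence is the probability mass behind the chosen label.
    confidence = mean if label == "fake" else 1.0 - mean
    return label, confidence

# Example: consistently high fake scores yield a "fake" verdict.
print(video_verdict([0.9, 0.8, 0.95, 0.7]))
```

Real systems typically use more robust aggregation (e.g., temporal smoothing or majority voting across frames) rather than a plain mean, since a few misclassified frames should not flip the verdict.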

According to Korean media reports, these results will be used to inform investigations but not as direct evidence in criminal trials. The police also plan to collaborate with artificial intelligence experts in academia and industry.

AI security experts have called for the use of AI for good, including the detection of misinformation and deepfakes.

“That’s the thing: AI can help us analyze [false content] at any scale,” Gil Shwed, CEO of Check Point, told Dark Reading in an interview this week. While AI is the disease, he said, it is also the cure: “[Detecting fraud] in the past required very complex technologies, but with artificial intelligence you can do the same thing with a minimal amount of information, not just good, large amounts of information.”

The problem of deepfakes in Korea

While the rest of the world waits in anticipation for deepfakes to invade election seasons, Koreans have already confronted the issue up close and personal.

The canary-in-the-coal-mine moment came during the 2022 provincial elections, when a video circulating on social media appeared to show President Yoon Suk Yeol endorsing a local candidate for the ruling party.

This type of deception has become more widespread lately. Last month, the country’s National Election Commission revealed that it had detected 129 deepfakes in violation of election laws between January 29 and February 16, a figure that is only expected to increase as Election Day on April 10 approaches. This is despite a revised law, in effect since January 29, under which using deepfake videos, photos, or audio in connection with elections can earn a citizen up to seven years in prison and fines of up to 50 million won (approximately $37,500).

Not just misinformation

Check Point’s Shwed warned that, like any new technology, AI has its risks. “So yes, there are bad things that can happen and we have to defend ourselves from them,” he said.

False information isn’t so much the problem, he added. “The biggest problem in human conflict in general is that we don’t see the whole picture: we pick and choose [the news] that we want to see, and then based on that we make a decision,” he said.

“It’s not about misinformation, it’s about what you believe. And based on what you believe, you choose what information you want to see. Not the other way around.”


