A deepfake is media, such as an image, video, or audio clip, synthesized by artificial intelligence that depicts people, events, or situations that do not actually exist or never occurred. Alongside the positive potential to foster a culture of creative content, deepfakes also carry various social risks, such as threats to democracy through the manipulation of public opinion and use in crime. Based on the dangers of malicious uses of deepfake technology, this article reviews regulatory measures for the use of deepfake technology and explores how our society can use this artificial intelligence technology safely. To this end, it examines the risks posed by the malicious use of deepfake technology and reviews related domestic and foreign legislative trends, as well as the overall direction of responses to deepfake technology in general. General regulation of deepfakes requires a framework act containing regulatory provisions on deepfakes, and a regulatory approach that takes into account their characteristic features: falsity, the influence of media, and the rapid dissemination enabled by digital formats. This calls for introducing a mandatory labeling system for deepfake products, prohibiting the distribution of malicious deepfakes, imposing obligations on online service providers such as prior monitoring and self-regulation, and enabling rapid deletion to support victims. Legislation to this effect and active responses from developers and online platform operators are required at the same time.
The deepfake phenomenon is spreading worldwide, mainly through videos on web platforms, and it is urgent to address the issue in a timely manner. Recently, researchers have extensively discussed deepfake video datasets. However, it has been pointed out that existing deepfake datasets do not properly reflect the potential threat and realism of deepfakes due to various limitations. Although there is a need for research that establishes an agreed-upon concept of a high-quality dataset or proposes evaluation criteria, only a handful of studies have examined this to date. Therefore, this study focused on developing evaluation criteria for deepfake video datasets. In this study, the fitness of a deepfake dataset was defined and evaluation criteria were derived through a review of previous studies. AHP structuring and analysis were then performed to refine the evaluation criteria. The results showed that Facial Expression, Validation, and Data Characteristics are important determinants of data quality. This is interpreted as reflecting the importance of minimizing defects and presenting results based on scientific methods when evaluating quality. This study has implications in that it proposes the fitness and evaluation criteria of deepfake datasets. Since the evaluation criteria presented in this study were derived from items considered in previous studies, all of them are expected to be effective for quality improvement. They are also expected to serve as criteria for selecting an appropriate deepfake dataset or as a reference for designing a deepfake data benchmark. This study could not apply the proposed evaluation criteria to existing deepfake datasets. In future research, the proposed evaluation criteria will be applied to existing datasets to evaluate the strengths and weaknesses of each dataset and to consider the implications for deepfake research.
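As an illustration of the AHP step mentioned above, the following is a minimal sketch of how priority weights for criteria such as Facial Expression, Validation, and Data Characteristics could be derived from a pairwise comparison matrix via the principal eigenvector, with a consistency-ratio check. The criterion list and comparison values here are hypothetical placeholders, not the study's actual AHP instrument or judgments.

```python
import numpy as np

# Hypothetical criteria; the study's actual AHP hierarchy may differ.
criteria = ["Facial Expression", "Validation", "Data Characteristics"]

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale:
# A[i, j] = how strongly criterion i is preferred over criterion j.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1.0],
    [1/2, 1.0, 1.0],
])

# Priority weights = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[principal]
ci = (lambda_max - n) / (n - 1)
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
cr = ci / ri if ri > 0 else 0.0

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"Consistency ratio: {cr:.3f} (judgments usually accepted if CR < 0.1)")
```

In practice, such weights would be aggregated over multiple expert questionnaires and sub-criteria before ranking the determinants of dataset quality.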
In recent years, technology has progressed in so many ways that it has given cyber society a capability at which only computers excel: the forgery of media, which was previously unattainable. "Deepfake" is the term used to describe this kind of deception. The majority of well-documented deepfakes are produced using Generative Adversarial Network (GAN) models, which are essentially two distinct machine learning models playing the roles of attack and defence: a generator creates deepfakes and a discriminator identifies them, and training continues until the discriminator can no longer tell the fakes apart from real media. Using this model, it is possible to generate new media that shares the statistical characteristics of the training set, resulting in near-ideal deepfake media. Because the alterations are carried out utilising high-level features, they cannot be seen with the human eye. However, it is entirely feasible to develop an algorithm that automatically identifies this kind of tampering distributed via the internet. This not only enables a search to extend beyond a single media item, but also across a large library of mixed media; the more such a detector learns, the better it becomes as artificial intelligence takes over with full automation. At the same time, new models are constantly being developed to create better deepfakes, making it increasingly difficult to distinguish genuine material from manipulated material.
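To make the adversarial setup described above concrete, here is a minimal PyTorch sketch of the generator/discriminator training loop. The architectures, dimensions, optimizer settings, and placeholder data are illustrative assumptions, not the pipeline of any specific deepfake system; real face-swapping GANs operate on images with convolutional networks rather than flat vectors.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; real deepfake GANs work on image tensors.
LATENT_DIM, DATA_DIM = 64, 128

# Generator: maps random noise to synthetic samples (the "attack" role).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores samples as real or fake (the "defence" role).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Update the discriminator: separate real samples from generated ones.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update the generator: produce samples the discriminator calls real.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Random placeholder "real" data stands in for a face dataset in this sketch.
for _ in range(100):
    training_step(torch.randn(32, DATA_DIM))
```

The equilibrium this loop pushes toward, where the discriminator's accuracy drops to chance, is precisely why detection research must look for artifacts beyond what the training discriminator itself can see.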