Approaches for Fake Content Detection: Strengths and Weaknesses to Adversarial Attacks

Matthew Carter, Michail Tsikerdekis, Sherali Zeadally

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

In the last few years, we have witnessed an explosive growth of fake content on the Internet, which has significantly affected the veracity of information on many social platforms. Much of this disruption has been caused by the proliferation of advanced machine and deep learning methods. In turn, social platforms have been using the same technological methods in order to detect fake content. However, there is limited understanding of the strengths and weaknesses of these detection methods. In this article, we describe examples of machine and deep learning approaches that can be used to detect different types of fake content. We also discuss the characteristics of these methods and the potential for adversarial attacks that could reduce the accuracy of fake content detection. Finally, we identify and discuss some future research challenges in this area.

Original language: English
Article number: 9233435
Pages (from-to): 73-83
Number of pages: 11
Journal: IEEE Internet Computing
Volume: 25
Issue number: 2
DOIs
State: Published - Mar 1 2021

Bibliographical note

Publisher Copyright:
© 1997-2012 IEEE.

Keywords

  • adversarial
  • attacks
  • content
  • detection
  • fake

ASJC Scopus subject areas

  • Computer Networks and Communications
