AI Threats: Adversarial Examples with a Quantum-Inspired Algorithm

Kuo Chun Tseng, Wei Chieh Lai, Wei Chun Huang, Yao Chung Chang, Sherali Zeadally

Research output: Contribution to specialist publication › Article

Abstract

AI is integral to our lives and to consumer electronics such as biometric recognition, autonomous vehicles, and voice assistants. However, the use of AI in consumer electronics also faces serious security threats. Attackers can generate adversarial examples to exploit AI vulnerabilities for specific attacks. This study discusses potential attack chains involving adversarial examples and current developments in the broadly applicable field of image recognition. We also propose a simple black-box framework for generating adversarial examples that can be used to attack AI models. The framework allows metaheuristics or other algorithms to be swapped in easily. The implementation includes several classic metaheuristics and introduces an effective quantum-inspired metaheuristic with an average success rate of 96.2%, achieving attack efficacy nearly equivalent to that of white-box attacks. Its convergence is also superior to that of other well-known metaheuristic algorithms.
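To illustrate the kind of design the abstract describes, the following is a minimal sketch of a black-box adversarial-example loop with a swappable search strategy. All names and interfaces here (RandomSearch, query_model, black_box_attack) are illustrative assumptions, not the authors' actual implementation or their quantum-inspired algorithm; any optimizer exposing the same propose() interface could be swapped in.

```python
import numpy as np

class RandomSearch:
    """Stand-in metaheuristic: proposes bounded random perturbations.
    A quantum-inspired or other metaheuristic could replace this class."""
    def __init__(self, epsilon=0.05, population=20):
        self.epsilon = epsilon        # L-infinity perturbation budget
        self.population = population  # candidates per iteration

    def propose(self, shape):
        # Candidate perturbations drawn inside the epsilon ball.
        return np.random.uniform(-self.epsilon, self.epsilon,
                                 size=(self.population, *shape))

def black_box_attack(query_model, image, true_label, optimizer, iterations=100):
    """Search for a perturbation that flips the model's prediction.

    query_model: callable returning class probabilities (black-box access only).
    optimizer:   any object with a propose(shape) method, so different
                 metaheuristics can be plugged in without other changes.
    """
    best_delta, best_score = None, np.inf
    for _ in range(iterations):
        for delta in optimizer.propose(image.shape):
            adv = np.clip(image + delta, 0.0, 1.0)
            probs = query_model(adv)
            if np.argmax(probs) != true_label:   # misclassification achieved
                return adv
            score = probs[true_label]            # confidence in the true class
            if score < best_score:               # keep the most promising candidate
                best_score, best_delta = score, delta
    return np.clip(image + best_delta, 0.0, 1.0) if best_delta is not None else image
```

Because the attack only queries the model for output probabilities, it matches the black-box setting described above; the choice of optimizer is the only component that changes between the classic metaheuristics and the proposed quantum-inspired one.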

Original language: English
Pages: 1-8
Number of pages: 8
Specialist publication: IEEE Consumer Electronics Magazine
DOIs
State: Accepted/In press - 2024

Bibliographical note

Publisher Copyright:
IEEE

Keywords

  • Adaptation models
  • Artificial intelligence
  • Closed box
  • Consumer electronics
  • Data models
  • Glass box
  • Perturbation methods

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Science Applications
  • Electrical and Electronic Engineering
