Abstract
AI is integral to our lives and to consumer electronics such as biometric recognition, autonomous vehicles, and voice assistants. However, the use of AI in consumer electronics also faces serious security threats: attackers can craft adversarial examples that exploit vulnerabilities in AI models to mount targeted attacks. This study discusses potential attack chains based on adversarial examples and reviews current developments in the broadly applicable field of image recognition. We also propose a simple black-box framework for generating adversarial examples that can be used to attack AI models; the framework allows metaheuristics or other algorithms to be swapped in easily. The implementation includes several classic metaheuristics and introduces an effective quantum-inspired metaheuristic with an average success rate of 96.2%, achieving an attack efficacy nearly equivalent to that of white-box attacks. Its convergence is also superior to that of other well-known metaheuristic algorithms.
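The core idea of such a black-box framework — querying the target model's outputs only, and letting a pluggable search strategy drive the perturbation — can be illustrated with a minimal sketch. This is not the paper's implementation: the `predict` stub, the `random_step` move, and all parameter names are assumptions standing in for the actual classifier and metaheuristics.

```python
import numpy as np

def black_box_attack(predict, x, true_label, metaheuristic_step,
                     eps=0.5, iters=200, seed=0):
    """Query-only attack: lower the model's confidence in the true class
    while staying inside an eps-ball around the original input x.
    `metaheuristic_step` is the swappable search component."""
    rng = np.random.default_rng(seed)
    best = x.copy()
    best_score = predict(best)[true_label]  # confidence in true class
    for _ in range(iters):
        candidate = metaheuristic_step(best, rng, eps)
        candidate = np.clip(candidate, x - eps, x + eps)  # bound perturbation
        score = predict(candidate)[true_label]
        if score < best_score:  # keep only improving moves
            best, best_score = candidate, score
    return best

def random_step(x, rng, eps):
    # simplest possible (1+1) random-search move; any metaheuristic
    # (e.g. a quantum-inspired one, as in the paper) could replace this
    return x + rng.normal(0.0, eps / 4, size=x.shape)

# Toy linear softmax "model" standing in for the real target classifier
# (purely illustrative; the paper targets image-recognition models).
W = np.array([[2.0, -1.0], [-1.0, 2.0]])
def predict(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

x0 = np.array([1.0, 0.0])
adv = black_box_attack(predict, x0, true_label=0,
                       metaheuristic_step=random_step)
```

Because the search only ever accepts improving candidates, the adversarial input's true-class confidence is monotonically non-increasing, and the swap of `random_step` for any other metaheuristic requires no change to the framework itself.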
Original language | English |
---|---|
Pages | 1-8 |
Number of pages | 8 |
Specialist publication | IEEE Consumer Electronics Magazine |
DOIs | |
State | Accepted/In press - 2024 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Adaptation models
- Artificial intelligence
- Closed box
- Consumer electronics
- Data models
- Glass box
- Perturbation methods
ASJC Scopus subject areas
- Human-Computer Interaction
- Hardware and Architecture
- Computer Science Applications
- Electrical and Electronic Engineering