Workshop on Adversarial Machine Learning and Voice Assistant Vulnerabilities


Explore the AIMCOM2 Workshop session on adversarial machine learning and voice assistant vulnerabilities, featuring talks by Ananthram Swami and Yingying Chen. Learn about recent advances in detecting and defending against adversarial attacks on machine learning systems and voice assistants, and about mission-critical communications and computing at the edge.

  • Workshop
  • Machine Learning
  • Voice Assistants
  • Network Protocols
  • Edge Computing





Presentation Transcript


  1. AIMCOM2 Workshop: Riding with AI towards Mission-Critical Communications and Computing at the Edge, Session D. The 28th IEEE International Conference on Network Protocols (ICNP 2020), Madrid, Spain, October 13, 2020.

  2. AIMCOM2 Workshop, Session D.
     [Keynote] Adversarial Machine Learning. Ananthram Swami (ARL, USA).
     [Invited] Vulnerabilities of Voice Assistants at the Edge: From Defeating Hidden Voice Attacks to Audio-based Adversarial Attacks. Yingying (Jennifer) Chen (Rutgers University, USA).

  3. Adversarial Machine Learning. Ananthram Swami (ARL, USA).

  Ananthram Swami is with the US Army CCDC Army Research Laboratory and is the Army's Senior Research Scientist (ST) for Network Science. Prior to joining ARL, he held positions with Unocal Corporation, the University of Southern California, CS-3 and Malgudi Systems. He was a statistical consultant to the California Lottery and developed a MATLAB-based toolbox for non-Gaussian signal processing. He has held visiting faculty positions at INP, Toulouse and Imperial College, London. He received the B.Tech. degree from IIT-Bombay, the M.S. degree from Rice University, and the Ph.D. degree from the University of Southern California (USC), all in Electrical Engineering. Swami's work is in the broad area of network science. He is an ARL Fellow and a Fellow of the IEEE.

  Adversarial Machine Learning. Modern machine learning systems are susceptible to adversarial examples: inputs that preserve the characteristic semantics of a given class but whose classification is incorrect. Current defenses against adversarial attacks rely on modifications to the input (e.g., quantization, randomization) or to the learned model parameters (e.g., via adversarial training), but are not always successful. This talk will include: 1) an overview of attacks on machine learning and of defenses; 2) a discussion of the enablers of successful adversarial attacks, via theory and empirical analysis of commonly used datasets; 3) a discussion of recently proposed defenses that change the representation of the model outputs, drawing on insights from coding theory; and 4) novel approaches to detecting adversarial examples using confidence metrics. The talk will conclude with a discussion of issues in distributed ML in coalition operations.
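As a concrete illustration of the kind of attack such overviews cover, the fast gradient sign method (FGSM) perturbs an input in the direction of the sign of the loss gradient. The sketch below applies it to a toy logistic-regression classifier; the weights, input, and perturbation budget are hypothetical values chosen only to make the label flip visible, not anything from the talk itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed linear classifier: predict class 1 when sigmoid(w @ x + b) > 0.5.
# (Hypothetical parameters, for illustration only.)
w = np.array([2.0, -3.0, 1.5])
b = 0.1

x = np.array([0.5, -0.2, 0.3])   # clean input, correctly classified as class 1
y = 1.0                          # its true label

p = sigmoid(w @ x + b)           # model confidence on the clean input (> 0.5)

# FGSM step: for cross-entropy loss, the gradient w.r.t. the input is
# (p - y) * w, so the attack adds eps * sign of that gradient.
grad = (p - y) * w
eps = 0.6                        # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv + b)
print(p, p_adv)                  # confidence collapses; the predicted label flips
```

Input-side defenses such as quantization or randomization, mentioned in the abstract, try to destroy exactly this carefully aligned perturbation before the model sees it.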

  4. Vulnerabilities of Voice Assistants at the Edge: From Defeating Hidden Voice Attacks to Audio-based Adversarial Attacks. Yingying (Jennifer) Chen (Rutgers University, USA).

  Yingying (Jennifer) Chen is a Professor of Electrical and Computer Engineering and Peter Cherasia Faculty Scholar Endowed Professor at Rutgers University. She is the Associate Director of the Wireless Information Network Laboratory (WINLAB) and leads the Data Analysis and Information Security (DAISY) Lab. She is an IEEE Fellow. Her research interests include mobile sensing and computing, cyber security and privacy, the Internet of Things, and smart healthcare. Her background combines Computer Science, Computer Engineering and Physics. She was previously a tenured professor at Stevens Institute of Technology and had extensive industry experience at Nokia. She has published over 200 journal articles and conference papers and holds 8 patents. She is the recipient of multiple Best Paper Awards, from EAI HealthyIoT 2019, IEEE CNS 2018, IEEE SECON 2017, ACM AsiaCCS 2016, IEEE CNS 2014 and ACM MobiCom 2011, as well as the NSF CAREER Award and a Google Faculty Research Award. She received the NJ Inventors Hall of Fame Innovator Award and the IEEE Region 1 Technological Innovation (Academic) Award. Her research has been reported in numerous media outlets, including MIT Technology Review, CNN, Fox News Channel, the Wall Street Journal, National Public Radio and IEEE Spectrum. She serves or has served on the editorial boards of IEEE Transactions on Mobile Computing (IEEE TMC), IEEE Transactions on Wireless Communications (IEEE TWireless), IEEE/ACM Transactions on Networking (IEEE/ACM ToN) and ACM Transactions on Privacy and Security.

  Vulnerabilities of Voice Assistants at the Edge: From Defeating Hidden Voice Attacks to Audio-based Adversarial Attacks.
Voice access technologies are widely adopted in mobile and voice assistant systems at the edge, serving as both a critical and a convenient way for users to interact. Recent studies have demonstrated various vulnerabilities of voice assistant systems. One serious attack embeds synthetically rendered adversarial sounds within a voice command to trick the speech recognition process into executing malicious commands, without being noticed by legitimate users. We show that low-cost motion sensors can be employed, in a novel way, to detect these hidden voice commands. Our approach is based on the premise that while the crafted audio features of a hidden voice command may fool an authentication system in the audio domain, the unique audio-induced surface vibrations captured by a motion sensor are hard to forge. Our approach extracts and examines the unique audio signatures of issued voice commands in the vibration domain to detect the presence of hidden voice attacks. We further show that speech/speaker recognition systems are vulnerable to adversarial attacks. In particular, we demonstrate that over-the-air adversarial attacks can be successfully launched against state-of-the-art deep neural network (DNN) based speaker recognition systems in practical scenarios, where the adversarial examples are played through a loudspeaker to compromise speaker recognition devices.
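The vibration-domain check described above can be sketched as a simple consistency test: extract a coarse signature from both the received audio and the motion-sensor signal, and flag the command when the two disagree. Everything below, including the amplitude-envelope "signature", the synthetic signals, and the 0.8 correlation threshold, is a hypothetical stand-in for the talk's actual feature extraction, shown only to illustrate the idea.

```python
import numpy as np

def envelope(signal, win=50):
    # Crude amplitude envelope: moving average of |signal|.
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def is_hidden_command(audio, vibration, threshold=0.8):
    # Normalized correlation between the audio-domain and
    # vibration-domain signatures; low agreement is suspicious.
    a, v = envelope(audio), envelope(vibration)
    corr = np.corrcoef(a, v)[0, 1]
    return corr < threshold

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
# Synthetic "speech": a 200 Hz tone with slow amplitude modulation.
speech = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 200 * t)

# Genuine command: surface vibrations track the audio (plus sensor noise).
genuine_vib = 0.3 * speech + 0.01 * rng.standard_normal(t.size)
# Hidden command: audio-domain features are forged, vibrations do not match.
forged_vib = 0.01 * rng.standard_normal(t.size)

print(is_hidden_command(speech, genuine_vib))  # genuine command passes
print(is_hidden_command(speech, forged_vib))   # mismatch is flagged
```

The design point is that the attacker controls only the acoustic channel, so a signature measured on an independent physical channel (the vibration side) gives the defender ground truth to compare against.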
