How To Ensure Your Machine Learning Models Aren't Fooled

Machine learning models are not infallible, and any system that relies on a neural network can be exploited. To keep attackers from exploiting a model, researchers have designed a range of techniques that make models more robust and that can mitigate, and in some cases prevent, adversarial attacks. The field of adversarial machine learning is growing rapidly as companies recognize the danger these attacks pose. As a brief case study, we will look at face recognition systems and their potential vulnerabilities. The attacks and countermeasures described here apply broadly, but face recognition offers clear, understandable examples.
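To make the threat concrete, below is a minimal sketch of one classic attack, the fast gradient sign method (FGSM), which nudges every input pixel slightly in the direction that most increases the model's loss. This is an illustrative example, not a technique named in this article: it assumes a PyTorch image classifier called model that outputs logits for inputs scaled to [0, 1], and the function name and epsilon value are hypothetical.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of images x (FGSM sketch).

    model   -- an assumed PyTorch classifier returning logits
    x       -- batch of input images, values in [0, 1]
    y       -- true class labels for x
    epsilon -- illustrative per-pixel perturbation budget
    """
    model.eval()
    # Track gradients with respect to the input, not the weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases
    # the loss, then clamp back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

A perturbation this small is typically imperceptible to a human yet can flip the model's prediction, which is exactly what makes such attacks dangerous for systems like face recognition.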