These are exciting times for the artificial intelligence community. Interest in the field is growing at an accelerating pace, registration in academic and professional machine learning courses is soaring, attendance at AI conferences is at an all-time high, and AI algorithms have become a vital component of many applications we use every day. But as with any […]
Antonio Torralba, faculty head of Artificial Intelligence and Decision Making within the Department of Electrical Engineering and Computer Science (EECS) and the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science, has been selected as a 2021 Fellow by the Association for the Advancement of Artificial Intelligence (AAAI). AAAI Fellows are selected in recognition of their significant and extended contributions to the field (contributions which typically span a decade or more), including technical results, publications, patent awards, and contributions to group efforts.
Torralba received a degree in telecommunications engineering from Telecom BCN in Spain in 1994 and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. From 2000 to 2005, he received postdoctoral training at both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory.
Here's how neuroscience can protect AI from cyberattacks
Deep learning has come a long way since the days it could only recognize hand-written characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars.
Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as we humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err.
These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: it can cause machine learning models to fail in unpredictable ways or leave them vulnerable to cyberattacks.
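To make the idea concrete, here is a minimal sketch of one well-known adversarial technique, the Fast Gradient Sign Method (FGSM), applied to a toy linear classifier. The weights and inputs are invented for illustration; real attacks target deep networks, but the principle is the same: nudge each input feature in the direction that most increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier; weights are illustrative, not trained on real data.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: perturb x by eps along the sign of the
    gradient of the binary cross-entropy loss with respect to the input.
    For this linear model, d(loss)/dx = (p - y) * w."""
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.2, -0.4])
y = 1.0                          # true label of x
x_adv = fgsm(x, y, eps=0.5)

# The perturbed input looks similar to x but sharply lowers the model's
# confidence in the correct class.
print(predict(x), predict(x_adv))
```

Even with a perturbation bounded per feature by `eps`, the model's confidence in the correct class collapses; against image classifiers, the same trick produces pictures that look unchanged to a human but are confidently misclassified.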