Memory Corruption Issues Lead 2021 CWE Top 25
The MITRE Common Weakness Enumeration (CWE) team's latest list of most dangerous software flaws includes several that shot up in significance since 2020.
Jai Vijayan
Memory corruption errors remain one of the most common and dangerous weaknesses in modern software.
The MITRE-operated Homeland Security Systems Engineering and Development Institute placed the issue at the top of its latest list of the 25 most dangerous software weaknesses, based on an analysis of Common Vulnerabilities and Exposures (CVE) data and the severity scores associated with each CVE.
The MITRE Common Weakness Enumeration (CWE) team counted a total of 3,033 identified security bugs associated with out-of-bounds writes, a class of memory corruption issue, in the National Vulnerability Database (NVD) over the past two years. The vulnerabilities had an average severity score of 8.22 on a scale of 10, meaning most were considered serious to critical.
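The classic out-of-bounds write arises when attacker-controlled data is copied into a fixed-size buffer without a length check. A minimal sketch of the defensive pattern (the `safe_copy` helper here is hypothetical, for illustration only):

```c
#include <stddef.h>
#include <string.h>

/* CWE-787 pattern: an unchecked memcpy of attacker-controlled data into
 * a fixed-size buffer corrupts adjacent memory. The fix is a bounds
 * check before the write, as in this hypothetical helper. */
int safe_copy(char *dst, size_t dst_len, const char *src, size_t src_len) {
    if (src_len > dst_len) {
        return -1; /* refuse the out-of-bounds write */
    }
    memcpy(dst, src, src_len);
    return 0;
}
```

The vulnerable variant simply omits the `src_len > dst_len` check, letting a long input overwrite whatever follows the buffer on the stack or heap.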
Research into methods of attacking machine-learning and artificial-intelligence systems has surged, with nearly 2,000 papers published on the topic in one repository over the last decade, but organizations have not adopted commensurate strategies to ensure that the decisions made by AI systems are trustworthy.
A new report from AI research firm Adversa looked at a number of measurements of the adoption of AI systems, from the number and types of research papers on the topic to government initiatives that aim to provide policy frameworks for the technology. It found that AI is being rapidly adopted, but often without the defenses needed to protect AI systems from targeted attacks. So-called adversarial AI attacks include bypassing AI systems, manipulating their results, and exfiltrating the data that the model is based on.
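An evasion attack of the kind the report describes can be sketched on a toy linear classifier: for a linear model, the gradient of the score with respect to the input is just the weight vector, so nudging each feature by a small step against that gradient flips the model's decision. This is a minimal, hypothetical illustration in the spirit of the fast-gradient-sign method, not a reconstruction of any attack from the report:

```c
#define N 3

/* Toy linear classifier: predicts positive when w.x + b > 0. */
double score(const double w[N], double b, const double x[N]) {
    double s = b;
    for (int i = 0; i < N; i++) s += w[i] * x[i];
    return s;
}

/* FGSM-style perturbation: for a linear model the gradient of the score
 * with respect to x is w, so stepping each feature by eps against
 * sign(w) drives the score toward the opposite class. */
void perturb(const double w[N], const double x[N], double eps,
             int target_negative, double x_adv[N]) {
    for (int i = 0; i < N; i++) {
        double dir = (double)((w[i] > 0) - (w[i] < 0)); /* sign(w[i]) */
        x_adv[i] = x[i] + (target_negative ? -eps : eps) * dir;
    }
}
```

With a small eps, the perturbed input stays close to the original yet crosses the decision boundary, which is why such attacks can be hard to spot in deployed systems.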
Expect an Increase in Attacks on AI Systems
Funding will advance ethical AI research
BERRYVILLE, Va., Jan. 27, 2021 – The Berryville Institute of Machine Learning (BIML), a research think tank dedicated to safe, secure and ethical development of AI technologies, announced today that it is the recipient of a $150,000 grant from Open Philanthropy.
BIML, which is already well known in ML circles for its pioneering document, "Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning," will use the Open Philanthropy grant to further its scientific research on machine learning risk and get the word out more widely through talks, tutorials, and publications. "In a future where machine learning shapes the trajectory of humanity, we'll need to see substantially more attention on thoroughly analyzing ML systems from a security and safety standpoint," said Catherine Olsson, Senior Program Associate for Potential Risks from Advanced Artificial Intelligence at Open Philanthropy. "We are excite