MIT researchers developed a machine-learning technique that captures and models the underlying acoustics of a scene from a limited number of sound recordings. The system can then simulate how any sound, such as a song, would be heard at different locations as a listener moves through the scene.
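To make the idea concrete, here is a minimal sketch of the general approach: a small neural network is fit to a handful of measured impulse responses at known listener positions, and a sound is "placed" at an unseen location by convolving it with the network's predicted response there. The architecture, `AcousticField` name, response length, and training data below are all illustrative assumptions, not the MIT system's actual design.

```python
# Toy "acoustic field": map a listener position to a short room impulse
# response, trained on a few measured responses. All shapes and names
# here are assumptions for illustration only.
import torch
import torch.nn as nn

IR_LEN = 256  # length of the predicted impulse response (assumed)

class AcousticField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),   # listener (x, y, z)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, IR_LEN),
        )

    def forward(self, pos):
        return self.net(pos)

model = AcousticField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# A "limited number of sound recordings": measurement positions and
# stand-in impulse responses (random here, measured in practice).
positions = torch.rand(8, 3)
responses = torch.randn(8, IR_LEN)

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(positions), responses)
    loss.backward()
    opt.step()

# Simulate a song at an unseen location: convolve the dry recording
# with the impulse response predicted for that position.
song = torch.randn(1, 1, 16000)              # stand-in for a dry recording
ir = model(torch.tensor([[0.5, 0.5, 0.5]]))  # unseen listener position
wet = nn.functional.conv1d(song, ir.view(1, 1, -1), padding=IR_LEN - 1)
```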
A new technique enables on-device training of machine-learning models on edge devices such as microcontrollers, which have very limited memory. This could allow those devices to continually adapt to new data without sending it off-device, sidestepping data-privacy concerns while enabling user customization.
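One standard way to squeeze training into a tiny memory budget, sketched below, is to freeze a pretrained backbone and update only a small final layer, processing one example at a time. This is a generic memory-saving pattern, not MIT's specific method; the model size and learning rate are placeholder assumptions.

```python
# Illustrative sketch: fine-tune only the final layer so the training
# memory footprint stays small. Placeholder model, not the MIT system.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # pretrained "backbone" (frozen)
    nn.Linear(64, 4),               # small head updated on-device
)

# Freezing the backbone means backprop never enters it, so its weight
# gradients and optimizer state are never allocated.
for p in model[:-1].parameters():
    p.requires_grad = False

opt = torch.optim.SGD(model[-1].parameters(), lr=0.01)

# "New data" arriving on the device, one example at a time (no batching,
# so peak memory stays low and raw data never leaves the device).
for _ in range(100):
    x, y = torch.randn(1, 32), torch.randint(0, 4, (1,))
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```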
MIT scientists found that contemporary deep-learning methods confidently classify images that are nonsense to humans, a troubling trait for models used in medical and autonomous-driving decision-making.
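The phenomenon is easy to probe. The sketch below feeds pure-noise images to an off-the-shelf pretrained classifier and reports its top-class confidence; the model choice and image count are arbitrary assumptions, and this only demonstrates the general effect the researchers describe, not their exact experiment.

```python
# Illustrative probe: how confident is a pretrained classifier on
# meaningless noise images? Model choice is arbitrary.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

noise = torch.rand(16, 3, 224, 224)   # 16 random images, nonsense content
with torch.no_grad():
    probs = torch.softmax(model(noise), dim=1)

top = probs.max(dim=1).values
print(f"mean top-class confidence on noise: {top.mean():.2f}")
# A well-calibrated model should be near chance (~0.001 over 1000
# classes) on such inputs; in practice confidences run far higher.
```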