MIT researchers developed a machine-learning technique that captures and models the underlying acoustics of a scene from a limited number of sound recordings. The system can accurately simulate how any sound, such as a song, would be heard at different locations as a listener moves through the scene.