Imaginary Soundscape (2017)

Take a walk in soundscapes "imagined" by AI

Presented at the NIPS 2017 workshop on Machine Learning for Creativity and Design

Imaginary Soundscape: Cross-Modal Approach to Generate Pseudo Sound Environments
Yuma Kajihara, Shoya Dozono, Nao Tokui


Glancing at a photo, we can imagine the sounds we might hear if we were there. Can an AI system do the same? And if so, what happens when we apply the method to Google Street View images, letting us walk around inside the generated soundscape? This relatively straightforward fantasy ended up as a website called Imaginary Soundscape.

“Imaginary Soundscape” is a web-based sound installation in which viewers can freely walk around Google Street View and immerse themselves in imaginary soundscapes generated with deep learning models.
We aimed to create a cross-modal sensory experience that investigates our relationship with the sound environment.
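As the paper's subtitle suggests, the system takes a cross-modal approach: an image is mapped into an embedding space shared with sounds, and a matching sound is retrieved for playback. The sketch below illustrates the retrieval step only, assuming image and sound embeddings already live in a common space; the function name and the toy embeddings are illustrative, not the project's actual code.

```python
import numpy as np

def retrieve_soundscape(image_emb, sound_embs):
    """Return the index of the sound file whose embedding is most
    similar (by cosine similarity) to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    snd = sound_embs / np.linalg.norm(sound_embs, axis=1, keepdims=True)
    sims = snd @ img  # cosine similarities, since both sides are unit-norm
    return int(np.argmax(sims))

# Illustrative toy data: three sound-file embeddings and one image embedding.
sound_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
image_emb = np.array([0.6, 0.8])
best = retrieve_soundscape(image_emb, sound_embs)  # index of best-matching sound
```

In the installation itself the retrieved audio would then be streamed and looped as the viewer moves through Street View.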

For more technical details and background:

A Post On Medium (in English)

An explanatory article on Medium (in Japanese)

Upcoming features

  • Safari and smartphone support
  • 3D audio spatialization for VR headsets
  • Permalinks for each location


Credits

  • Backend/Frontend Programming: Yuma Kajihara
  • UI Design: Shoya Dozono
  • Concept/Frontend Programming: Nao Tokui