The Latent Future (2017)

An Audio-Visual Installation exploring the Latent Space of a Deep Learning Language Model.

Embedded in the three-dimensional space shown in this video, alongside news of actual past events, is an almost limitless variety of generated text. Based on the real news, the system generates news of things that could happen, or could have happened.

By focusing on the nearly endless possible combinations of text captured in language models, this work addresses the fragility of a society in which reality and fiction are mixed, as prominently expressed in today’s Internet environments, among others. An artificial intelligence (AI) has been fed large amounts of past news* and continuously receives news feeds from Twitter, based on which it generates and presents new, alternative news of things that “already exist” embedded in its neural networks but have “not yet happened.” As in Jorge Luis BORGES’s The Library of Babel, which contains all possible combinations of words, the generated news includes large amounts of fictional stories, so that the line between fact and fiction is blurred.
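The generative idea described above — learn from a corpus of real headlines, then sample new, plausible-but-fictional ones — can be sketched in miniature. The installation's actual neural language model is not specified here, so this sketch substitutes a toy bigram table; the `headlines` corpus and all function names are hypothetical illustrations.

```python
import random

def train_bigram(corpus_sentences):
    """Build a toy bigram table mapping each word to the words that
    may follow it (a stand-in for the installation's language model)."""
    table = {}
    for sentence in corpus_sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:] + ["<END>"]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, max_words=12, seed=0):
    """Sample a new 'headline' by walking the bigram table."""
    random.seed(seed)
    words = [start]
    while len(words) < max_words:
        nxt = random.choice(table.get(words[-1], ["<END>"]))
        if nxt == "<END>":
            break
        words.append(nxt)
    return " ".join(words)

# hypothetical mini-corpus of real headlines
headlines = [
    "markets rally after summit",
    "markets fall after vote",
    "leaders meet after summit talks",
]
model = train_bigram(headlines)
print(generate(model, "markets"))
```

Because the table mixes fragments from different real headlines, the sampled sentences are grammatical recombinations that never occurred — a crude analogue of the "not yet happened" news the work presents.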

The three-dimensional space in the display contains a 3-D mapping of each sentence’s high-dimensional “latent feature” vector (the sentence’s characteristics translated into a sequence of numbers); distances within the latent space correspond to the semantic distances between the sentences.
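The mapping from high-dimensional sentence vectors to displayable 3-D coordinates can be sketched as a linear projection. The installation's actual reduction method is not stated, so this sketch uses PCA (via NumPy's SVD) as an assumed stand-in; the random 128-dimensional "sentence vectors" are placeholders for real embeddings.

```python
import numpy as np

def project_to_3d(latent_vectors):
    """Project high-dimensional sentence vectors to 3-D via PCA.

    Nearby points in the projection correspond (approximately,
    since PCA is linear) to semantically similar sentences.
    """
    X = np.asarray(latent_vectors, dtype=float)
    X_centered = X - X.mean(axis=0)      # center the point cloud
    # SVD yields the principal directions; keep the top three
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:3].T         # shape: (n_sentences, 3)

# toy example: 5 random 128-dim "sentence vectors"
rng = np.random.default_rng(0)
vectors = rng.normal(size=(5, 128))
coords = project_to_3d(vectors)
print(coords.shape)  # (5, 3)
```

A nonlinear method such as t-SNE or UMAP would preserve local semantic neighborhoods better at the cost of distorting global distances; PCA is used here only because it is self-contained.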

The Latent Future – 潜在する未来 (2017)


  • Concept / Programming:  Nao Tokui
  • Visualization:  Shoya Dozono
  • Sound: Taeji Sawai
  • Assistant: Yuma Kajihara, Robin Jungers
  • Documentation Video: Mahaya Takara
  • Photo Courtesy: NTT InterCommunication Center [ICC]
  • Photo: KIOKU Keizo