DreamScreens Neural Lab
Research Overview  /  DS-NL-2026-04  /  Restricted circulation

Neural Dream Embedding and Experiential Memory Retrieval

Toward a publicly deployable architecture for REM-state reconstruction and immersive memory rendering
G. Lamb, Principal Investigator   /   DreamScreens Neural Lab
Submitted for internal review   /   April 2026
Abstract

This document outlines the conceptual framework, technical architecture, and proposed deployment model for the DreamScreens Neural Embedding and Retrieval System (NERS). The system translates EEG-derived REM-state neural signatures into rendered audiovisual experiences through a generative diffusion pipeline. We describe the current prototype, the proposed archive structure for public-facing memory retrieval sessions, and the longer-term trajectory toward live, installation, and cinematic deployment. The work sits at the intersection of neuroscience, generative AI, and experiential design. Its implications extend beyond personal memory retrieval into questions of consent, data sovereignty, and the nature of subjective experience when mediated by machine reconstruction.

1. Background and Motivation

The capacity to reconstruct subjective visual experience from neural data has moved from speculative fiction into demonstrable research within the past three years.

The publication of DreamDiffusion (Bai et al., ECCV 2024) established that high-quality images can be generated directly from EEG signals without text intermediation. The system uses temporal masked signal modelling to pre-train an EEG encoder, with CLIP alignment providing cross-modal supervision between neural, textual, and visual embeddings. Results are imperfect by design: the reconstruction reflects what the brain encoded, not what the eye observed. Blurry at the edges. Soft around remembered faces. Precise on the shapes that carried weight.

This imperfection is not a limitation to be engineered away. It is the signal. The gap between the neural source and the rendered output is where the human leaks through. DreamScreens is built on this premise: that the reconstruction artifact is not noise but meaning.

The liminal condition

We are constructing the future that fiction imagined before the technology existed to build it. The communicator became the smartphone. HAL became the voice in the room. Each of these transitions passed through a period in which the fiction and the fact were indistinguishable. That is the period we are in now, with respect to neural rendering.

DreamScreens occupies that space deliberately. The system is simultaneously a functional research prototype and a proposal about what it means to render interiority as shareable experience.

2. Current System Architecture

Neural acquisition layer

EEG data is captured across 64 channels during supervised REM-adjacent states. The acquisition protocol follows a modified version of the SEED-V paradigm with additional temporal windowing to isolate visually loaded segments. Raw signals are band-pass filtered between 0.5 and 45 Hz. Theta band activity (4 to 8 Hz) is weighted most heavily in the embedding phase, consistent with its known association with hippocampal memory encoding and creative imagery.
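For concreteness, the sketch below shows one way the acquisition-stage filtering and theta weighting could be implemented. The sampling rate, filter order, and weighting factor are illustrative assumptions, not values taken from the acquisition protocol.

```python
# Hypothetical sketch of the acquisition-stage filtering described above.
# FS, the filter order, and THETA_GAIN are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256          # assumed sampling rate, Hz
N_CHANNELS = 64   # 64-channel montage, as described in the text
THETA_GAIN = 2.0  # assumed emphasis applied to the 4-8 Hz band

def bandpass(x: np.ndarray, lo: float, hi: float, fs: float = FS, order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth band-pass applied along the time axis (last axis)."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def preprocess(raw: np.ndarray) -> np.ndarray:
    """raw: (channels, samples) EEG. Returns the 0.5-45 Hz signal with theta emphasised."""
    broadband = bandpass(raw, 0.5, 45.0)   # band-pass limits quoted in the text
    theta = bandpass(raw, 4.0, 8.0)        # theta band, 4-8 Hz
    # Re-inject the theta component with additional weight before embedding.
    return broadband + (THETA_GAIN - 1.0) * theta

if __name__ == "__main__":
    window = np.random.randn(N_CHANNELS, FS * 2)   # one 2-second window of noise
    print(preprocess(window).shape)                # (64, 512)
```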

Embedding and diffusion pipeline

Processed EEG features are passed through a temporal transformer encoder, producing a 768-dimensional neural embedding per 2-second window. These embeddings are aligned with CLIP visual space via contrastive learning on a paired EEG-image corpus. At inference, the aligned embeddings condition a latent diffusion model to produce the visual reconstruction.
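A minimal sketch of this stage is given below: a temporal transformer encoder that maps one 2-second EEG window to a 768-dimensional vector, trained with a symmetric contrastive loss against precomputed CLIP image embeddings. Patch size, depth, and temperature are illustrative assumptions, not the prototype's hyperparameters.

```python
# Hypothetical PyTorch sketch of the EEG embedding and CLIP alignment stage.
# Architecture details here are assumptions; only the 768-dimensional output
# and the 2-second windowing come from the text above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    def __init__(self, n_channels=64, n_samples=512, patch=16, dim=768, depth=4, heads=8):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(n_channels * patch, dim)     # one token per temporal patch
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.pos = nn.Parameter(torch.zeros(1, n_samples // patch, dim))

    def forward(self, x):                                   # x: (batch, channels, samples)
        b, c, t = x.shape
        tokens = x.reshape(b, c, t // self.patch, self.patch)
        tokens = tokens.permute(0, 2, 1, 3).reshape(b, t // self.patch, c * self.patch)
        h = self.encoder(self.proj(tokens) + self.pos)
        return F.normalize(h.mean(dim=1), dim=-1)           # (batch, 768), unit norm

def clip_alignment_loss(eeg_emb, clip_emb, temperature=0.07):
    """Symmetric contrastive loss pairing each EEG window with its CLIP image embedding."""
    clip_emb = F.normalize(clip_emb, dim=-1)
    logits = eeg_emb @ clip_emb.T / temperature
    targets = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# At inference, the aligned embedding conditions the latent diffusion model that
# produces the visual reconstruction.
```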

The current system operates at approximately 61% visual confidence on held-out test sets. This figure is intentional. Reconstruction at 100% fidelity would be retrieval. At 61%, it is remembering.

"The subject does not watch the reconstruction. The reconstruction watches the subject back. The distinction is smaller than expected."

Experiential rendering layer

The rendered output is delivered through a WebGL shader pipeline operating in real time within a standard browser environment. Source footage acquired during the neural scanning sessions is processed through fragment shaders implementing noise-field displacement, luma-threshold reveal, chromatic drift, and temporal frame blending. The visual result is not the original footage. It is footage as the system remembered it.
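The production path runs as GLSL fragment shaders in the browser; the sketch below is a CPU-side numpy approximation of the four operations named above, included only to make their behaviour concrete. All parameter values (displacement amplitude, luma threshold, drift offset, blend factor) are illustrative assumptions.

```python
# Hypothetical CPU approximation of the fragment-shader effects described above.
# This is not the shader code; it only illustrates the four operations.
import numpy as np

def render_step(frame: np.ndarray, prev_out: np.ndarray, t: float) -> np.ndarray:
    """frame, prev_out: float32 RGB arrays in [0, 1], shape (H, W, 3); t: time in seconds."""
    h, w, _ = frame.shape

    # 1. Noise-field displacement: jitter each row's horizontal sampling position.
    noise = np.random.default_rng(int(t * 1000)).normal(0.0, 2.0, size=h)
    cols = (np.arange(w)[None, :] + noise[:, None]).astype(int) % w
    displaced = frame[np.arange(h)[:, None], cols]

    # 2. Luma-threshold reveal: only regions brighter than a threshold come through.
    luma = displaced @ np.array([0.2126, 0.7152, 0.0722], dtype=frame.dtype)
    revealed = displaced * (luma > 0.35).astype(frame.dtype)[..., None]

    # 3. Chromatic drift: shift the red and blue channels in opposite directions.
    drift = int(1 + 2 * abs(np.sin(t)))
    drifted = revealed.copy()
    drifted[..., 0] = np.roll(revealed[..., 0], drift, axis=1)
    drifted[..., 2] = np.roll(revealed[..., 2], -drift, axis=1)

    # 4. Temporal frame blending: mix with the previous output for persistence.
    return 0.85 * prev_out + 0.15 * drifted
```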

Audio is synthesised and retrieved in parallel, with extended crossfades of approximately 30 seconds, implemented as exponential gain curves with a 10-second time constant. The acoustic and visual layers are loosely coupled rather than tightly synchronised. Each modality finds its own temporal resolution.
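The crossfade can be read as an exponential approach toward the target gain, consistent with the time constant quoted in the parameter table below (and with the semantics of setTargetAtTime in the Web Audio API, if that is the mechanism used). A minimal sketch, assuming that interpretation:

```python
# Sketch of the crossfade envelope under the assumption of exponential approach:
# gain moves toward its target as 1 - exp(-t / tau). With tau = 10 s the fade is
# roughly 95% complete at 30 s, the effective crossfade length quoted in the table.
import numpy as np

TAU = 10.0  # time constant, seconds

def crossfade_gains(t: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Outgoing and incoming gain curves at times t (seconds) after the fade begins."""
    incoming = 1.0 - np.exp(-t / TAU)
    outgoing = np.exp(-t / TAU)
    return outgoing, incoming

t = np.array([0.0, 10.0, 20.0, 30.0])
_, incoming = crossfade_gains(t)
print(incoming.round(3))   # [0.    0.632 0.865 0.95 ]  -> ~95% faded in at 30 s
```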

Parameter             Current value            Notes
EEG channels          64                       Standard 10/20 layout with temporal extension
Embedding dimension   768                      CLIP ViT-L/14 aligned
Visual confidence     61%                      Intentional ceiling, not current limit
Render resolution     75% native               GPU upscaled, imperceptible at viewing distance
Audio crossfade       30 seconds               Time constant 10.0 via Web Audio API
Session identifier    Randomised per session   Format: DS-XXXXX
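As a further sketch, the parameters above could travel through the pipeline as a single configuration object. The field names and the identifier alphabet are assumptions for illustration, not the lab's internal schema.

```python
# Hypothetical configuration object mirroring the parameter table above.
from dataclasses import dataclass
import secrets

@dataclass(frozen=True)
class SessionConfig:
    eeg_channels: int = 64
    embedding_dim: int = 768                # CLIP ViT-L/14 aligned
    visual_confidence: float = 0.61         # intentional ceiling, not current limit
    render_scale: float = 0.75              # fraction of native resolution
    crossfade_time_constant: float = 10.0   # seconds, via Web Audio API
    session_id: str = ""

def new_session() -> SessionConfig:
    """Randomised per-session identifier in the DS-XXXXX format; the alphabet is assumed."""
    suffix = "".join(secrets.choice("0123456789ABCDEFGHJKMNPQRSTUVWXYZ") for _ in range(5))
    return SessionConfig(session_id=f"DS-{suffix}")
```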

3. The Memory Archive

The public-facing deployment model is structured as an archive of distinct retrieval sessions. Each session corresponds to a specific neural scan event and produces a unique immersive experience.

The archive is navigated through a browser interface. Each session runs for approximately four minutes. The visitor advances through the reconstruction manually, one memory at a time. This is not a passive viewing experience. The navigation structure mirrors the retrieval process: deliberate, sequential, irreversible in the moment of encounter.
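As an illustration of that contract, a forward-only cursor over a session's segments might look like the sketch below. The class and method names are assumptions rather than the archive's actual interface.

```python
# Hypothetical sketch of forward-only, irreversible session navigation.
class ForwardOnlyArchive:
    def __init__(self, segments: list[str]):
        self._segments = list(segments)
        self._cursor = 0

    def advance(self) -> str:
        """Return the next segment; there is deliberately no way to move back."""
        if self._cursor >= len(self._segments):
            raise IndexError("session complete")
        segment = self._segments[self._cursor]
        self._cursor += 1
        return segment
```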

Each session in the archive is an immersive journey through a subliminal narrative. The visitor does not watch a memory. They move through a reconstruction of one, with all the distortion and incompleteness that process entails. This is a new way to experience the concept of an album: ten sessions, ten states of mind, ten subliminal narratives navigated at the visitor's own pace.

Archive index

01   Signal Dreams          4 min 12 sec   available
02   Feedback Memory        3 min 58 sec   available
03   Nature Module          4 min 33 sec   available
04   Brainwaves             3 min 47 sec   available
05   Module N5              5 min 01 sec   available
06   Co-Dreaming            4 min 20 sec   in preparation
07   Free Will              3 min 55 sec   in preparation
08   Glitches Are Portals   4 min 08 sec   in preparation
09   The Dream Ends         4 min 44 sec   in preparation
10   Virta                  5 min 17 sec   in preparation

4. Broader Implications

On consent and data sovereignty

The reconstruction of subjective experience from neural data raises questions that the technology is currently outpacing. If a memory can be rendered as a shareable object, questions of ownership, consent, and retention become urgent. The DreamScreens system logs all sessions by default. Archive retention is indefinite. Subject notification protocols are under internal review.

These are not hypothetical concerns. They are embedded in the architecture of the system as it currently exists. The design decision to surface them within the experience itself, rather than bury them in terms of service, is deliberate.

On the nature of the reconstruction

A memory rendered at 61% fidelity is not a documentary record. It is an interpretation produced by a system that has learned, from a large corpus of human visual experience, what things tend to look like. The reconstruction is accurate in the statistical sense. It is not accurate in the experiential sense. The subject who views their own reconstructed memory may not recognise it. They may recognise it too well. Both responses have been observed.

"What the machine remembers on your behalf is shaped by everything it has already seen. The question of whose memory it is becomes genuinely difficult."

On scale and deployment

The current prototype runs in any modern browser without installation. Media assets are delivered via edge-distributed infrastructure with no egress constraints. The architecture scales to arbitrary audience size. A private session for one person and a public installation for thousands are technically identical. The difference is curatorial.

5. Development Trajectory

The current browser experience represents the first phase of a multi-format deployment. Subsequent phases are outlined below in order of intended development.

Web archive: Full ten-session archive publicly accessible. Each session a complete audiovisual retrieval experience lasting 4 to 5 minutes.
Live performance: Real-time generative rendering driven by live modular synthesis and expressive MIDI control. The performer as system operator within a projection environment.
Gallery installation: Site-specific deployment. Visitors enter individually. Duration open. Each session generates a unique reconstruction from the shared archive.
Full event: Multi-room immersive experience structured as the DreamScreens facility. Each room corresponds to one archive session. Architecture, sound, light, and interaction as a single composition.
Long-form narrative: The DreamScreens company as a fully realised fictional world extended into film or serial format. The corporate aesthetic as production design language.

6. Collaboration and Next Steps

This document is prepared for early-stage creative and production discussion. The research is sufficiently developed for a working prototype to be demonstrated. The broader project is sufficiently open for the right collaborator to shape its trajectory.

We are looking for a partner who understands that the most interesting cultural objects right now refuse to sit still in a single category. The archive is not an album in the conventional sense. It is not a film. It is not an installation. It is a new way to experience the concept of an album, in which each session is an immersive journey through a subliminal narrative.

Conversations about production partnership, label interest, commission, co-development, or investment are all appropriate at this stage. The work is early enough that its shape remains available to the right influence.

To experience the current prototype before this conversation: return to the entry point and begin a reconstruction session.


[1] Bai, Y. et al. (2024). DreamDiffusion: Generating High-Quality Images from Brain EEG Signals. ECCV 2024.

[2] SEED-V: A multimodal dataset for EEG-based emotion recognition. BCMI Lab, Shanghai Jiao Tong University.

[3] All session durations are approximate and vary with individual neural scan parameters.

[4] This document is prepared for restricted circulation. Contents are confidential to DreamScreens Neural Lab and invited collaborators.