Bridging the gap between human intuition and machine intelligence through real-time responsive art.
This project explores real-time AI systems that adapt to human input, creating installations where visitors can influence the behavior of neural networks through movement, voice, and touch.
Using depth sensors and microphones, the system captures audience interactions in real time. This data becomes the input that shapes the AI's behavior and visual output.
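As a rough sketch of how those two streams could be reduced to model input, the function below summarizes one depth frame and one audio spectrum into a small feature vector. The `depthSensor` object is a hypothetical stand-in for whatever SDK the installation's depth camera exposes; the audio side uses the standard Web Audio `AnalyserNode`.

```javascript
// Sketch: fuse a depth frame and an audio snapshot into a feature vector.
// `depthSensor` is a placeholder for the depth-camera SDK.
function captureFeatures(depthSensor, analyser) {
  // Hypothetical depth API: returns a Float32Array of per-pixel distances
  const depthFrame = depthSensor.getFrame();

  // Current frequency spectrum from the microphone
  const spectrum = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(spectrum);

  // Crude summary statistics: roughly how close and how loud the audience is
  const meanDepth = depthFrame.reduce((a, b) => a + b, 0) / depthFrame.length;
  const meanEnergy = spectrum.reduce((a, b) => a + b, 0) / spectrum.length;

  return [meanDepth, meanEnergy / 255]; // normalized feature vector
}
```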
The AI processes interactions through a reinforcement learning model, generating instant visual and auditory feedback that evolves based on audience patterns and environmental conditions.
```javascript
// Real-time interaction processing: route microphone input into an
// analyser node, then feed extracted features to the model.
const audioContext = new AudioContext();
const analysisNode = audioContext.createAnalyser();

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const source = audioContext.createMediaStreamSource(stream);
    source.connect(analysisNode);
  });

// Run inference on the latest features and render the response.
function updateAI(data) {
  inferenceModel.predict(data).then(response => {
    updateVisuals(response);
  });
}
```
This simplified example demonstrates the audio input pipeline that drives real-time generative visuals in our installations.
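To sketch the output side of that loop, here is one way a function like `updateVisuals` could map a model response onto a canvas. The response fields (`hue`, `intensity`) and the canvas setup are illustrative assumptions, not the installations' actual output format.

```javascript
// Sketch: mapping a model response onto a <canvas> element.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function updateVisuals(response) {
  const { hue, intensity } = response; // assumed response shape

  // Fade the previous frame slightly so shapes leave trails
  ctx.fillStyle = 'rgba(0, 0, 0, 0.1)';
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  // Draw a pulse whose color and size track the model output
  ctx.fillStyle = `hsl(${hue}, 80%, 60%)`;
  ctx.beginPath();
  ctx.arc(canvas.width / 2, canvas.height / 2, intensity * 100, 0, 2 * Math.PI);
  ctx.fill();
}
```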
An immersive sound installation where music is generated by AI based on the emotional tenor of the audience.
An interactive light dance where color patterns emerge from human movement detected by LIDAR sensors.
A voice-activated installation where spoken words are transformed into evolving visual metaphors through emotional analysis.
The key challenge was ensuring the AI responded intuitively to audience input while maintaining aesthetic coherence. We experimented with feedback loops to let the AI "learn" what patterns audiences found engaging.
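A minimal version of such a feedback loop, assuming an epsilon-greedy choice over a handful of visual presets and a placeholder `measureEngagement()` signal (for example, dwell time from the depth sensor), might look like this:

```javascript
// Sketch: an epsilon-greedy loop that "learns" which visual presets hold
// attention. `applyPreset` and `measureEngagement` are placeholders for the
// installation's own rendering and sensing code.
const presets = ['slow-drift', 'pulse', 'swarm']; // illustrative preset names
const value = new Array(presets.length).fill(0);  // running reward estimates
const counts = new Array(presets.length).fill(0);
const epsilon = 0.1;

function choosePreset() {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * presets.length); // explore
  }
  return value.indexOf(Math.max(...value)); // exploit the best preset so far
}

async function feedbackStep() {
  const i = choosePreset();
  applyPreset(presets[i]);                   // switch the visuals
  const reward = await measureEngagement();  // e.g., audience dwell time
  counts[i] += 1;
  value[i] += (reward - value[i]) / counts[i]; // incremental mean update
}
```

Occasional random exploration keeps the system from locking onto one preset, while the incremental mean update nudges it toward whatever the audience lingers on.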
These installations are touring globally. If you'd like your venue to host one of these immersive AI experiences, let's connect.