Interactive Art Research
Designing immersive digital experiences that respond to user behavior through AI and real-time interactivity. This research focuses on dynamic installations that transform their environments with generative art.
Objective
This research explores how real-time user input can dynamically influence generative art through machine learning models. The system enables installations that evolve based on audience interaction, creating unique experiences for each participant.
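The loop this implies can be sketched in a few lines of TypeScript. Everything here is illustrative: `SensorFrame`, `inferStyleParams`, and `renderFrame` are hypothetical names standing in for the installation's actual sensor feed, learned model, and renderer.

```typescript
// Hypothetical sketch of the sense -> infer -> render loop described above.
interface SensorFrame {
  positions: Float32Array; // tracked participant positions (x, y pairs)
  timestamp: number;
}

interface StyleParams {
  colorShift: number; // hue rotation driven by crowd position
  turbulence: number; // noise amplitude driven by motion energy
}

// Stand-in for a learned model mapping audience motion to visual style.
function inferStyleParams(frame: SensorFrame): StyleParams {
  let energy = 0;
  for (let i = 0; i < frame.positions.length; i++) {
    energy += Math.abs(frame.positions[i]);
  }
  const norm = frame.positions.length > 0 ? energy / frame.positions.length : 0;
  return { colorShift: norm % 1, turbulence: Math.min(norm, 1) };
}

// Re-render every animation frame so visuals evolve with audience input.
function runLoop(
  readSensors: () => SensorFrame,
  renderFrame: (p: StyleParams) => void
): void {
  const step = () => {
    renderFrame(inferStyleParams(readSensors()));
    requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```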
Key Technologies
- Neural networks for real-time generation
- WebGPU-accelerated rendering
- Motion sensor integration
- WebAudio API for sound spatialization (see the sketch after this list)
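To make the last item concrete, here is a minimal spatialization sketch using the standard WebAudio `PannerNode`. The oscillator source and the positions are placeholders, not values from the installation itself.

```typescript
// Minimal WebAudio spatialization: one oscillator placed in 3D space.
const ctx = new AudioContext();

const panner = new PannerNode(ctx, {
  panningModel: "HRTF",      // head-related transfer function for realism
  distanceModel: "inverse",  // volume falls off with distance
  positionX: 2,              // placeholder: source to the listener's right
  positionY: 0,
  positionZ: -1,
});

const osc = new OscillatorNode(ctx, { frequency: 220 });
osc.connect(panner).connect(ctx.destination);
osc.start();

// Moving the source as a participant moves maps motion onto the soundscape.
// `t` is an absolute AudioContext time (e.g. ctx.currentTime + 0.1).
function moveSource(x: number, y: number, z: number, t: number): void {
  panner.positionX.linearRampToValueAtTime(x, t);
  panner.positionY.linearRampToValueAtTime(y, t);
  panner.positionZ.linearRampToValueAtTime(z, t);
}
```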
Features
- Multi-user gesture recognition
- Environment-aware lighting
- Procedural soundscapes
- Adaptive visual complexity (see the sketch after this list)
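One way to read "adaptive visual complexity" is as a render budget that scales with audience activity. The sketch below assumes a normalized activity signal; the particle thresholds are illustrative, not taken from the project.

```typescript
// Scale the particle budget with measured audience activity (0..1).
// MIN/MAX values are illustrative placeholders.
const MIN_PARTICLES = 1_000;
const MAX_PARTICLES = 50_000;

function particleBudget(activity: number): number {
  const a = Math.min(Math.max(activity, 0), 1); // clamp to [0, 1]
  return Math.round(MIN_PARTICLES + a * (MAX_PARTICLES - MIN_PARTICLES));
}

console.log(particleBudget(0));   // 1000: calm room, sparse visuals
console.log(particleBudget(0.5)); // 25500: moderate crowd
console.log(particleBudget(1));   // 50000: peak interaction
```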
Art Installation Simulator
Requires a WebGPU-enabled browser for optimal interaction
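A sketch of how that requirement might be checked at load time, using the standard `navigator.gpu` API. The two renderer entry points are hypothetical names, and the `GPUDevice` type assumes `@webgpu/types` is installed.

```typescript
// Feature-detect WebGPU and fall back gracefully when it is unavailable.
async function initRenderer(): Promise<void> {
  if (!("gpu" in navigator)) {
    console.warn("WebGPU unavailable; starting reduced 2D mode.");
    startCanvas2DFallback(); // hypothetical reduced-fidelity path
    return;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.warn("No suitable GPU adapter found; starting reduced 2D mode.");
    startCanvas2DFallback();
    return;
  }
  const device = await adapter.requestDevice();
  startWebGPURenderer(device); // hypothetical full-fidelity path
}

// Hypothetical entry points, declared so the sketch type-checks.
declare function startCanvas2DFallback(): void;
declare function startWebGPURenderer(device: GPUDevice): void;
```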
Art Research
Whitepaper
Technical exploration of interaction models in art installations with performance benchmarks across different input modalities
Download Paper
Codebase
Open-source implementation including machine learning models, shader libraries, and sensor input frameworks for interactive environments
View on GitHub
Exhibit Guide
Interactive web application demonstrating our core research in digital art through live interaction samples without requiring special hardware
Open Interactive Guide