
Magic Window


2015 · Immersive Displays · Real-Time Graphics · Computer Vision · WebGL


The Magic Window project at the Georgia Institute of Technology is an immersive, walk-up visualization system developed within the Institute for People and Technology (IPaT) and the Research Network Operations Center (RNOC). The installation transforms a large display surface into a spatially responsive “window” into remote environments, enabling viewers to experience perspective-corrected, real-time visualizations that respond to their position in space.

Magic Window has been featured as part of Georgia Tech’s advanced networking and visualization research initiatives, including demonstrations of software-defined networking (SDN) and high-performance media transport. The system integrates sensing hardware, GPU-accelerated rendering, and network infrastructure to create a seamless illusion of looking through a physical aperture into a distant or synthetic scene.

Technical Overview

At a systems level, Magic Window combines position sensing, GPU-accelerated rendering, and network infrastructure for real-time media transport.

The core interaction principle is perspective tracking. As a viewer moves laterally in front of the display, the rendered scene updates in real time to preserve correct parallax. This produces the perceptual effect of depth and spatial continuity rather than a static flat projection.
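The parallax effect described above is typically achieved with an off-axis (asymmetric) projection: the display is treated as a fixed window in space, and the frustum is recomputed from the viewer's position each frame. The sketch below shows the idea under stated assumptions (a screen centered at the origin in the z = 0 plane, viewer on the +z side); all names are illustrative, not the project's actual code.

```javascript
// Sketch: off-axis ("window") projection for a viewer at (ex, ey, ez)
// relative to a screen of width w and height h centered at the origin
// in the z = 0 plane, with the viewer on the +z side.
function offAxisProjection(ex, ey, ez, w, h, near, far) {
  // Scale the screen edges to the near plane by similar triangles.
  const scale = near / ez;
  const left   = (-w / 2 - ex) * scale;
  const right  = ( w / 2 - ex) * scale;
  const bottom = (-h / 2 - ey) * scale;
  const top    = ( h / 2 - ey) * scale;
  // Standard OpenGL-style asymmetric frustum matrix (column-major).
  return [
    2 * near / (right - left), 0, 0, 0,
    0, 2 * near / (top - bottom), 0, 0,
    (right + left) / (right - left),
    (top + bottom) / (top - bottom),
    -(far + near) / (far - near), -1,
    0, 0, -2 * far * near / (far - near), 0,
  ];
}

// A centered viewer reduces to an ordinary symmetric frustum:
const m = offAxisProjection(0, 0, 2, 1.6, 0.9, 0.1, 100);
```

As the viewer moves laterally, `ex` and `ey` change and the frustum skews, which is exactly what preserves parallax against the fixed screen plane.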

Because the camera system relies on wide-angle and fisheye lenses, raw image feeds require geometric correction before display. Achieving this correction at interactive frame rates requires careful mapping onto GPU shaders and efficient handling of texture coordinates.
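The geometric correction amounts to an inverse mapping: for each pixel of the ideal perspective output, find the coordinate in the fisheye image to sample. A minimal CPU-side sketch of that mapping, assuming an equidistant fisheye model (r = f·θ) and illustrative parameter names:

```javascript
// Sketch of the inverse mapping behind fisheye undistortion, assuming an
// equidistant fisheye model (r = f * theta). For each output pixel of an
// ideal pinhole view, compute where to sample in the fisheye image.
// focalPinhole / focalFisheye are illustrative, not the project's values.
function fisheyeSampleCoord(u, v, focalPinhole, focalFisheye) {
  // (u, v) are output-image coords centered on the optical axis.
  const r = Math.hypot(u, v);
  if (r === 0) return [0, 0];                 // on-axis pixel maps to center
  const theta = Math.atan(r / focalPinhole);  // pinhole model: r = f * tan(theta)
  const rf = focalFisheye * theta;            // fisheye model: r = f * theta
  return [u * rf / r, v * rf / r];            // same azimuth, remapped radius
}
```

On the GPU, this same computation runs once per fragment against the camera texture, which is what makes full-resolution correction feasible at display refresh rates.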

My Contributions

I was responsible for two key subsystems in the Magic Window implementation:

1. GPU-Accelerated Fisheye Undistortion (Three.js / WebGL)

I implemented the underlying fisheye undistortion pipeline using Three.js and WebGL, executing the correction directly on the GPU.


Rather than performing CPU-side pixel remapping, the solution leverages programmable shaders to compute inverse distortion mappings per fragment. This approach enables full-resolution correction at display refresh rates and supports dynamic viewpoint adjustments.
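As a sketch of what the per-fragment approach looks like in a Three.js/WebGL stack, the inverse mapping can be expressed as a fragment shader supplied to a `ShaderMaterial`. The uniform and varying names below are illustrative assumptions; the real pipeline's distortion model and parameters may differ.

```javascript
// Illustrative GLSL fragment shader evaluating the inverse fisheye
// mapping per fragment (equidistant model), as it might be passed to a
// Three.js ShaderMaterial. Names are assumptions, not the project's code.
const fisheyeFragmentShader = /* glsl */ `
  uniform sampler2D fisheyeTex;   // raw fisheye camera frame
  uniform float focalPinhole;     // focal length of the virtual pinhole view
  uniform float focalFisheye;     // fisheye focal length (equidistant model)
  varying vec2 vUv;

  void main() {
    // Center the output coordinate on the optical axis.
    vec2 p = vUv - 0.5;
    float r = length(p);
    // Pinhole ray angle, then the equidistant fisheye radius for it.
    float theta = atan(r / focalPinhole);
    float rf = focalFisheye * theta;
    // Sample the fisheye image along the same azimuth.
    vec2 src = (r > 0.0) ? p * (rf / r) : vec2(0.0);
    gl_FragColor = texture2D(fisheyeTex, src + 0.5);
  }
`;
```

Because the mapping is recomputed per fragment each frame, viewpoint-dependent parameters can be updated as uniforms without any CPU-side remapping pass.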

The result is a visually stable, perspective-corrected image that preserves the illusion of a physical window while operating within a web-based rendering stack.

2. Kinect Hardware Interface and WebSocket Communication

I implemented the hardware interface layer for Microsoft Kinect depth sensing and built the WebSocket communication bridge between the sensing subsystem and the browser-based rendering client.


The WebSocket layer allowed the browser-based Three.js renderer to receive real-time viewer position updates, enabling perspective-correct rendering without requiring native plugins.
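A browser-side handler for those position updates might look like the sketch below. The message schema (`{x, y, z}` in meters) and the exponential smoothing are assumptions added for illustration, not the project's actual protocol.

```javascript
// Sketch of the browser-side handler for viewer-position updates arriving
// over the WebSocket bridge. The {x, y, z} message schema and smoothing
// factor are illustrative assumptions, not the project's actual protocol.
function makeViewerTracker(alpha = 0.3) {
  let pos = null; // last smoothed position
  return function onMessage(data) {
    const { x, y, z } = JSON.parse(data);
    if (pos === null) {
      pos = { x, y, z };
    } else {
      // Exponential smoothing damps depth-sensor jitter before it
      // reaches the projection update, at the cost of slight latency.
      pos = {
        x: pos.x + alpha * (x - pos.x),
        y: pos.y + alpha * (y - pos.y),
        z: pos.z + alpha * (z - pos.z),
      };
    }
    return pos;
  };
}

// In the renderer this would be wired to a WebSocket, e.g.:
//   const track = makeViewerTracker();
//   ws.onmessage = (ev) => updateCamera(track(ev.data));
```

Keeping the tracker a plain function of incoming messages keeps the sensing protocol decoupled from the rendering loop, mirroring the separation of concerns described above.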

This architecture separated sensing, networking, and rendering concerns while preserving real-time responsiveness.

Software-Defined Networking Context

Magic Window demonstrations were also used in conjunction with Georgia Tech’s software-defined networking initiatives to showcase advanced network orchestration and dynamic media routing. The system provided a compelling visualization endpoint for high-performance, programmable network research.

Impact

Magic Window sits at the intersection of real-time web graphics, depth sensing, and programmable networking.

My contributions focused on the graphics and sensing pipeline that made the spatial illusion technically viable in a browser-based environment. By implementing GPU-accelerated fisheye correction and the Kinect-to-WebSocket bridge, I helped establish the real-time, perspective-aware rendering core of the system.

The project illustrates how modern web graphics, depth sensing hardware, and programmable networks can converge to produce immersive, responsive visualization environments without relying on proprietary rendering engines.