Saturday, August 7, 2010

Interactive Architecture: Behind the Scenes with Modular, Proximity-Sensing Display Tiles


Brooklyn-based interactive artist Robert Stratton writes to share his interactive, modular LED display system, currently on view through the end of August in a window on 53rd Street in Manhattan, between 5th and 6th Avenues, across from the Museum of Modern Art. The project uses proximity sensors built by Sensacell.
This installation is an interactive LED triptych on display on 53rd St between 5th and 6th through August 2010. Children were prompted to make various expressions and funny faces. The video plays on two layers, and participants can manipulate rectangular “holes” in the upper layer to partially reveal the video in the lower layer, creating a portrait of a hybrid person conveying a hybrid emotion.
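The mechanic of the piece is simple to sketch in code. The fragment below is a purely illustrative Python/NumPy version (the installation itself runs in Max/MSP/Jitter, and the function and parameter names here are invented for the example): it copies one rectangular region of the lower video frame into the upper frame, which is all a movable “hole” amounts to.

    import numpy as np

    def composite_reveal(upper, lower, hole_x, hole_y, hole_w, hole_h):
        """Copy a rectangular region of the lower frame into the upper frame.

        upper, lower: video frames of identical shape (H x W x 3, uint8).
        (hole_x, hole_y): top-left corner of the "hole", in pixels.
        (hole_w, hole_h): size of the hole, in pixels.
        """
        out = upper.copy()
        h, w = upper.shape[:2]
        # Clamp the hole to the frame so it can slide off the edge gracefully.
        x0, y0 = max(0, hole_x), max(0, hole_y)
        x1, y1 = min(w, hole_x + hole_w), min(h, hole_y + hole_h)
        if x1 > x0 and y1 > y0:
            out[y0:y1, x0:x1] = lower[y0:y1, x0:x1]
        return out

Driving the hole coordinates from the proximity sensors, frame after frame, is what lets a passerby blend two faces into one hybrid portrait.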
This kind of installation has gone from being a novelty to a sort of medium in itself – and perhaps a new venue for visualist work. So I asked to hear more about how Bob put together the project and what he learned. He responds for CDM:
As I mentioned, the interactive modular LED system is made by my friends at NYC-based Sensacell Corporation. There is plenty of information about their offering and the technical specs of their system on their site at www.sensacell.com, but basically, they make modular LED tiles with built-in capacitive proximity sensors. I’ve been working with them doing content and display software for each successive version of the system since they started in the early 2000s (I was a partner with one of the principals, Leo Fernekes, in the technology/surveillance-themed nightclub Remote Lounge in the early part of the decade). The tiles can work in various “autonomous modes” where they simply light up when the sensors are triggered, or (and this is where I come in) one can write software to read the sensors and to send display data to the tiles.
Earlier this year, I started Madbutter (www.madbutter.com) as a design and programming studio to develop content and programming for these interactive installations. I’ve gotten a lot of favorable attention for the pieces I have done so far, but I’m still looking to get the word out on the capabilities of this system to architects and designers looking to install interactive art that can work reliably and effectively at this “architectural” scale.
The current version of the system that I am using here is based on 6″ square tiles, each with full RGB LEDs set at an inch pitch – 36 independently addressable LEDs per tile – and 4 proximity sensors. These can be arranged into arrays (or irregular shapes, for that matter) of virtually any size. We have put them in floors, in and on walls, in windows, in furniture, around pillars, etc. The general idea is to cover the tiles with a translucent, non-conductive surface that protects them and diffuses the LEDs slightly. In the case of this installation, the tiles are fully interactive through a half inch of frosted plexiglass and a quarter inch of plate glass.
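For a sense of scale, the resolution follows directly from the specs he gives. The arithmetic below, in plain Python, simply restates those figures (6-inch tiles, 1-inch LED pitch, a 2 x 2 grid of sensors per tile; the variable names are mine) and works out the pixel and sensor grids for a single 4-by-6-foot panel:

    # Figures from the description above; variable names are illustrative.
    TILE_SIZE_IN = 6            # each tile is 6 inches square
    LED_PITCH_IN = 1            # one RGB LED per inch
    SENSORS_PER_TILE_SIDE = 2   # 4 proximity sensors per tile, i.e. a 2 x 2 grid

    leds_per_tile = (TILE_SIZE_IN // LED_PITCH_IN) ** 2          # 36
    panel_w_in, panel_h_in = 4 * 12, 6 * 12                      # one 4' x 6' panel

    panel_pixels = (panel_w_in // LED_PITCH_IN,
                    panel_h_in // LED_PITCH_IN)                  # (48, 72)
    panel_sensors = (panel_w_in // TILE_SIZE_IN * SENSORS_PER_TILE_SIDE,
                     panel_h_in // TILE_SIZE_IN * SENSORS_PER_TILE_SIDE)  # (16, 24)

    print(leds_per_tile, panel_pixels, panel_sensors)  # 36 (48, 72) (16, 24)

Three such panels side by side account for the 144×72 display matrix and 48×24 sensor matrix he describes next.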
I wrote the display system in Max/MSP/Jitter. I worked with Sensacell to develop a custom box (we call it a SensaNode) that sends to my control computer, over TCP/IP, the polled sensor data of the whole tile array as a Jitter matrix, and that in turn reads in a matrix of display data sent from Jitter to cut up, process, and route to the individual tiles. Currently we are comfortably and reliably getting 20 fps I/O on our read/write cycle, so I author display data accordingly. Because the pixel pitch is relatively large, the combined triptych (three 4-foot-wide by 6-foot-tall panels) in the 53rd St installation uses only a 144×72 display matrix, and the sensor matrix is only 48×24, so it is important to author content that is effective at that resolution! More important, of course, is coming up with interesting and immediately apparent ways to use the incoming sensor matrix to manipulate the outgoing display data in real time. I usually use some computer vision externals (the excellent cv.jit package) to process the sensor data to give me centroid or blob coordinates that I can use to interactively track some value I can manipulate in Jitter.
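To make that pipeline concrete, here is a rough sketch of the same read/write cycle in Python/NumPy rather than Jitter: poll the sensor matrix, reduce it to a centroid (standing in for the cv.jit blob tracking), use the centroid to drive a frame of display data, and write the frame back out at the rate Bob quotes. The SensaNode protocol itself is not public, so the read_sensors and write_display callables below are placeholders for the actual TCP/IP exchange.

    import time
    import numpy as np

    SENSOR_SHAPE = (24, 48)     # rows x cols of the triptych's sensor matrix
    DISPLAY_SHAPE = (72, 144)   # rows x cols of its LED display matrix
    FPS = 20                    # the comfortable I/O rate quoted above

    def sensor_centroid(sensors):
        """Return the (row, col) centroid of active sensors, or None if idle."""
        rows, cols = np.nonzero(sensors)
        if len(rows) == 0:
            return None
        return rows.mean(), cols.mean()

    def render_frame(centroid):
        """Draw a simple spot that follows the tracked centroid (illustration only)."""
        frame = np.zeros(DISPLAY_SHAPE + (3,), dtype=np.uint8)
        if centroid is not None:
            # Scale from sensor coordinates (3" pitch) up to display coordinates (1" pitch).
            r = int(centroid[0] * DISPLAY_SHAPE[0] / SENSOR_SHAPE[0])
            c = int(centroid[1] * DISPLAY_SHAPE[1] / SENSOR_SHAPE[1])
            frame[max(0, r - 4):r + 4, max(0, c - 4):c + 4] = (255, 128, 0)
        return frame

    def run(read_sensors, write_display):
        """Main loop. read_sensors() returns a boolean sensor matrix;
        write_display(frame) pushes a display matrix back to the tiles.
        Both stand in for the TCP/IP exchange with the SensaNode box."""
        period = 1.0 / FPS
        while True:
            start = time.time()
            sensors = read_sensors()
            write_display(render_frame(sensor_centroid(sensors)))
            time.sleep(max(0.0, period - (time.time() - start)))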
Aside from this particular work, I’m very intrigued by the potential of the technology and how a variety of artists might push it in different directions. I hope we can get a discussion going here; do join in.
