
Siftable Computing

Monday, February 9th, 2009 | General, Navimation examples

For a while I have been thinking about how the principles of navimation can be embodied in the physical world. There is no reason why navigation intertwined with screened movement should be confined to mobile phones and desktop computers. Then I stumbled upon this ‘Siftable Computing’ video, demonstrating a new mixed-reality interface:

This is a student project by David Merrill and Jeevan Kalanithi from the MIT Media Lab, presented at the TED conference (via Wired).

The interface (or should we call them interfaces?) consists of many tiny cubes with screens. Each cube has motion sensors, and the cubes react to one another when placed side by side. The video also shows how these small boxes can relate to a larger screen for different types of interactions.
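To make the idea a bit more concrete, here is a purely hypothetical sketch (in Python, and emphatically not the actual Siftables software) of the kind of logic such cubes might run: each cube reports a position and a tilt state, and a coordinator fires an interaction whenever two cubes become neighbours. All names and behaviours below are my own invention for illustration.

```python
# Hypothetical sketch of siftable-style adjacency behaviour.
# Grid positions stand in for real proximity sensing, and a boolean
# "tilted" flag stands in for the motion sensor.

from dataclasses import dataclass
from itertools import combinations

@dataclass
class Cube:
    name: str
    x: int                 # grid position (proxy for proximity sensing)
    y: int
    tilted: bool = False   # proxy for the motion sensor

def neighbours(a: Cube, b: Cube) -> bool:
    """Two cubes count as adjacent when they sit one grid cell apart."""
    return abs(a.x - b.x) + abs(a.y - b.y) == 1

def react(cubes: list[Cube]) -> None:
    """Trigger a (printed) interaction for every adjacent pair of cubes."""
    for a, b in combinations(cubes, 2):
        if neighbours(a, b):
            # Invented rule: a tilted cube "pours" into its neighbour,
            # otherwise the two cubes simply "link".
            action = "pour" if a.tilted or b.tilted else "link"
            print(f"{a.name} + {b.name}: {action}")

cubes = [Cube("red", 0, 0, tilted=True), Cube("blue", 1, 0), Cube("green", 3, 2)]
react(cubes)   # prints: red + blue: pour
```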

Obviously this platform opens up a range of possibilities for interaction design, and new and interesting ways of communicating through the interface. It raises questions of what an interface can be and, for example, how the sensation of space can be manipulated. The cubes can be rearranged in the physical environment (restricted by the laws of nature), while the screen spaces allow representations of all kinds of spatial environments. When interaction and movement (both real and screened) are introduced, a range of combinations becomes possible. There must be many possibilities to explore beyond those presented in the video. I hope the inventors will be able to turn this into a commercial product, as it would allow a range of new and exciting interfaces to be designed.

UPDATE: Lise tipped me off about the nice presentation by David Merrill at TED.
