Smart Room Technology

I work at the National Center for Supercomputing Applications. A few years ago, I designed Smart Room technology. I used Java and Jini to create an environment of hardware and software services that could interact with people in a variety of ways. I wrote everything below (except the RFID tag reading software) in less than three months, and demoed it at the NCSA annual partner meeting.

That was all a few years ago. Since that time, I started a project using Jini to do Home Automation. There are even examples of how the voice synthesis sounds when speaking the time! You can hear examples here, here, and here. Yes, that's my real voice, and no, that's not just a recording. I used FreeTTS (a follow-on software package that I adopted well after I wrote the original Smart Room). Basically, FreeTTS splits recorded speech into small pieces and reassembles them to say new phrases when you pass it English text. You have to train the software on certain domains (like Time, or, for the original room, Weather), and it works very well.
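As a rough illustration, the basic FreeTTS calls look like the sketch below. It assumes the stock "kevin16" general-purpose voice that ships with FreeTTS rather than a trained limited-domain voice; the VoiceManager and Voice calls are the standard FreeTTS API.

    import com.sun.speech.freetts.Voice;
    import com.sun.speech.freetts.VoiceManager;

    public class SpeakDemo {
        public static void main(String[] args) {
            // Look up one of the voices bundled with FreeTTS. "kevin16" is the
            // stock 16 kHz general-purpose voice; a trained limited-domain voice
            // (time, weather) is driven the same way.
            Voice voice = VoiceManager.getInstance().getVoice("kevin16");
            if (voice == null) {
                System.err.println("Voice not found; check the FreeTTS voice jars on the classpath.");
                return;
            }
            voice.allocate();                    // load the voice data
            voice.speak("The time is now twelve oh five.");
            voice.deallocate();                  // release the audio resources
        }
    }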

Applications

RFID tag reader and software used to configure the room to match user preferences.

Tag "A" gets within range of the RFID reader, and a light go on, and classical music starts to play. Tag "A" leaves the area and the light goes off and the music stops. Tag "B" gets within range, the light goes back on, but this time rock music starts to play. When both badges are within range, it looks at the music preference that both users have, and decides to play music the both like, in this case, blues.

The reader software was written by Mary Pietrowicz when she worked in the Computer Science department at the University of Illinois. It used JavaSpaces to publish information about the badges. The reader software accessed services in the room, namely the light control and music services, and used those services as badges came and went.
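To give a feel for the JavaSpaces side of that, here is a minimal sketch of how a badge sighting might be published and consumed. The BadgeEntry class and its field names are hypothetical; the Entry, write, and take calls are the standard Jini JavaSpaces interface, and the JavaSpace proxy itself would come from Jini lookup (omitted here).

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Hypothetical entry describing a tag the reader has seen (or lost).
    public class BadgeEntry implements Entry {
        public String badgeId;    // e.g. "A" or "B"
        public Boolean inRange;   // true when the tag enters range, false when it leaves

        public BadgeEntry() { }   // JavaSpaces entries need a public no-argument constructor

        public BadgeEntry(String badgeId, Boolean inRange) {
            this.badgeId = badgeId;
            this.inRange = inRange;
        }

        // Reader side: publish a sighting into the space.
        public static void publish(JavaSpace space, String id, boolean present) throws Exception {
            space.write(new BadgeEntry(id, Boolean.valueOf(present)), null, Lease.FOREVER);
        }

        // Room-controller side: block until any badge event arrives, then react.
        public static BadgeEntry nextEvent(JavaSpace space) throws Exception {
            BadgeEntry template = new BadgeEntry();    // null fields match anything
            return (BadgeEntry) space.take(template, null, Long.MAX_VALUE);
        }
    }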

PDA software that allowed control of room services

An iPAQ had its OS removed, and a new Java OS was installed on it, running a full implementation of J2SE. Communication between the services and a GUI on the PDA allowed the services to be controlled from the PDA. Unfortunately, the Java OS is no longer available from the company that once distributed it.
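For reference, finding a room service over Jini from a client like the PDA GUI looks roughly like the sketch below. The LightControl interface and its method are hypothetical stand-ins for one of the room services; LookupDiscovery, ServiceRegistrar, and ServiceTemplate are the standard Jini discovery classes.

    import java.io.IOException;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;

    // Hypothetical remote interface a light service might publish.
    interface LightControl extends Remote {
        void setLightOn(boolean on) throws RemoteException;
    }

    public class FindLightService {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Multicast discovery of Jini lookup services in any group.
            LookupDiscovery discovery = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
            discovery.addDiscoveryListener(new DiscoveryListener() {
                public void discovered(DiscoveryEvent event) {
                    for (ServiceRegistrar registrar : event.getRegistrars()) {
                        try {
                            // Ask the lookup service for anything implementing LightControl.
                            ServiceTemplate template =
                                new ServiceTemplate(null, new Class[] { LightControl.class }, null);
                            LightControl lights = (LightControl) registrar.lookup(template);
                            if (lights != null) {
                                lights.setLightOn(true);   // call through the service proxy
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
                public void discarded(DiscoveryEvent event) { }
            });

            Thread.sleep(10000);   // give multicast discovery time to find a lookup service
        }
    }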

Voice control software to activate anything in the room and to make queries

The application that tied together all of the services in the room was a Java Speech application. It handled commands for manipulating the lights in the room, controlling where the cameras were pointing, requests for weather conditions in 500 different cities in the US, requests for the time, and more.
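As an illustration of the Java Speech side, here is a minimal sketch of a command grammar with a result listener. The grammar rules and the "smartroom" name are made-up examples; Central, Recognizer, and the JSGF loading calls are the standard Java Speech API (JSAPI 1.0), and the sketch needs a JSAPI-compliant recognizer installed.

    import java.io.StringReader;
    import javax.speech.Central;
    import javax.speech.recognition.FinalRuleResult;
    import javax.speech.recognition.Recognizer;
    import javax.speech.recognition.ResultAdapter;
    import javax.speech.recognition.ResultEvent;
    import javax.speech.recognition.ResultToken;
    import javax.speech.recognition.RuleGrammar;

    public class VoiceCommands {
        // A tiny JSGF grammar; the real room grammar covered lights, cameras,
        // weather, time, and more.
        private static final String JSGF =
            "#JSGF V1.0;\n" +
            "grammar smartroom;\n" +
            "public <command> = (turn on | turn off) the lights | what time is it;\n";

        public static void main(String[] args) throws Exception {
            Recognizer recognizer = Central.createRecognizer(null);   // default engine
            recognizer.allocate();

            RuleGrammar grammar = recognizer.loadJSGF(new StringReader(JSGF));
            grammar.setEnabled(true);
            grammar.addResultListener(new ResultAdapter() {
                public void resultAccepted(ResultEvent event) {
                    FinalRuleResult result = (FinalRuleResult) event.getSource();
                    StringBuilder spoken = new StringBuilder();
                    for (ResultToken token : result.getBestTokens()) {
                        spoken.append(token.getSpokenText()).append(' ');
                    }
                    // Dispatch to the matching room service here (lights, time, ...).
                    System.out.println("Heard: " + spoken.toString().trim());
                }
            });

            recognizer.commitChanges();   // make the grammar change take effect
            recognizer.requestFocus();
            recognizer.resume();          // start listening
        }
    }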

Controlling other equipment in the Smart Room

That's a picture of me standing in front of the NCSA visualization group's tiled display wall. I worked on that wall while I was in that group. The wall is driven by a Linux cluster connected to 40 projectors, and runs at a resolution of 8192 by 3840. Since the wall is rear-projected, you can walk right up to it without casting a shadow over what you want to look at. Note that this is much different from the huge displays you might have seen playing movies at appliance stores. The resolution of those displays is the same as your television's, just scaled up to that size. If you were to get close to their screens, you'd see some pretty big pixels.

If you look carefully at the desktop I'm standing in front of, you can see the StartService GUI to the left of me, JBuilder running behind me, and a few browser windows and xterms.

You could use my software to bring up multiple applications on the wall at once, all via the Smart Room's PDA and voice recognition applications.

Services

Other services in the room included:

Songs - played music CDs for the room through the JMF library. This was controllable using RFID tags, voice control, or by picking from a JList in the GUI (see the playback sketch after this list).

Lights - controlled the lights in the room via X-10, from the GUI or by voice.

Weather - two services: one that received requests for weather conditions and returned them as text, and a second that took that text and spoke it in high-quality synthesized speech. I used the Festival package for the high-quality text-to-speech.

Time - used the same high-quality speech backend mentioned above to speak the time.

Cameras - I had some Pan-Tilt-Zoom cameras and a nice little video server running Linux that I could control by voice. I had these cameras doing motion detection and movement tracking, so they would follow people around the room.
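For the Songs service mentioned above, the JMF playback it wrapped looks roughly like this sketch. The file path is just a placeholder; Manager.createRealizedPlayer and the Player controls are the standard Java Media Framework API.

    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.Player;

    public class PlaySong {
        public static void main(String[] args) throws Exception {
            // Placeholder media location; the room service picked tracks based on
            // the preferences tied to each RFID tag.
            MediaLocator track = new MediaLocator("file:///tmp/example.wav");

            // Create a player that is already realized (media and renderers resolved).
            Player player = Manager.createRealizedPlayer(track);
            player.start();           // begin playback

            Thread.sleep(10000);      // let it play for a bit in this sketch
            player.stop();
            player.close();           // release the audio device
        }
    }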

Visit my tech blog Future Steve.


All images and Text Copyright 2002-2009 Stephen R. Pietrowicz All Rights Reserved

Contact: Steve Pietrowicz
e-mail: srp@magiclamp.org