You will need:
- 1 x PC (which I assume you already have)
- 1 x Microsoft Kinect (about £120)
- 1 x Puffersphere (worth quite a lot, but you can hire them from Pufferfish)
Total cost: more than a few weeks’ pocket money…
There was a lot of excitement in the Technology Studio this week when a nice man with a van dropped off three large flight cases containing something rather special: a spherical display system called a Puffersphere. Pufferfish, the company which invented them, has been kind enough to lend us one for a week.
Things got even better when the man from Pufferfish turned up the following day, helped us put it together and took me through a slightly mind-bending briefing on how to use the thing. He brought us some donuts though, so that made it a lot easier to deal with.
Although the brief I was given was to come up with something we could use as part of our upcoming appearance at the 2011 Association of the British Pharmaceutical Industry Annual Conference, the inner geek took over and we quickly decided the first project would be a massive eyeball.
To get an image onto the Puffersphere you start off with a panoramic image and use a polar distortion to get it into a form that the projector’s super-special Super Umami lens can then display onto the inside of the sphere. Before and after look like this:
I’ll post some more technical detail on this later – but the short version is that the top of the origin rectangle ends up as the centre point of the circle, and the bottom of the rectangle ends up as the outside of the circle.
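To make that mapping concrete, here's a minimal sketch of the polar distortion in Python with NumPy. This is my own illustration, not Pufferfish's tooling: the function name and the simple nearest-neighbour sampling are assumptions, but the geometry matches the description above (top row of the panorama becomes the centre of the circle, bottom row becomes the outer edge):

```python
import numpy as np

def panorama_to_polar(pano, size):
    """Warp a panoramic image (H x W array) into a circular image of the
    given square size: the panorama's top row maps to the centre of the
    circle and its bottom row maps to the outer edge."""
    h, w = pano.shape[:2]
    radius = size / 2.0
    # Output pixel coordinates, measured from the centre of the circle.
    ys, xs = np.mgrid[0:size, 0:size]
    dx = xs - radius + 0.5
    dy = ys - radius + 0.5
    r = np.sqrt(dx * dx + dy * dy)        # distance from the centre
    theta = np.arctan2(dy, dx)            # angle around the circle
    # Radius picks the panorama row (centre = top row), angle picks the column.
    src_row = np.clip(r / radius * (h - 1), 0, h - 1).astype(int)
    src_col = ((theta / (2 * np.pi)) % 1.0 * (w - 1)).astype(int)
    out = pano[src_row, src_col]
    out[r > radius] = 0                   # black out everything outside the circle
    return out
```

Running the real pipeline would of course use the proper distortion filter rather than this rough nearest-neighbour version, but it shows why a horizontal pan of the panorama comes out as a rotation of the circle.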
Once you have the image it’s a simple matter to get it into a full-screen WPF app, which can then be displayed on the sphere. The centre of the image ends up as the top of the sphere, and the edges converge on the bottom. This means that rotating the image on the sphere in the horizontal plane is as easy as applying a WPF RotateTransform to the image.
For the next step, the Kinect. The process for getting this up and running on the PC using OpenNI is well documented elsewhere, so I won’t repeat it. The OpenNI framework includes a user generator that uses the feed from the Kinect’s depth camera to detect individuals within the scene. Once that’s all in place, you can convert the real-world position of the user (given in Cartesian co-ordinates) into polar co-ordinates, and use the angle to rotate the eye correctly.
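That last conversion is just a bit of trigonometry. Here's a sketch of it in Python – the function name is hypothetical, and in practice the x and z values would come from OpenNI's user generator (for example the user's centre of mass), with x across the sensor and z straight out from it:

```python
import math

def user_angle_degrees(x, z):
    """Convert a user's horizontal-plane position (x, z) relative to the
    sensor into a bearing in degrees, suitable for driving the rotate
    transform. The height axis doesn't affect rotation, so it's ignored.
    A user standing directly in front of the sensor gives 0 degrees."""
    return math.degrees(math.atan2(x, z)) % 360.0
```

Feed that angle into the RotateTransform each frame and the eye follows whoever is standing in front of the Kinect.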
As has been pointed out – not the most ground-breaking use of a Kinect ever, but it does show how two pieces of technology can be combined to create something new and interesting that – although not immediately apparent – has genuine commercial applications.
For the interested, I’m keeping the code I write for this in the Earthware GitHub account. You can run this without having a Puffersphere, but you will need a Kinect. I’ll be putting up some more posts in the next few weeks about this and other cool and useful things we do with the sphere and Kinect.
We also took some video of the eye in action:
After we posted the video, we were very happy to see that it was picked up by Engadget. Many of the comments echoed something the more geeky among us had been thinking from the start: “we want the eye of Sauron”. And we’re nothing if not responsive to our customers’ requirements: