Fast Company this month has a full-page ad for their new Fast CoCreate site. I'm the funny-looking dude in the corner. Right under Martin Scorsese. Pretty neat!
My friends Garrett and Reichart have a new show on the History Channel called Invention USA, and on this week’s episode they’re going to be covering my Haptic Compass belt. Tune in Friday night at 10:30pm. You can go out to a party right after that.
For the show, I made a few improvements to the belt. Most notably, I integrated it with an overhead camera, which tracks the belt wearer. Combining the camera’s position information with the belt’s orientation information, I can “buzz” the belt in the direction of nearby obstacles.
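Here's a rough sketch of the direction math, just to give the flavor (the motor count and the function names are my illustration, not the actual belt firmware):

```python
import math

NUM_MOTORS = 8  # illustrative: pager motors spaced evenly around the belt

def motor_for_obstacle(wearer_xy, obstacle_xy, heading_rad):
    """Pick the motor closest to an obstacle's bearing. The overhead camera
    supplies both positions; the belt's compass supplies the heading."""
    dx = obstacle_xy[0] - wearer_xy[0]
    dy = obstacle_xy[1] - wearer_xy[1]
    bearing = math.atan2(dy, dx)                        # bearing in room frame
    relative = (bearing - heading_rad) % (2 * math.pi)  # bearing in body frame
    return round(relative / (2 * math.pi) * NUM_MOTORS) % NUM_MOTORS
```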
The punchline? A blind man can navigate a crowded living room without his cane. Tune in and check it out.
I love Tropo’s WebAPI because it lets me code up complex voice interactions with my local services and data.
To initiate a call, I POST to the outbound session endpoint. Tropo POSTs back to my voice endpoint. I respond with a JSON-encoded instruction. For complex conversation sessions, we'll go back and forth like this many, many times.
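Kicking off a call can be as simple as this (the token and number are placeholders, and the parameter names are from memory of Tropo's docs, so double-check them):

```python
import requests

# Launch an outbound session; Tropo will then POST to my voice endpoint.
requests.post("https://api.tropo.com/1.0/sessions",
              params={"action": "create",
                      "token": "MY_TROPO_TOKEN",    # placeholder
                      "number": "+15555550100"})    # placeholder
```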
How do you, as a programmer, keep track of where you are in a conversation? Do you write a state machine? If you've ever gone down the state machine path, you'll recognize that it can become a twisted mess of if-then-else statements and despair.
Instead, consider coding your conversations with Tropo's API as Python coroutines. Coroutines are an oft-overlooked feature of Python (and other languages). A coroutine looks like a function, but it can "suspend" mid-execution and yield an intermediate result. Later, the coroutine can be re-entered where it suspended, and continue executing with some new data.
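If you've never seen one, a tiny example shows the whole trick: `yield` hands a value out and suspends; `send()` resumes the function with a value in.

```python
def greeter():
    # Runs until the first yield, then suspends mid-function.
    name = yield "What's your name?"
    color = yield "Hi %s! What's your favorite color?" % name
    yield "%s is a fine color, %s." % (color, name)

conv = greeter()
print(next(conv))          # runs to the first yield: "What's your name?"
print(conv.send("Eric"))   # resumes right where it left off, name = "Eric"
print(conv.send("blue"))   # same again, color = "blue"
```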
Our coroutine is created when Tropo first hits our voice endpoint. Because our coroutine will exist for the duration of our conversation, we keep it in a dictionary, indexed by Tropo SessionId.
When first entered, the coroutine will yield some Tropo JSON, which is sent as a reply to Tropo’s POST. After yielding, the coroutine is suspended mid-execution. Tropo acts on our commands, and if the session interaction continues it will POST the results back to our endpoint. At this point we send the new body of Tropo’s POST to our coroutine. The coroutine is resumed at exactly the line where it previously yielded.
As I hope you can see in the example below, this leads to a very natural and easily maintainable coding style for Tropo conversations.
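Here's a minimal sketch of the pattern. The "say" and "ask" verbs follow Tropo's WebAPI, but treat the exact field names and the toy pizza conversation as illustrative rather than production code:

```python
import json

conversations = {}  # Tropo SessionId -> suspended coroutine

def say(text):
    return {"tropo": [{"say": {"value": text}}]}

def ask(text):
    return {"tropo": [{"ask": {"say": {"value": text},
                               "choices": {"value": "[1 DIGITS]"}}}]}

def pizza_conversation():
    answer = yield ask("Press 1 for delivery, 2 for carryout.")
    if answer == "1":
        yield say("A driver is on the way.")
    else:
        yield say("See you soon.")

def voice_endpoint(request_body):
    """Called on every POST from Tropo; returns the JSON reply."""
    body = json.loads(request_body)
    if "session" in body:                        # a brand-new call
        session_id = body["session"]["id"]
        conv = conversations[session_id] = pizza_conversation()
        return json.dumps(next(conv))            # run to the first yield
    session_id = body["result"]["sessionId"]     # a continuing call
    digit = body["result"]["actions"]["value"]   # the caller's answer
    try:
        return json.dumps(conversations[session_id].send(digit))
    except StopIteration:                        # conversation finished
        del conversations[session_id]
        return json.dumps({"tropo": [{"hangup": {}}]})
```

The logic of the conversation reads top to bottom, and the state machine disappears into the coroutine's instruction pointer.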
I talk good and I look funny, so they put me on the TV.
If you have a television, catch me, Brent Bushnell, and Dan Busby making our television premiere on Extreme Makeover: Home Edition, Sunday night at 8pm on ABC.
We're the new on-camera "high-tech" designers for the show, which means that throughout the season you can find us building awesome custom technology for people in need, and installing it in their brand new homes.
ShadowSmoke is a piece created in early 2009. It is a Navier-Stokes fluid dynamics simulator that visualizes the movement of dense fluid with vivid colors. Its first incarnation was known as Besmoke; with the addition of computer vision to detect human motion disturbing the scene, it's now known as ShadowSmoke.
At its core it is a grid-based Navier-Stokes fluid simulation that approximates the fluid dynamics in a stable and computationally inexpensive way. It's based on Jos Stam's "Real-Time Fluid Dynamics for Games" paper. Each grid cell has a density magnitude and a velocity vector, and the algorithm evolves those parameters each time step. Blue represents areas of higher density and red represents areas of lower density. Black regions represent "obstacles" that admit no density or velocity. The obstacle map is loaded from a PNG file.
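To give a feel for how simple the core update is, here's the diffusion step sketched in Python/numpy (the piece itself is C++ on OpenCV matrices, and I've swapped Stam's Gauss-Seidel sweep for a Jacobi-style one, which converges to the same answer):

```python
import numpy as np

N = 128  # grid resolution, matching the 128^2 grid described below

def diffuse(d0, diff, dt, iters=20):
    """One diffusion step: each cell exchanges density with its four
    neighbors. Solving it implicitly (a few Jacobi sweeps here) keeps
    the simulation stable no matter how large dt gets."""
    a = dt * diff * N * N
    d = d0.copy()
    for _ in range(iters):
        d[1:-1, 1:-1] = (d0[1:-1, 1:-1]
                         + a * (d[:-2, 1:-1] + d[2:, 1:-1]
                                + d[1:-1, :-2] + d[1:-1, 2:])) / (1 + 4 * a)
    return d
```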
You can interact with the simulation in the most basic way using the mouse: the left and right mouse buttons set regions of high density and modify the velocity vector field.

ShadowSmoke also listens for sound input to introduce new sources of dense fluid. "Thumps" (low-passed sounds, like music's thumping bass) cause dense fluid to be injected into the very middle of the screen. Loud sounds in any frequency band cause the emitter to eject dense fluid. The emitter is a point that moves clockwise around the perimeter of the screen at a fixed speed. As you can see from the video, loud low sounds trigger both behaviors.

ShadowSmoke is also accelerometer-aware. In the video, you can see I'm using my iPhone to "change the gravity" in the simulation. I could hook any accelerometer up to this system, of course; the iPhone was a convenient source of data. I've connected the OCZ NIA as well, allowing me to set the overall agitation level of the fluid based on the alpha wave, beta wave, or EMG output of that device. In this video, you can see how my "agitation level" affects the fluid characteristics. I can also use multitouch input as a substitute for the mouse, using multiple fingers to "agitate" the fluid simulation by altering the velocity vector field.
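Sketched in Python, the two sound behaviors amount to something like this (the thresholds and the structure are illustrative; the real analysis lives in the ChucK shreds described below):

```python
GRID = 128          # simulation grid size
THUMP_LEVEL = 0.5   # placeholder thresholds; the real values live
LOUD_LEVEL = 0.8    # in the ChucK sound-analysis shreds
INJECT = 100.0      # placeholder amount of density per event

def emitter_cell(t):
    """Map t in [0, 1) to a grid cell on the perimeter, clockwise."""
    edge, frac = divmod(4 * (t % 1.0), 1)
    step = int(frac * (GRID - 1))
    if edge == 0: return step, 0                     # top, left to right
    if edge == 1: return GRID - 1, step              # right side, downward
    if edge == 2: return GRID - 1 - step, GRID - 1   # bottom, right to left
    return 0, GRID - 1 - step                        # left side, upward

def on_audio_frame(low_energy, total_energy, t, density):
    if low_energy > THUMP_LEVEL:       # bass thump: inject at center
        density[GRID // 2][GRID // 2] += INJECT
    if total_energy > LOUD_LEVEL:      # any loud sound: emitter ejects
        x, y = emitter_cell(t)
        density[y][x] += INJECT
```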
Here are some technical details:
- The simulation is based on the work of Jos Stam. His paper is a great read.
- I'm simulating a 128^2 cell grid. The graphics are rendered using OpenGL.
- I'm using OpenCV internally to represent the density and velocity vector fields. I'm not actually doing any computer vision here, but OpenCV is good at manipulating large matrices.
- I'm using the ChucK audio programming language for sound analysis. There are two shreds running, one for each of the sound behaviors described above.
- I'm running OSCemote on the iPhone to capture accelerometer and multitouch input. OSCemote is awesome, and I use it to configure and control most of my new projects.
- The components communicate using Open Sound Control. The multitouch events are represented using TUIO, so you should be able to run this on any touchlib/reactable device. I'm using liblo in C++ to receive OSC events (sketched below).
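As a sketch of that OSC plumbing, here's the receiving side with the python-osc package standing in for liblo; the addresses are placeholders, and the handlers are print stubs where the real code hooks into the simulation:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def set_gravity(x, y, z):
    print("gravity ->", x, y, z)    # stand-in for the real simulation hook

def on_accel(address, x, y, z):
    set_gravity(x, y, z)            # point "gravity" along the phone's vector

def on_loudness(address, level):
    print("agitation ->", level)    # stand-in for the agitation hook

dispatcher = Dispatcher()
dispatcher.map("/accelerometer", on_accel)   # placeholder address
dispatcher.map("/loudness", on_loudness)     # placeholder address
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```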
The Identity Tree is a commissioned art piece for The Leonardo, an art and science museum opening in the heart of Salt Lake City on October 8, 2011.
Your online identity has its roots in the many online services that deal in data about you. You seek to control your online identity… but your Facebook friends, Google’s algorithms, and the contents of a thousand databases define you just as strongly.
The Identity Tree draws these contradictory online selves together within its branches. Choose to grow or prune this tree, but ultimately the shape of this tree lies beyond your control.
I'm really proud of how this piece came out. It makes a discreet statement; it's made with high tech; it has a beautifully organic feel. If you're in Salt Lake City, drop by the museum and check it out.

The leaf is a projection-mapped surface, and is 9' long.
How it works…
So we were working at @syynlabs on a project that required me to make PWM-controlled air valves, essentially letting me control the airflow through a nozzle with _some_ precision.
We had these fuel injector valves already connected to propane tanks courtesy of Croston, because that's just the kind of shit we have lying around the shop.
And I was already operating the PWM code with OSC, so it was no thing to send data from the WiiMote's accelerometer. And that's a good thing, because I'm sure we'd been drinking heavily by the time we got around to shooting off fireballs in the shop.
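The whole control path was something like this sketch (python-osc standing in for my OSC plumbing; the address, serial port, and one-byte valve protocol are placeholders):

```python
import serial
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

valves = serial.Serial("/dev/ttyUSB0", 115200)   # placeholder port

def on_accel(address, x, y, z):
    # Harder swing -> wider valve opening -> bigger fireball.
    duty = min(255, int(abs(x) * 255))
    valves.write(bytes([duty]))                  # placeholder protocol

dispatcher = Dispatcher()
dispatcher.map("/wiimote/accel", on_accel)       # placeholder address
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```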
When the computer boots, it immediately loads a Google Chrome window in kiosk mode, and fetches the art installation HTML/js from a webserver on a dedicated linode. The visitor has no direct interaction with this computer. Rather, they enter their Facebook credentials on a nearby iPad in a kiosk. This iPad is running the native iOS Facebook SDK. On successful login, the iOS client hands off those credentials to the webserver via HTTP.
That server (which is python-tornado) then kicks off several Celery tasks which snatch the visitor's data from Facebook as well as various other web services (including everybody's favorite, the Internet Sex Offender Database). These data are cached in MongoDB.
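A sketch of one such task, with illustrative names and only the Facebook fetch shown:

```python
import requests
from celery import Celery
from pymongo import MongoClient

app = Celery("identity_tree", broker="redis://localhost:6379/0")
cache = MongoClient().identity_tree.visitors   # illustrative db/collection

@app.task
def fetch_facebook(visitor_id, access_token):
    """Snatch the visitor's profile and friends, then cache them."""
    params = {"access_token": access_token}
    me = requests.get("https://graph.facebook.com/me", params=params).json()
    friends = requests.get("https://graph.facebook.com/me/friends",
                           params=params).json()
    cache.update_one({"_id": visitor_id},
                     {"$set": {"profile": me, "friends": friends}},
                     upsert=True)
```

The tornado handler just calls `fetch_facebook.delay(visitor_id, token)` and returns immediately.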
When the visitor clicks the “grow” button on the iPad, the iPad uses PubNub (check it out!) to directly notify the code running in the browser. My webserver isn’t used as an intermediary—though I certainly could have used socket.io.
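The iPad side is just a publish. Here's the idea sketched against PubNub's REST publish endpoint (keys and channel name are placeholders), with the browser subscribed to the same channel via PubNub's JavaScript client:

```python
import json
from urllib.parse import quote
import requests

def notify_grow(visitor_id):
    # Publish a "grow" event straight to the channel the browser watches.
    message = quote(json.dumps({"action": "grow", "visitor": visitor_id}))
    requests.get("https://pubsub.pubnub.com/publish/"
                 "pub-key/sub-key/0/identity-tree/0/" + message)
```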
What is actually drawn to the screen is based on a Lindenmayer tree fractal generated randomly from some simple rules.
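The idea in miniature (these rewrite rules are illustrative, not the installation's):

```python
import random

# 'F' draws a branch segment; '[' and ']' push/pop the turtle's state;
# '+' and '-' turn. Each generation rewrites every 'F' with a random rule.
RULES = {"F": ["F[+F]F[-F]F", "F[+F][-F]F"]}

def grow(axiom="F", generations=4):
    s = axiom
    for _ in range(generations):
        s = "".join(random.choice(RULES[c]) if c in RULES else c for c in s)
    return s

print(grow()[:72], "...")
```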
Successfully compiled openFrameworks. It does everything Processing does, but it's in C++. And it's not in Java. Dead-simplest way to get an OpenGL context, with support for OpenCV and Arduino serial? This was made for me.
Also, downloaded and compiled Syphon and ofxSyphon. The example app builds out of the box, and it works with MadMapper, which I'm using for an upcoming project.
Interactive projection mapping, anyone?
RT @heatherknight: Our OK GO rube goldberg machine just won the British VMA for best rock video!! go @syynlabs!!! http://www.youtube.com …