Sunday, January 25, 2009

Improv tour diary

I went and checked out Jet City Improv in Seattle yesterday. I've been so obsessed with longform for the past 2 years that I had pretty much forgotten that short form existed. My two favourite games currently are He Said She Said and Three Way Dub, mostly because they are split-control games. Here are three games I saw yesterday that I had never heard of or seen before and really liked:

Double Blind
2 improvisors (C, D) leave the room. The remaining two (A, B) perform a short scene from a suggestion. A leaves the stage and C is called in. C gets a new suggestion, and now B and C perform a scene. B says the same lines and performs the same physicalization as before, while C improvises along. At the end, D is called in. D gets a suggestion and performs a scene with C, where C must use the same lines and physicalization as they just performed.

This is a great combination of broken telephone and the game Actor's Nightmare. I could easily see it being extended in a few ways. Like Actor's Nightmare, it is funny both when moments align well and when they are non sequiturs. However, the scenes need to be pretty short to be memorized. Bold choices in the scenes are really helpful here, as they make the subsequent alignments and non sequiturs much more entertaining.

This one is pretty meta. 1 improvisor (A) leaves the room, while the other improvisors come up with a game. They come up with a name first ("Mish Mash" this time) and then simple rules, decided by the improvisors only. Then A comes back, they get a suggestion and perform a hilarious scene, after which A sees if they can guess the rules.

This time, the rules they made up were:
  1. If someone speaks, at least one other person must be squished into them
  2. If A raises their voice in pitch, everyone claps and we change scenes.

It makes more sense if the improvisors come up with the rules themselves, since game rules are very important to making a scene "work." Also, it's more entertaining if the rules have some dependency on what A does. Rule 2 didn't get triggered for a while, so one improvisor came in with the feed of "sex-changing powder" (the suggestion was "Renaissance", from yours truly), and in improv-land, the way to act female is to just pitch your voice up 3 octaves. This game is very similar to Interrogation.

They played a game many people have seen, where an improvisor sits on the side of the stage and plays the scene like a video, "backwards" or "forwards". (Aside: do people ever say "ahead by one frame"?) When going backwards, the improvisors would say their lines forwards. This could be a point of contention if you cared enough. Personally, I enjoy trying to make backwards vocalizations, but actually saying the lines helps you remember where you're going.

After establishing the scene for about a minute, they did something I had never seen before: they rewound to before the beginning of the scene! This provided some interesting exposition about one of the characters (he was undead, and could come back to life by inhabiting someone else's body), which "explained" some of the apparent intentions of the characters. The exposition was made more interesting because the first time we (the audience and improvisors) saw it, we were going through it backwards. This seems like a good reason not to make backwards vocalizations.

The show also had a dedicated musician and lights controller. The lights didn't just go on and off, either; they had "disco", "horror", etc. modes. Check them out if you're ever in Seattle.

Sunday, January 18, 2009

Diamond Touch Projects

While working on 3 courses last semester, I got a head start on some research ideas. When I was starting out, I was simply interested in the idea of "Miming and Mimicry" as an inspiration. After many discussions, I've developed further ideas, but still with Miming and Mimicry as the seed. Here's the most tangible of what I've worked on:

This first one, which I titled "Space Shift", certainly doesn't count as a finished Contribution To Research, but I whipped it together quickly to try an idea out.

This second video is more about manipulating objects on a desktop. Most people, when thinking of multitouch, think of the ubiquitous "Look ma, I'm moving photos" demo. This uses very literal, realistic physics, which is great because it takes advantage of the physical intuition we already have. I played with a few very simple ways of using non-realistic physics. I am also working on creating new "types" of physics by miming. At the end of this video, we see a copy gesture inspired by miming.
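To give a flavour of what "non-realistic physics" can mean, here is a toy sketch, not the actual behaviour from the video: a standard flick rule (velocity plus friction) next to a made-up exaggerated rule where motion is amplified, so a small flick sends an object flying.

```python
# Illustrative only: two toy update rules for a flicked object.
# Neither is the code behind the video; the constants are made up.

FRICTION = 0.9  # damping factor per time step

def realistic_step(pos, vel):
    """Standard flick physics: velocity carries the object, friction slows it."""
    pos += vel
    vel *= FRICTION
    return pos, vel

def exaggerated_step(pos, vel, gain=2.0):
    """A non-realistic variant: every bit of motion is amplified."""
    pos += gain * vel
    vel *= FRICTION
    return pos, vel

def travel(step, steps=50):
    """Total distance an object flicked with velocity 10 travels."""
    pos, vel = 0.0, 10.0
    for _ in range(steps):
        pos, vel = step(pos, vel)
    return pos

print(travel(realistic_step) < travel(exaggerated_step))  # True
```

The point of playing with rules like this is that the intuition carries over (things still glide and slow down) while the response is tuned for the interface rather than for physical accuracy.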

Thursday, January 08, 2009

Where I'm going, where I've been

Near the end of the first semester of my Master's, my research topic was becoming clearer - along the lines of "teaching the use of gestural interfaces". This was motivated by the proliferation of gestural interaction in devices of many form factors. Many of my thoughts on this were driven by my improv and theatre background, especially with respect to miming and mimicry. After some emails were sent, and calls made, I have been hired as an intern at Microsoft Research in Redmond, Washington from January to April to work with the Microsoft Surface team on teaching gestural interface use, and on measuring how well a user has learned. With that in mind, here are the courses I took last semester and the projects I did.

Computational Biology
This course takes problems solved in computer science and maps them onto problems in biology that are becoming harder as more data becomes available. I initially took this course only because I am required to take "breadth" in my courses, but it turned out to be pretty enjoyable. I was part of a sub-group that examined current research into codon bias. My final project was on modifying a gene moved from one organism to another to make it perform better with respect to codon bias. I wanted a picture from each course I took, but since I don't really have a good one for Comp. Bio., here's a bunch of completely unexplained equations I made for my final project.
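To make the codon-bias idea concrete, here is a toy illustration (not my project code) of the simplest form of codon optimization: re-encode each amino acid of a protein using the host organism's most frequently used synonymous codon. The usage table below is invented for the example.

```python
# Toy codon optimization: pick the host's favourite codon per amino acid.
# HOST_USAGE is a made-up codon-usage table, not real data for any organism.
HOST_USAGE = {
    "L": {"CTG": 50, "CTC": 20, "TTA": 5},   # leucine
    "K": {"AAA": 40, "AAG": 25},             # lysine
    "F": {"TTC": 30, "TTT": 15},             # phenylalanine
}

def optimize(protein):
    """Re-encode a protein string with the host's most-used codon for each residue."""
    return "".join(max(HOST_USAGE[aa], key=HOST_USAGE[aa].get) for aa in protein)

print(optimize("LKF"))  # CTGAAATTC
```

Real codon-bias measures (like the Codon Adaptation Index) weigh all synonymous codons rather than always taking the single favourite, but the idea of adapting a gene to the host's usage is the same.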

Ubiquitous Computing
Ubicomp helped me get more coverage of the literature in my field, although more on the ubiquitous, rather than interactive, side. I was interested in ambient displays, and my project partner was interested in household devices, so we came up with the humorous yet sincere project title "Devices that Bruise: Battered Device Syndrome". The idea was to give everyday items the ability to give feedback when they receive damage, whether the damage causes immediate harm or would cause harm if the behaviour continued in the long run. Below is what we imagined a door would look like if it was slammed shut.

And here is a storyboard of some devices that have been "bruised".

In addition to giving users feedback about unintentional misuse, these devices could let you know when you are in a bad mood, as the user above realizes, much as a good friend would alert you to your mood. For our prototype, we wired an accelerometer up to some LEDs, which isn't visually interesting enough for me to show here. One concern with this sort of behaviour feedback is that it might encourage the inverse of the response we are looking for, as in the art project love hate punch, which was one of our inspirations. User testing would have to be done.
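The logic of the prototype can be sketched in a few lines. This is a reconstruction with invented threshold and fade constants, not the code we actually ran: impacts above a threshold add to a "bruise" level, the bruise slowly heals, and the LEDs display its current severity.

```python
# A sketch of "bruising" logic: accelerometer readings above a threshold
# accumulate damage; below it, the bruise slowly fades. Constants are made up.

SLAM_THRESHOLD = 2.5   # g's; readings above this count as abuse
FADE_RATE = 0.1        # how much a bruise heals per time step
MAX_BRUISE = 10.0

def update_bruise(bruise, accel_magnitude):
    """Accumulate damage on hard impacts, otherwise let the bruise fade."""
    if accel_magnitude > SLAM_THRESHOLD:
        bruise += accel_magnitude - SLAM_THRESHOLD
    else:
        bruise -= FADE_RATE
    return min(max(bruise, 0.0), MAX_BRUISE)

def led_brightness(bruise):
    """Map bruise level to a 0-255 LED intensity."""
    return int(255 * bruise / MAX_BRUISE)

# Simulated readings: two gentle closes, one slam, then quiet.
bruise = 0.0
for g in [1.0, 1.2, 6.0, 1.0, 1.0]:
    bruise = update_bruise(bruise, g)
print(round(bruise, 2), led_brightness(bruise))  # 3.3 84
```

Letting the bruise fade, rather than latching it on, is what makes the metaphor work: a single slam is forgiven, while habitual slamming keeps the door visibly "hurt".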

Machine Learning
This was mentioned in an earlier post. Upon learning about the ability of Restricted Boltzmann Machines, which are pretty good digit classifiers, to generate new digits, I immediately wanted to apply it to HCI somehow. These animations were a large part of my inspiration. Given a user's ambiguous digit, I wanted to show the user how they could improve their writing to make it easier for the device to recognize. The philosophy here isn't to show the user the "perfect" digit, but rather to show how their given digit could be improved: personalized feedback. I was afraid that the animations wouldn't be smooth enough to be pedagogical, but my professor said he would be surprised if they weren't. It turns out that the animations were very smooth, but did not always work perfectly. Below are examples of digits that were improved, starting with the original digit at the left and evolving right.

For the above digits, the classifier can easily tell what the digits were, but improvement is still possible. However, for some digits in the dataset, it isn't really clear what was meant, and feedback on how to disambiguate towards either alternative must be shown. Below, we see an ambiguous digit in the centre. To make it a better 1, the evolution goes left. To make it a better 2, the evolution goes right.

Although it doesn't look that cool, I'm pretty excited by these results. Classifiers in machine learning haven't been used in this way before, and I would like to think of more interactive ways to apply them. Generally, Artificial Intelligence and Human-Computer Interaction are not as closely related as they should be, and I think closing that gap is necessary if we want to make computers worth caring about.
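For readers curious about the mechanics: the digit-improvement loop can be sketched as repeated "up-down" passes through an RBM, where each pass nudges the input toward something the model finds more probable, and the intermediate images become animation frames. This toy version uses tiny random weights, not a model trained on digits, so it only shows the structure of the computation.

```python
# Toy RBM up-down sketch (NOT the project's trained model): weights are
# random, so the "improvements" here are meaningless; with trained weights,
# each pass would pull the image toward a more probable digit.
import math
import random

random.seed(0)
N_VISIBLE, N_HIDDEN = 16, 8   # toy sizes; real MNIST digits are 28x28 = 784

W = [[random.gauss(0, 0.1) for _ in range(N_HIDDEN)] for _ in range(N_VISIBLE)]
b_vis = [0.0] * N_VISIBLE
b_hid = [0.0] * N_HIDDEN

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def up(v):
    """Visible -> hidden probabilities."""
    return [sigmoid(b_hid[j] + sum(v[i] * W[i][j] for i in range(N_VISIBLE)))
            for j in range(N_HIDDEN)]

def down(h):
    """Hidden -> visible probabilities: the 'improved' image."""
    return [sigmoid(b_vis[i] + sum(h[j] * W[i][j] for j in range(N_HIDDEN)))
            for i in range(N_VISIBLE)]

def improve(v, steps=5):
    """Each up-down pass yields one animation frame; using probabilities
    instead of binary samples keeps the resulting animation smooth."""
    frames = [v]
    for _ in range(steps):
        v = down(up(v))
        frames.append(v)
    return frames

user_digit = [random.random() for _ in range(N_VISIBLE)]  # stand-in scribble
frames = improve(user_digit)
print(len(frames), all(0.0 <= p <= 1.0 for f in frames for p in f))  # 6 True
```

For the ambiguous-digit case, I would guess a model with label units attached would be used, so that clamping the label to "1" or "2" steers the evolution toward either digit; the post doesn't say exactly how that was done, so treat that detail as speculation.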