January 5, 2010
Designing an interFACE
The interFACE began as a project that attempted to capture individual and group mood. In our first semester of graduate school, we all found it an odd experience to spend our days and nights in and out of class with the same set of 18 people.
Taking this idea, we began our research by trying to understand methods of mood capture. We started with a guerrilla approach: I asked people directly, “How do you feel?” and wrote down their responses. Carmen sent around an email questionnaire several times throughout the day, and Kristin captured and analyzed tweets from our studiomates during the day.
Our findings were diverse. I found that people are resistant to giving up their personal mood status when asked point blank. Carmen had much better luck learning how people felt when they could submit answers privately and anonymously. Kristin’s Twitter capture provided a good counterpoint to what we had learned.
From this we decided to create a device that would capture an individual’s mood and aggregate it with the other responses in the group. We went back and forth several times, trying to decide how this thing could function.
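The aggregation we had in mind can be sketched in a few lines. This is a hypothetical illustration, not the project’s actual code; the function name and the 1–5 rating scale are assumptions.

```python
def aggregate_moods(ratings):
    """Combine individual mood ratings (assumed scale: 1 = sad, 5 = happy)
    into a single group mood by averaging them."""
    if not ratings:
        return None  # no responses yet, so no group mood to show
    return sum(ratings) / len(ratings)

# Five studiomates submit their moods; the device displays the group average.
group = [4, 2, 5, 3, 4]
print(aggregate_moods(group))  # → 3.6
```

A simple average like this keeps every individual response anonymous in the final display, which matched what Carmen’s anonymous questionnaire suggested people preferred.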
When Clint and Evinn joined the group we started talking about building something using a potentiometer and a face on screen. Our first prototype consisted of a small face on screen with a question about mood.
The prototype got a few favorable reactions, but most people couldn’t figure out the purpose or outcome of choosing a response here.
The real breakthrough in our project came when we discovered the concept of pareidolia, the tendency to perceive faces in inanimate objects. After reading an article from Berg about this phenomenon, we started thinking about using an actual face as the input for people’s moods.
We headed to the hardware store to look for materials and found a piece of plumbing insulation that was flexible and could be used as a mouth. We found a couple other interesting pieces to act as eyes and brought them back to start construction.
Our first big prototype took the form of a console and interface. We had users approach the face and answer some questions about their mood. They could use the knob to select a question and press the big yellow eye to submit their answer. Each answer was added to the group’s mood, and a visualization was shown to that effect.
We were able to demo a video of this prototype at our department’s open house and received a lot of useful feedback. We performed some user testing with paper prototypes to better understand the value of the interaction, and ended up with a pretty big revision.
After our critique we made a few small changes, including some fairly significant visual design changes. We found that many users had trouble using the eyes as a rotate-and-press control, so we changed the eyes to static blue plates instead of controls. This led to some inspiration for the graphic design of the interface. We brightened the whole thing up and included a simpler progression of interaction with the device.
A Happy Ending
Overall the prototype was well received. Most people enjoyed the smiling faces, and in the end the flexibility of the mouth and the visual response of seeing the digital mouth move were a thrill to many users. After our critique, our group agreed that more work could be done to make this a really useful tool, but that we need a bit of time to reflect before the next iteration can be meaningful.