Embodying a robot to do journalism

I had to share “MY” body with someone else.

It's not quite what it sounds like, so let me explain.

In a first-of-its-kind experiment held jointly between the Event Lab at ICREA-University of Barcelona and USC’s Institute for Creative Technologies, I donned an Xsens motion capture suit and virtual reality goggles (an NVIS head-mounted display) in order to “drive” a robot more than 6,000 miles away in Barcelona, Spain. The idea was NOT simply to experience what embodiment of the robot felt like, but also to complete a regular task in my work as a journalist: interviewing people for a story. In this case, I had two sets of interviews I wanted to do. The first was with Professor Javier Martinez-Picado, whose team reported an extraordinary breakthrough in identifying how HIV infection spreads by unlocking immune cells that then disperse the disease throughout the body. The second was a discussion about the Catalonia independence movement with three individuals, holding pro-independence, anti-independence and neutral positions, who were prepared to discuss their rationales.

From start to finish, I was in the robot for more than three hours, and at some point during the experience I began to adopt the robot body. Yes, there were constraints in movement, overcorrection from head turns or hand movements, and a viewpoint that was much taller than my normal 5’ 3”. But the connection became so extensive that it wasn’t until more than 30 minutes after I took off the equipment that I realized I was only THEN re-entering my “normal” body.

And here’s where the body-sharing business comes in: bizarrely, when I looked at photographs of Professor Maria Sanchez-Vives operating the robot in one of the initial experiments, I unexpectedly shuddered. I felt like she was inside my body!

In fact, for days afterward, just thinking about the robot would involuntarily cause my body to adopt the position that most comfortably matched my natural stance to the robot’s. My arms would bend at the elbow, with my hands outstretched, ready to wave, shake hands or gesture. My head would stay upright and my back would stiffen, ready to walk forward or back, or to swivel from left to right.

I can only describe the experience as trying to do a sit-up for the first time – you have a concept of how to do it but no muscles to actually perform the task. My entire being had to work in a unified effort to occupy and embody a “second” self so I could conduct the type of interviews I have done over the past twenty years. Later, in another strange reaction, when I started watching an interview with my robot-self in Barcelona, I found myself so upset about the viewpoint of the TV crew’s camera – it was at the wrong angle! – that I had to get up from my desk and walk away. I then had to force myself to sit back down to watch the whole video. (It is in both English and Spanish, starting at 38 seconds.): http://www.youtube.com/watch?v=FFaInCXi9Go&feature=share&list=UUU3lATTLOBgJeQbJom707eQ

In another post, I will detail the actual reporting, because the interviews were fascinating. In the meantime, I want to thank everyone for their hard work in making this happen, especially Thai Phan, Xavi Navarro Muncunill, Raphael Carbonell and all of the incredibly helpful folks at the University of Southern California ICT MxR Lab and the Universitat de Barcelona EVENT Lab for Neuroscience and Technology.

Using Motion Capture for a piece on Hunger and Overstrained Food Banks

As part of a larger investigation on hunger in California out of USC’s Journalism School, I began collecting audio from food banks to see if we could create an immersive piece about the problem. It’s been slow going without any budget, but the project is finally taking shape. I’ll be able to embed a Unity version soon. In the meantime, we used motion capture to animate the characters who were on scene at the food bank. The audio is from the original scene, and the animation will soon be married with the digital representation of a woman who was overwhelmed by the crowds at the food bank that day. I have been working with the amazing artists Bradley Newman and John Brennan, who have spent many late hours helping to pull this off.

Kinect Flappers – the palette for Immersive Journalism grows

I will be building a piece shortly using Kinect, but I wanted to give you an idea of how physical the platform is: in this video, you must fly to pop floating bubbles, and here’s what new users look like.

Towards Immersive Journalism

This project investigates whether immersive journalism can be used to tell the story of detainees being kept for hours in a stress position.  We’ve heard or read the term “stress position” many times, but what does that really mean?  Using head mounted display technology, we created a virtual body of a detainee in a stress position and asked participants to experience what it might be like to be “in that body” while hearing an interrogation coming through a wall from another room.  Although all of our participants were sitting upright, after the experience each reported feeling as if they were hunched in the same position as the virtual detainee.

This project was a collaboration between Nonny de la Peña, Peggy Weil, Mel Slater and Slater’s team at the Event Lab in Barcelona, including Joan Llobera, Elias Giannopoulos, Ausiàs Pomés, Bernhard Spanlang, and Maria V. Sanchez-Vives.

El País article on the experience: “Presos de un Guantánamo virtual” (Prisoners of a virtual Guantánamo)
“Una instalación permite a las personas meterse en la piel de un prisionero” (An installation lets people step into the skin of a prisoner)

Baha Mousa

When we put participants into the virtual body of a detainee in a stress position, we could only recreate the scene based on the text of real interrogations. Shortly after we finished, a video was released in the trial over the death of Baha Mousa, an Iraqi civilian who died in British custody. The video underscores the accuracy of our virtual reality piece. WARNING: This video is very difficult to watch.