Project Syria Premieres at the World Economic Forum

PROJECT SYRIA: AN IMMERSIVE EXPERIENCE
Nearly a third of the population has been displaced by the ongoing war in Syria, and no group has been as severely affected as the children. According to a joint statement issued by the United Nations High Commissioner for Refugees (UNHCR) and UNICEF last August, one million children have already been forced to flee Syria since the start of the country’s civil war. Some news reports indicate that children are being specifically targeted in the violence.

The World Economic Forum has commissioned an immersive journalism piece to tell the plight of these children. Immersive journalism uses new virtual reality technologies to put the audience “on scene” and evoke the feeling of “being there.” Executive Chairman Klaus Schwab requested that the piece be shown at the World Economic Forum in Davos on January 21, with the idea of compelling world leaders to act on this crucial issue.

This is a two-part experience. The first scene replicates a moment on a busy street corner in Aleppo, Syria. In the middle of a song, a rocket hits, and dust and debris fly everywhere. The second scene dissolves to a refugee camp, placing the viewer at the center of the camp as it grows exponentially, a representation that parallels the real story of how the extraordinary numbers of refugees fleeing Syria have had to take shelter in camps. All elements are drawn from actual audio, video and photographs taken on scene.

Utilizing the real-time graphics of the Unity game engine and the sense of presence evoked through high-resolution virtual reality goggles and compelling audio, Project Syria takes the audience into the real events as they transpired.
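
In Unity terms, the camp’s exponential growth amounts to timed spawning of scene geometry. The production code is not shown here; as a rough illustration of the general technique only, a minimal C# sketch might look like the following, where the tent prefab, doubling time, radius and cap are placeholder assumptions rather than the project’s actual assets or logic.

using UnityEngine;

// Illustrative sketch only: grow a camp by instantiating tent prefabs so that
// their count doubles at a fixed interval. The prefab, doubling time, radius
// and cap are placeholder assumptions, not Project Syria's actual assets.
public class CampGrowth : MonoBehaviour
{
    public GameObject tentPrefab;     // placeholder prefab (assumption)
    public float doublingTime = 20f;  // seconds for the camp to double in size
    public float spawnRadius = 50f;   // meters around the viewer
    public int maxTents = 500;        // safety cap on scene geometry

    private float elapsed;
    private int spawned = 1;

    void Update()
    {
        elapsed += Time.deltaTime;
        // Target count follows N(t) = 2^(t / doublingTime), capped at maxTents.
        int target = Mathf.Min(maxTents, Mathf.FloorToInt(Mathf.Pow(2f, elapsed / doublingTime)));
        while (spawned < target)
        {
            Vector2 offset = Random.insideUnitCircle * spawnRadius;
            Vector3 pos = transform.position + new Vector3(offset.x, 0f, offset.y);
            Instantiate(tentPrefab, pos, Quaternion.identity);
            spawned++;
        }
    }
}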

Project Syria Written and Directed by: Nonny de la Peña – Nonny@ImmersiveJournalism.com – 310 749 0010

Produced by:
Nonny de la Peña
Vangelis Lympouridis

Additional Producer: Michael Licht

Co-Producer:
Shelbi Jay

Executive Producers:
USC School of Cinematic Arts
Dean Elizabeth M. Daley
Professor Scott Fisher
Professor Mark Bolas

Executive Producers:
World Economic Forum
MxR Studio

Lead Unity Developers:
Michael Murdoch
Mikkel Rasch Nielson

Sound Design:
Vangelis Lympouridis

Unity Developers:
Bradley Newman
Cognito Comics

Additional Unity Developers:
Logan Ver Hoef
Fotos Frangoudes
Evan Stern
Shiraz Akmal

Animation:
Vangelis Lympouridis
Nicholas Palmer Kelly

Rigging:
Bradley Hiebert

Environment Modeling:
Michael Murdoch
Metal Rabbit Games
Alexa Bona Kim
Cognito Comics

Character Modeling:
Axyz Design
USC Institute for Creative Technologies
Matt Liewer
Arno Hartholt
Yehjean Kim
Joe Yip

Additional Textures:
Alexa Bona Kim
James Bowie-Wilson

MxR Lab:
Allison Aptaker
Mahdi Azmandian
Mark Bolas
Adam Jones
David Krum
David Nelson
Thai Phan
Ryan Spicer
Evan Suma
Rhys Yahata

Additional Sound and Video: Namak Khoshnaw

Additional Production Material:
Faisal Attrache
Heidi Hathaway

Motion Tracking Equipment:
Tracy McSheery
Phasespace

Hardware Development:
Steven Brown
Bradley Newman
Thai Phan

Original Music:
Gingger Shankar

Narration: James Hanes

Translation: Megan Reid

HMD Leep Lenses Provided by Fakespace Labs

Leep Lenses Provided by Transpresence

Created on Unity 3D

Special Thanks to Mixamo

Special Thanks to:
Marientina Gotsis
The World Food Program
United Nations High Commissioner for Refugees
Klaus and Hilde Schwab
Brenda Kenny – AudioMix Nation
Russel McNeal – Biosyn Systems

Special Thanks to:
Holly Willis
Elizabeth Ramsey
Stacy Patterson
Carolyn Tanner
Jerilyn “Cookie” Clayton
Ingrid DeCook
Creative Media and Behavioral Health

Video Produced with the support of:
Media Arts + Practice Division
USC School of Cinematic Arts
Gabriel Peters-Lazaro
Michael Bodie

The USC School of Cinematic Arts

Embodying a robot to do journalism

I had to share “MY” body with someone else.

It’s not quite what it sounds like, so let me explain.

In a first-of-its-kind experiment held jointly between the Event Lab at ICREA-University of Barcelona and USC’s Institute for Creative Technologies, I donned an XSens motion capture suit and virtual reality goggles (an NVIS head mounted display) in order to “drive” a robot more than 6000 miles away in Barcelona, Spain. The idea was NOT simply to experience what embodiment of the robot felt like, but also to complete a regular task in my work as a journalist: interviewing people for a story. In this case, I had two sets of interviews I wanted to do. The first was with Professor Javier Martinez-Picado, whose team reported an extraordinary breakthrough in identifying how HIV infection spreads by unlocking immune cells that then disperse the disease throughout the body. The second set of interviews was a discussion about the Catalonia independence movement with three individuals holding pro, anti and neutral positions, prepared to discuss their rationales.

From start to finish, I was in the robot for more than three hours, and at some point during the experience, I began to adopt the robot body. Yes, there were constraints in movement, overcorrection from head turns or hand movements, and a viewpoint that was much taller than my normal 5’ 3”. But the connection became so extensive that it wasn’t until more than 30 minutes after I took off the equipment that I realized I was only THEN re-entering my “normal” body.

And here’s where the body-sharing business comes in: Bizarrely, when I looked at photographs of Professor Maria Sanchez-Vives operating the robot in one of the initial experiments, I unexpectedly shuddered. I felt like she was inside my body!

In fact, for days afterward, just thinking about the robot would involuntarily cause my body to adopt the most comfortable position for matching the robot to my natural stance. My arms would bend at the elbow, with my hands outstretched, ready to wave, shake hands or gesture. My head would stay upright and my back would stiffen in order to more readily walk forward or back, or swivel from left to right.

I can only describe the experience as trying to do a sit-up for the first time: you have a concept of how to do it but no muscles to actually perform the task. My entire being had to work in a unified effort to occupy and embody a “second” self so I could conduct the type of interviews I have done over the past twenty years. Later, in another strange reaction, when I started watching an interview with my robot-self in Barcelona, I found myself so upset about the viewpoint of the TV crew camera (it was at the wrong angle!) that I had to get up from my desk and walk away. I then had to force myself to sit back down to watch the whole video (it is in both English and Spanish, starting at 38 seconds): http://www.youtube.com/watch?v=FFaInCXi9Go&feature=share&list=UUU3lATTLOBgJeQbJom707eQ

In another post, I will detail the actual reporting, because the interviews were fascinating. In the meantime, I want to thank everyone for their hard work in making this happen, especially Thai Phan, Xavi Navarro Muncunill, Raphael Carbonell and all of the incredibly helpful folks at the University of Southern California ICT MxR Lab and the Universitat de Barcelona EVENT Lab for Neuroscience and Technology.

Hunger in Los Angeles – An Immersive Journalism Premiere

This machinima video shows Hunger in Los Angeles from the viewpoint of a witness/participant who is experiencing the recreation of a real event that unfolded on a hot August day. The events took place at the First Unitarian Church on 6th Street in Los Angeles. The woman running a food bank line becomes overwhelmed. “There are too many people,” she pants. And then, much louder, shouting in frustration, “There are too many people!” Only minutes later a man falls to the ground in a diabetic coma. The line is so long that his blood sugar has dropped dangerously low while waiting for food.

Invisible to the average American, hunger is an insidious problem facing this country. More than one in five adults struggles with food insecurity, and the rates are worse for children. The events of that day mirror circumstances around the country. Food banks simply can’t keep up.

While it is a story well worth reporting, coverage of the issue in newspapers, on TV or on the web barely seems to resonate. In response, HUNGER IN LOS ANGELES puts the public on the scene at the First Unitarian Church and makes them eyewitnesses to the unfolding drama by providing a powerful embodied experience. By coupling the latest virtual reality goggles with compelling audio, which tricks the mind into feeling like one is actually there, HUNGER provides unprecedented access to the sights, sounds, feelings and emotions that accompany such a terrifying event. In fact, when this revolutionary brand of journalism was showcased at the 2012 Sundance Film Festival, the strong feeling of being present led audience members to try to touch non-existent characters, and many cried at the conclusion of the piece.

The 6.5-minute experience has been edited down in this video, and the impact is much greater wearing head mounted display virtual reality goggles, as the audience did at Sundance New Frontier: http://www.sundance.org/festival/film-events/hunger-in-los-angeles/ With so many folks coming out of the experience crying, or otherwise deeply disturbed, I began to realize that part of good journalism, of being a civic partner to my audience, is to offer them ways to act. I don’t consider this activism but rather an appropriate interaction. One great organization in Los Angeles is called Mend and another is the Los Angeles Regional Food Bank.

Also, there are two websites that have comprehensive listings of charities and rate them for you, so you can give according to what moves you personally.
Guidestar
Charity Navigator

Using Motion Capture for a piece on Hunger and Overstrained Foodbanks

As part of a larger investigation on hunger in California out of USC’s Journalism School, I began collecting audio from food banks to see if we could create an immersive piece about the problem. It’s been slow going without any budget, but the project is finally taking shape. I’ll be able to embed a Unity version soon. In the meantime, we used motion capture to animate the characters who were on scene at the food bank. The audio is from the original scene, and the animation will soon be married with the digital representation of a woman who was overwhelmed by the crowds at the food bank that day. I have been working with amazing artists Bradley Newman and John Brennan, who have spent many late hours helping to pull this off.
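
For readers curious about the mechanics, the basic idea is simply to start the recorded field audio and the mocap-driven animation from the same moment so the captured performance stays aligned with the original sound. A minimal Unity C# sketch of that idea follows; the animator state name and component setup are placeholder assumptions, not the project’s actual code.

using UnityEngine;

// Illustrative sketch only: play the original on-scene audio and start the
// motion-captured character animation together so the two stay in sync.
// The clip assignment and state name are placeholder assumptions.
[RequireComponent(typeof(AudioSource))]
public class SceneSync : MonoBehaviour
{
    public Animator character;                 // character driven by the mocap take
    public string mocapState = "FoodBankTake"; // assumed animator state name

    void Start()
    {
        GetComponent<AudioSource>().Play();    // recorded food bank audio
        character.Play(mocapState, 0, 0f);     // start the mocap clip from time zero
    }
}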

Teaching with Technology

There have been many advances in teaching with technology. Here is the latest project, in which I used immersive experiences to consider how we could continue to conduct classes if campus were closed due to an earthquake.

Kinect Flappers – the palette for Immersive Journalism grows

I will be building a piece shortly using the Kinect, but I wanted to give you an idea of how physical the platform is: in this video, you must fly to pop floating bubbles, and here’s what new users look like.
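
To give a sense of how that kind of interaction works under the hood, here is a rough Unity C# sketch of “flap to fly”: when both tracked hands sweep downward fast enough, the player gets a burst of lift. The hand transforms are assumed to be driven by whatever skeleton tracker is in use, and the thresholds are made-up values, not the actual bubble-popping code.

using UnityEngine;

// Illustrative sketch only: treat a fast downward sweep of both hands as a
// wing beat and convert it into upward velocity. Hand transforms are assumed
// to be driven elsewhere by the skeleton tracker; thresholds are placeholders.
[RequireComponent(typeof(Rigidbody))]
public class FlapToFly : MonoBehaviour
{
    public Transform leftHand;       // fed by the tracker (assumption)
    public Transform rightHand;
    public float flapSpeed = 1.5f;   // m/s downward needed to count as a flap
    public float liftPerFlap = 3f;   // upward velocity change per flap
    public float flapCooldown = 0.5f;

    private Vector3 prevLeft, prevRight;
    private Rigidbody body;
    private float cooldown;

    void Start()
    {
        body = GetComponent<Rigidbody>();
        prevLeft = leftHand.position;
        prevRight = rightHand.position;
    }

    void FixedUpdate()
    {
        float dt = Time.fixedDeltaTime;
        float leftVel = (leftHand.position.y - prevLeft.y) / dt;
        float rightVel = (rightHand.position.y - prevRight.y) / dt;
        prevLeft = leftHand.position;
        prevRight = rightHand.position;
        cooldown -= dt;

        // Both hands sweeping down together reads as a single wing beat.
        if (cooldown <= 0f && leftVel < -flapSpeed && rightVel < -flapSpeed)
        {
            body.AddForce(Vector3.up * liftPerFlap, ForceMode.VelocityChange);
            cooldown = flapCooldown;
        }
    }
}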

Towards Immersive Journalism

This project investigates whether immersive journalism can be used to tell the story of detainees being kept for hours in a stress position.  We’ve heard or read the term “stress position” many times, but what does that really mean?  Using head mounted display technology, we created a virtual body of a detainee in a stress position and asked participants to experience what it might be like to be “in that body” while hearing an interrogation coming through a wall from another room.  Although all of our participants were sitting upright, after the experience each reported feeling as if they were hunched in the same position as the virtual detainee.

This project was a collaboration between Nonny de la Peña, Peggy Weil, Mel Slater and Slater’s team at the Event Lab in Barcelona, including Joan Llobera, Elias Giannopoulos, Ausiàs Pomés, Bernhard Spanlang, and Maria V. Sanchez-Vives.

El País article on the experience: “Presos de un Guantánamo virtual” (Prisoners of a virtual Guantánamo)
An installation lets people put themselves in a prisoner’s skin

Gone Gitmo – A virtual Guantanamo Bay Prison built in Second Life

A virtual but accessible version of the prison, in contrast to the real but inaccessible prison.  See more at the Gone Gitmo Blog.
Visit Gone Gitmo via this SLURL

Cap & Trade – An Immersive Journalism Experience

Built in collaboration with the Center for Investigative Reporting and based on Mark Schapiro’s “Carbon Watch” for Frontline/World, his Mother Jones article “GM’s Money Trees,” and his Harper’s Magazine article “Conning the Climate: Inside the Carbon-Trading Shell Game,” this piece calls attention to some of the human consequences of, and the lack of regulation in, the carbon offset trading market.

Walljumpers

Transmedia trailer