Hidden Voice – an AR story in development

What if a film could play out around you in your bedroom, and, depending on how you responded, the story adapted itself to your experience?

The above question isn’t easy to make possible just yet…technical hurdles abound as it stands right now, particularly given my still-developing coding skills. However, the technology to make it happen exists. You could be hearing a soundtrack on headphones, looking through your smartphone in Google Cardboard, when a voice starts talking to you. Depending on how you respond, the voice provides more instruction or context, drawing you further into the story. This is what I wanted to start building with my final project for Always On, Always Connected.

I used Google Cardboard and the phone’s camera. My initial plan was to ‘simply’ get the camera working within PhoneGap and apply that image to an HTML canvas so I could start manipulating it (and draw it stereoscopically, like VR/AR). This proved to be the biggest challenge of the whole project. Android and iOS both make their cameras difficult to access and use however one would like, outside of launching the native camera app, taking the photo or video, and passing that back into the PhoneGap app. Workarounds exist, but without writing native code for either platform, it’s difficult.
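
The canvas part of the idea is itself fairly simple. Here’s a rough sketch of what I mean, assuming the camera feed could be attached to a video element (which is exactly the part PhoneGap made hard); the element ids are hypothetical:

```javascript
// Sketch: draw each live camera frame onto a canvas twice, side by side,
// to fake a stereoscopic Cardboard view. Element ids are placeholders.
var video = document.getElementById('camera');   // element holding the live camera feed
var canvas = document.getElementById('view');    // full-screen <canvas>
var ctx = canvas.getContext('2d');

function draw() {
  var half = canvas.width / 2;
  // Same frame twice: left eye and right eye.
  ctx.drawImage(video, 0, 0, half, canvas.height);
  ctx.drawImage(video, half, 0, half, canvas.height);
  requestAnimationFrame(draw);
}
draw();
```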

I tried multiple plugins for PhoneGap (each supposedly worked on both iOS and Android, though in practice only ever worked on one or the other). None could do what I wanted. Getting access to WebRTC on Android, the technology used for video chat, seemed like the best option, because the getUserMedia function looked pretty easy to use and manipulate. Unfortunately, on my Android phone that only works in Chrome. That, along with all of Laura Chen’s amazing work in the browser, convinced me that if this was going to happen with my coding knowledge, it was going to happen in the browser.
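
For reference, this is the kind of call I mean; a minimal sketch using the promise-based form of getUserMedia that Chrome for Android supports (the constraints and error handling here are just illustrative):

```javascript
// Ask the browser for the phone's rear-facing camera and attach the
// live stream to a <video> element that can be drawn or textured later.
var video = document.createElement('video');
video.setAttribute('playsinline', '');   // keep playback inline on mobile
video.autoplay = true;

navigator.mediaDevices.getUserMedia({
  video: { facingMode: 'environment' },  // prefer the back camera
  audio: false
}).then(function (stream) {
  video.srcObject = stream;              // live camera feed
}).catch(function (err) {
  console.error('Could not access the camera:', err);
});
```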

So, after using the Chrome browser on my Android phone to get the camera feed, I used three.js to get it into a usable format for VR viewing. From there it’s been a matter of starting to tell a story using audio and pictures. Unfortunately, the story has been an afterthought while I’ve been trying to get the technology working, but it’s starting to become a bit clearer now.
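
Roughly, the three.js side looks like this; a sketch only, assuming the `video` element from the getUserMedia snippet above, and using the StereoEffect helper that ships with the three.js examples:

```javascript
// Sketch: put the live camera feed on a plane and render the scene twice,
// side by side, for Cardboard viewing.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var effect = new THREE.StereoEffect(renderer);   // splits the view into left/right eyes
effect.setSize(window.innerWidth, window.innerHeight);

var texture = new THREE.VideoTexture(video);     // updates every frame from the camera
texture.minFilter = THREE.LinearFilter;

var plane = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 3),
  new THREE.MeshBasicMaterial({ map: texture })
);
plane.position.z = -2;                            // sit the feed in front of the viewer
scene.add(plane);

(function animate() {
  requestAnimationFrame(animate);
  effect.render(scene, camera);
})();
```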

I do know I want to use the phone’s sensors to get responses from the user (shaking their head ‘no’ or nodding ‘yes’), but audio cues currently can’t be triggered in a mobile browser by sensor input alone; playback needs a click or some other gesture from the user. So that needs to be addressed. I also wanted to use BLE sensor tags to lead the user through a space, but those aren’t compatible with the browser either. So, my plan is to go back to trying to make this work as an app, since I’ll have a few more tools at my disposal. I think there has to be a way to make it work; I just need to spend more time on it, applying everything I’ve learned. One possible solution is something called Crosswalk, which I just couldn’t get working on my computer, but which in theory would let me access WebRTC and the getUserMedia method within PhoneGap.
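
One workaround I’m considering for the audio restriction (a sketch only, not something I’ve verified in my build) is to unlock audio with a single tap up front, then let deviceorientation readings trigger the cues afterwards; the thresholds and file name below are placeholders:

```javascript
var cue = new Audio('voice-cue.mp3');    // placeholder audio cue

// Mobile browsers only allow audio started by a user gesture, so the
// first tap 'unlocks' the element by playing and immediately pausing it.
document.body.addEventListener('touchend', function unlock() {
  cue.play();
  cue.pause();
  cue.currentTime = 0;
  document.body.removeEventListener('touchend', unlock);
});

// Very rough 'no' detection: count quick swings in compass heading (alpha).
var lastAlpha = null;
var swings = 0;
window.addEventListener('deviceorientation', function (e) {
  if (lastAlpha !== null && Math.abs(e.alpha - lastAlpha) > 15) {
    swings++;
    if (swings > 3) {      // a few rapid swings ~ a head shake
      cue.play();          // allowed now that the element has been unlocked
      swings = 0;
    }
  }
  lastAlpha = e.alpha;
});
```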

I still need to really focus on the story as well, and that may require opinions and writing from other creative humans. That will ultimately show whether this type of storytelling medium is effective. But I think it’s tremendously interesting, brand new, and opens up such a variety of options.

Video teaser for the spring show is here, with more documentation to come:

[Embedded video: spring show teaser]
