HOLLYWOOD, CALIF.—Virtual reality, augmented reality, mixed reality: Everyone attending the SMPTE 2015 symposium on this exciting field is aware of these terms. But what exactly do they mean? What particular attributes distinguish them? What standards can developers and artists consult to ensure a positive result and an experience that doesn't end in a headache or nausea?
The frustrating, but possibly exciting, thing about the symposium is that the technology is so young that nobody really has all the answers. Despite early attempts at VR in the 1990s and Google Glass more recently, none of the distinguished speakers at this symposium believes we've seen more than a glimmer of the promise they see in this type of media's future.
Layla Mah, lead architect for VR and advanced rendering at AMD, and the creator of the company's LiquidVR technology, used her keynote address to look at the question of when we might see that potential realized. She spelled out some milestones that would make her consider that the technology has arrived — a wearable device capable of one petaflop (a quadrillion floating point operations per second) untethered, in a form factor somewhere between a contact lens and a pair of regular glasses, among other equally ambitious goals — and said that this might be achievable, but warned, "We have to improve more quickly than Moore's Law. A straight line path and Moore's Law won't get us there in our lifetime." But to hear her tell it, people like Mah and companies like AMD are hard at work looking for ways to break out and innovate at a much faster rate.
She was followed by three more speakers: Jeffrey Wilkinson, creator of Smile Stimulating software; Andrew Dickerson, director of software development at Samsung Research America's Dallas office; and Sajid Sadi, tech lead and senior director of research for the Think Tank Team. They discussed some of Samsung's efforts to develop tools for capturing and displaying this type of media.
The essential definitions of the three terms that are the subject of the symposium say that VR completely immerses the user in another world, totally disconnected from his or her actual surroundings; AR introduces virtual elements on top of the actual reality of the user's environment; and MR mixes the two. Sadi, who prefers MR, explained that his group is intent on giving users the tools to shoot their own material, and on learning which approaches succeed and fail based on what people do with the technology.
"When 8mm film and cameras came out," he said, "there were quickly millions of hours of film." He hopes his group's beta project will be the 8mm of MR. "Our camera is still not there, but it's useable for some filming," he noted. "It's not just a research toy. We want to see how people shoot. What happens when you walk with the camera? What makes people get sick when they watch? Do you cut a lot or use long takes? Do you zoom in and out? Do you focus on a single point when the user can look anywhere? We're already learning a lot about all this."
Rick Johnson, chief software engineer at CastAR, USA, comes from the gaming world, and he spoke primarily about his interest in AR, which he refers to as "wrapping a virtual world around real, physical objects." Johnson offered some "guiding principles" for what he sees as a successful AR experience: It should be fun, social and tangible.
"We want to create something that is seamless with the real world, that augments it in an entertaining way. Maybe I'm wearing the headset and I walk down the street and I see monsters coming out from behind buildings. Maybe I can spray paint [virtual] graffiti on the buildings that my friend can see when he walks down the same block."
Johnson prefers the concept of AR because it includes the real environment, which, he says, "keeps my inner ear happy. With VR, it's more likely that what your eyes are seeing is going to be very different from what your body is feeling, and then your inner ear is not always so happy."
But, he noted, "Expect AR development to always be 12 to 18 months behind VR. It's more complex. The virtual objects have to match the real conditions of whatever environment you're in. The lighting conditions have to match what's real. Tracking is much more complex."
But it's worth it, he summed up, because "in AR, I can reach over and pick up my drink. I can't do that in VR. And in AR, I'm not nauseous, so my drink can even have alcohol in it!"
Pete Moss, whose self-chosen title is VR Dude and Lead Engineer at the Creative Content Studio of Unity Technologies, wrapped up the morning symposium with his thoughts. Moss joined the Seattle-based Unity Technologies as a field engineer in the Simulation/Visualization division after studying music, technology and art. His time at Unity, which has specialized in creating game engines in the VR space, has helped him develop some strong ideas about the symposium's topic.
"We talk about VR as putting you in a world that may not have any relationship to this one," he said. "And AR anchors you to the real world and offers the idea of location-based simulation. But I think it will all blur in the future."
Moss also took time to stress what VR, AR and MR are not. "People who say, 'I tried it in the '90s' didn't really. Even now the experience is totally different." He likewise dismissed the introduction of, and subsequent fallout from, Google Glass as irrelevant to the present discussion.
Finally, along those same lines, Moss declared, "Sorry Hollywood, but 360 video is not VR."