It is not every day that you get to take something as exciting as the Magic Leap One and apply it creatively to fundamental social needs that are all too often relegated to an afterthought.
We are developing an assistive indoor visual navigator to aid persons with blindness or low vision. The nature of the Magic Leap One, coupled with our application, could be a game changer for the many veterans in rehabilitation, the scores of corporations seeking workplace ADA compliance, and young adults building up their Orientation & Mobility skills.
The Magic Leap device is uniquely suited for this application, as it is a fully self-contained, general-purpose computing and sensory platform. Our working MVP, the Assistive Vision Navigator, exploits depth mapping, audio, haptic feedback, and real-time calculations so that a person with blindness can simply ask, for example, "How far is the wall?" and get an instant spoken answer.
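To give a flavor of the idea, here is a minimal sketch of how a spoken distance query might be answered. The depth reading itself would come from the headset's sensors via a forward raycast; here it is just a parameter, and the phrasing thresholds are assumptions, not our actual implementation.

```python
# Hypothetical sketch: turning a raw depth reading (meters) into a
# spoken-friendly answer for a query like "How far is the wall?"

def format_distance_answer(distance_m: float) -> str:
    """Convert a depth reading in meters into a natural spoken answer."""
    feet = distance_m * 3.28084          # meters to feet
    if feet < 1.0:
        return "The wall is less than a foot away."
    return f"The wall is about {round(feet)} feet away."
```

The string would then be handed to a text-to-speech engine for the spoken response.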
The video demo below shows an early working version.
For those interested, we will be presenting at the upcoming CSUN Assistive Technology Conference in California on March 14, 2019.
Further information can be found at:
You can also contact us in the form below:
Magic Leap has been in the press for a number of years now. They got investors to part with a great deal of money and pour it into a startup building a completely new kind of technology. There had to be something really compelling for this to happen. Even as the Magic Leap One has just been released into the wild, the company has made clear that this is an interim product intended for developers, early adopters, and enthusiasts.
So why not wait for the product to sport improved features like a wider field of view, offer a wealth of apps, and, of course, become more affordable? I will answer in two short words: Spatial Computing.
What Magic Leap has done is rethink computing so that the immersive environment is as much an operating system as it is a platform for running apps. They took to heart the notion of seamlessly melding virtual content with the real world so that the two are not perceived separately. In that sense, the term MR for Mixed Reality (or, as I sometimes think of it, "Melded" Reality) is more appropriate for the Magic Leap than AR or XR.
Instead of extolling the virtues or ranting about the product or technology, I'd like to delve into some of the ideas behind Magic Leap, why you should take interest in it, and where it may go.
This is not your father's VR gear
At first glance you may think Magic Leap is out to make a fashion statement. Perhaps it is, but the statement has to do with functional design. To begin with, you'll notice the visor is positioned above, not over, the ears. There are several reasons for this. Magic Leap incorporates built-in headphones that can take advantage of Resonance Audio. Should you prefer to use your own headphones or earbuds, you can easily do so, as the Light Pack provides a standard 3.5mm jack. There is another, not so obvious benefit to having the visor extend above the ears: it lets you comfortably blend the ambient sounds of your environment with the audio output from the Magic Leap device. In VR you want to shut out the world, but in AR, hearing ambient sounds as they naturally occur is a plus.
Input, Sensors and Navigation
It is fascinating how often we draw on nature to inspire technology. Perhaps Magic Leap wasn't influenced by jumping spiders, but you have to admit there's an uncanny similarity in the way biological and electronic sensors are arranged.
Mapping your physical space
First came the Tango tablet, then the HoloLens, and now Magic Leap. What characterizes these devices is that they know where they are in physical space. What's more, they can handle occlusion mapping dynamically. The video below shows the Magic Leap building a geometric mesh as you walk around. The color bands depict the current distance from your viewpoint, so the color of the mesh pattern changes as you move closer to or farther from the generated mesh.
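The distance-to-color banding can be sketched in a few lines. This is illustrative only, not Magic Leap SDK code, and the band edges and colors are made-up values rather than the ones the device actually uses:

```python
# Illustrative sketch: map a mesh vertex's distance from the viewer to a
# color band, as the colored mesh in the demo appears to do.
# Band edges (in meters) and colors are assumed, not the SDK's values.

BANDS = [
    (1.0, "red"),     # closer than 1 m
    (2.0, "yellow"),  # 1 m to 2 m
    (4.0, "green"),   # 2 m to 4 m
]

def band_color(distance_m: float) -> str:
    """Return the color band for a given viewer-to-mesh distance."""
    for limit, color in BANDS:
        if distance_m < limit:
            return color
    return "blue"  # everything farther away
```

Re-evaluating this per frame as the viewer moves is what makes the mesh colors shift dynamically.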
Mapping Hand Gestures
One of the features of the Magic Leap One is the ability to track hand gestures, including a single finger, a fist, pinching thumb and forefinger, a thumbs-up, forming an "L" with forefinger and thumb, the back of an open hand, an "OK" gesture, and a "C" shape. The video below shows this in action.
Notice that the Magic Leap easily distinguishes between the right and left hands and can track both simultaneously. In the video you will see various hovering colorized balls; these indicate the relative positional mapping of the fingers and joints. As the captured video is not 3D (by default, video capture is from the left eye), the placement of the colorized markers doesn't quite match what you would see while wearing the Magic Leap.
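Conceptually, each frame reports a gesture per hand, and the two hands are tracked independently. The sketch below illustrates that shape of data; the gesture names follow the list above, but the real SDK uses its own enum and event model:

```python
# Illustrative sketch (not SDK code): per-frame gesture reports, one per
# hand, with both hands tracked independently.
from typing import Optional

RECOGNIZED = {
    "finger", "fist", "pinch", "thumbs_up",
    "l_shape", "open_hand_back", "ok", "c_shape",
}

def frame_events(left: Optional[str], right: Optional[str]) -> list:
    """Return one event string per hand showing a recognized gesture."""
    events = []
    for hand, gesture in (("left", left), ("right", right)):
        if gesture in RECOGNIZED:
            events.append(f"{hand}:{gesture}")
    return events
```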
All these features are easily programmable, as the Magic Leap SDK provides a fairly rich and accessible API (Application Programming Interface). This API works with Unity, Unreal, and Helio, Magic Leap's web browser. I will outline this in a future blog post. For the moment, I want to stay with the theme of Magic Leap sensory input features.
Another important aspect of Magic Leap is the ability to capture data and perform analytics. The Magic Leap API exposes enough internal sensor data that I can write a program to produce a log file like the one shown below, taken from a session involving hand gesture recognition.
Eye of the Gazer
One of the modalities of user input is the ability to actively perform eye tracking. I am not talking about head pose, but rather what you are actually looking at as your eyes roam your field of view. The video below shows this in action.
In this rudimentary example I thought it would be useful to show basic eye gaze activity in action. Unconscious eye gaze can influence behavior in the rendered scene. As the user glances at the cube on the right, it turns red and rotates; while the user gazes at the sphere, it turns green. These are simple static objects, but we could also affect the animated behavior of avatars, influence the plot line in an interactive story, create new kinds of user interfaces, or build training software for special needs such as recovering stroke patients. We could even combine eye gaze with voice input to do something like "Make that one red," where "that one" is the object we are looking at.
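The reactive logic in the demo boils down to dispatching on whichever object the gaze ray currently hits. A minimal sketch, assuming the gaze-hit test itself comes from the SDK and using the object names from the description above:

```python
# Illustrative sketch: react to whichever object the eye-gaze ray hits.
# The gaze-hit test is assumed to be supplied by the platform; this only
# shows the per-object reaction from the demo.

def react_to_gaze(target: str) -> dict:
    """Return the visual state for the currently gazed-at object."""
    if target == "cube":
        return {"color": "red", "rotating": True}
    if target == "sphere":
        return {"color": "green", "rotating": False}
    return {"color": "default", "rotating": False}
```

The same dispatch pattern extends naturally to avatars, story branches, or combined gaze-plus-voice commands.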
More to follow
There's lots more I have to say about Magic Leap, and it will be doled out in future installments. In the meantime, we have an active meetup group (https://www.meetup.com/NY-Magic-Leap-Meetup) that you are welcome to join.
In this demo I show two sets of objects: an animated gearbox and an engine that can be viewed in X-ray mode. In both sets I can cycle through selecting individual parts by tapping the buttons located near the bottom left corner of the iPad.
Notice that as each item is selected, the corresponding text label appears in the panel.
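The part-cycling behavior is simple to sketch: each tap advances a selection index, wrapping around, and the selected part's label is shown. The part names below are made up for illustration:

```python
# Sketch of the tap-to-cycle selection: advance through the part list,
# wrapping at the end, and surface the label for the selected part.
PARTS = ["housing", "gear A", "gear B", "shaft"]  # hypothetical part names

def tap_next(index: int) -> tuple:
    """Advance the selection and return (new_index, label_to_display)."""
    index = (index + 1) % len(PARTS)
    return index, PARTS[index]
```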
We love to experiment and push the envelope with new technologies. ARKit is proving to be an exciting development in the world of AR and VR.
We built an AR application in Unity where we overlay a 3D map of midtown Manhattan. Notice how smooth the positional tracking is with the iPad.
On June 19th-20th, Chris and I had the pleasure of poring over the ins and outs of HoloLens technology at the Microsoft HoloHack-NYC.
We wanted to make a project that took advantage of as many features of the HoloLens as possible. However, the most interesting one for us was the spatial mapping.
In augmented reality you want people to explore the space around them and have the application change based on their location. We decided to build a firefighter simulator.
We used spatial processing to discover all the flat surfaces in your area and virtually set them on fire. Each fire also had spatial audio attached to help you locate it in the space.
Voice commands were used to activate the fire hose via the keyword "fire" -- which is where the name comes from. The user ends up walking around the space, screaming the word "fire" over and over to put out the fires around them. Those too shy to scream could instead air tap on the fires to put them out, taking advantage of gesture recognition.
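Both extinguish paths reduce to the same core rule: an input event (voice keyword or air tap) removes the targeted fire from the set of burning surfaces. A minimal sketch, with the event names and fire IDs as stand-ins for the real HoloLens input events:

```python
# Illustrative sketch: two input paths (voice keyword, air tap) that both
# extinguish the targeted fire. Event names and fire IDs are stand-ins.

def handle_input(event: str, target: str, burning: set) -> set:
    """Return the updated set of burning surfaces after an input event."""
    if event in ("voice:fire", "airtap") and target in burning:
        burning = burning - {target}
    return burning
```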
Here is a sample video capture of the application:
Those of you who would like to play around with the application can go to our GitHub site.
At the end of the hackathon we had a fun time demoing the App for a bunch of students from Riverdale.
Since last summer we started a collaboration with storyteller Zohar Kfir building VR experiences. Our collaboration eventually led us to do a VR project sponsored by Oculus Studios and produced by Kaleidoscope VR - called Testimony.
Testimony is an interactive documentary for virtual reality that shares the stories of five survivors of sexual assault and their journey to healing. Testimony is an advocacy platform to allow the public to bear witness to those who have been silenced.
The world premiere was at the Tribeca Film Festival in April. We published "Testimony" on the Oculus Store for both Gear VR and Rift on June 1st. As of the date of this post (6/13/2017), it has already topped 15,000 downloads.
The Made in NY Media Center wrote a nice blog post about this project:
We recently built a virtual tour experience of Markthal Rotterdam, a recipient of the 2017 VIVA International Awards for its cutting-edge design housing residential apartments alongside one of the Netherlands' largest food markets. The VR experience was exhibited at RECon 2017 at the Las Vegas Convention Center.
A number of visitors to the VR Zone inquired about how we put together the Markthal VR application. We'd like to give you a sneak peek under the hood.
Creating a virtual tour entails a four-step process:
For clients new to VR and AR experiences we first like to show what can be expected in a finished product. Typically this involves demonstrating our previous work on similar applications. Then we work with the client to outline the key elements for their custom application. Clarity in the VR experience and the kind of user interaction/user experience is what makes it possible to deliver a solid and complete application on time and on budget.
For this application we wanted to create an authentic experience of what it's like to visit Markthal Rotterdam and explore its many facets. The concourse alone is over 100,000 sq ft.
To start this project we set out a few guidelines to direct our development. We combine client feedback with our own experience creating VR experiences. For Markthal we compiled a short list of design constraints:
Ease of use was most important. To simplify control in the VR space, we decided that all user interactions would be done through either gaze or a single tap. To give feedback that gaze activation is working, every object that can be gazed at reacts to your gaze.
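A common way to implement this kind of gaze interaction is a dwell timer: the object reacts the instant it is looked at, then activates after a sustained gaze. The sketch below assumes a made-up dwell threshold rather than our actual tuning:

```python
# Sketch of gaze-dwell activation: immediate visual feedback on gaze,
# activation after a sustained dwell. The threshold is an assumed value.
DWELL_SECONDS = 1.5

def gaze_state(seconds_gazed: float) -> str:
    """Return the interaction state for an object under gaze."""
    if seconds_gazed <= 0:
        return "idle"
    if seconds_gazed < DWELL_SECONDS:
        return "highlighted"   # feedback that the gaze is registering
    return "activated"
```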
To better capture the experience of wandering through Markthal, we built a simplified model of the space. We combined architectural drawings with observations of the space to map out all of the locations for the experience. Movement through the virtual space is most natural when jumping from one location to the next by line of sight; this avoids visual clutter and encourages the user to explore.
Considering the volume of people at RECon 2017, we also didn't want people staying in the VR experience too long and creating lines. We designed the app so that it could be easily reset, instantly ready for the next person. To gracefully end the experience for users who have spent too much time, we built in a 10-minute timeout.
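The timeout logic amounts to a single check against elapsed session time; the action names here are placeholders for the actual fade-out and reset routines:

```python
# Sketch of the 10-minute session timeout: after the limit, gracefully end
# the experience and reset for the next visitor. Action names are stand-ins.
SESSION_LIMIT_S = 10 * 60

def session_action(elapsed_s: float) -> str:
    """Decide, each frame, whether the session should continue or reset."""
    if elapsed_s >= SESSION_LIMIT_S:
        return "fade_out_and_reset"
    return "continue"
```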
Ahead of the shoot, we scouted the location so that we could plan the images, videos, sound recordings, and other features we would reconstruct in VR.
We arrived in Rotterdam with a film crew for video interviews of the property developer and shop owners.
After three and a half days in Rotterdam, we were on the plane headed back to New York, with 8 days to turn all the captured footage and recordings into our VR experience.
We captured approximately one hundred and seventy 360° images. Every image file was curated for color balance, brightness, and contrast. Here is a sample of the before and after.
With each image we also captured ambient audio for that location. The audio was mapped to a position in a virtual model of Markthal, resulting in full spatial audio as you explore the space.
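The basic effect of positional audio can be sketched as distance-based attenuation: each location's ambient track gets quieter the farther the listener is from it. This is a simplified inverse-distance model with an assumed reference distance, not the actual audio engine we used:

```python
# Illustrative sketch: inverse-distance attenuation of a location's ambient
# audio. Reference distance is an assumed constant, not our real tuning.

def gain(distance_m: float, ref_m: float = 1.0) -> float:
    """Inverse-distance gain, clamped to 1.0 at or inside the reference."""
    if distance_m <= ref_m:
        return 1.0
    return ref_m / distance_m
```

A real spatializer adds direction (panning/HRTF) on top of this falloff.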
To navigate around the space, we let you teleport from one location to the next using a gaze-activated icon. The icons are placed in the space with depth, so farther icons appear smaller and closer icons appear larger.
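The size falloff comes straight out of perspective projection: an icon of fixed world size w at distance d subtends roughly f·w/d pixels, where f is the focal length in pixels. A small sketch with an assumed focal length:

```python
# Sketch of why farther icons look smaller: under a pinhole projection, an
# object of world size w at distance d spans about f * w / d pixels on
# screen. The focal length here is an assumed value.

def apparent_size_px(world_size_m: float, distance_m: float,
                     focal_px: float = 800.0) -> float:
    """Approximate on-screen size (pixels) of an object at a given distance."""
    return focal_px * world_size_m / distance_m
```

Doubling the distance halves the apparent size, which is exactly the depth cue the teleport icons provide.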
We use XpressVR, a custom tool designed in-house to help us rapidly build the environment and place all of the assets into the space.
To keep users from ever being lost and give them quick access to new locations we made it easy to access the main menu. With a single tap the user can jump back to the main menu and explore a new location.
Perhaps the most fun part of building the application was recreating the arch with its magnificent ceiling art.
Lastly, it comes time to press the magic button that deploys the app onto the VR device (in this case, the Samsung Gear VR).
We are always looking for new projects and interesting ideas. If you have an idea for a VR application for your industry or would just like to chat about the possibilities of using VR in your industry, reach out to us.
Please sign up to attend: RSVP HERE
Meetup takes place at the