After years of hard work around the world, we’re excited to offer a sneak peek into our new Augmenting Reality™ technology.
Pointr is the first company to successfully marry indoor location data capabilities with Augmented Reality. This means we can create an AR experience that is not bound by a limited space, such as a room or tabletop, but that can get people moving in ways few have ever imagined. Because it’s rooted in movement, we call it Augmenting Reality™.
We’ve built the first scalable AR location technology for indoor environments. Unlike current static AR experiences, Augmenting Reality™ is inextricably connected to the real world. It’s relatively simple to create an AR experience that is not dependent on the position of the user, but it’s incredibly challenging to create an experience that takes user position into account. Augmenting Reality™ makes AR navigation possible anywhere on Earth where our indoor positioning technology is available.
So how did we do it?
Good AR requires good positioning. You’re putting objects into the user's field of vision, so to determine their coordinates you need very accurate positioning and, most importantly, flawless orientation. Even if the orientation is out by only five degrees, which is hardly noticeable on a map, the error will be very apparent to the viewer in an AR world. That small margin could cause a virtual object to suddenly appear inside a wall, or intersect some other object. This is why our accurate indoor positioning system, which is itself very difficult to build, is absolutely fundamental to our Augmenting Reality™ solution.
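To make that five-degree figure concrete, here is a quick back-of-the-envelope calculation (plain Python, purely illustrative and not part of any SDK): an object placed at distance d ahead of the user shifts sideways by roughly d · tan(θ) when the heading estimate is off by θ.

```python
import math

def lateral_error(distance_m: float, heading_error_deg: float) -> float:
    """Approximate sideways displacement of a virtual object placed
    `distance_m` ahead when the device heading is off by `heading_error_deg`."""
    return distance_m * math.tan(math.radians(heading_error_deg))

# A 5-degree error is barely visible on a 2D map, but an object placed
# 10 m ahead shifts sideways by almost a metre -- easily enough to put
# it inside a wall.
print(round(lateral_error(10, 5), 2))  # 0.87 (metres)
```

The error grows linearly with distance, which is why an orientation offset that looks harmless on a map ruins an AR scene.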
Coordinate Systems: absolute (GPS) vs relative (AR)
Indoor positioning uses coordinates on the map to indicate where the user is. Just like GPS, it has absolute coordinates, based on some reference on the earth (latitude/longitude). Augmenting Reality™ does it differently, using its own coordinate system that is entirely relative to the user, instead of determining the user's actual position on the map.
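As a rough illustration of the absolute side of this picture (a common textbook approximation, not Pointr's implementation), an absolute latitude/longitude fix can be projected into a local metric frame around a venue origin with an equirectangular approximation, which is adequate over building-sized distances:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def latlon_to_local(lat: float, lon: float,
                    origin_lat: float, origin_lon: float) -> tuple[float, float]:
    """Project an absolute (lat, lon) fix into east/north metres
    relative to a venue origin, using an equirectangular approximation
    (adequate over building-sized distances)."""
    east = math.radians(lon - origin_lon) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    north = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return east, north

# A fix 0.00001 degrees north of the origin lands about 1.11 m north:
e, n = latlon_to_local(51.50001, -0.12, 51.5, -0.12)
# e ≈ 0.0, n ≈ 1.11
```

The AR session, by contrast, starts counting metres from wherever the camera happened to be when it launched, with no notion of north at all, which is exactly the mismatch the next paragraphs address.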
The main challenge of the approach is to find a way to make the global coordinate system and the AR one work in harmony. We can compute the path in our own coordinate system, but how do we translate that to the AR path? We don’t know how the user's position translates into the content generated by the AR.
The naive solution is to use orientation technology to rotate the AR world, simply overlaying the two coordinate systems on top of one another. The problem is that any minor error in your initial orientation estimate will cause the entire AR experience to be wrong.
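A sketch of that naive overlay (illustrative Python, not the Pointr algorithm): rotate map-frame waypoints into the AR frame using a single compass heading taken at start-up. Because the displacement caused by a fixed heading error grows with distance, a small initial error eventually places waypoints metres off course, and nothing ever corrects it.

```python
import math

def map_to_ar(x: float, y: float, heading_deg: float) -> tuple[float, float]:
    """Rotate a map-frame point into the AR session frame using one
    fixed heading estimate taken when the session started."""
    t = math.radians(heading_deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

def misplacement(distance_m: float, heading_error_deg: float) -> float:
    """Distance between where a waypoint should appear and where it is
    actually drawn when the start-up heading is off by `heading_error_deg`
    (chord length: 2 * d * sin(theta / 2))."""
    return 2 * distance_m * math.sin(math.radians(heading_error_deg) / 2)

# With a 5-degree start-up error, a waypoint 20 m down the corridor is
# drawn about 1.74 m from its true position.
```

This is why a one-shot alignment is not enough: the system needs to keep refining the rotation as the user moves, which is the subject of the next section.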
We’ve found the solution to AR accuracy
We are the first company to create a viable solution to this problem, in the form of a working machine-learning algorithm that improves as the user moves around. As the user moves through the AR experience, the algorithm very quickly ascertains the user's position and learns from any errors that occur. It continues to learn even when the matching between absolute and relative coordinates is accurate. Bad data can't break the system, because all the good data is preserved and can't be undone by the occasional bad sample.
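Pointr has not published the algorithm itself, but the general idea of learning the alignment from movement can be sketched as follows (a toy illustration, with all names hypothetical): each step the user takes yields a displacement in both the indoor-positioning frame and the AR frame, and a running estimate of the rotation between the two frames accumulates over those pairs, so an occasional bad sample is outweighed by the good evidence already gathered.

```python
import math

class FrameAligner:
    """Toy online estimator of the rotation between the absolute
    (indoor-positioning) frame and the relative AR frame.
    Illustrative only -- not Pointr's published algorithm."""

    def __init__(self):
        # Accumulated dot/cross products of paired displacement vectors.
        # Because evidence accumulates, one bad sample cannot undo it.
        self._dot = 0.0
        self._cross = 0.0

    def observe(self, map_dx: float, map_dy: float,
                ar_dx: float, ar_dy: float) -> None:
        """Feed one step: the displacement measured in the map frame and
        the same step as reported by the AR session."""
        self._dot += map_dx * ar_dx + map_dy * ar_dy
        self._cross += map_dx * ar_dy - map_dy * ar_dx

    def rotation_deg(self) -> float:
        """Current best estimate of the map-to-AR rotation angle."""
        return math.degrees(math.atan2(self._cross, self._dot))
```

For example, if every observed AR step is the corresponding map step rotated by 30 degrees, `rotation_deg()` converges on 30 after a couple of observations. This closed-form angle estimate (a 2D Procrustes fit) stands in here for whatever proprietary estimator Pointr actually uses.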
One particular challenge of location tech is level-changing. We have worked hard to ensure our indoor positioning system works well across multiple levels, maintaining a smooth AR experience even when the user is changing levels.
A highly scalable solution
Our machine learning algorithm requires no training process to learn, and this makes it extremely scalable. There is no extra step required to enable the AR - once the camera/AR session is activated, the algorithm uses what is in the camera's field of vision at that moment. Wherever the positioning tech is enabled, the AR is enabled - we do not require any prior information. The whole process is offline, so the machine learning algorithm doesn’t attempt to learn any global information, and as such it requires no training. This is why it’s scalable, which is an important priority with everything we do.
The current version isn't limited by the characteristics of the venue - it works even in the most challenging of venue environments. It’s all offline. Difficult lighting conditions and environments that have no distinguishable features are no barrier to this technology.
AR first vs AR compatible
When it comes to AR, the user interface is of the utmost importance. The UI needs to be adaptable to the challenges faced by AR positioning so that it never gets confusing for the user.
AR is a field still in its infancy, so there is currently no established design paradigm for how the AR user experience should be defined. As such, we have had to think from the ground up. The paradigms for mobile devices cannot be applied, since the majority of elements we present to users need to exist in the AR world. For instance, when attempting to find a destination, we need to show a virtual AR destination marker.
An AR navigation experience is about more than mere wayfinding - it needs to be an enjoyable one. The experience can be gamified to increase engagement and incentive (for example, an animal could lead the way, or there could be collectables to acquire as you go). To make AR stand out, there need to be elements that are enjoyable which simply aren't available in other technologies, and these elements should give an incentive to keep going.
If you want to learn more about what makes an AR experience successful, check out our AR design blog post.
Pointr's Augmenting Reality licence can be utilised on both iOS & Android through our Mobile Software Development Kit.
Future developments to look out for:
- Initial roll out to selected clients.
- We will keep working on improvements. Wayfinding is just the beginning; in the future, AR will have many other applications in addition to this core function.
- The same algorithm can be applied to outdoor environments. Initial testing has already demonstrated this potential, and the fact that no initial training is required makes it a natural fit for outdoor use.