Risks That Come with Augmented Reality and How to Fix Them

Augmented Reality (AR) is still in its infancy and has a very promising youth and adulthood ahead. It has already become one of the most exciting, dynamic, and pervasive technologies ever developed. Every day someone is creating a novel way to reshape the real world with a new digital innovation.

Over the past couple of decades, the Internet and smartphone revolutions have transformed our lives, and AR has the potential to be that big. We’re already seeing AR act as a catalyst for major change, driving advances in everything from industrial machines to consumer electronics. It’s also pushing new frontiers in education, entertainment, and health care.

But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive. Some industry watchers are forecasting a combined AR/VR market value of $108 billion by 2021, as businesses of all sizes take advantage of AR to change the way their customers interact with the world around them in ways previously only possible in science fiction.

As wonderful as AR is and will continue to be, there are some serious privacy and security pitfalls, including dangers to physical safety, that as an industry we need to collectively avoid. There are also ongoing threats from cyber criminals and nation states bent on political chaos and worse — to say nothing of teenagers who can be easily distracted and fail to exercise judgement — all creating virtual landmines that could slow or even derail the success of AR. We love AR, and that’s why we’re calling out these issues now to raise awareness.

Without widespread familiarity with the potential pitfalls, as well as robust self-regulation, AR will not only suffer from systemic security issues; it may also be subject to stringent government oversight that slows innovation, or even threaten existing First Amendment rights. In a climate where technology has come under attack on many fronts for unintended consequences and vulnerabilities, including Russian interference in the 2016 election and ever-growing incidents of hacking and malware, we should work together to make sure this doesn’t happen.

If anything causes government overreach in this area, it’ll likely be safety and privacy issues. An example of these concerns is shown in this dystopian video, in which a fictional engineer is able to manipulate both his own reality and that of others via retinal AR implants. Because AR by design blurs the divide between the digital and real worlds, threats to physical safety, job security, and digital identity can emerge in ways that were simply inconceivable in a world populated solely by traditional computers.

While far from exhaustive, the lists below present some of these pitfalls, along with possible remedies. Think of them as a starting point, beginning with the pitfalls:

  • AR can cause big identity and property problems: Catching Pokémon on a sidewalk or receiving a Valentine on a coffee cup at Starbucks is really just scratching the surface of AR’s capabilities. On a fundamental level, we could lose the power to control how people see us. Imagine a virtual, 21st-century equivalent of a sticky note with the words “kick me” stuck to some poor victim’s back. What if that note was digital, and the person couldn’t remove it? Even more seriously, AR could be used to create a digital doppelganger of someone doing something compromising or illegal. AR might also be used to add indelible graffiti to a house, business, sign, product, or art exhibit, raising serious property concerns.
  • AR can threaten our privacy: Remember Google Glass and “Glassholes”? If a woman was physically confronted in a San Francisco dive bar just for wearing Google Glass (reportedly, other patrons did not appreciate her ability to capture the happenings at the bar on video), imagine what might happen with true AR and privacy. We may soon see the emergence of virtual dressing rooms, which would allow customers to try on clothing before purchasing online. Similar technology could be used to overlay virtual nudity onto someone without their permission. With AR wearables, for example, someone could surreptitiously take pictures of another person and publish them in real time, along with geotagged metadata. There are clear points at which the problem moves from creepiness to harassment, and potentially to a safety concern.
  • AR can cause physical harm: Although hacking bank accounts and IoT devices can wreak havoc, these events rarely lead to physical harm. With AR superimposed on the real world, however, that changes drastically. AR can increase distractions and make travel more hazardous, and as it becomes more common, over-reliance on AR navigation will leave consumers vulnerable to buggy or hacked GPS overlays that can manipulate drivers or pilots, making the world outside less safe. For example, if a bus driver’s AR headset or heads-up display starts showing illusory deer on the road, that’s a clear physical danger to pedestrians, passengers, and other drivers.

How can we address and resolve these challenges? Here are some initial suggestions and guidelines to help get the conversation started:

  • Industry standards: Establish an AR governing body that would evaluate, debate, and then publish standards for developers to follow. Along with this, develop a centralized digital service, akin to air traffic control for AR, that classifies public, private, and commercial spaces and designates public areas as either safe or dangerous for AR use (a rough sketch of what such a classification service could look like follows this list).
  • A comprehensive feedback system: Communities should feel empowered to voice their concerns. When it comes to AR, a strong and responsive system for reporting insecure vendors that don’t comply with AR safety, privacy, and security standards will go a long way toward driving consumer trust in next-gen AR products.
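
To make the “air traffic control for AR” idea a little more concrete, here is a minimal sketch of how an AR app might query a shared space-classification service before rendering overlays. Everything in it is hypothetical: the names (ZoneRegistry, ZoneClass, ZoneRecord), the zone categories, and the lookup interface are invented for illustration, since no such service or standard exists today.

```typescript
// Hypothetical sketch of an "air traffic control for AR" lookup service.
// All names and categories here are illustrative, not an existing API.

enum ZoneClass {
  Public = "public",
  Private = "private",
  Commercial = "commercial",
}

enum ARSafety {
  Safe = "safe",             // AR overlays permitted
  Restricted = "restricted", // e.g. roads or rail platforms: limited overlays only
  Prohibited = "prohibited", // no AR content should be rendered here
}

interface ZoneRecord {
  id: string;
  zoneClass: ZoneClass;
  safety: ARSafety;
  // Bounding box in WGS84 coordinates: [minLat, minLon, maxLat, maxLon]
  bounds: [number, number, number, number];
}

class ZoneRegistry {
  constructor(private zones: ZoneRecord[]) {}

  // Return the zones covering a given point so an AR client can decide
  // whether (and how) to render overlays there.
  lookup(lat: number, lon: number): ZoneRecord[] {
    return this.zones.filter(
      (z) =>
        lat >= z.bounds[0] && lat <= z.bounds[2] &&
        lon >= z.bounds[1] && lon <= z.bounds[3]
    );
  }
}

// Example: an AR navigation app checks a location before drawing overlays.
const registry = new ZoneRegistry([
  {
    id: "downtown-crosswalk",
    zoneClass: ZoneClass.Public,
    safety: ARSafety.Restricted,
    bounds: [37.7748, -122.4196, 37.7752, -122.4192],
  },
]);
const hits = registry.lookup(37.775, -122.4194);
const canRenderFreely = hits.every((z) => z.safety === ARSafety.Safe);
console.log({ hits, canRenderFreely });
```

The point of the sketch is the workflow rather than the schema: an AR client asks a shared registry what kind of space it is in before it draws anything, much as a pilot checks controlled airspace before entering it.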

Read more on TechCrunch.
