Readings | Augmented Reality Art & Applications | Week 9

Wednesday, April 3rd 2019, 7:26:55 pm

In the process of researching my City As Cite project, I’ve continued to cross-reference academic ideas from Computers Helping People With Special Needs with less academic, pop-cultural thinking on how to combine public spaces with technology. For my “pop-cultural” angle this week, I found some interesting ideas in an older Wired article, “Augmented Reality Is Transforming Museums.”

The article deals with how various digital artists have used augmented reality to “enhance” the content of museums, often without the cooperation of the museums themselves. One example cited in the article is artist Jeff Koons’ attempt to impose a 3D “sculpture” of a balloon dog onto folks’ geo-tagged photos.

In most cases, though, the artists use ARKit-powered apps to add different, often meta, layers of context to galleries as a form of commentary. For example, the project AR(T) augments existing gallery content in this way.

While these exhibits are certainly playful and interesting, after reading I asked myself whether there were ways to turn the ideas behind them into “environmental annotations” for people with some kind of special need. For instance, concepts like AR(T) could be combined with the (albeit dated) ideas from ENABLE – A View on User’s Needs, an article from Computers Helping People With Special Needs. Their philosophy is as follows:

The ENABLE project builds on the concepts of user centred design [2]. The goal is to build a system that meets user’s needs across Europe in the best possible way, to be able to help a large number of people to manage and maintain their daily life as independently as possible.

So one idea that stuck out to me was using a few features from Vuforia’s AR API, such as Image Targets, to place markers on objects in the real world that, when viewed through certain mobile applications, trigger additional context to appear. For example, a user who is hard of hearing could aim a phone app at a sticker or image on a real-world object that depends on auditory cues, and a visual illustration would pop up with timely information about that object. This would be one possible implementation of “environmental annotations,” roughly along the lines sketched below.
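Since the apps in the Wired article are ARKit-powered, here is a minimal sketch of that recognize-and-overlay loop using ARKit’s image tracking, which plays a role similar to Vuforia’s Image Targets. This is just an illustration of the idea, not a finished implementation; the asset catalog group “EnvironmentalAnnotations” and the overlay texture “annotation-overlay” are hypothetical names:

```swift
import UIKit
import SceneKit
import ARKit

class AnnotationViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        // Load the reference images (e.g. the stickers placed on real-world
        // objects) from a hypothetical asset catalog group.
        let configuration = ARImageTrackingConfiguration()
        if let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "EnvironmentalAnnotations", bundle: nil) {
            configuration.trackingImages = referenceImages
        }
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognizes one of the reference images in the camera feed.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        // Overlay a plane, textured with the visual annotation, directly on
        // top of the detected sticker so it tracks the object in view.
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "annotation-overlay") // hypothetical asset
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2 // lay the plane flat against the image
        node.addChildNode(planeNode)
    }
}
```

The same shape should carry over to Vuforia: register the sticker artwork as an Image Target, then attach the visual annotation to the target’s anchor once the tracker reports a detection.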




Written by Omar Delarosa, who lives in Brooklyn and builds things using computers.
