My ramblings on the stuff that holds it all together
Augmented Reality TFTLondon
I attended another session of The Fantastic Tavern London (#TFTLondon) this evening hosted by Matt Bagwell and Michelle Flynn – this evening’s event was centred around ‘realities’ and specifically augmented reality.
My post on the previous TFT London is available here; as you’ll see from that post, Augmented Reality was voted a hot topic, so it warranted further exploration.
If you’re not familiar with the concept of Augmented Reality (AR), watch this video. It’s essentially adding useful information to what you see and sense in the real world; to date, most implementations are geared around overlaying spatial information onto maps or scenes, such as finding the nearest tube station or restaurant.
The evening opened with some discussion of how the current ‘thirtysomething’ generation grew up with immersion through video games. Doom was one of my favourites, and whilst I’m no longer a gamer I can see how the experience remains immersive for a lot of people to this day.
The reality of realities is that it’s not quite there yet to enrich our daily lives. The iPhone example is cool, but it still requires a device which isn’t ‘natural’ to operate – we don’t all walk around with our iPhones outstretched in front of us…
Well, maybe some fellow London commuters do; they should really watch where they are going, otherwise they could see the reality of a totally different kind of AR (Accident and emeRgency – sorry!:))
A lot of things that were considered futuristic in the 1980s and 90s still aren’t mainstream technology today – the Terminator-style heads-up display, for example – but they do exist in some places…
Several car manufacturers offer this sort of option today (and some had it in the late 80s), and it has had military applications for a long time. These technologies will eventually become commoditised, and thus cheap and accessible to all, much like the mobile phone has become almost ubiquitous.
Paul Dawson of EMC Consulting put forward the view that there is also something missing: most current AR implementations only operate in four dimensions (the three spatial dimensions plus time), but they don’t really address the five senses. Considering this is how we, as humans, really experience our environments, AR isn’t contributing much in the way of real augmentation.
AR can point out simple things like a tube station entrance or a dog wearing a wig, but it can’t contextually give you information that is relevant to you. For example: there is a Marks & Spencer branch, it’s lunchtime, you’re hungry, and its queue is only 30 seconds, compared to your usual sandwich shop, which has a queue of five minutes.
Additionally, current AR is very device-bound; it’s not really a natural way of giving you information.
Imagine an implementation in the built environment around you that listens to actual conversations and displays them as a kind of tag cloud, or embedded displays that recognise your face and some attributes about you and where you are going, offering advice on the quickest way – or maybe even the closest gym, to lose some of that weight?🙂
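As a thought experiment, that ambient-conversation tag cloud could be as simple as counting overheard words and scaling their display size by frequency. A minimal Python sketch – the transcribed snippets and stop-word list are entirely invented:

```python
from collections import Counter

# Hypothetical snippets of transcribed ambient conversation.
snippets = [
    "the train to the airport is delayed again",
    "grab a coffee before the train leaves",
    "coffee near the airport is overpriced",
]

# Words too common to be interesting in a tag cloud.
STOPWORDS = {"the", "a", "to", "is", "before", "again", "near"}

# Count how often each meaningful word is overheard.
counts = Counter(
    word
    for snippet in snippets
    for word in snippet.split()
    if word not in STOPWORDS
)

# Scale each word's display size between 12pt and 36pt.
most = counts.most_common(1)[0][1]
tag_cloud = {word: 12 + 24 * n // most for word, n in counts.items()}
```

Words overheard twice (“train”, “coffee”) end up at the full 36pt, one-offs at 24pt – crude, but enough to surface what a space is “talking about”.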
That’s quite an interesting proposition to me. My personal favourite example of an AR implementation is the Lego Augmented Reality Kiosk, which is available in all good Lego shops (it is said that I have an unhealthy, mildly OCD-type interest in Lego).
There is also an application available at Tesco that will let you take a photo of a bottle of wine and have it provide further information (more info here). This sort of application has been around for a while, but this one uses visual recognition rather than traditional bar-code scanning. Imagine the wider application of this concept to the environment around you: rather than relying on traditional GPS, barcode or tagging technology, image recognition is used, which is potentially far more accurate as it’s based on what you actually see from your viewpoint and position.
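I don’t know what recognition technique the Tesco app actually uses, but the general idea of matching a photo against a catalogue can be illustrated with a toy “average hash”: reduce an image to a tiny brightness grid, hash it, and pick the catalogue entry at the smallest Hamming distance. The 4×4 grids and wine names below are made-up stand-ins for real images:

```python
def average_hash(pixels):
    """Hash a grid of brightness values: 1 if above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical catalogue of known labels as tiny brightness grids.
catalogue = {
    "Rioja 2006": [[200, 190, 40, 35], [210, 180, 30, 25],
                   [60, 50, 45, 40], [55, 48, 42, 38]],
    "Chablis 2008": [[30, 35, 180, 190], [25, 40, 200, 210],
                     [45, 50, 60, 55], [40, 42, 48, 52]],
}

def identify(photo):
    """Return the catalogue entry whose hash is closest to the photo's."""
    target = average_hash(photo)
    return min(catalogue,
               key=lambda name: hamming(average_hash(catalogue[name]), target))

# A slightly noisy snapshot of the first bottle still matches it,
# because hashing above/below the mean absorbs small lighting changes.
snapshot = [[195, 185, 45, 30], [205, 175, 35, 20],
            [65, 55, 40, 45], [50, 52, 40, 35]]
```

Real systems use far richer features than this, but the shape of the problem – robust fingerprint plus nearest-match lookup – is the same.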
Johannes Kebeck from the Microsoft Bing Maps team talked about how geo-spatial information and public mapping data are being merged with crowd-sourced information, tagging and imagery to produce rich sources for augmented reality solutions.
He also talked about how Microsoft have a preview of a commercial data catalogue for the Windows Azure Platform, codenamed Dallas, where people can find, buy and sell data sources for these sorts of applications, leveraging the scale of Azure for the analysis and processing of large data sets.
Several interesting use-cases were demonstrated:
- Using Flickr integration with Bing maps to provide historical photographs of buildings, allowing a time-slider control to see what something looked like 50 years ago or at night-time.
- The large number of free online data sources means there is a lot of information available. Most of this is historical or static, but increasingly, with crowd-sourcing, microblogging and sensor networks, it is being augmented with real-time information – fuel prices, for example.
- There was an example of a crowd-sourced maps mashup built for the Haiti disaster (my thoughts on some kind of emergency infrastructure for these situations here). In a matter of days, people contributed significant real-time information on local conditions, aid levels and casualties, allowing better targeting of relief.
- Or, closer to home, feeding real-time crime statistics into a map to show crime hot-spots.
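The crime hot-spot idea boils down to binning incident coordinates into map-grid cells and counting. A minimal sketch, with the incident coordinates invented for illustration:

```python
import math
from collections import Counter

# Turn a feed of incident reports into the kind of hot-spot grid you
# could overlay on a map. Real feeds would stream in over time; these
# (latitude, longitude) pairs are made up.

def cell_for(lat, lon):
    """Snap a coordinate to a 0.01-degree grid cell (roughly 1 km)."""
    return (math.floor(lat * 100), math.floor(lon * 100))

incidents = [
    (51.515, -0.141), (51.516, -0.143), (51.514, -0.149),
    (51.503, -0.112), (51.517, -0.142),
]

# Count incidents per cell; the busiest cells are the hot-spots.
hotspots = Counter(cell_for(lat, lon) for lat, lon in incidents)
```

Here four of the five incidents land in the same cell, which is exactly the cluster a heat-map layer would shade most strongly.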
A lot of this seems to be discussed in this TED session. I haven’t watched it yet, but it looks very interesting.
Up until now most of these tools and technologies haven’t been easily accessible to the typical consumer and end-user. I’ve written about Microsoft Photosynth before; it’s an example of an easy-to-use end-user interface onto this sort of AR technology, and a lot of work is going into this area.
Neogeography is a new word to me, but you can read about it on Wikipedia; I like the idea.
Lastly, some of the UX team from EMC Consulting’s Studio had to tiptoe around an NDA to talk at a high level about some of the work they are currently doing in the Augmented Reality space for industrial customers: augmenting engineers’ view of plant with relevant information, as well as real-time two-way feeds of critical safety information.
Good-quality immersion is considered key to making people feel empowered by AR tools, rather than merely using them as tools. To achieve this you need a good-quality experience, and they have coined the phrase ‘High-Fidelity 3D’, particularly for applications like military or surgical training, where the end-user gets the most benefit when it seems real; a lot has obviously been pioneered in the video game industry in this space.
They have built some interesting proof-of-concept systems, notably around smart-metering for the home, with a flexible UI that allows the end-user to dive in and out of the represented house and appliances, and customise it to represent their own house.
For me, one of the most interesting parts of this session was that immersive/AR-type applications bring many more end-user factors to consider, like ergonomics, RSI and complementing learnt muscle-memory skills.
Most AR applications will require one or more multi-touch-type input devices. The traditional keyboard and mouse are well-known quantities, but new devices are less field-proven. There are also health & safety implications – you need to ensure the solution doesn’t encourage people to take risks or put them in danger.
For many people, vehicle control skills (like a car’s steering wheel and pedals) are well-learnt, muscle-memory-type skills, so adopting something radically different makes it hard to switch between the two and slows adoption.
Almost playing back to the behavioural architecture concepts of the previous TFT evening, the team discussed the concept of built-in rewards, or status levels, within industrial AR applications to make the experience more engaging for the end-user and encourage adoption. I could imagine how displaying a user’s skill level at operating equipment (n00b, speed-demon, Fork-lift Ninja etc.) helps to encourage development and keeps people operating within allowable parameters (speed limits, for example).
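A tiny sketch of how such a status level might be derived from operating telemetry – the tier names come from the examples above, but the metrics and thresholds are entirely invented:

```python
# Grade an operator from hypothetical shift telemetry, rewarding
# throughput only while they stay inside the allowed limits; any
# speed violation drops them straight back to n00b.

def skill_tier(pallets_per_hour, speed_violations):
    """Map a shift's telemetry to a displayed status level."""
    if speed_violations > 0:
        return "n00b"  # breaking the limits resets your status
    if pallets_per_hour >= 30:
        return "Fork-lift Ninja"
    if pallets_per_hour >= 15:
        return "speed-demon"
    return "n00b"
```

The point of gating every tier on zero violations is that the reward mechanic reinforces the safety envelope rather than tempting people to race past it.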
This may sound a bit airy-fairy (for want of a better term), but consider this: 1993’s 15-year-old playing Doom is now 32, and as time passes the percentage of people brought up with this sort of gaming experience and expectation grows. Today’s upcoming generation is already fully immersed in social media – maybe it’s not such an alien concept for the professional world of the near future after all.
I’d like to thank Matt and Michelle for an interesting evening, and a bit of a break from the norm of my day-job and this blog. If you’re interested in this sort of thing, the next event is on 19th August (location TBC) and is likely to be a full day called “The Lock Inn” – keep an eye on Matt and Michelle’s blogs for more details. I also appear to have been volunteered by a colleague to speak about clouds or something at the event, so look out for that🙂
I also had a go on an iPad (not yet released here in the UK). Thoughts: very nice screen and UI (as you’d expect), but it was a lot heavier than I was expecting – I only had it for a couple of minutes, but it wasn’t very comfortable.
Best quote of the evening: “Let the sausages flow…” Matt Bagwell, Creative Director, EMC Consulting🙂