Practical UX Design for Augmented Reality Data Visualisation

12th March, 2020
Michael Forrest

Back in 2010 I helped work on the design of an operating system. We had the opportunity to come up with something truly novel and we looked to cinema for inspiration. We soon realised that what looks good on screen is usually the opposite of what works in real life. As computer users we generally prefer things not to move randomly around or make bleeping noises when we’re working on a spreadsheet or laying out a document.

Augmented reality (AR) has been a chance for me to revisit these insights and think about whether some Hollywood influences might make sense now that we have three dimensions and some use cases where things flashing and bleeping and moving around might be quite useful or interesting. In this article I’ll explain some of the insights I found while working on my latest app (Changes - available in the App Store now) to help you shape your thinking around AR design and hopefully avoid some of the pitfalls.

Don’t end up with this!

Research

I’m building a data visualisation app, so I pulled together loads of images where data is represented in three dimensions (and some where things look good in two dimensions). I’ve thrown up my research as a Sketch click-through “prototype” here. (Sorry about the jumping next button. Note that my original plan was to import tweets and run sentiment analysis to give new users instant gratification, but I subsequently decided to use the data visualisations as motivation to keep using the app - it’s a mood tracker that needs a certain amount of data to be useful.)

Explore my notes here

I reached out to Ian of Over & Over for UX and visual design support, sharing this document as a starting point.

I like using Sketch to pull together images like this, but for this project I wish I could have embedded videos too. Augmented reality on mobile will always be moving (unless the user has an exceptionally steady hand) and it’s important to think about this from the beginning.

Ok let’s get this listicle started. What do we need to think about when designing for AR?

1. Think about your canvas

Mobile augmented reality interfaces don’t just sit on the surface of a phone screen. They sit within the real world environment. The first step in any augmented reality experience is to figure out where and how the user is going to place your content into the world. Frameworks like ARKit scan the environment for flat surfaces where you can anchor content. Think about where your content fits best. Would it work on a wall, hanging like a painting? Or are you showing furniture that sits on the floor? Or do you want to let the user place your content onto a nearby table? That’s what I went for. In practice, this is not optimal, because uncluttered tables are a rare occurrence in real life, unlike a WWDC stage.
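To make that concrete, here’s a minimal sketch of the plane-detection setup with ARKit and SceneKit. The view controller and the little sphere marker are placeholders for illustration, not anything from Changes:

```swift
import ARKit
import SceneKit
import UIKit

// A minimal sketch: ask ARKit for horizontal surfaces and attach content to
// whatever plane it finds. The controller and marker geometry are placeholders.
final class TableCanvasViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Scan for horizontal surfaces (tables, floors).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a plane; anchor your content to its node.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let marker = SCNNode(geometry: SCNSphere(radius: 0.02))
        node.addChildNode(marker)
    }
}
```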

What a convenient, perfectly empty table…

Let’s just hope our users forgive us for making them tidy up.

With this first step in mind, here’s how I approached the wireframes for my project.

2. Design in three dimensions

Designing in 3D is a bit of an overhead and I haven’t found a particularly good workflow yet. Maybe I’ll build a tool (email me if that sounds useful). I wouldn’t recommend Blender unless you have a couple of months free to learn how to use it, and even then it’s pretty awkward to visualise things.

I ended up going ultra-low-fi so I could quickly share my thoughts with Ian. Here’s another Sketch Prototype. (Note, again, that this harks back again to my abandoned Twitter import plan.)

Click for Sketch prototype

My initial idea was to recreate the user experience of Iron Man 2 where he starts with a complex data set and gradually removes things until he finds what he’s looking for.

Instagram post from @michaelforrest: “#prototyping some #augmentedreality #datavisualization for the next version of my Happiness app. Here are hundreds of tweets with #machinelearning used to infer mood from text but when used with private diary data it will be very insightful.”

Ian came back with a different approach, preferring to focus on a more immediately comprehensible “additive” user experience, letting the user build up complexity on a simple canvas.

This was great for setting a direction and instituting a more gradual introduction of complexity, helping us avoid making something too unapproachable for new users.

However, there were a few too many drag-and-drop interactions and fixed screen elements for my liking. Nevertheless I started prototyping, solving the technical problem of moving elements between SwiftUI’s screen space and the 3D environment.
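For anyone curious about that bridging problem, here’s a rough sketch of the two directions, assuming an ARSCNView called sceneView: projectPoint maps a node’s world position into view coordinates (handy for positioning a 2D overlay), and a raycast goes the other way when the user taps the screen.

```swift
import ARKit
import SceneKit
import simd

// Where does this 3D node sit on the screen? Useful for lining up 2D overlays.
func screenPosition(of node: SCNNode, in sceneView: ARSCNView) -> CGPoint {
    let projected = sceneView.projectPoint(node.worldPosition)
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}

// Where does a screen tap land in the 3D world? Raycast against detected planes.
func worldPosition(at screenPoint: CGPoint, in sceneView: ARSCNView) -> simd_float3? {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return nil }
    let t = result.worldTransform.columns.3
    return simd_float3(t.x, t.y, t.z)
}
```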

When we’re inventing new user experience paradigms, prototyping is important. Here’s one of the first things I learned as I improved my early prototypes’ fidelity.

3. Take into account the physicality of the phone

I loved the appearance of this design:

The idea was that as you move your phone towards an element, a speech bubble pops out to reveal more detail.

What we didn’t take into account was how the physicality of the phone makes this interaction impractical.

When we’re close to the edge of the table, the proximity effect is no problem, but when we try to reach a distant data point…

It doesn’t work! The phone hits the table!

I wanted the bubbles to be vertical and this just wouldn’t work without somehow lifting everything up off the table, but then we’d lose the neat aesthetic.

I ended up finding a way to use this interaction later on. Here’s a picture.

Why does this version work better?

  1. You can only hover over the tall bars (entries that have been starred) and these have enough height that the phone has a bit of table clearance
  2. These bars are kept to the front of the composition (you can’t focus on the circular entries towards the back)
  3. A little crosshair element makes it clear exactly where the user is focused
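For reference, here’s a rough sketch of that distance check in SceneKit. It assumes each starred bar is an SCNNode with a hidden child node named “bubble” - the name and the 20cm threshold are illustrative, not the app’s real values - and you’d call it every frame, for example from renderer(_:updateAtTime:).

```swift
import ARKit
import SceneKit
import simd

// A rough sketch of the proximity reveal: show a bar's detail bubble only
// when the camera gets close. Node name and threshold are illustrative.
func updateProximityReveals(in sceneView: ARSCNView, barNodes: [SCNNode]) {
    guard let cameraNode = sceneView.pointOfView else { return }
    for bar in barNodes {
        let distance = simd_distance(bar.simdWorldPosition, cameraNode.simdWorldPosition)
        // Hide the bubble unless the phone is within roughly 20cm of the bar.
        bar.childNode(withName: "bubble", recursively: false)?.isHidden = distance > 0.2
    }
}
```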

Prototype early and often if you want to figure this sort of thing out.

Ok onto the next realisation.

4. It’s basically a camera app

It took me a while to realise that an augmented reality app is more like a camera than anything else. It’s not a grid or a table view, it’s a viewfinder that adds extra stuff into the world. Start with the camera “snap” button and go from there, thinking in terms of camera overlays for your controls. Also, check out this word cloud design Ian came up with:

When I realised that this was a camera UX, lots of things dropped into place: a natural-feeling sharing workflow, the ability to record video, and visualisation-specific overlays like a timeline or a “happy”/“unhappy” toggle.
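One nice consequence of the camera framing is that the “snap” button is almost trivial. Here’s a minimal sketch, assuming an ARSCNView called sceneView and a hosting view controller - snapshot() returns the composited frame (camera feed plus 3D content), which can go straight into a share sheet.

```swift
import ARKit
import UIKit

// A minimal sketch of the camera-style "snap" action. SCNView.snapshot()
// returns the composited frame, so sharing a still is one call away.
func snapAndShare(from sceneView: ARSCNView, presenter: UIViewController) {
    let image = sceneView.snapshot()
    let shareSheet = UIActivityViewController(activityItems: [image], applicationActivities: nil)
    presenter.present(shareSheet, animated: true)
}
```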

I hoped to use a WhatsApp-style “hold to take video” interface, but I didn’t find a way to capture the composited SceneKit frames for a video without resorting to ReplayKit - and that framework prompts the user for permission to record their screen every time, which interrupts and cancels the press-and-hold interaction. I’ll come back to that when I have a bit more time!

You may have noticed that we managed to ditch the blue overlay as we got the hang of all this.

5. Shadows make all the difference

Shadows bring everything together when you’re rendering an AR view. Without shadows the 3D models feel disconnected from their environment, even with ARKit rendering camera grain and performing light estimation. With shadows, your eyes suddenly accept what you’re seeing.

You can bring your own shadow-casting light to a scene - a user won’t really notice that it’s not motivated, as long as it’s subtle. After a lot of jumping through hoops trying to retrieve and process light estimation data from Apple’s SDKs I realised I could get great results simply by adding my own light like this.

The light source I added is shown in yellow. Note how even though this ended up hovering above the table, it still works because of the shadows.
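If you want to try it, here’s roughly what adding your own light looks like in SceneKit. The intensity, angle and shadow values below are illustrative, not the ones I shipped.

```swift
import SceneKit
import UIKit

// A rough sketch of adding a shadow-casting light of your own. The aim is a
// subtle, soft, downward shadow; the exact values are illustrative.
func addShadowLight(to scene: SCNScene) {
    let light = SCNLight()
    light.type = .directional
    light.castsShadow = true
    light.shadowMode = .deferred                        // deferred shadows can fall on shadow-only surfaces
    light.shadowColor = UIColor(white: 0, alpha: 0.4)   // keep it subtle
    light.shadowRadius = 8                              // soften the edges

    let lightNode = SCNNode()
    lightNode.light = light
    lightNode.position = SCNVector3(0, 1, 0)                  // hovering above the content
    lightNode.eulerAngles = SCNVector3(-Float.pi / 2, 0, 0)   // for a directional light the angle matters, not the position
    scene.rootNode.addChildNode(lightNode)
}
```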

You’ll need to look into “occlusion surfaces” too so that your content doesn’t go through the table but there’s lots of information out there about that already.
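For completeness, the gist is a material that writes to the depth buffer but not the colour buffer - here’s a quick sketch (the size and orientation are placeholders):

```swift
import SceneKit

// A quick sketch of an occlusion surface: invisible, but it still hides
// anything rendered behind it, so content can't poke through the table.
func makeOcclusionPlane(width: CGFloat, length: CGFloat) -> SCNNode {
    let plane = SCNPlane(width: width, height: length)
    let material = SCNMaterial()
    material.colorBufferWriteMask = []       // write depth only, no colour
    material.isDoubleSided = true
    plane.materials = [material]

    let node = SCNNode(geometry: plane)
    node.eulerAngles.x = -.pi / 2            // lay the plane flat on the table
    node.renderingOrder = -1                 // render before the content it occludes
    return node
}
```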

It’s cool to make things look realistic, but sometimes the fact it’s not real is very useful.

6. Where artificiality helps

When a real camera gets too close to an object it blurs out of focus. With artificial content, this won’t happen (unless you write code to make it happen).

This means that coming up close becomes a powerful interaction mechanic. It also means you can make things pretty small and they’ll still make sense.

You could add tiny text on top of objects that is only visible as the phone comes close. I’m gonna call this “Inter-Scale Disclosure”. I didn’t end up doing this in Changes (not yet anyway) but I think there’s something to it. Actually, we need a better name that evokes exciting 3D graphics. I’ll think of something…

7. Some notes on data visualisation, SceneKit and D3 libraries

It wasn’t easy to get this word cloud to work properly. Here you can see my debugging tools as I cast a spiral around the origin looking for places where each word will fit.

I’ve become quite dependent on D3 as a data-visualisation framework, but here I’m working with SceneKit, not a WebView. This means that something as straightforward-seeming as a word cloud requires porting JavaScript code and untangling cryptic algorithms in the process. I spent a couple of days trying to make this word cloud look right. My implementation isn’t fast, so instead of pre-calculating the layout I iterate frame by frame, adding words as I figure out where to put them, which results in the animated reveals you see.
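The core of the placement search is simple enough to sketch, though: walk an Archimedean spiral out from the origin and take the first spot where the word’s bounding box doesn’t overlap anything already placed. This is a simplified 2D illustration of the idea, not the actual code in Changes.

```swift
import CoreGraphics
import Foundation

// A simplified sketch of the spiral search: step outwards from the origin and
// return the first rectangle of the given size that avoids everything already
// placed. Step sizes and the maximum radius are illustrative.
func placeWord(sized size: CGSize, avoiding placed: [CGRect]) -> CGRect? {
    let maxRadius = 2.0               // roughly the extent of the canvas
    var angle = 0.0
    while true {
        let radius = 0.002 * angle    // spiral outwards a little each step
        if radius > maxRadius { return nil }
        let candidate = CGRect(x: radius * cos(angle) - Double(size.width) / 2,
                               y: radius * sin(angle) - Double(size.height) / 2,
                               width: Double(size.width),
                               height: Double(size.height))
        if !placed.contains(where: { $0.intersects(candidate) }) {
            return candidate          // first free spot wins
        }
        angle += 0.1
    }
}
```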

The animation is staggered because the layout is calculated asynchronously.

8. Animation, 3D geometry, game engines

You need some 3D game design and development skills to make a decent augmented reality experience. Apple’s SceneKit is pretty solid and provides decent tooling for authoring and debugging. I wouldn’t recommend using RealityKit yet (early 2020) - it’s a more accessible way to get content into an AR app, but for now it’s limited when it comes to bespoke content.

That said, it’s not that hard to get nice animations if you use the tools Apple provides. If I’ve learned one thing from watching hours of GDC videos it’s that custom tooling is essential when working in this field. Apple provides some strong, if generalised, authoring and debugging tools that you’ll lean on pretty heavily if you’re building an augmented reality app.

Debugging a scene in Xcode

I was glad I’d spent so much time working in 3D and refining my photography skills for my YouTube channel; everything I’d learned came into play on this project.

In terms of design tools, I know that Ian got a lot out of Adobe Illustrator, mostly using some simple extrusion effects to convey depth.

Meanwhile I tinkered with Blender, but it ended up being counterproductive - pen and paper or crude 2D mock-ups worked better in the end.

Conclusions

The physicality of the device on which your augmented reality app runs very much determines the user experience.

Google Glass required a completely different UI from an iPhone being waved around like a camera. An iPad creates different pressures: the user needs both hands to move it around, and it’s more likely to catch on tables.

In my opinion, augmented reality as a technology is here to stay - it feels inevitable that somebody will find a socially acceptable way to replace hand-held screens with in-eye content. Apple Lens, anyone? A lot of tech will be needed to make this work - at the moment a cluttered table, or anything other than a feature-detected human, will interfere with your nice design. As time goes on I’m sure Apple will add more features to make AR content fit more seamlessly with the camera’s viewfinder.

I am finding AR data visualisation to be a fertile field and I’m excited to keep adding more data visualisations into my Changes App.

Tweet me at @michaelforrest if you have questions!


About Michael Forrest
About Ian Mizon

Changes is available now on the App Store

Get your free book

Sign up for my newsletter to get weekly tips on mood tracking, happiness data analysis and forming better habits.

I will send you a copy of my eBook about mood tracking.

Find out more
