Ludwig Wendzich

Building a DSLR Simulator

I work at a strategy and design agency called Lee ter Wal Design. We’re based in the heart of Auckland and we deliver top-notch work for our clients. Our directors like to encourage making and so naturally we decided to come up with a project we could work on in between client work. A project where we could play and experiment. And hopefully a project that would result in something useful.

The germ of the idea was an app that taught you how to use your DSLR in manual mode. We called it GetSharp.

## The Idea

I remember how, at university, my photography class and I struggled to grasp exposure and how it worked. We had exercises where, for hours on end, we were asked to “play” with the settings and get a feel for what they did. I don’t like getting a “feel” for something. I like knowing how it works.

Exposure Triangle

When I started reading about exposure, I discovered that it’s actually a simple formula once you understand stops and how they work together. I started talking to my classmates about the exposure triangle: if you move one vertex (say, “Aperture”) and want to maintain the same exposure (the area of the triangle), you need to move one of the other vertices an equal number of stops in the opposite direction.

The trick was learning what each vertex afforded you and what it cost you. That was the key to our DSLR simulator.
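To make the stop arithmetic concrete, here’s a minimal sketch in JavaScript (an illustration of the idea, not code from GetSharp): each setting is expressed in stops relative to an arbitrary baseline, and compensating a change to one vertex with an equal and opposite change to another keeps the total at zero.

```js
// Each full stop halves or doubles the amount of light reaching the sensor.
// Express every setting in stops relative to a baseline exposure.

// Aperture: light is proportional to 1 / f-number², so each stop multiplies
// the f-number by √2 (f/2.8 → f/4 → f/5.6).
function apertureStops(fNumber, baseline = 5.6) {
  return Math.log2((baseline * baseline) / (fNumber * fNumber));
}

// Shutter speed: doubling the time doubles the light.
function shutterStops(seconds, baseline = 1 / 125) {
  return Math.log2(seconds / baseline);
}

// ISO: doubling the sensitivity is one stop brighter.
function isoStops(iso, baseline = 100) {
  return Math.log2(iso / baseline);
}

function totalExposureStops(fNumber, seconds, iso) {
  return apertureStops(fNumber) + shutterStops(seconds) + isoStops(iso);
}

// Opening the aperture two stops (f/5.6 → f/2.8) and shortening the shutter
// two stops (1/125s → 1/500s) leaves the exposure unchanged:
console.log(totalExposureStops(5.6, 1 / 125, 100)); // 0
console.log(totalExposureStops(2.8, 1 / 500, 100)); // 0
```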

## The Competition

When we started researching the space, we discovered three main competitors. They each had their own problems.

CameraSim

CameraSim was probably the most robust simulator we’d seen. It scared us a little bit. Except there were a few problems:

Camera Simulator

SimCam

Winning meant:

  1. Providing lots of examples people could play with. This meant developing a system that was easy, and inexpensive, to add to.
  2. Real-time visual feedback. We wanted people to be able to see the results of their changes without committing to the shutter. The chances of figuring out the stop system seemed a lot higher this way.
  3. Post-shutter teaching. We wanted our comments post-shutter to be useful and to teach the user something.
  4. Teaching how to use “stops”, not how to use a DSLR. Skeuomorphism was a big deal when we started developing GetSharp, and it was tempting to make the UI an actual DSLR camera. In fact, we started there. When we tested with real users, however, they were freaked out by a camera: “I don’t know how to use those!” instead of “I can totally play with these.”

Take the GetSharp Test

## The Design

We started off with a very flat prototype to test the idea on paper. But the plan was for it to become more skeuomorphic once we had a digital prototype.

DSLR Design

As mentioned earlier, our first digital prototype of the UI actually looked like a DSLR. We thought this would make sense to people, that they would recognise the DSLR-like interface and know what to do. It turns out we were wrong, for two reasons:

The problem with lots of skeuomorphic implementations is that the digital design mimicking the physical design doesn’t, in fact, work in the same way as the physical design. In our case, the process of using our app was extremely different to using a DSLR, so mimicking that interface was not only daunting for users but also dishonest: it suggested they should use the app like a camera, which they shouldn’t.

Design Progress

We ended up with a very “physical”-feeling interface, though. The skin, theme or style of it is still very camera-inspired. That feeling is important. (You’ll notice that in iOS 7, the Notes app still has a paper texture, for the feel, without pretending to be a legal pad.) We’re using these textures in a far more heavy-handed manner than the iOS 7 Notes app, but we’re doing it for the same reasons.

Our final design ended up being a “digital device” that you use to change the photograph. Everything that modifies the appearance of the photo is in that “device”, and it feels physical. Everything else, the digital flatness, is UI that progresses you through the app; none of it changes the output of the photos.

Our wheel design was very much inspired by the iPod, of course. But the origins of it were:

Many of the things we learnt throughout the design process didn’t spring from our own use and testing of the app (in fact, we thought the original skeuomorphic DSLR interface worked pretty well). We learnt these lessons by watching people use the app. Seeing them struggle. (Believe me, watching someone struggle with a design you thought was pretty awesome is a very humbling experience!) I can’t overstate the value we discovered in user testing. This app has been tested and tested. Tweaked and tested again.

That’s not to say it’s perfect. We’re going to need to keep improving, testing and tweaking.

## The Technology

I don’t know how to use Flash. That might be the cause of my disdain for it, or my disdain might be why I never bothered to learn it. Either way, when I was supposed to learn Flash, I didn’t. Instead, I learnt how to trick my lecturer at university into thinking I had built something in Flash, when I had actually just harnessed the power of a new series of technologies, collectively known—bizarrely—as HTML5.

Clearly we weren’t going to be using Flash. At the start, though, it wasn’t clear that we’d be using Canvas either.

Our first prototype involved a system where we’d take a photo, blur it progressively in Photoshop, and then adjust the opacity of different “layers” (`<img>` elements) to attempt to find the right exposure. This ended up looking bad, much like “Camera Simulator”’s implementation. You can’t fake depth-of-field. That was the lesson.
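For a rough idea of what that prototype looked like (this is my sketch of the technique, not the actual prototype code, and the file names are made up): pre-processed exports of the same photo are stacked as absolutely positioned `<img>` layers, and a setting change cross-fades their opacity.

```js
// Hypothetical pre-rendered exports of the same photo, ordered from
// underexposed to overexposed.
const variants = ["photo-under.jpg", "photo-correct.jpg", "photo-over.jpg"];

const stage = document.querySelector("#stage");
const imgs = variants.map((src) => {
  const img = document.createElement("img");
  img.src = src;
  Object.assign(img.style, { position: "absolute", top: "0", left: "0", opacity: "0" });
  stage.appendChild(img);
  return img;
});

// exposure runs from 0 (underexposed) to variants.length - 1 (overexposed).
// Show the nearest layer fully and fade the next one in on top of it.
function setExposure(exposure) {
  const lower = Math.floor(exposure);
  const mix = exposure - lower;
  imgs.forEach((img, i) => {
    if (i === lower) img.style.opacity = "1";
    else if (i === lower + 1) img.style.opacity = String(mix);
    else img.style.opacity = "0";
  });
}

setExposure(1.5); // halfway between "correct" and one stop over
```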

Relying on Photoshop also meant that we’d need to do a lot of processing of images beforehand, creating LOTS of images and increasing download sizes. Maybe I could fake everything else?

This is where we decided to try using Canvas. My only other experience with Canvas was when we had to figure out how to draw Adobe Illustrator files to Canvas and animate them for the 2012 Gather website. That’s worth its own blog post, but it didn’t involve photos.

I figured I would need to find a library to help. We ended up using these two projects:

Based on the Caman.js website, I knew that we could simulate both exposure and grain using Caman. I would need to write my own implementation of motion blur, though. We would use real photos to demonstrate depth-of-field: one image at the correct exposure for each f-stop.
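As a sketch of what that looks like with CamanJS (the element ID, image path and filter amounts are placeholders, and the motion-blur helper below is my own rough illustration rather than the GetSharp implementation), exposure and grain come from Caman’s built-in filters, while a crude motion blur can be faked by compositing offset copies of the frame onto a second canvas:

```js
// Exposure and grain via CamanJS's built-in filters.
Caman("#photo-canvas", "photos/f2-8.jpg", function () {
  this.revert(false); // reset to the original pixels before re-applying filters
  this.exposure(15);  // brighten/darken
  this.noise(10);     // film-style grain
  this.render();
});

// A rough horizontal motion blur (my own illustration): draw offset copies of
// the source canvas with alpha 1/(i + 1), which keeps a running average of the
// copies and approximates a box blur along the direction of movement.
function fakeMotionBlur(sourceCanvas, distance) {
  const out = document.createElement("canvas");
  out.width = sourceCanvas.width;
  out.height = sourceCanvas.height;
  const ctx = out.getContext("2d");
  ctx.drawImage(sourceCanvas, 0, 0); // start from the sharp frame
  const steps = Math.max(1, Math.round(distance));
  for (let i = 1; i <= steps; i++) {
    ctx.globalAlpha = 1 / (i + 1);
    ctx.drawImage(sourceCanvas, i, 0); // smear to the right by i pixels
  }
  return out;
}
```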

I was quickly able to plug our UI into the Caman engine, but chaining the standard filters proved extremely slow. We used the following tricks to speed up the app:

Implementing the thumbnails made a big difference in the perceived speed of the app but we’d like to find a way to make the rendering happen even faster.
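The thumbnail trick, roughly (a sketch of the general approach rather than GetSharp’s code; the element IDs and settings object are made up): run the filter chain on a small, downscaled copy first so the preview feels instant, and only then pay for the full-size render.

```js
function renderPreviewThenFull(settings) {
  // Small canvas first: far fewer pixels, so the filters return quickly.
  Caman("#thumb-canvas", function () {
    this.revert(false);
    this.exposure(settings.exposure);
    this.noise(settings.grain);
    this.render(function () {
      // Once the cheap preview is on screen, render the full-size canvas.
      Caman("#full-canvas", function () {
        this.revert(false);
        this.exposure(settings.exposure);
        this.noise(settings.grain);
        this.render();
      });
    });
  });
}
```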

The next step is probably moving the rendering out into a web worker. JavaScript being single-threaded means the entire UI locks up while rendering happens, which can be very frustrating. Rendering off the main thread might help relieve that frustration.
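The shape of that change might look something like the sketch below. It’s generic, not something GetSharp does today: Caman itself expects a canvas and the DOM, so the worker here operates on raw ImageData, with a stand-in brightness adjustment where the real filters would go.

```js
// ---- main.js ----
const canvas = document.querySelector("#photo-canvas");
const ctx = canvas.getContext("2d");
const worker = new Worker("filter-worker.js");

function renderInWorker(settings) {
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // Transfer the pixel buffer to the worker instead of copying it.
  worker.postMessage(
    { pixels: imageData.data.buffer, width: imageData.width, height: imageData.height, settings },
    [imageData.data.buffer]
  );
  worker.onmessage = (e) => {
    const result = ctx.createImageData(e.data.width, e.data.height);
    result.data.set(new Uint8ClampedArray(e.data.pixels));
    ctx.putImageData(result, 0, 0); // the UI thread only does the cheap blit
  };
}

// ---- filter-worker.js ----
self.onmessage = (e) => {
  const { pixels, width, height, settings } = e.data;
  const data = new Uint8ClampedArray(pixels);
  const boost = settings.exposure; // stand-in for the real filter chain
  for (let i = 0; i < data.length; i += 4) {
    data[i] += boost;     // R (Uint8ClampedArray clamps to 0–255)
    data[i + 1] += boost; // G
    data[i + 2] += boost; // B
  }
  self.postMessage({ pixels: data.buffer, width, height }, [data.buffer]);
};
```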

## Testing the Waters

We learnt a lot from user testing. We feel like we can learn a lot more from having this app used by a lot more people before we start building the native implementation. We started with lots of preconceived ideas about what this app would be and how it should work. Lots of those ideas have fallen flat and so we think it’s important to keep testing and see what is actually needed and wanted.

We’ve started dabbling in making this app work on native platforms (who am I kidding: on iOS), both by using the existing Caman-based JS engine and by writing a new one in Objective-C using Apple’s frameworks.

But we’ve made no hard and fast decisions. We need to learn some more about how people use this app and what they need it to be.

Please go have a look, and let us know what you think: GetSharp