Building a DSLR Simulator
I work at a strategy and design agency called Lee ter Wal Design. We’re based in the heart of Auckland and we deliver top-notch work for our clients. Our directors like to encourage making and so naturally we decided to come up with a project we could work on in between client work. A project where we could play and experiment. And hopefully a project that would result in something useful.
The gem of an idea was an app that taught you how to use your DSLR on manual. We called it GetSharp.
##The Idea
I remember how in University my photography class and I struggled to grasp exposure and how it worked. We had exercises where for hours on end we were asked to “play” with the settings and get a feel for what they do. I don’t like getting a “feel” for something. I like knowing how it works.
When I started reading about exposure I discovered that it’s actually a simple formula; once you understand stops and how they work together. I started talking to my classmates about the exposure triangle and how when you move one corner (say “Aperture”) and you want to maintain the same exposure (area of the triangle) then you need to move one of the other vertices an equal number of stops in the opposite direction.
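To make that concrete, here's a rough sketch of the stop arithmetic (not code from GetSharp): a stop doubles or halves the light, and the standard exposure value formula ties aperture, shutter speed and ISO together.

```js
// Rough sketch of the stop arithmetic; not code from the app.
// EV = log2(N^2 / t), adjusted for ISO relative to a base of 100.
function exposureValue(fNumber, shutterSeconds, iso) {
  return Math.log2((fNumber * fNumber) / shutterSeconds) - Math.log2(iso / 100);
}

exposureValue(8, 1 / 125, 100);   // roughly 13
exposureValue(5.6, 1 / 250, 100); // roughly 13: opening up one stop (f/8 to f/5.6)
                                  // is cancelled by a one-stop faster shutter
```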
The trick was learning what each of the vertices afforded and cost you. That was the key to our DSLR simulator.
##The Competition
Upon starting to research the space, we discovered three main competitors. They each had their own problems.
CameraSim
CameraSim was probably the most robust simulator we’d seen. It scared us a little bit. But there were a few problems:
- CameraSim didn’t teach; we wanted to teach
- CameraSim didn’t give real-time feedback, which we felt was important to learning
- CameraSim didn’t have many example shots. The cost of producing each one seemed extremely high. We wanted to provide the best experience for a low “per-shot” cost. It needed to be easy to add pictures to our simulator
- CameraSim used Flash. Apps need to work on devices like the iPad and iPhone, and Flash doesn’t run there.
Camera Simulator
- Camera Simulator seemed a lot simpler than CameraSim
- Camera Simulator did give real-time feedback
- Camera Simulator didn’t do real depth-of-field (they used transparency combined with blurred versions of the image to fake it; we tried that, and it sucked. We had to do it better.)
- Camera Simulator was Flash-based
- They only offered 3 images.
SimCam
- SimCam seems like it came out of a different era. Native form controls. Page reloads. It isn’t competing.
- SimCam does offer very valuable teaching.
Winning meant:
- Providing lots of examples people could play with. This meant developing a system that was easy and inexpensive to add to.
- Real-time visual feedback. We wanted people to be able to see the results of their changes without committing to the shutter. The chances of figuring out the stop system seemed a lot higher this way.
- Post-shutter teaching. We wanted our comments post-shutter to be useful and to teach the user something.
- Teaching how to use “stops”, not how to use a DSLR. Skeuomorphism was a big deal when we started developing GetSharp. It was tempting to make the UI an actual DSLR camera. In fact, we started there. When we tested with real users, however, they were freaked out by a camera. “I don’t know how to use those!” instead of “I can totally play with these.”
##The Design
We started off with a very flat prototype to test the idea on paper. But the plan was to become more skeuomorphic once we had a digital prototype.
As mentioned earlier, our first digital prototype of the UI actually looked like a DSLR. We thought this would make sense to people, that they would recognise the DSLR-like interface and know what to do. Turns out we were wrong, for two reasons:
- If people were comfortable using their DSLR’s controls already, they probably had little use for this app. DSLRs scared the audience we were working with.
- Our app didn’t work like a DSLR. Unlike CameraSim, we don’t show a “live” moving scene which gets converted into a “photo” once the user clicks on a Capture button. Our app just showed you “photos”: it allowed you to change settings and see the effect of that change on the “photo”. There was no need for a shutter button.
The problem with lots of skeuomorphic implementations is that the digital design that’s mimicking the physical design doesn’t in fact work in the same way as the physical design. In our case, the process of using our app was actually extremely different to using a DSLR, so mimicking that interface was not only daunting for users but also lied to them. It suggested they should use this like a camera, which they shouldn’t.
We ended up with a very “physical” feeling interface though. The skin, theme or style of it is still very camera inspired. That feeling is important. (You’ll notice that in iOS7, the Notes app still has a paper texture, for the feel, without pretending to be a legal pad.) We’re using these textures in a far more heavy-handed manner than the iOS7 Notes app but we’re doing it for the same reasons.
Our final design ended up being a “digital device” that one used to change the photograph. Everything that modifies the appearance of the photo is in that “device”. It feels physical. Everything else, the digital flatness, is UI that progresses you through the app; none of it changes the output of the photos.
Our wheel design was very much inspired by the iPod, of course. But the origins of it were:
- What UI would be easy/fun to use on an iOS device. Sliders are meh, steppers are tedious. Scroll wheels seemed to work.
- What UI could be made physical and make sense. The scroll wheel is easily representable as a dial. Everyone knows what it is and how to use it. Also, most dials don’t require a button to be pressed for their changes to be implemented (there’s no “Engage this volume change” button on a stereo.) And, often dials are used for many different settings, toggled via the use of buttons (like we’re doing with Aperture, ISO and Shutter speed above the dial.)
Many of the things we learnt throughout the design process didn’t spring from our own use and testing of the app (in fact, we thought the original skeuomorphic DSLR interface worked pretty well.) We learnt these lessons by watching people use the app. Seeing them struggle. (Believe me, watching someone struggle with a design you thought was pretty awesome is a very humbling experience!) I can’t exaggerate the value we discovered in user-testing. This app has been tested and tested. Tweaked and tested again.
That’s not saying it’s perfect. We’re going to need to keep improving, testing and tweaking.
##The Technology
I don’t know how to use Flash. That might be the cause of my disdain for it, or I might not have bothered to learn Flash because of it. Either way, when I was supposed to learn Flash, I didn’t. Instead, I learnt how to trick my lecturer at university into thinking I had built something in Flash when in fact I had just harnessed the power of a new series of technologies, collectively known—bizarrely—as HTML5.
Clearly we weren’t going to be using Flash. At the start, it was unclear that we’d be using Canvas though.
Our first prototype involved a system where we’d take a photo, blur it progressively in Photoshop and then adjust the opacity of different “layers” (<img>s) to attempt to find the right exposure. This ended up looking bad, much like “Camera Simulator”’s implementation. You can’t fake depth-of-field. This was the lesson.
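For what it’s worth, the layering idea looked roughly like this (a simplified sketch with made-up element IDs, not the actual prototype code):

```js
// Simplified sketch of the abandoned layering approach; element IDs are made up.
// Pre-blurred copies of the photo (produced in Photoshop) sit stacked on top of
// the sharp original, and we cross-fade them by changing their opacity.
var layers = [
  document.getElementById('photo-sharp'),
  document.getElementById('photo-blur-light'),
  document.getElementById('photo-blur-heavy')
];

// 0 = everything sharp, 1 = maximum fake "depth of field"
function setFakeBlur(amount) {
  layers[1].style.opacity = Math.min(amount * 2, 1);
  layers[2].style.opacity = Math.max(amount * 2 - 1, 0);
}
```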
Relying on Photoshop also meant that we’d need to do a lot of processing of images beforehand, creating LOTS of images and increasing download sizes. Maybe I could fake everything else?
This is where we decided to try using Canvas. My only other experience with canvas was when we had to figure out how to draw Adobe Illustrator files to Canvas and animate them for the 2012 Gather website. That’s worth its own blog post, but it didn’t involve photos.
I figured I would need to find a library to help. We ended up using these two projects:
- Caman.js provided the rendering engine we needed.
- Excanvas.js provided IE support.
Based on the Caman.js website I knew that we could simulate both exposure and grain using Caman. I would need to write my own implementation of motion blur, though. We would take real photos to demonstrate depth-of-field: one image at the correct exposure for each f-stop.
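Chaining Caman’s stock filters looks something like this (the selector and the values here are placeholders, not our production settings):

```js
// Minimal sketch using Caman's built-in filters; selector and values are placeholders.
Caman('#preview-canvas', 'scene.jpg', function () {
  this.revert(false);   // restore the original pixels before re-filtering
  this.exposure(20);    // brighten (Caman's -100..100 scale, not true stops)
  this.noise(10);       // add grain, as you'd see at a higher ISO
  this.render(function () {
    // the preview canvas now shows the filtered photo
  });
});
```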
I was quickly able to plug our UI into the Caman engine but it proved extremely slow to chain the standard filters. We used the following tricks to speed up the app:
- We combined the exposure and grain filters into one filter (there’s a sketch after this list). This involved diving into the math used by Caman’s filters to figure out how they worked and merging the two processes. (Caman’s exposure filter doesn’t map to stops, so I had to fix that.)
- We had to add in motion blur for slow shutter speeds. A number of implementations were tried before we found something that worked fast enough.
- We decided to render out a small thumbnail at the new exposure, blur it and display it fullscreen while the larger preview was being processed. It makes the app feel faster (providing some kind of feedback, quickly) but also allows you to immediately see if you’ve made a big mistake.
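Here’s a rough sketch of the shape of that combined filter, registered as a Caman plugin. The exposure math is simplified to “multiply by 2 to the power of the stops” and the grain is plain random noise; names and values are illustrative, not the production filter.

```js
// Rough sketch of a combined exposure-plus-grain filter as a Caman plugin.
// The math is deliberately simplified; names and values are illustrative.
Caman.Filter.register('exposureAndGrain', function (stops, grain) {
  var factor = Math.pow(2, stops); // one stop doubles (or halves) the light

  return this.process('exposureAndGrain', function (rgba) {
    var noise = (Math.random() * 2 - 1) * grain;
    rgba.r = Math.max(0, Math.min(255, rgba.r * factor + noise));
    rgba.g = Math.max(0, Math.min(255, rgba.g * factor + noise));
    rgba.b = Math.max(0, Math.min(255, rgba.b * factor + noise));
    return rgba;
  });
});

// Then it is used like any built-in filter:
// Caman('#preview-canvas', function () { this.exposureAndGrain(1, 8); this.render(); });
```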
Implementing the thumbnails made a big difference in the perceived speed of the app but we’d like to find a way to make the rendering happen even faster.
The next step is probably moving the rendering out into a web worker. JavaScript being single-threaded means the entire UI locks up while rendering happens, which can be very frustrating. Asynchronous rendering might help relieve this frustration.
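A sketch of that direction, using a plain Worker and ImageData (the file name, message shape and per-pixel loop are all illustrative):

```js
// A sketch only: file names, message shape and the filter loop are illustrative,
// not the real GetSharp code.

// main.js: hand the pixel data to a worker so the UI stays responsive
var worker = new Worker('render-worker.js');
var canvas = document.getElementById('preview-canvas');
var ctx = canvas.getContext('2d');

function renderAsync(stops, grain) {
  var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  worker.postMessage({
    pixels: frame.data,
    width: frame.width,
    height: frame.height,
    stops: stops,
    grain: grain
  });
}

worker.onmessage = function (event) {
  var frame = ctx.createImageData(event.data.width, event.data.height);
  frame.data.set(event.data.pixels);
  ctx.putImageData(frame, 0, 0); // only this paint touches the main thread
};

// render-worker.js: the heavy per-pixel work happens off the main thread
onmessage = function (event) {
  var pixels = event.data.pixels;
  var factor = Math.pow(2, event.data.stops);
  for (var i = 0; i < pixels.length; i += 4) {
    var noise = (Math.random() * 2 - 1) * event.data.grain;
    pixels[i]     = Math.min(255, pixels[i] * factor + noise);     // red
    pixels[i + 1] = Math.min(255, pixels[i + 1] * factor + noise); // green
    pixels[i + 2] = Math.min(255, pixels[i + 2] * factor + noise); // blue
  }
  postMessage({ pixels: pixels, width: event.data.width, height: event.data.height });
};
```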
##Testing the waters
We learnt a lot from user testing. We feel like we can learn a lot more from having this app used by a lot more people before we start building the native implementation. We started with lots of preconceived ideas about what this app would be and how it should work. Lots of those ideas have fallen flat and so we think it’s important to keep testing and see what is actually needed and wanted.
We’ve started dabbling in making this app work on native platforms (who am I kidding, on iOS) both using the existing Caman-based JS engine and writing a new one in Objective-C using Apple’s frameworks.
But we’ve made no hard and fast decisions. We need to learn some more about how people use this app and what they need it to be.
Please go have a look, and let us know what you think: GetSharp