I've bemoaned the state of current Android camera apps, so in the first part of this two-parter I interviewed Martin Johnson, the programmer behind the well-respected Snap Camera HDR. As a follow-on, this article interviews Mark Harman, developer of Open Camera.
Open Camera has an enviable reputation in the Android camera app world. Google Play shows over 10 million downloads and a 4.3 rating based on 125,000 reviews, quite an achievement by any app's standards. It has earned that standing for a range of reasons, chief among them that it's open source and therefore free to install and use. I asked Mark why open source, and he responded:
It seemed to me that something as fundamental as a camera app should also be free.
Of course, free is all well and good, but it also needs to be a good camera app, and judging by the reviews, it achieves this admirably. That is in part due to its extensive feature set, including support for the Camera2 API, a manual mode, HDR, auto-leveling, and noise reduction, among others.
Mark Harman is a programmer by trade; he started back in the 1980s on a ZX Spectrum. He has a casual interest in photography, which has introduced him to computational areas such as focus bracketing and HDR that have then fed back into his programming. He now sticks to his smartphone for photography and doesn't use a separate camera. I asked Harman why he developed Open Camera:
In 2013, my phone of the time (a Galaxy Nexus) developed a problem where the stock camera would sometimes crash the phone. Given no one else seemed to have the problem, it was perhaps a hardware fault, but third-party camera applications didn't have the problem, so I started looking at them and decided I didn't like any that were around at the time. Even aside from the fault with my phone, Google's camera at the time was limited in terms of the range of options that my phone was capable of. I saw that the Camera API offered a lot more. I also had the idea of auto-leveling a photo based on the phone's orientation (the photo is rotated so the horizon is exactly level), which at the time was, I think, a unique feature on Android cameras. I wanted to write an application for that, and from there, it turned into a general-purpose camera.
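The core of auto-leveling is simple: read the device's roll angle from its orientation sensors and counter-rotate the image so the horizon comes out level. Here is a minimal sketch of that idea in Python, mapping a single pixel coordinate; the function name and signature are hypothetical, and Open Camera's actual implementation (in Java) rotates and crops the full image:

```python
import math

def level_point(x, y, roll_deg, cx, cy):
    """Rotate pixel (x, y) about the image centre (cx, cy) by -roll_deg.

    Counter-rotating every pixel by the device's measured roll angle
    means a horizon that was tilted by roll_deg comes out level in the
    saved photo. A real implementation would also crop away the empty
    corners the rotation introduces.
    """
    theta = math.radians(-roll_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(theta) - dy * math.sin(theta),
            cy + dx * math.sin(theta) + dy * math.cos(theta))

# With a 90-degree roll, a point at (100, 0) maps to roughly (0, -100).
print(level_point(100, 0, 90.0, 0, 0))
```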
Harman is modest about the capabilities of Open Camera, reluctant to pick out any single feature that impresses, but feels that the breadth of capabilities, particularly those linked to the Camera2 API, is what many users like. For those interested in what's coming up, he is currently working on panorama stitching, an on-screen histogram, zebra stripes, and focus peaking, so there is plenty to look forward to in what is an actively developed product. Then there is his closely linked Vibrance HDR app for creating HDRs from bracketed exposures. It uses the same algorithms as Open Camera but gives the user greater control over how they are parameterized.
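To give a flavor of what merging bracketed exposures involves, here is a toy per-pixel blend in the spirit of Mertens-style exposure fusion: each exposure is weighted by how close its pixels are to mid-grey, so the well-exposed detail from each bracket dominates the result. This is an illustrative sketch only, not the algorithm Vibrance HDR or Open Camera actually use:

```python
def fuse_exposures(frames):
    """Blend bracketed exposures per pixel.

    Each frame is a flat list of grey values in [0, 1]. A pixel's
    weight peaks at mid-grey (0.5) and falls off towards the blown-out
    or crushed ends, which is the essence of exposure fusion.
    """
    fused = []
    for pixels in zip(*frames):
        weights = [max(1e-6, 1.0 - 2.0 * abs(p - 0.5)) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Dark, middle, and bright exposures of the same two pixels: the
# well-exposed middle bracket dominates the fused result.
print(fuse_exposures([[0.05, 0.1], [0.5, 0.45], [0.95, 0.9]]))
```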
The GUI of any camera app can be difficult to develop due to the sheer number of options. I asked Mark what his approach was here.
It can be difficult handling competing requests: some people want more options/features [such] as on-screen buttons, others want it as simple as possible. More configuration options is the obvious solution to keep everyone happy, but I've yet to get round to doing that, plus I suspect people would still disagree on what the default user interface should look like.
Perhaps this philosophy shows across the camera app market: there is a range of approaches, and users can be quite entrenched in what they prefer, which means that simply promising a "better UI" is unlikely to bring success because there are so many competing demands. Harman isn't a fan of swipe-based interfaces, preferring accessibility via icons.
Open Camera comprises over 62,000 lines of code (which you can inspect yourself over at SourceForge). About 40,000 lines are actual code (including 10,000 lines of tests), with 18,000 lines of supporting XML. The recently released noise reduction feature took a year of development, with bug fixes and improvements continuing for some time. At the other extreme, the ghost image (multiple exposure) feature was added in a matter of hours and comprises around 100 lines of code.
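It's easy to see why a multiple-exposure feature can be so small: at its simplest, a ghost image is just a running average of aligned frames. The sketch below is a hypothetical illustration of that idea in Python, not Open Camera's actual (Java) code:

```python
def ghost_blend(frames):
    """Average a list of frames, each a flat list of grey values in [0, 1].

    Averaging N aligned exposures keeps anything static at full
    strength, while anything present in only some frames fades to a
    translucent "ghost" of its original brightness.
    """
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# The first pixel stayed at 0.8 in all three frames, so it keeps its
# value; the second was lit only in the first frame, so it fades to a
# third of its brightness.
print(ghost_blend([[0.8, 0.9], [0.8, 0.0], [0.8, 0.0]]))
```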
While users might be interested in the capabilities of third-party camera apps, smartphone manufacturers present problems. Not only is there a wide array of hardware, from single through to quad cameras, but manufacturers can also decide how much of that hardware to expose to developers. Harman is positive about Android 9, which introduces support for managing multiple cameras and may lead to some standardization, although few devices currently run it. Likewise, the Camera2 API has been successful in meeting developers' demands, with Google able to expand its capabilities without needing to release a Camera3 API. He also believes Google is at the forefront of feature development, with its HDR+ a good example.
Thinking about the immediate future of camera apps naturally led on to where Harman thinks manufacturers are leading smartphone camera development:
I think Google is right in that there is a lot of scope in computational photography, and continued advancement in processing power and memory will help.
This is something I've touched upon when thinking about the future of smartphone cameras, pressing for camera manufacturers to integrate computational platforms into their devices. That would allow them to leverage the power of computational photography and link it to the best-quality raw imagery. Given Sony's broad technical prowess, it is perhaps best placed to achieve this initially. Harman, however, takes a different view. Given that smartphones account for the mass market and that "for most people, a phone camera has become good enough," where will this go?
Long-term is the question of whether [smartphones will] in turn be replaced by something else. The future of wearables is still unclear, but imagine a 'Black Mirror' style future where wearable devices record everything, and AI picks out shots for your photo collection.
Maybe that's a dystopian future where the photographer no longer exists! What is certain is that the camera remains one of the cornerstone features of the smartphone, and development and innovation are accelerating. These are exciting times to be both a developer and a photographer.
Lead image courtesy of Cameron Kirby via Unsplash, used under Creative Commons.