Photography is hard. It's even harder when you forget your camera. But, as Chase Jarvis always says, the best camera is the one you have with you, and with the rise of computational photography that is truer than ever.
Every week, I photograph an open mic for magicians here in Toronto. It's great fun, and while it doesn't pay especially well, it's a good way to spend a Tuesday evening. A few weeks ago I got out of my Lyft and realized that things were... lighter than I thought. I had forgotten my camera. After a half-second panic attack, I took a deep breath and walked inside. I explained the situation and figured that my phone, the Pixel 2 XL, would be good enough for this sort of event.
Thankfully the Pixel 2 XL has a feature called Night Sight, which is similar in spirit to Olympus's pixel shift: it takes multiple photos and uses the natural movement of your hand, tracked by the accelerometer, to merge the frames together for more sharpness, more dynamic range, and less noise. I turn on Night Sight basically any time I shoot on my phone, even during the day. So in the dark theater, with Night Sight, I got some okay images of the local magicians performing their acts for the small audience.
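To get a feel for why merging a burst of frames helps, here is a toy sketch in Python/NumPy of the align-and-average idea. This is an assumption-laden simplification, not Google's actual pipeline: real Night Sight does robust tile-based alignment, ghost rejection, and tone mapping, while this demo just shifts frames back by known hand-shake offsets and averages them, which cuts random noise by roughly the square root of the frame count.

```python
import numpy as np

def merge_frames(frames, shifts):
    """Toy burst merge: undo each frame's (dy, dx) hand-shake offset, then average.

    Averaging N aligned frames reduces random sensor noise by ~sqrt(N)
    while preserving detail. (Real pipelines align sub-pixel and per-tile;
    np.roll with known integer shifts stands in for that here.)
    """
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

# Demo: a synthetic scene "photographed" four times with slight hand
# shake and additive sensor noise (all values here are made up).
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (64, 64))       # ground-truth image
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]     # per-frame hand-shake offsets
frames = [np.roll(scene, s, axis=(0, 1)) + rng.normal(0.0, 0.1, scene.shape)
          for s in shifts]

merged = merge_frames(frames, shifts)
single_err = np.abs(frames[0] - scene).mean()  # error of one noisy frame
merged_err = np.abs(merged - scene).mean()     # error after merging four
```

Running this, the merged image's mean error comes out noticeably lower than any single frame's, which is the whole trick: shoot many cheap noisy exposures and let math, rather than a bigger sensor, buy you the clean image.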
Being able to shoot a (small) event on my phone made me realize just how far computational photography has come, and I really feel this is where photography is headed. Night Sight, Portrait Mode, and HDR+ are all computational features, and even Canon's dual-pixel sensors let you shift the bokeh a little after the fact. Olympus, Panasonic, and Pentax all have some form of pixel-shift technology, with Fujifilm and Phase One jumping on board in the larger-than-full-frame market as well.
With megapixels reaching a point of diminishing returns (show me a photo taken at 50 MP next to one at 26 MP and tell me which is which without zooming to 100%), and even cheap computers able to run Photoshop and Lightroom, I can see computational photography really taking off in the enthusiast market. Even Sony's new AF algorithms are a big leap forward in applying machine learning to photography, with face detection that works on animals. I foresee things like better depth-of-field simulation on high-end cameras, so we could get the shallow look of an f/1.2 while actually shooting stopped down at f/2.8, along with better noise reduction and smarter auto modes for street shooters and beginners.
While there will always be purists, I feel like we are reaching a point where lenses are sharp enough, cameras are good enough, and lights are bright enough that we will be shopping for computational features rather than just resolution or ISO performance. The Light L16 was purely computational: a whole lot of cellphone cameras stitched together for some bokeh, resolution, and zooming magic. These features are starting in the consumer market and slowly moving toward the professional market.
What do you think about computational photography? Flash in the pan? Or the future?