Computational Photography, Artificial Intelligence and Alice Camera
Computational Photography and Sensor Size
It is an incontrovertible fact that larger camera sensors produce objectively better image quality in a single capture than smaller ones. Their larger surface area means they get hit with more photons from the scene and capture more useful information about it. This translates into a better signal-to-noise ratio, more dynamic range, more accurate colours and better low light performance. And, if the photographer is any good, a better image.
This logic, which has driven the digital camera market since its inception, was upended by smartphone companies who realised they could use another dimension to improve the performance of their tiny sensors: time.
By capturing a sequence of images in rapid succession and merging them with an algorithm, smartphone cameras capture more useful information about the scene, improving their signal-to-noise ratio, dynamic range, colour reproduction and low light performance. They use computational photography to produce the image quality of a much larger sensor at the cost, size and weight of a much smaller one.
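The core statistical idea behind multi-shot stacking can be sketched in a few lines. This is a deliberately simplified illustration, not Alice's actual pipeline: real burst pipelines must also align frames and handle motion, which is omitted here. Averaging N independent noisy captures of the same scene reduces the noise standard deviation by roughly a factor of the square root of N:

```python
import random

def capture_frame(scene, noise_sigma=8.0):
    """Simulate one noisy exposure: true scene values plus sensor noise."""
    return [v + random.gauss(0.0, noise_sigma) for v in scene]

def stack(frames):
    """Merge a burst by per-pixel averaging (alignment step omitted)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def noise_std(image, scene):
    """Root-mean-square error against the true scene values."""
    return (sum((a - b) ** 2 for a, b in zip(image, scene)) / len(scene)) ** 0.5

random.seed(0)
scene = [float(100 + (i % 40)) for i in range(5000)]  # synthetic "ground truth"

single = capture_frame(scene)
burst = stack([capture_frame(scene) for _ in range(16)])

print(round(noise_std(single, scene), 2))  # close to the sensor noise, ~8
print(round(noise_std(burst, scene), 2))   # roughly 8 / sqrt(16), ~2
```

A 16-frame burst from a small sensor therefore behaves, noise-wise, like a single capture from a sensor with sixteen times the light-gathering ability, which is exactly the trade smartphones exploit.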
The Micro Four Thirds system uses a sensor roughly eight times larger than a typical flagship smartphone's, but about a quarter the size of a full-frame sensor. By applying similar computational photography techniques to a sensor of this size, there is no reason the already-excellent image quality of an MFT camera cannot be pushed beyond that of a larger full-frame camera, at a significantly lower cost and weight.
That is exactly what we are doing with Alice.
What about AI?
Multi-shot stacking algorithms are not the only computational photography techniques capable of increasing image quality. Artificial intelligence algorithms, or more specifically image processing techniques powered by deep neural networks and machine learning, can do so as well.
There is a lot of hype and fear around AI, often justifiably so. There is also a lot of confusion, disinformation and misunderstanding. In the wrong hands it can be used to produce dangerously convincing fabrications, but when used well it has the power to change photography profoundly for the better, to an extent not seen since the transition from film to digital technology.
The magic of AI comes from the fact that a programmer does not have to reason about the logic of an algorithm and communicate that explicitly to a machine which will then follow their instructions precisely.
Instead, an AI algorithm just has to be shown what its inputs will be like and what its outputs should be, and it learns the algorithmic logic automatically. By training on a large number of input/output pairs, AI algorithms are able to replicate almost any image-to-image transformation, even complicated, rich, subjective transformations like those performed by a skilled human retoucher.
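The learn-from-pairs principle can be illustrated with something far simpler than a deep neural network. The toy below fits a linear tone curve to example (input, output) pixel pairs by least squares: the programmer never writes the "brighten" rule, the algorithm recovers it from the examples. The "retoucher" here is hypothetical and the linear model is a stand-in for the gradient-descent training used by real networks:

```python
def fit_tone_curve(pairs):
    """Least-squares fit of out = gain * in + offset from example pixel
    pairs; a toy stand-in for training a deep network on image pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

# Hypothetical "retoucher": brightens every pixel by a fixed gain and lift.
inputs = [float(v) for v in range(0, 256, 8)]
targets = [1.2 * v + 10.0 for v in inputs]

gain, offset = fit_tone_curve(list(zip(inputs, targets)))
print(round(gain, 3), round(offset, 3))  # recovers gain ~1.2, offset ~10.0
```

A real network does the same thing at vastly greater scale: millions of parameters instead of two, and transformations far richer than a straight line, but the training signal is still nothing more than input/output examples.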
Instead of directly capturing more light from the scene, AI algorithms draw from the data provided to them during training to infer more information about the scene. They can be used to objectively improve image quality in the way that using a bigger sensor does, but they can also be used to improve image quality in a far more subtle, expressive and subjective way that was previously only possible for those deeply skilled in post-processing techniques.
With Alice, these algorithms can be run in real-time on the camera, improving the image quality of an already exceptionally high quality MFT sensor in ways that have not been possible before.
The Four Rules of Computational Photography
AI-driven computational photography algorithms will only be effective if they are built responsibly and with the genuine needs of content creators in mind. To that end we have developed the following design principles for our core algorithms, heavily inspired by those stated by Hasinoff et al. in their seminal work on the HDR+ algorithm:
Be Natural. The algorithms must be faithful to the scene and not distort reality by warping, hallucinating details or otherwise deceiving the viewer.
Be Conservative. The result of the algorithms should always be at least as good as the original image, and in extreme situations should degrade to a conventional photograph rather than producing any sort of artefact.
Be Expressive. The algorithms should always be aimed at increasing the creative options available to the photographer, never restricting them or homogenising their output.
Be Controllable. It should always be possible to adjust the strength of the effect of the algorithms and easily switch them off when they are not wanted.
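One way to read the "Be Conservative" rule in code is a robust merge: frames that disagree with a reference frame at a given pixel are rejected rather than averaged in, so in the worst case the output degrades to the reference frame itself, never worse than a conventional photograph. This is a simplified sketch of that idea under assumed names and a hand-picked threshold, not Alice's algorithm:

```python
def robust_merge(reference, others, threshold=20.0):
    """Per-pixel merge that only averages in frames agreeing with the
    reference; if every other frame is rejected at a pixel, the result
    degrades to the reference frame value, producing no artefacts."""
    merged = []
    for i, ref_px in enumerate(reference):
        acc, count = ref_px, 1
        for frame in others:
            if abs(frame[i] - ref_px) <= threshold:  # reject moving/misaligned content
                acc += frame[i]
                count += 1
        merged.append(acc / count)
    return merged

ref = [100.0, 100.0, 100.0]
good = [102.0, 98.0, 101.0]    # agrees with the reference everywhere
ghost = [100.0, 240.0, 100.0]  # a moving subject corrupts the middle pixel
out = robust_merge(ref, [good, ghost])
print(out)  # middle pixel averages only ref and good; the outlier is rejected
```

The same structure also satisfies "Be Controllable": lowering the threshold makes the merge increasingly conservative, and a threshold of zero switches the effect off entirely, returning the conventional single capture.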
There is plenty to fear about AI but its power is undeniable, and one way or another it’s going to change photography and content creation in a very profound way. With Alice we are embracing it to increase the expressivity of our tools and drive the photographic art form and modern content creation forward.
Written by Dr Liam Donovan, CTO of Photogram.
Applying computational photography techniques to images from a larger sensor is exactly what got me excited about Alice. I want an MFT camera that uses computational photography techniques like the ones in my Google Pixel. But I'm not hearing a clear statement that you are planning to use multi-shot stacking algorithms (such as those used to create HDR+ images) in Alice. I'm happy that you will use other computational techniques on the images, but I'm skeptical that any of them could matter as much as multi-shot stacking. Will Alice use multi-shot stacking, and if not, why not?