There are typically two approaches to taking usable photos in low-light conditions. You can either use a slow shutter speed, which requires a tripod to avoid motion blur, or electronically increase the sensitivity of the camera’s sensor, which introduces ugly noise artifacts.

But there’s now a third approach that takes advantage of machine learning to artificially boost the brightness of a dark photo afterwards, with stunning results. Researchers at Intel and the University of Illinois Urbana–Champaign have come up with what might be the ultimate post-production tool for photographers who often find themselves shooting in low-light scenarios, like performances at concert venues or capturing nocturnal wildlife.

But it can even be used to improve the quality of the smartphone photos you snapped at a dark and seedy bar. As with countless other recent image-processing innovations, the research, published in a paper titled “Learning to See in the Dark,” takes advantage of deep learning techniques to train an algorithm on how a poorly exposed image should be brightened and color-corrected during post-processing.
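To make that idea concrete, here is a minimal sketch, in PyTorch, of the kind of fully convolutional network such a system might use: a dark raw capture is scaled by an exposure amplification factor and mapped directly to a bright image. This is not the authors’ released code; the layer sizes and the SeeInTheDarkNet name are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the core idea: a fully
# convolutional network that maps a dark raw capture, scaled by an
# exposure "amplification ratio", directly to a bright RGB image.
import torch
import torch.nn as nn

class SeeInTheDarkNet(nn.Module):
    """Toy encoder-decoder standing in for a U-Net-style low-light model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 4-channel packed Bayer input
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 12, 3, padding=1),             # 12 channels -> 3 RGB via pixel shuffle
            nn.PixelShuffle(2),                          # upsample back to full resolution
        )

    def forward(self, packed_raw, amplification):
        # Scale the short exposure toward the target brightness before the network sees it.
        x = packed_raw * amplification
        return self.decoder(self.encoder(x))

model = SeeInTheDarkNet()
dark_raw = torch.rand(1, 4, 128, 128) * 0.02     # fake, severely underexposed raw patch
bright_rgb = model(dark_raw, amplification=100)  # e.g. a 1/10 s capture aimed at a 10 s look
print(bright_rgb.shape)                          # torch.Size([1, 3, 256, 256])
```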

The researchers provided a neural network with a dataset containing 5,094 overly dark short-exposure images, along with corresponding long-exposure reference images that showed what each scene should look like with proper lighting and exposure. The images were snapped with Sony α7S II and Fujifilm X-T2 cameras, which use different sensor technologies.
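Training on such paired data is, at heart, straightforward supervised learning: the network sees the amplified short exposure and is penalized for how far its output lands from the long-exposure reference. The sketch below uses synthetic tensors in place of the real raw files and a deliberately tiny model; the exposure ratio, noise model, and choice of L1 loss here are assumptions for illustration, not a description of the released training code.

```python
# Hedged sketch of the supervised setup: short-exposure inputs paired with
# long-exposure references, trained with a simple L1 loss.
import torch
import torch.nn as nn

def fake_pair(amplification=250.0):
    # Stand-in for one (short exposure, long exposure) raw pair; the ratio of
    # the two exposure times sets the amplification applied to the dark input.
    bright = torch.rand(1, 4, 64, 64)                                 # pretend ground truth
    dark = bright / amplification + 0.001 * torch.randn_like(bright)  # noisy short exposure
    return dark, bright, amplification

model = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 4, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    dark, bright, ratio = fake_pair()
    pred = model(dark * ratio)                   # amplify, then let the network denoise/correct
    loss = torch.abs(pred - bright).mean()       # L1 loss against the long-exposure reference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```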

As someone who’s long battled with Photoshop to fix dark and grainy images, I find the results from this algorithm, even at this early research stage, staggeringly impressive. The photos go from something destined for a computer’s trash can to images that are genuinely usable, at least to a degree.
