This process relies on the most pronounced kind of reflection, because those are easier to pick out. Specifically, anything shot through double-pane or very thick glass tends to contain two overlapping reflections, one from the inner surface and one from the outer. With only a single reflection, figuring out which parts of the image are reflection and which aren't is a computationally difficult problem for an algorithm. With two identical, offset copies of the reflection, the computer suddenly has a point of reference.
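The two-surface effect can be sketched with a simple image model. This is not the MIT team's code, just a minimal illustration of the assumption the article describes: the observed photo is the transmitted scene plus two attenuated copies of the same reflection layer, shifted relative to each other. The offset and strength values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grayscale scene layers, values in [0, 1].
transmitted = rng.random((64, 64))   # the scene behind the glass
reflection = rng.random((64, 64))    # the scene reflected in the glass

# Thick or double-pane glass reflects off both surfaces, so the same
# reflection layer shows up twice: once directly and once offset by a
# few pixels (values below are illustrative, not from the paper).
dy, dx = 3, 5          # assumed offset between the two reflections
alpha = 0.3            # assumed reflection strength

shifted = np.roll(reflection, (dy, dx), axis=(0, 1))
observed = transmitted + alpha * (reflection + shifted)
```

Because the two reflection copies are identical apart from the shift, their repetition is a statistical signature an algorithm can search for, which a single reflection does not provide.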
The system created by the MIT team finds the edges of the reflection by looking for a repeating offset pattern. The algorithm splits the image into 8×8 blocks of pixels and calculates the correlation between them. Once the parts of the image that make up the reflection have been identified, the computer can selectively tune the levels to make the reflection less pronounced. You can see an example of the results in the image above. It's far from perfect, but this is just the first iteration of the technology.
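The core idea of searching for a repeating offset pattern can be demonstrated with a toy correlation search. This is a simplified stand-in for the MIT approach, not their actual algorithm: it builds a synthetic image containing a doubled reflection, then scans candidate shifts and scores each by correlating the image with a shifted copy of itself. The doubled reflection produces a correlation peak at its offset. The image size, offset, and search range are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "window shot": one reflection layer appearing twice,
# offset by (3, 5) pixels as if reflected off both glass surfaces.
reflection = rng.standard_normal((64, 64))
true_offset = (3, 5)
image = reflection + np.roll(reflection, true_offset, axis=(0, 1))
image -= image.mean()

def best_offset(img, max_shift=8):
    """Return the nonzero shift at which the image correlates most
    strongly with a shifted copy of itself. A repeated reflection
    layer shows up as a correlation peak at its own offset."""
    best, best_score = None, -np.inf
    for dy in range(0, max_shift + 1):
        for dx in range(0, max_shift + 1):
            if dy == 0 and dx == 0:
                continue  # skip the trivial zero-shift self-match
            score = np.sum(img * np.roll(img, (dy, dx), axis=(0, 1)))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

print(best_offset(image))  # the doubled reflection's offset
```

A real implementation would work block by block on edge maps rather than raw pixels, but the principle is the same: repetition at a consistent offset is what makes the reflection separable from the rest of the image.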
This process currently requires the offset between the two reflections to be fairly large so the algorithm can recognize them as distinct shapes, but that could improve in future versions. The team sees possible applications in consumer imaging: your phone might one day reduce reflections automatically when you snap a photo through a window. The algorithms could also help improve computer vision, since despite all the research that has gone into it, computers are still pretty bad at making sense of an image.