Samsung says it adds fake details to moon photos via “reference photos”

Samsung’s Galaxy S23 ad, featuring the moon photography mode.

Taking a picture of the moon on a Samsung device will give you a detailed picture of the moon. Some people are upset about this.

The problem is that Samsung’s software adds details that the camera can’t actually see, leading a Reddit user named ibreakphotos to accuse the company of “faking” moon photos. The user’s post shows how to trick Samsung’s moon detection, and it went so viral that Samsung’s press site had to respond.

Samsung’s incredibly niche “Moon Mode” will perform some photo processing if you point your smartphone at the moon. In 2020, the Galaxy S20 Ultra launched with a “100x Space Zoom” (it was really 30x), with this moon feature as one of its marketing gimmicks. The mode still features heavily in Samsung’s marketing, as you can see in this Galaxy S23 ad, which shows someone with a huge telescope on a tripod who is envious of the supposedly incredible lunar photos a pocket-sized Galaxy phone can take.

We’ve known how this feature works for two years now – Samsung’s camera app includes AI functionality specifically for moon photos – though Samsung’s latest post gives a little more detail. The Reddit post claims that this AI system can be tricked: ibreakphotos says you can take a picture of the moon, blur out all the detail and compress it in Photoshop, display it on a monitor, then photograph the monitor, and the Samsung phone will add the detail back in. The camera was caught making up details that didn’t exist at all. Couple this with AI being a hot topic, and accusations of counterfeit moon photos started pouring in.

On some level, using AI to come up with detail applies to all smartphone photography. Small cameras make for bad photos. From a phone to a DSLR to the James Webb telescope, bigger cameras are better: they simply capture more light and detail. Smartphones have some of the smallest camera sensors around, so they need a lot of software help to take photos of decent quality.

“Computational photography” is the phrase used in the industry. In general, many photos are taken in quick succession after you press the shutter button (and even before you press the shutter button!). These photos are aligned and merged into a single shot, cleaned up, denoised, run through some AI filters, compressed, and saved to your flash storage as a rough approximation of what you pointed your phone at. Smartphone makers need to throw as much software at the problem as possible, because no one wants a phone with a giant, protruding camera lens, and normal smartphone camera hardware can’t keep up.
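To make that concrete, here is a minimal sketch of the burst-merge idea at the heart of computational photography – not Samsung’s actual pipeline, just the general principle that averaging many noisy frames and then re-sharpening recovers detail a single exposure from a tiny sensor can’t. The Python/NumPy code and every function name in it are illustrative assumptions, not anything from Samsung’s camera app.

```python
# Illustrative sketch of burst merging, the core trick of computational
# photography. Not Samsung's pipeline; assumes the burst frames are already
# roughly aligned grayscale images of identical size.
import numpy as np

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of noisy frames into one cleaner frame.

    Shot noise is roughly independent between frames, so averaging N frames
    cuts the noise by about a factor of sqrt(N) -- this is why phones fire
    off many exposures for every shutter press.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

def unsharp_mask(image: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Crude detail boost: add back the difference from a blurred copy."""
    # 3x3 box blur via separable row/column averaging (kept dependency-free)
    kernel = np.ones(3) / 3.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return np.clip(image + amount * (image - blurred), 0, 255)

# Simulate a burst of 8 noisy shots of the same scene
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64))
burst = [scene + rng.normal(0, 25, scene.shape) for _ in range(8)]

merged = merge_burst(burst)
final = unsharp_mask(merged)
print("single-frame noise:", np.abs(burst[0] - scene).mean().round(1))
print("merged noise:      ", np.abs(merged - scene).mean().round(1))
```

Run it and the merged frame shows roughly a third of the single-frame noise, which is the whole point: software makes up for hardware the sensor doesn’t have.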

On the left, Redditor ibreakphotos takes a picture of a computer screen showing a blurry, cropped, compressed photo of the moon; on the right, Samsung makes up a bunch of details.

But lighting aside, the moon basically looks the same to everyone, all the time. The moon rotates, the Earth rotates, and the two orbit each other, but tidal forces have locked the moon into “synchronous rotation,” so we always see the same side of it; it only “wobbles” slightly relative to the Earth. If you create an incredibly niche camera mode for your smartphone, aimed specifically at lunar photography and nothing else, you can do a lot of cool AI tricks with it.

Who would know if your camera stack were just lying and pasting professionally shot, pre-existing photos of the moon into your smartphone image? Huawei was accused of doing just that in 2019. The company reportedly put photos of the moon into its camera software, and if you took a photo of a dim lightbulb in an otherwise dark room, Huawei would put moon craters on your lightbulb.

That would be pretty bad. But what if you took a step back and just used an AI intermediary instead? Samsung took a number of photos of the moon, trained an AI on those photos, and then applied the AI to users’ photos of the moon. Is that crossing a line? How specific can you be with your AI training use cases?

Samsung’s press release mentions a “detail enhancement engine” for the Moon, but doesn’t go into much detail about how it works. The article contains a few useless diagrams about moon mode and AI that all boil down to “a picture goes in, some AI stuff happens and a picture comes out”.

In the company’s defense, AI is often referred to as a “black box.” You can train these machine learning models toward a desired result, but no one can explain exactly how they work. If you’re a programmer writing a program by hand, you can explain what each line of code does because you wrote the code, but an AI is only “trained” – it programs itself. This is partly why Microsoft has had such a hard time getting the Bing chatbot to behave.

Samsung’s “Detail Enhancement Engine” is fed a number of pre-existing moon images.

The press release mostly talks about how the phone recognizes the moon and how it adjusts the brightness, but those points aren’t the problem – the problem is where the detail comes from. While we can’t say for certain, the image above shows pre-existing lunar images being fed into the “Detail Enhancement Engine.” The whole right side of this diagram is pretty suspect: it says Samsung’s AI compares your moon photo to a “high-resolution reference” and sends it back to the AI detail engine if it’s not good enough.
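Samsung’s diagram only gestures at what that loop looks like, but the flow it describes – enhance, compare against a high-resolution reference, and run the image through the engine again if the match isn’t good enough – would look something like the sketch below. Every function name, parameter, and threshold here is hypothetical; Samsung hasn’t published the actual algorithm.

```python
# A hypothetical reading of the flow in Samsung's diagram: enhance the shot,
# compare it against a high-resolution reference, and loop until the result
# is "good enough." All names and numbers are made up for illustration.
import numpy as np

def enhance_moon_shot(
    photo: np.ndarray,
    reference: np.ndarray,          # hypothetical pre-existing high-res moon image
    detail_engine,                  # hypothetical trained "detail enhancement" model (callable)
    similarity_threshold: float = 0.9,
    max_passes: int = 5,
) -> np.ndarray:
    result = photo
    for _ in range(max_passes):
        result = detail_engine(result)

        # Compare against the reference. Normalized cross-correlation stands
        # in for whatever similarity metric Samsung actually uses.
        a = (result - result.mean()) / (result.std() + 1e-8)
        b = (reference - reference.mean()) / (reference.std() + 1e-8)
        similarity = float((a * b).mean())

        if similarity >= similarity_threshold:
            break  # close enough to the reference: stop refining
    return result
```

If something like this is what the diagram means, the detail in the final shot is being steered toward pre-existing reference imagery rather than toward whatever light actually hit the sensor – which is exactly the part critics object to.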

That feels like Samsung is cheating a bit, but where exactly should the line be for AI photography? You definitely wouldn’t want an AI-free smartphone camera – it would be the worst camera in its class. Even non-AI photos from a big camera are just electronic interpretations of the world. They aren’t “correct” references for how things should look; we’re just more used to them. Even objects viewed with the human eye are just electrical signals interpreted by your brain, and they look different to everyone.

It would be a real problem if Samsung’s details were inaccurate, but the moon really does look like this. When a photo is completely accurate and looks good, it’s hard to argue with it. It would also be a problem if the moon detail were mistakenly applied to things that aren’t the moon, but taking a photo of a photoshopped image of the moon is an extreme case. Samsung says it will “improve Scene Optimizer to reduce potential confusion between taking a photo of the real moon and an image of the moon,” but should it even do that? Who cares if you can fool a smartphone with Photoshop?

The AI black box in action. It starts with a photo, all sorts of things happen in that neural network, and then a moon is recognized. Very helpful.

The key here is that this technique only works on the moon, which looks the same for everyone. Samsung can be super aggressive about generating AI detail for the moon because it knows what the ideal end result should look like. It still feels like cheating, but this is a hyper-specific use case that doesn’t offer a scalable solution for other subjects.

You could never use an aggressive AI detail generator on someone’s face, because everyone’s face looks different, and adding detail would make the photo look nothing like the person. The equivalent AI tech would be if Samsung specifically trained an AI on your face and then used that model to enhance photos it detected you were in. One day, a company may offer hyper-personalized AI enhancement trained on your old photos, but we’re not there yet.

If you don’t like your enhanced lunar photos, you can simply disable the feature – it’s called “Scene Optimizer” in the camera settings. Don’t be surprised if your moon photos look worse.
