Been a hot minute again, eh? Well, a lot has changed in the last year, and since my previous post on AI two years ago.
In 2023, AI was a source of consternation during the four-month strikes in America by actors and writers wanting to stop their likenesses effectively being stolen or recreated by others without financial recompense. Those strikes, though essential, fuelled a downturn in my TV VFX work in the UK. Ours is a US-funded world, it seems. Nevertheless, technology marches on and one can't be complacent.
Drawing Lines
I asked in my previous post whether AI-generated imagery could actually pass as photographic. The tech reached that level shortly thereafter.

At the camera club I belong to, we have a strict rule against using generative AI to create images. It's a woolly definition, but in simple terms one could argue that if a tool is AI-powered, forget it. So what about Topaz denoise, or sky replacement? The former is relatively harmless, I'd say, and the latter has been done since the dawn of photography. So is that an AI tool we can't use? Well, maybe…
Photoshop's Generative Fill has a clue in its name as to its methodology. Clearly it's not a tool for camera clubs, even in creative competitions, which are known for their Photoshop shenanigans and massive multi-layered files full of intricate texture work.

A wolf in sheep's clothing is the Photoshop Remove Tool. It's generative, but based only on your current image. Yet I've found images edited with this tool get flagged on Instagram as AI; the edit is recorded in the JPEG metadata somewhere. This may be an issue when entering some photographic competitions. In this photo of a telegraph pole I removed clouds that were distracting, rather than waiting for another sunny, less cloudy day. I could have cloned or painted them out, but that wasn't the tool I chose, and I needed to cook dinner.
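For the curious, you can poke around in a file yourself to see why a platform might flag it. This is a rough sketch, not a proper metadata parser: it simply scans a JPEG's raw bytes for two markers I'm assuming platforms look at — the "c2pa" label of a Content Credentials manifest, and the IPTC "TrainedAlgorithmicMedia" digital-source-type values that generative edits can write into XMP. A real checker would use exiftool or a C2PA library instead.

```python
def looks_ai_flagged(jpeg_bytes: bytes) -> bool:
    """Crude heuristic: does this file contain metadata markers
    associated with AI-assisted editing?

    Assumptions: the file embeds either a C2PA (Content Credentials)
    manifest, labelled "c2pa", or an IPTC digital source type such as
    "compositeWithTrainedAlgorithmicMedia" in its XMP packet.
    """
    markers = [
        b"c2pa",                     # C2PA manifest store label
        b"TrainedAlgorithmicMedia",  # IPTC digital source type suffix
    ]
    return any(marker in jpeg_bytes for marker in markers)


# Usage: pass the raw bytes of an image file.
# with open("telegraph_pole.jpg", "rb") as f:
#     print(looks_ai_flagged(f.read()))
```

A byte scan like this can obviously produce false positives, and stripping metadata defeats it entirely, which is rather the point about how fragile these provenance flags are.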
So where do we draw a line? Instead of a remove tool, is it more honourable to clone things out for hours, matching image noise, bokeh and tonal ranges? Should we go further back, shoot on film, create our own masks in the darkroom, using a library of negatives of clouds to replace skies over many painstaking hours? No. That’s unnecessary now.
What if we want to add elements to our images that weren't there before? That feels disingenuous to the spirit of a camera club, which should be about taking and sharing beautiful images with one another. Beautifully de-noised, cleaned-up-with-a-stamp-tool, worked-up-in-Photoshop mish-mashes of methods that may or may not be AI-driven.
The future
The key problem, to me, would be creating the lion's share of an image without taking a shot. Say you've a street scene that is yours and you add a walrus to it using gen AI. Clearly that walrus is incongruous and now becomes the subject, drawing the eye and exciting our minds. Currently we'd look at that and denigrate it for using AI. Meanwhile, in competitions, some of my images lack "an extra point of interest to draw the eye," according to judges. Mostly that's a byword for a person or animal. Give it a decade. We'll all have walruses in top hats in every woodland, sat atop mushrooms or on No Fishing signs at RSPB reserves.
So, in answer to the question at the head of this post – where do we draw the line? AI is already everywhere. It's coming to your phone, it might be in your smart TV, it's training marketing bots, it sends you spam, it waters content down, and pretty soon it's gonna plonk a walrus in the background of your Welsh landscape photos, "for balance".
Not allowing generative AI tools in competitions makes total sense right now, especially with abundant evidence of these tools being trained on stolen images, and a kind of dishonesty about the output too (see my DT image at the top). The future may well see us all soften our stance on using the tools in our own photographs, though I predict a huge wealth of disinformation will cause us to either believe or deny everything we see in photos, according to our own beliefs or political leanings.