As part of the next step in the artificial intelligence (AI) revolution, companies have started integrating AI tools into their applications, allowing AI to seep into nearly every facet of human life. For instance, you may have noticed how Meta has unilaterally integrated the capabilities of its AI model ‘Llama’ into applications such as Instagram and WhatsApp. For users of such applications who have little knowledge of, or use for, AI-driven tools, this integration feels rather forced, albeit inevitable. Companies integrating such AI-driven tools into their products ostensibly claim that it is for the benefit of the end user, which seems contrived at best. The latest innovation in AI integration revolves around photo editors. Earlier this year, following in the footsteps of Samsung and Apple, Google announced its AI photo editing tool ‘Reimagine’, which it plans to integrate into its latest phone model, the ‘Pixel 9’.
Photographs have never been manifestations of absolute truth; they are, however, strong representations of the photographer’s perception of truth. Although photography has long been riddled with deception, given that manipulation of photographs is nothing new, it would be utterly ignorant to claim that photographs have no evidentiary value at all. What people cannot see, they seldom believe; this changes remarkably once they have a physical manifestation of that sight in the form of a photograph. The correlation between photography and reality makes it exceedingly clear that photographs are, by default, taken as representations of truth, or at least they were in the past, before the world became lost in the maze of digital misinformation and fake news. This is supported by the intrinsic value attached to a photograph as a record-making tool, which predisposes people to believe in its veracity. In criminal forensics, for instance, every crime scene is carefully documented through photographs for reference and record-keeping. Now, what would happen if one were to completely alter and distort this perception by making you question the legitimacy of every photograph you see? It would completely alienate you from your sense of reality. This issue is exacerbated by the advent of AI-powered photo editing tools, which make altering photographs easy and accessible to everyone. Evidently, AI is getting a little too efficient at convincing people that a photograph has not been touched by the magic of AI.
One might ask how this is any different from Photoshop, which serves practically the same purpose, albeit with manual input. The answer lies in that last part, i.e., the human touch. While Photoshop indeed serves the same function as AI photo editing tools, it still requires some level of human intervention, coupled with time, skill and money, to execute. Au contraire, AI photo editing tools make manipulation of photographs frustratingly simple, with little to no effort, and the results are exceedingly convincing. This begs the question: are the guardrails companies have implemented, governing what kinds of alteration are and are not permissible, enough? For instance, one could take a Pulitzer Prize-winning photograph by a renowned journalist and use AI photo editing tools on a mobile device to alter or manipulate it, adding elements that sow seeds of discord or completely change the context in which it was taken. If one were to violate the sanctity of such photographs and change their essential qualities and contextual characteristics using AI, would it not be alarming? Let us think more radically: what if such tools were used politically, say, to weaponize a political campaign by maligning an opposing leader’s image? Case in point: Trump claiming that his opponent, Kamala Harris, allegedly used AI tools to manipulate images of her rally to mislead people into believing it had more supporters than it really did.
The basic assumption about the veracity of photographs is going to shift from ‘this is real until proven otherwise’ to ‘this is fake unless proven otherwise’. This shows that AI-powered editing tools have a psychological impact on the very fabric of our reality, as we now have more reasons to question the genuineness of everything than to believe in the candour of humanity. What is driving people to the point of exhaustion is that no one knows what is real anymore. One might note that your favourite influencers already use Photoshop to make their photographs more appealing on social media, and wonder how severe the issue of AI tools, which practically do the same thing, can really be. However, this is a flawed way of thinking: as explained above, Photoshop requires some level of human ingenuity and intervention, whereas AI tools require minimal time or effort to churn out bank after bank of altered photographs. Another exasperating issue is that not everyone has a discerning eye for distinguishing between genuine, photoshopped and AI-altered photographs, which makes people far more susceptible to believing what is clearly not real. Are we headed for a dark age in which mistrust is the norm? Yes. Can it be avoided by spreading awareness and implementing legal and regulatory guardrails? Probably, although we do not live in an ideal world, and if history has taught us anything, it is that truth is often obfuscated, albeit this time by AI.
Authors: Amartya Mody & Shaanal Shah