The Hypocrisy of Convenience: The Selective Outrage Against Artificial Intelligence

  • Apr 6
  • 3 min read

Kaelyn Yu '29


In 2022, an AI-generated artwork won the Colorado State Fair art competition, upsetting not just the other participants but also onlookers around the globe. To an artist who has spent years honing their skills and techniques, that medal is an insult: a lifetime of human experience hollowed down to cheap mimicry. The backlash materialised immediately, with accusations of soullessness, theft, and disrespect levelled at AI companies by some of the biggest news outlets, including The New York Times and CNN. Yet at this very same moment, millions of students are using artificial intelligence to draft their essays, and professionals are using it to build meeting plans, all under the cover of “working smarter” and the claim that “AI is the future”. This is the defining contradiction of AI: we draw a bright ethical line blocking off one form of it while eagerly erasing that same line for another.


The current debate over AI ethics is emotionally compelling yet inconsistent. Critics castigate generative AI for lacking a “soul”, condemning AI pieces as entirely derivative and devoid of any genuine human feeling. They also argue that because AI art models train on copyrighted images, their output amounts to theft. But when AI serves as a functional, convenient tool in our daily lives, this moral scrutiny suddenly vanishes.


Ignoring this contradiction exposes how badly our moral concern is misallocated, which undermines our ability to build a future with this technology. It is important to point out that the core mechanisms of a chatbot like ChatGPT are almost identical to those of an AI art generator. Both are trained on large datasets of human-created works, including books and articles that are almost always taken without the creator’s consent or any compensation. Measured against the “ethical violations” of AI art, then, singling out image generators as thieves is arbitrary. Likewise, an AI-written article or book exhibits the same “soulless” quality attributed to AI art, so that objection is not exclusive to images either. To pretend that taking a writer’s excerpt is any less “stolen” or “soulless” than taking a painter’s piece is cherry-picking: a selective interpretation of the same action under different contexts.


I believe this selectivity undermines our ability to set fair, ethical rules for AI by entrenching a hypocritical standard. It happens because society is intrinsically biased towards its own convenience. The sheer frequency with which the world uses programs like ChatGPT or DeepSeek, compared to Imagine AI or Grok Imagine, is glaringly obvious. Moreover, after only two years, many people are dependent on AI to complete tasks, finish assignments, and generate ideas. Students, for example, can’t even see themselves writing an essay without the tool; a generation ago, they would have managed just fine. We fear the replacement of something we are reliant on. That doesn’t mean society is blind to the immorality: people understand that AI is stealing, that it is unethical and dishonest, yet they choose to ignore this for their own convenience.

This double standard is especially harmful because its costs compound over the long term. I believe this hypocrisy damages our community’s ability to foster discourse on AI, stifling attempts to address its very real negative impact on human labour and creativity. By focusing all our negativity on AI art, we ignore how other generative AI models devalue different fields of human effort just as surely. Artists receive fewer commissions, but students also lose critical-thinking skills, and junior writers lose out on jobs.


In sum, our response to AI is flawed. We cling to inconsistent moral standards that claim to “protect creativity” while automating away every other kind of labour. If we applied a single, clearer ethical standard to AI, instead of merely policing the art community, we could make real progress in the broader conversation about AI and its future uses.


The real issue isn’t whether AI can paint a sunset, but whether we can protect our jobs and build a future where AI elevates human potential instead of replacing it.
