Advising Reasonable AI Criticism
AI is an incredibly divisive technology with a lot of issues that deserve analysis, discussion, and criticism. Unfortunately, much of that criticism is poorly argued and fails to actually achieve anything.
I’m firmly of the opinion that people should voice their opinions about AI, whether positive or negative. However, these opinions shouldn’t be attacks and should be framed with constructive intent. It is true that those vehemently for AI have been hostile towards those against it, but the opposite is also true – those vehemently against AI have been hostile towards those for it. In the end, nobody gains anything from a flame-war shouting match, but both sides gain from reasonable, measured criticism and analysis.
On the AI spectrum, I try to be realistic about its application and usage. This article leans more towards advising constructive input from AI haters, because those are the people I have most frequently had negative run-ins with. In the communities I frequent, they are more vocal and, in my opinion, have the most to gain from expressing their criticism with more nuance. My intent with this article is not to insult or belittle, and I’d appreciate it if you didn’t misuse it to do so.
I should be clear that I am, for the most part, talking about generative AI here, though certain portions can be extended to more general discussion of machine learning. This article is not an attempt to critique or bolster the topics surrounding AI; rather, it is an attempt to discuss the poor discourse that takes place around the technology itself, and how that discourse can be improved.
False Criticism
Due to the hate AI garners, many people feel the need to criticise it, and AI gives them a lot to criticise: training’s disregard for intellectual property, energy usage and its impact on the climate, hallucinations, disruption of jobs amidst a cost-of-living crisis, unwanted shoehorning of AI features into existing products, use for mass surveillance, ensloppening of the web, slimy big-tech actions, et cetera, et cetera.
These are absolutely issues – big issues – and they need addressing. They shouldn’t cloud objective discussion of the technology itself, though. A mere mention of something positive regarding AI shouldn’t net someone the label of ‘Nazi’ or ‘shill’, and the mention of these issues shouldn’t net someone the label of ‘Luddite’ or ‘fearmonger’. Some people take this too far: their hate becomes a blind one, and they lose the ability to evaluate objectively. AI has issues, but it also has benefits.
On platforms like Bluesky[1] I frequently see AI called an entirely failed technology, with words to the effect of “It is completely useless”. There are also frequent comparisons to previous hype-driven drivel such as NFTs. I agree, there are a lot of similarities to NFTs: a ton of overhyped tech bros shilling the ‘next big thing’, grifters coming out of the woodwork to pump their egos and wallets, complete domination of the news cycle, and the legally dubious acquisition of intellectual property.
However, AI has actual valid applications and can be genuinely useful. Yes, it is absolutely misused by people, but so are knives, and I don’t see them being called useless or otherwise completely dismissed. AI can’t objectively be considered useless as a whole; it obviously has uses. To lie and exaggerate beyond belief harms an argument and allows it to be easily dismissed.
As a developer and writer, I’ve found uses for AI. I’ve also found a lot of issues with it. I think anyone who approaches AI neutrally and without preconceptions would find the same, and would come away thinking it is an incredibly interesting technology with great potential, but one that is plagued by many issues and generally misunderstood.
I’ve disappointedly watched as people who are fairly realistic about AI attempt to discuss it and are dogpiled, while people who are meaningfully critical of it are slandered and told they’re replaceable and that their creations have no worth.
Don’t be angry at the technology of AI; be angry at those who abuse and misuse it.
People Intentionally Misunderstand
Some people hate AI to the point that they are blind in their hate. This intense negativity often manifests as a refusal to engage directly with the technology, meaning their opinions cannot be entirely comprehensive and informed. That leaves their main sources of information as sensationalised second-hand retellings from news and social media, often biased towards their existing views: providing information they already agree with and omitting information they don’t. This creates an echo chamber, reinforcing existing biases and hindering a nuanced understanding.
On the opposite side of the discussion, some people love AI to the point they are blind in their love, and this intense positivity manifests as refusal to consider the opinions of and impact towards those who have gripes. Their love locks them into their own echo chamber.
There are people who don’t care about AI and thus have no opinions formed beyond what they might have seen in passing, and there are others who follow AI closely but attempt to stay central and impartial, however successful they may be. Realistically, given the astounding rate of AI progress and development at the current time, it is impractical for anyone who doesn’t dedicate large amounts of time to the subject to stay genuinely up to date, even if they wish to.
Unfortunately, many of the people who attempt to be candid about AI’s abilities and only wish to inform find themselves in the crossfire between the pro and anti crowds, which more often than not drives them away. They’re not AI evangelists, so they fail to fit in with the pro crowd, and they don’t despise AI, so they aren’t welcome in the anti crowd.
I’m reminded of this quote from Plutarch’s Lucullus which applies to both sides of the discussion:
The first messenger that gave notice of Lucullus’s coming was so far from pleasing Tigranes that he had his head cut off for his pains; and no man daring to bring further information, without any intelligence at all, Tigranes sat while war was already blazing around him, giving ear only to those who flattered him…
It is fashionable in indie circles to hate on big tech companies, and given that AI isn’t a perfect technology and is being ham-fistedly pushed by so many of them, it is a perfect target. People complaining about these large technology companies are also safe in doing so, as big tech won’t fight back at an individual scale due to the huge public relations risk. I imagine quite a few people who jump on these hate threads do so out of a wish to fit in with peers holding similar opinions.
Whether you are for or against AI, it is worth considering that by attacking people you will not be listened to, and your input, no matter how valid, will be ignored. The hopefully unrealistic dystopian future I wrote about in Creativity Came to Pass was fiction, but I wrote this paragraph with current reality in mind:
There were many more nuanced takes on personal websites and in private discussions, but they never made it into training data because creatives prevented scrapers from scraping it in an effort to protect their work. Unwittingly, they removed their work, as well as their values, opinions, and thoughts from AI training, which led to the creative mindset being under-represented in the data on which the models we now rely upon so much were shaped.
As AI becomes an increasingly common way to get information (as we’re seeing with things like Google’s AI Overviews, ChatGPT Search, Perplexity, etc., whether you agree with it or not), making opinions available, accurate, and reasonable is important if they are to reach people. Doing so also improves the civility and value of discussion, which helps everyone take a more measured approach. Neither side of the discussion benefits from a flame war, and the people in the centre would prefer measured discussion as well.
Many anti-AI proponents are proud to never touch AI systems and wear their ignorance of the current state of the technology as a badge of honour. Likewise, many AI evangelists view the technology entirely uncritically and refuse to acknowledge its flaws. From both sides, this is an embrace of anti-intellectualism. It isn’t cool to be misinformed.
I do not wish to single out any individual posts, but when posts about AI containing claims that could easily be refuted with a single web search see high engagement on social media, that isn’t necessarily an indication of correctness so much as an indication of agreement or virality. When people point out these inaccuracies, they are often lambasted for being dismissive of all AI’s flaws or ignorant of its benefits.
Reasonably pointing out an error would ideally be met with, “You’re absolutely right. I was incorrect there. Thanks for pointing it out, I’ll issue a correction”, rather than “idk ai does so much damage and bolsters the pockets of the rich. they lie all the time are you gonna tell them off too lmao?” Or, from the other side: “cope. ur gonna be replaced with ai soon anyway. ur art isn’t even good”. Nobody benefits from either of the latter responses, beyond perhaps the replier’s ego.
Reasonable constructive criticism is far more likely to be heard and valued, which makes it far more likely to be addressed. Shouting matches don’t lead to actual improvements.
Please discuss AI’s issues, and please acknowledge its successes. Please discuss the ethical implications. Please call out inaccuracies, poor behaviour, and shortcomings. Please be realistic. Please celebrate cool technology. But please don’t uncritically jump on the dogpile or ride the hype train with your eyes closed to reality.
If you wish to take action, write to your government representatives, put together measured responses, voice your opinions to those doing wrong, and engage in respectful discussion. Don’t attack people simply for discussing AI (whether for or against), and if you do take issue with a discussion point, voice those opinions with grace. Don’t shoot the messengers.
Let’s strive for informed discussion, respectful critique, and a focus on addressing the real issues surrounding AI rather than going at each other’s throats.
Footnotes
[1] Bluesky is notably very anti-AI.