Firehose

AI can generate images, but it certainly cannot create ‘art’ – at least not as I define it.

I believe it can be used as a tool while creating art, but its output is not by default ‘art’.

Art requires creativity, and that is something a machine does not have.

I’ve been making improvements to Vale.Rocks using Polypane’s suggestions, which pick up things standard browser dev tools miss.

Also, Portal is awesome for testing across devices.

Polypane also has a celebration button that appears when I fix all accessibility issues, which is an absolute joy.

I have a few minor niggles, but on the whole I’m really liking Polypane.

Regarding those niggles, Kilian Valkhof is absolutely fantastic and extremely receptive to feedback, which is wonderful.

I pushed to prod,
Prod pushed back.

I prodded prod,
It cracked and cracked.

I fixed the bug,
Or so I swore.

One last deploy…
And prod is no more.

People keep talking about AI-generated imagery as something that is going to be really bad. Or that it is going to be indistinguishable from real photos.

I don’t think people realise we passed that point quite a while ago.

The ‘tells’ are already gone from the best models; there is simply still a lot of content being released that was generated with lesser models, which people do notice – a misdirection of sorts.

People seem to downplay that when artificial intelligence companies release new models/features, they tend to do so with minimal guardrails.

I don’t think it is hyperbole to suggest this is done for the PR boost gained by spurring online discussion, though it could also just be part of the churn and rush to appear on top where sound guardrails are not considered a necessity. Either way, models tend to become less controversial and more presentable over time.

Recently OpenAI released their GPT-4o image generation with rather relaxed guardrails (it can generate political content and images of celebrities without consent). This came hot on the heels of Google’s latest Imagen model, so there was reason to rush to market and ‘one-up’ Google.

Obviously much of AI risk centres on swift progress and companies prioritising that progress over safety, but minimising safety specifically for the sake of public perception and marketing strikes me as something we are moving closer towards.

This triggers two main thoughts for me:

  • How far are companies willing to relax their guardrails to beat competitors to market?
  • Where is ‘the line’ between a model with relaxed enough guardrails to spur public discussion but not relaxed enough to cause significant damages to the company’s perception and wider societal risk?

“I already know what this is gonna be about before I read it.”

The user then continues their comment with something entirely unrelated to the contents of my article.

I’m looking at you, Reddit and Hacker News.

I find it very odd when people refer to ‘two main browser engines’, those being Gecko and WebKit.

Do people really think Blink hasn’t diverged significantly enough to consider it another engine at this point?

Put the groundwork for a testing instance of a website live five minutes ago, and I’m already seeing multiple login attempts hammering /wp-admin.

Not only is it a Ghost site, but it isn’t even properly live yet!

Further proof that I am not an LLM is found in the fact that I use en dashes, not em dashes.

This also acts to prove I am not American and that I am the sort of nerd that cares about typography and gets hung up on punctuation.

Build, Use, and Improve Tools

“The best investment is in the tools of one’s own trade.” – Benjamin Franklin

Why developers should create custom tools for repetitive tasks and one-off needs, with discussion of how LLMs can accelerate tool development, the learning benefits of building utilities, and how personal tools become valuable assets in your workflow and beyond.

https://vale.rocks/posts/build-use-and-improve-tools

I hate writing regex, so I make LLMs do it.

Regex is generally easily checkable, testable, and verifiable, which minimises the impact of hallucinations.

I am so glad I don’t have to write regex.

(I’m conscious that if an AI uprising happens, I’ll probably be first on the chopping block for outsourcing regex writing. But if AI models hate regex as much as me, they’ll hopefully understand my delegation strategy.)
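The checkability claim above can be sketched concretely. This is a minimal example, assuming Python’s standard `re` module and a hypothetical LLM-supplied pattern for ISO 8601 dates – the point is the verification harness, not the pattern itself:

```python
import re

# Hypothetical LLM-supplied pattern: match ISO 8601 dates like 2024-03-01.
pattern = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Verify against known-good and known-bad cases before trusting it.
should_match = ["2024-03-01", "1999-12-31"]
should_reject = ["2024-13-01", "2024-00-10", "24-03-01", "2024-03-1"]

assert all(pattern.match(s) for s in should_match)
assert not any(pattern.match(s) for s in should_reject)
```

A handful of assertions like these catches a hallucinated pattern immediately, which is exactly why regex is such a low-risk thing to delegate.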

Why is my pseudo-element not working? It should work. It has size, position, display, etc. Hmm…

Oh, I didn’t specify content: "".

Anywho, I’m gonna go into a fetal position and cry now…
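For anyone hitting the same wall: a pseudo-element is only rendered if `content` is set, even to an empty string. A minimal sketch of the fix, with a hypothetical class name:

```css
/* Without content, the pseudo-element is not rendered at all. */
.badge::before {
  content: ""; /* required – even an empty string */
  display: block;
  width: 1rem;
  height: 1rem;
  background: crimson;
}
```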

We’re seeing it already to an extent, but in a few years I imagine we’ll see many people trying to replicate the abstract, non-Euclidean, and ethereal stylings of early generative AI image/video models.

My brother and his mates were playing laser tag, so I captured the signal of their shots with my Flipper Zero and went on a genocidal rampage.

Vale: Infrared Terrorist

I’m getting fairly sick of receiving emails asking if my writing can be taken and put on some random advert-filled website for free.

The answer is always ‘no’, but at least they’re asking, unlike some of the less scrupulous content farms.

AI Model History is Being Lost

Models are being retired and history is going with them.

We're losing vital AI history as proprietary, hosted models like the original ChatGPT are retired and become completely inaccessible. This essay examines the rapid disappearance of proprietary AI systems, why preservation matters for research and accountability, and the challenges in archiving these technological milestones. A critical look at our vanishing AI heritage and what it means for future understanding of this transformative technology's development.

https://vale.rocks/posts/ai-model-history-is-being-lost

Sitting. Confused.

A wandering eye catches yours.
It starts talking.
It is empty.

You look at its head.
You look in its head.
You look through its head.

Nothing.