Firehose

I’ve been thinking about loanwords in language. How long have we got these words on loan for? Will they be taken back? What is the fee to buy them outright? Who should I talk to about this?

I just wanted to let everyone know that . is the root of all TLDs.

This means that, for example, vale.rocks. is a valid domain. vale is a child of rocks, which is itself a child of ..
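That hierarchy can be sketched in a few lines. This is just a toy illustration of how a fully qualified domain name (one with the trailing dot) breaks down into labels under the root; the function name and output format are my own, not anything from DNS tooling:

```python
# Toy illustration: walk a fully qualified domain name from the
# root "." down to the full name. The trailing dot marks the root.
def hierarchy(fqdn: str) -> list[str]:
    labels = fqdn.rstrip(".").split(".")
    chain = ["."]  # the root zone itself
    # Build each name from the rightmost label (closest to the root)
    # leftwards: "." -> "rocks." -> "vale.rocks."
    for i in range(len(labels)):
        chain.append(".".join(labels[len(labels) - 1 - i:]) + ".")
    return chain

print(hierarchy("vale.rocks."))
# ['.', 'rocks.', 'vale.rocks.']
```

Every name on that chain is a valid, fully qualified domain — trailing dot and all.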

You are now burdened with this knowledge.

I’d like to thank the HTTP ‘referer’ header field for permanently ruining my ability to correctly spell the word ‘referrer’.

Elle just added a blog feed to her website and included me in it, along with a little pixel me!

I love the entire feed reader! Her attention to detail with having all the buttons work and adding the ability to save drawings is fantastic! Her site is a treasure trove of little interactive gadgets and gizmos that is well worth checking out!

Pixel art of my head on a beige background. My hair is brown with a strand covering my pale face, upon which there are two eyes. I have no mouth.

Creativity Came to Pass

Creativity /kriːeɪˈtɪvɪti/ n. Obsolete. The process or act of a human engaging in artistic or expressive production.

A story where human creativity and art disappear as a result of artificial intelligence usage and prevalence. Written from the perspective of someone in the future.

https://vale.rocks/posts/creativity-came-to-pass

I often have issues with sharing cross-origin resources, which results in me throwing my computer out a window.

This is referred to as CORS and effect.

I hate the argument, ‘Humans are bad at X, so LLMs must be really bad at X.’

There are flaws with LLMs, but this is a poor argument. They are fundamentally different to humans, and just because we fumble at something doesn’t mean LLMs do (and vice versa).

Cloudflare, its interface a mess.
It’s breaking my site and causing duress.

These settings confuse me, I must declare;
Working this out is quite the affair.

I hate it so much, but I don’t care.
Actually, I do, and I’m pulling out hair.

How I'm Using AI

As long as AI isn't using me...

An overview of my personal usage of Large Language Models (LLMs) and other generative AI. Tracking my experiences with AI tools, specific models (ChatGPT, Claude, Gemini, etc), practical applications, and a realistic perspective on their strengths and limitations over time, from coding attempts to language learning assistance.

https://vale.rocks/posts/ai-usage

Well now look what you’ve gone and done! You’ve spilt JavaScript all over my lovely semantic HTML. Dammit! It leaked onto my CSS as well.

Go grab a cloth and clean up this mess.

As it is getting closer to publication, it seems prudent to advertise that I’m writing a novel!

Tad unusual for me, but it’s a proper, comprehensive work of fiction.

I’ll have more details to share in time, but it is pretty much Fear and Loathing in Las Vegas meets the corporate shittery of Silicon Valley venture capital.

‘The web is my canvas,’ I confess as the people watching over my shoulder observe me open a text editor and tile a browser window. ‘CSS is my brush; MDN, my muse.’

Why Video Isn't My Publishing Preference

I am a writer of words, not a vidier of videos.

Why I prefer not to publish video, including practical, technical, and personal reasons I find text a more maintainable, accessible, and effective medium for my online content.

https://vale.rocks/posts/why-not-video

I’ve been testing the new Qwen3 today. I don’t have the compute for the higher parameter models, but I’ve been having a lot of fun with 0.6b.

It is such a derpy little model.

Here is a thinking extract I found rather humorous:

Okay, the user said, “Hi Qwen. Nice shoes.” Let me think about how to respond.

First, the user greeted me with a friendly “Hi” and complimented my shoes. I need to acknowledge their compliment politely.

Since I’m an AI, I don’t have shoes, so I should explain that. Maybe say something like, “Hi there! I don’t have shoes.”

Why I Write

There are words here, and I'm justifying them.

Unpacking my compulsion and will to write, including its benefits in forming thought, letting me express myself, hone a craft, and publish accessible content.

https://vale.rocks/posts/why-write

I… uh…

Somebody opened an issue on GitHub where they just sent through a conversation with Grok regarding what action we should take to make improvements…

Predicting AGI/ASI timelines is highly speculative and largely unviable. Ultimately, there are too many unknowns and complex variables at play. Any timeline must deal with systems and consequences multiple steps out, where tiny initial errors compound dramatically. A range can be somewhat reasonable, a specific figure less so, and accurately predicting the consequences of the final event when it comes to pass is less likely still. It is simply impractical to produce an accurate timeline with the knowledge we currently have.

Despite this, timelines are popular – both with the general AI hype crowd and those more informed. People don’t seem to penalise incorrect timelines – as evidenced by the many predicted dates we’ve seen pass without event. Thus, there’s little downside to proposing a timeline, even an outrageous one. If it’s wrong, it’s largely forgotten. If it’s right, you’re lauded as a prophet. The nebulous definitions of ‘AGI’ and ‘ASI’ also offer an out. One can always argue the achieved system doesn’t meet their specific definition or point to the AI Effect.

I suppose Gwern’s fantastic work on The Scaling Hypothesis is evidence of how an accurate prediction can significantly boost credibility and personal notoriety. Proposing timelines gets attention. Anyone noteworthy with a timeline becomes the centre of discussion, especially if their proposal is on the extremes of the spectrum.

The incentives for making timeline predictions seem heavily weighted towards upside, regardless of the actual predictive power or accuracy. Plenty to gain; not much to lose.

I wake delirious from an uneasy slumber. Beads of perspiration rest upon my forehead.

A distant horn sounds, then a second, slightly closer.

I’m wide awake now. ‘The Vengabus’, I hear a woman scream, ‘It’s coming!’

Screams echo out around me. Pandemonium.

If many independent actors are working on AI capabilities, even if each team has decent safety intentions within their own project, is there a fundamental coordination problem that makes the overall landscape unsafe? A case where the sum of the whole is flawed, unsafe, and/or dangerous and thus doesn’t equal collective safety?

The misquote ‘write drunk, edit sober’ is often incorrectly attributed to Ernest Hemingway.

He actually believed the opposite, and, if you’re wondering, that advice is crap – especially for anything formal, structured, or academic.

Sometimes I find myself wanting (or needing) to write about accessibility, but I shy away from it.

The negative impact of giving incorrect advice scares me away from giving any advice at all. I fear doing more harm than good.