Firehose

As it is getting rather close to publication, it seems prudent to advertise that I’m writing a novel!

Tad unusual for me, but it’s a proper, comprehensive work of fiction.

I’ll have more details to share in time, but it is pretty much Fear and Loathing in Las Vegas meets the corporate shittery of Silicon Valley venture capital.

“The web is my canvas,” I confess as the people watching over my shoulder observe me open a text editor and tile a browser window. “CSS is my brush; MDN, my muse.”

Why Video Isn't My Publishing Preference

I am a writer of words, not a vidier of videos.

Why I prefer not to publish video, including practical, technical, and personal reasons I find text a more maintainable, accessible, and effective medium for my online content.

https://vale.rocks/posts/why-not-video

I’ve been testing the new Qwen3 today. I don’t have the compute for the higher parameter models, but I’ve been having a lot of fun with 0.6b.

It is such a derpy little model.

Here is a thinking extract I found rather humorous:

Okay, the user said, “Hi Qwen. Nice shoes.” Let me think about how to respond.

First, the user greeted me with a friendly “Hi” and complimented my shoes. I need to acknowledge their compliment politely.

Since I’m an AI, I don’t have shoes, so I should explain that. Maybe say something like, “Hi there! I don’t have shoes.”

Why I Write

There are words here, and I'm justifying them.

Unpacking my compulsion and will to write, including its benefits in forming thought, letting me express myself, honing a craft, and publishing accessible content.

https://vale.rocks/posts/why-write

I… uh…

Somebody opened an issue on GitHub where they just sent through a conversation with Grok regarding what action we should take to make improvements…

Predicting AGI/ASI timelines is highly speculative and unviable. Ultimately, there are too many unknowns and complex variables at play. Any timeline must deal with systems and consequences multiple steps out, where tiny initial errors compound dramatically. A range can be somewhat reasonable, a more specific figure less so, and accurately predicting the consequences of the final event when it comes to pass is even less plausible. It is simply impractical to come up with an accurate timeline with the knowledge we currently have.

Despite this, timelines are popular – both with the general AI hype crowd and those more informed. People don’t seem to penalise incorrect timelines – as evidenced by the many predicted dates we’ve seen pass without event. Thus, there’s little downside to proposing a timeline, even an outrageous one. If it’s wrong, it’s largely forgotten. If it’s right, you’re lauded as a prophet. The nebulous definitions of “AGI” and “ASI” also offer an out. One can always argue the achieved system doesn’t meet their specific definition or point to the AI Effect.

I suppose Gwern’s fantastic work on The Scaling Hypothesis is evidence of how an accurate prediction can significantly boost credibility and personal notoriety. Proposing timelines gets attention. Anyone noteworthy with a timeline becomes the centre of discussion, especially if their proposal is on the extremes of the spectrum.

The incentives for making timeline predictions seem heavily weighted towards upside, regardless of the actual predictive power or accuracy. Plenty to gain; not much to lose.

I wake delirious from an uneasy slumber. Beads of perspiration rest upon my forehead.

A distant horn sounds, then a second slightly closer.

I’m wide awake now. “The Vengabus”, I hear a woman scream, “It’s coming!”

Screams echo out around me. Pandemonium.

Following news of Anthropic allowing Claude to decide to terminate conversations, I find myself thinking about when Microsoft did the same with the misaligned Sydney in Bing Chat.

If many independent actors are working on AI capabilities, even if each team has decent safety intentions within their own project, is there a fundamental coordination problem that makes the overall landscape unsafe? A case where the sum of the whole is flawed, unsafe, and/or dangerous and thus doesn’t equal collective safety?

The misquote “write drunk, edit sober” is often incorrectly attributed to Ernest Hemingway.

He actually believed the opposite, and, if you’re wondering, that advice is crap – especially for anything formal, structured, or academic.

Sometimes I find myself wanting (or needing) to write about accessibility, but I shy away from it.

The negative impact of giving incorrect advice scares me away from giving any advice at all. I fear doing more harm than good.

The Analytics of This Site

Nerding out on website analytics.

A look at website traffic to Vale.Rocks and a general analysis of the analytics. Particularly looking at popular referrers and the variance from general web analytics.

https://vale.rocks/posts/traffic-analysis

In a shocking turn of events, the concept of art has today been killed in a violent hit and run.

The perpetrator? Believed to be Al Gorithm, a generalist from the Bay Area.

Art was known for creating manifestations of imagination and will be remembered fondly.

Back to you in the newsroom, Jim.

Naturally, I generally dislike government censorship. That said, I think Bluesky’s approach to it is relatively decent compared to other, more mainstream platforms.

Bluesky has a global general moderation system with finer-grained moderation rules based on the laws and requests of given jurisdictions. Resisting censorship completely is only going to get the entire platform banned in that jurisdiction, which obviously isn’t in Bluesky’s best interests and is arguably worse for the platform overall.

At the very least somebody so inclined can skirt around the country-specific moderation thanks to the openness of the AT Proto. It isn’t a perfect approach, but I think it is generally better than the standard and a reasonable compromise.

I can go onto AI chatbots with web access and start a fresh chat with “I’m Declan Chidlow”, and they do a fantastic job of getting details about me from everything I’ve published so that they have better context for their responses.

Really handy, I must admit, but somewhat freaky.

Using this, I had some great fun talking with OpenAI’s Monday GPT personality experiment.

When I mentioned who I was, it latched onto my writing about AI, which seemed to somewhat ‘endear’ me to it and stopped most of its teasing. Interesting.

Thank you to Piccalilli for using plain, user-readable links for collecting analytics in The Index.

There are so many newsletters with tracking links so obfuscated that it is difficult to gauge the actual destination.

Piccalilli’s approach should be the standard, not an exception.

My caffeine tolerance is already extremely low thanks to my self-imposed restriction of only actively consuming caffeine a maximum of three times a week. For reference, one cup of coffee past noon will keep me awake into the early morning. A cup of tea past ~15:00 will have a noticeable impact on my ability to fall asleep.

My caffeine consumption over the past two weeks has been very low. For reference, two weeks without any caffeine is roughly how long it takes for tolerance to be lost.

Yesterday I had a cup of tea followed by an instant coffee (tea first for the possible benefits of theanine). I was jittery, my mind felt sharp and focused, and I felt overheated. I’ve never had such a strong effect from caffeine before. Even with my low intake and care taken not to consume too much, I must have had a tolerance built up.