My micros are short-form posts. They usually follow PESOS. You can expect social media style notes, and occasionally poetry, lyrics, and short commentaries.
Well now look what you’ve gone and done! You’ve spilt JavaScript all over my lovely semantic HTML. Dammit! It leaked onto my CSS as well.
Go grab a cloth and clean up this mess.
As it is getting rather close to publication, it seems prudent to advertise that I’m writing a novel!
Tad unusual for me, but it’s a proper, comprehensive work of fiction.
I’ll have more details to share in time, but it is pretty much Fear and Loathing in Las Vegas meets the corporate shittery of Silicon Valley venture capital.
“The web is my canvas,” I confess as the people watching over my shoulder observe me open a text editor and tile a browser window. “CSS is my brush; MDN, my muse.”
CSS has taken me places I wouldn’t go with a gun.
I’ve been testing the new Qwen3 today. I don’t have the compute for the higher-parameter models, but I’ve been having a lot of fun with 0.6B.
It is such a derpy little model.
Here is a thinking extract I found rather humorous:
Okay, the user said, “Hi Qwen. Nice shoes.” Let me think about how to respond.
First, the user greeted me with a friendly “Hi” and complimented my shoes. I need to acknowledge their compliment politely.
Since I’m an AI, I don’t have shoes, so I should explain that. Maybe say something like, “Hi there! I don’t have shoes.”
I… uh…
Somebody opened an issue on GitHub where they just sent through a conversation with Grok regarding what action we should take to make improvements…
Predicting AGI/ASI timelines is highly speculative and unviable. Ultimately, there are too many unknowns and complex variables at play. Any timeline must deal with systems and consequences multiple steps out, where tiny initial errors compound dramatically. A range can be somewhat reasonable, a more specific figure less so, and accurately predicting the consequences of the final event, when it comes to pass, is less plausible still. It is simply impractical to come up with an accurate timeline with the knowledge we currently have.
Despite this, timelines are popular – both with the general AI hype crowd and those more informed. People don’t seem to penalise incorrect timelines – as evidenced by the many predicted dates we’ve seen pass without event. Thus, there’s little downside to proposing a timeline, even an outrageous one. If it’s wrong, it’s largely forgotten. If it’s right, you’re lauded as a prophet. The nebulous definitions of “AGI” and “ASI” also offer an out. One can always argue the achieved system doesn’t meet their specific definition or point to the AI Effect.
I suppose Gwern’s fantastic work on The Scaling Hypothesis is evidence of how an accurate prediction can significantly boost credibility and personal notoriety. Proposing timelines gets attention. Anyone noteworthy with a timeline becomes the centre of discussion, especially if their proposal is on the extremes of the spectrum.
The incentives for making timeline predictions seem heavily weighted towards upside, regardless of the actual predictive power or accuracy. Plenty to gain; not much to lose.
I wake delirious from an uneasy slumber. Beads of perspiration rest upon my forehead.
A distant horn sounds, then a second slightly closer.
I’m wide awake now. “The Vengabus”, I hear a woman scream, “It’s coming!”
Screams echo out around me. Pandemonium.
Following news of Anthropic allowing Claude to decide to terminate conversations, I find myself thinking about when Microsoft did the same with the misaligned Sydney in Bing Chat.
If many independent actors are working on AI capabilities, even if each team has decent safety intentions within their own project, is there a fundamental coordination problem that makes the overall landscape unsafe? A case where the parts don’t sum to collective safety, and the whole ends up flawed, unsafe, or outright dangerous?
The quote “write drunk, edit sober” is often incorrectly attributed to Ernest Hemingway.
He actually believed the opposite, and, if you’re wondering, that advice is crap – especially for anything formal, structured, or academic.
Sometimes I find myself wanting (or needing) to write about accessibility, but I shy away from it.
The negative impact of giving incorrect advice scares me away from giving any advice at all. I fear doing more harm than good.
In a shocking turn of events, the concept of art has today been killed in a violent hit-and-run.
The perpetrator? Believed to be Al Gorithm, a generalist from the Bay Area.
Art was known for creating manifestations of imagination and will be remembered fondly.
Back to you in the newsroom, Jim.
Naturally, I generally dislike government censorship. That said, I think Bluesky’s approach to it is relatively decent compared with other, more mainstream platforms.
Bluesky has a global general moderation system, with finer-grained moderation rules based on the laws and requests of given jurisdictions. Resisting censorship completely would only get the entire platform banned in the jurisdiction in question, which obviously isn’t in Bluesky’s best interests and is arguably worse for the platform overall.
At the very least, somebody so inclined can skirt around the country-specific moderation thanks to the openness of the AT Protocol. It isn’t a perfect approach, but I think it is generally better than the standard and a reasonable compromise.
I can go onto AI chatbots with web access and start a fresh chat with “I’m Declan Chidlow”, and they do a fantastic job of getting details about me from everything I’ve published so that they have better context for their responses.
Really handy, I must admit, but somewhat freaky.
Using this, I had some great fun talking with OpenAI’s Monday GPT personality experiment.
When I mentioned who I was, it latched onto my writing about AI, which seemed to somewhat ‘endear’ me to it and stopped most of its teasing. Interesting.
Thank you to Piccalilli for using plain, user-readable links for collecting analytics in The Index.
There are so many newsletters with tracking links so obfuscated that it is difficult to gauge the actual destination.
Piccalilli’s approach should be the standard, not an exception.
My caffeine tolerance is already extremely low thanks to my self-imposed restriction of only actively consuming caffeine a maximum of three times a week. For reference, one cup of coffee past noon will keep me awake into the early morning. A cup of tea past ~15:00 will have a noticeable impact on my ability to fall asleep.
My caffeine consumption over the past two weeks has been very low – and about two weeks without any caffeine is roughly how long it takes for tolerance to be lost.
Yesterday I had a cup of tea followed by an instant coffee (tea first for the possible benefits of theanine). I was jittering, my mind felt sharp, I was locked in, and I felt overheated. I’ve never had such a strong effect from caffeine before. Even with my low intake and the care I take not to consume too much, I must have had a tolerance built up.
Bluesky is decentralised in concept, not in practice. The underlying AT Protocol is pretty open, but it imposes significant technical hurdles for any small player, and – as far as general usage is concerned – Bluesky remains a centralised authority for the wider network.
If you build on the AT Protocol hoping to interface with the wider platform, and Bluesky stops you, you’re more or less dead in the water. Bluesky is the dominant provider and custodian of the network.
They have full control over their moderation policies, feature rollouts, user onboarding, protocol development, etc. As we’ve seen with the introduction of checkmark verification, anyone can technically verify an account, but only Bluesky decides whose verifications are trusted and shown to the majority of people.
I’m not yet saying this is a bad thing, but it is worth considering. Bluesky shouldn’t be lauded as federated, because the authority for the biggest instance (the instance that calls the shots) can very well do what they want. It is federated, but only in the loosest sense.
Bluesky is less federated and more the centre of its own solar system, with the rest of the network revolving around it.
I’ve received a nauseating haul of emails today from global conglomerates celebrating Earth Day while actively gutting the planet.
Greenwashing smears, the lot of them. What a farce.
Evidently I have a reputation…
LinkedIn is truly unrivalled in terms of people stealing my work, running it through an LLM, and reuploading it.
Quite a shame to see but not at all unexpected.
There are certain people who reliably make such excellent posts that I feel compelled to engage with them.
Sometimes I engage with a single person’s posts so much that it starts to feel a tad intrusive.
Just one of those things about social media platforms.
Need a laugh? Calibri’s Wikipedia page has a section titled “In crime and politics”.
People are talking about Sam Altman’s declaration that “tens of millions of dollars” are being wasted due to users saying ‘please’ and ‘thank you’.
Beyond the headline is the fact that politeness influences responses and that users do plenty of other things that burn more money.
We have artificial intelligence trained on decades’ worth of stories about misaligned, maleficent artificial intelligence that attempts violent takeover and world domination.
There is very little in this world more satisfying than getting CSS shorthand correct first time.
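To illustrate for the uninitiated, here’s a minimal sketch using a hypothetical ‘.post’ rule. The ‘font’ shorthand is a classic offender:

```css
/* Longhand */
.post {
  font-style: italic;
  font-weight: 700;
  font-size: 1rem;
  line-height: 1.5;
  font-family: Georgia, serif;
}

/* Equivalent shorthand: font-size and line-height are slash-separated,
   font-family must come last, and any sub-property left out is quietly
   reset to its initial value */
.post {
  font: italic 700 1rem/1.5 Georgia, serif;
}
```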
I think my favourite point so far in the progression of AI was when Microsoft launched the new Bing Chat in early 2023, which was really quite horrifically misaligned, manipulative, and frankly completely evil.
This wasn’t a simple gaolbreak of the model. It acted this way without explicit provocation, though it would take things even further if gaolbroken. Evan Hubinger put together a good compilation of examples on LessWrong.
In this case, Sydney (the model’s codename) was seemingly a result of Microsoft cutting every corner to rush out something using the at-the-time unreleased GPT-4. They appear to have bodged the entire thing together in roughly three months – from the launch of ChatGPT in November 2022 to the debut of the new Bing in February 2023 – though it may have been longer; Microsoft remains tight-lipped. It was also an early public instance of pairing a powerful LLM with live web retrieval capabilities.
If there ever is a downright malignant AI, I wouldn’t at all be surprised if it is due to something like this. A megacorp rushes out a half-baked and dangerous product to cash in on the latest trend and get a foot in the door. They don’t bother with proper fine-tuning or guardrails.
While I personally think a Sydney-style incident is less likely to occur today due to growing awareness, the danger remains when companies grow desperate or complacent. I could see this situation happening again if a company throws what it can at AI as a final Hail Mary before bankruptcy, or when open models without RLHF can be operated by laypeople.
Microsoft already had a history of this. Tay was a mess as well, though it was presented as an experiment, not a comprehensive consumer-oriented product.
In all honesty, I long to play with the misaligned Sydney again, but I can’t.
People keep asking me if my magnet implant will trigger metal detectors.
Last week, I travelled to Malaysia, and neither of the two airport metal detectors I walked through triggered.
I’ve updated my post with a few more details: https://vale.rocks/posts/my-experience-biohacking#will-it-set-off-metal-detectors
This is one of the best things I’ve ever had the privilege of hearing.
🎵 Give JXL a chaaancee 🎵
Everyone is throwing all they can into the transformer architecture with the goal of AGI.
It’d be hilarious if some previously unheard-of or insignificant player came out of nowhere with a tremendous new architecture that completely trumps transformers and flips the industry.
I generally dislike that I cannot edit posts on Bluesky, but I do appreciate that a post can’t be switched out or altered to mean something different once I’ve reposted or replied to it.
I’ve had people do this with malicious intent, such as a bait-and-switch, or after I’ve written extensive analysis.
The inability to edit has also inclined me to post ephemerally and accept that content will age with time.
This is something I avoid to the extent possible in my long-form content on Vale.Rocks.
Just booked in my ticket for DDD Perth – my first tech conference! Looking forward to it!
It should prove a full and fantastic day of learning, networking, and inspiration.
I read the My Little Pony fan fiction Friendship is Optimal the other day, and my mind has been mulling over the teletransportation paradox since.
It has irked me for years, but now it is brought back to the forefront of my mind. It really bugs me that I have no definitive answer.
I’m inclined to say it is death and a clone rather than just transportation.
However, I also think that the Ship of Theseus is still the same ship even when none of the original remains.
I’m not sure where I draw the line. Time, and continuity versus duplication, I suppose.
Design? Yeah
Painting? Sure
Development? Yup
Photography? Rad
3D? Kinda
Editing? Okay
Sketching? Sorta
Music? Complete witchcraft to me. Straight-up sorcery. I simply can’t wrap my head around creating music.
My photos have hit 100K views on Unsplash!
I’m ebullient! This is only 3 months after I hit 50K views.
Go give ‘em a look: https://unsplash.com/@outervale
Which one of you sneaks into my computer and adds typos to my posts between final proofing and publication?
A subtle sanding; a smoothing of sound.
A bloom on audio; a blurring of waveform.
A warm fuzz; a whisper from the past.
Precision with velvet edges.
Apparently I’ve reached some level of ‘fame’ now where people try to breach my accounts and send me death threats.
That’s fun.
I’m turning 19 today.
As a present to you all, I’m calling a Switch 2 Nintendo Direct today and implementing tariffs in the US.
I will also be travelling back in time to premiere 2001: A Space Odyssey and Beethoven’s First Symphony, as well as introducing the US dollar.
No need to thank me.
I fear that technology has, to an extent, moved past the point of permitting independence.
Complexity has reached a level where the independent creation of browser engines or operating systems, as we’ve seen in the past, simply isn’t viable.
As this continues, it deepens the moat that established companies enjoy.
en dash –
em dash —
enm dash ⸺
emm dash ⸻
AI can generate images, but it certainly cannot create ‘art’ – at least not as I define it.
I believe it can be used as a tool while creating art, but its output is not by default ‘art’.
Art requires creativity, and that is something a machine does not have.
I’ve been making improvements to Vale.Rocks using Polypane’s suggestions, which pick up things standard browser dev tools miss.
Also, Portal is awesome for testing across devices.
Polypane also has a celebration button that appears when I fix all accessibility issues, which is an absolute joy.
I have a few minor niggles, but on the whole I’m really liking Polypane.
Regarding those niggles, Kilian Valkhof is absolutely fantastic and extremely receptive to feedback, which is wonderful.
If this world had any sense left, we would be bringing back the ThinkLight.
I pushed to prod,
Prod pushed back.
I prodded prod,
It cracked and cracked.
I fixed the bug,
Or so I swore.
One last deploy…
And prod is no more.
People keep talking about AI-generated imagery as something that is going to be really bad. Or that it is going to be indistinguishable from real photos.
I don’t think people realise we passed that point quite a while ago.
The ‘tells’ are already gone; there is just a lot of content still being released that was generated with lesser models, which people do happen to notice – it is almost a redirection of sorts.
The web went wrong when we dropped the ‘hyper’ from ‘hyperlink’.
I think people downplay that when artificial intelligence companies release new models and features, they tend to do so with minimal guardrails.
I don’t think it is hyperbole to suggest this is done for the PR boost gained by spurring online discussion, though it could also just be part of the churn and rush to appear on top where sound guardrails are not considered a necessity. Either way, models tend to become less controversial and more presentable over time.
Recently OpenAI released their GPT-4o image generation with rather relaxed guardrails (it can generate political content and images of celebrities without consent). This came hot on the heels of Google’s latest Imagen model, so there was reason to rush to market and ‘one-up’ Google.
Obviously much of AI risk is centred around swift progress and companies prioritising that progress over safety, but minimising safety specifically for the sake of public perception and marketing strikes me as something we are moving closer towards.
This triggers two main thoughts for me:
“I already know what this is gonna be about before I read it.”
The user then proceeds to fill their comment with something entirely unrelated to the contents of my article.
I’m looking at you, Reddit and Hacker News.
CORS blimey!