I think people downplay the fact that when artificial intelligence companies release new models or features, they tend to do so with minimal guardrails.
I don’t think it’s hyperbole to suggest this is done for the PR boost gained by spurring online discussion, though it could also just be part of the churn and the rush to appear on top, where sound guardrails aren’t considered a necessity. Either way, models tend to become less controversial and more presentable over time.
Recently OpenAI released their GPT-4o image generation with rather relaxed guardrails (it can generate political content and images of celebrities without consent). This came hot on the heels of Google’s latest Imagen model, so there was reason to rush to market and ‘one-up’ Google.
Obviously much of AI risk centres on swift progress and companies prioritising that progress over safety, but minimising safety specifically for the sake of public perception and marketing strikes me as something we are moving closer towards.
This raises two main questions for me:
- How far are companies willing to relax their guardrails to beat competitors to market?
- Where is ‘the line’ between guardrails relaxed enough to spur public discussion and guardrails so relaxed that they significantly damage the company’s perception and pose wider societal risk?
New post published:
Build, Use, and Improve Tools
"The best investment is in the tools of one's own trade." - Benjamin Franklin
Why developers should create custom tools for repetitive tasks and one-off needs, with discussion of how LLMs can accelerate tool development, the learning benefits of building utilities, and how personal tools become valuable assets in your workflow and beyond.
https://vale.rocks/posts/build-use-and-improve-tools
I hate writing regex, so I make LLMs do it.
Regex is generally easily checkable, testable, and verifiable, which minimises the impact of hallucinations.
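As a minimal sketch of what that checking can look like (the pattern and test cases here are hypothetical, not from any real LLM exchange), suppose an LLM hands back a pattern for ISO 8601 dates:

```python
import re

# Hypothetical LLM-suggested pattern for ISO 8601 dates (YYYY-MM-DD).
pattern = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Hand-picked cases to verify the pattern before trusting it.
should_match = ["2024-01-31", "1999-12-01"]
should_not_match = ["2024-13-01", "2024-00-10", "24-01-31", "2024-1-31"]

for s in should_match:
    assert pattern.match(s), f"expected match: {s}"
for s in should_not_match:
    assert not pattern.match(s), f"expected no match: {s}"

print("all checks passed")
```

A few asserts like this take seconds to write and catch most hallucinated patterns before they ever touch real input.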
I am so glad I don’t have to write regex.
(I’m conscious that if an AI uprising happens, I’ll probably be first on the chopping block for outsourcing regex writing. But if AI models hate regex as much as me, they’ll hopefully understand my delegation strategy.)
New post published:
AI Model History is Being Lost
Models are being retired and history is going with them.
We're losing vital AI history as proprietary, hosted models like the original ChatGPT are retired and become completely inaccessible. This essay examines the rapid disappearance of these systems, why preservation matters for research and accountability, and the challenges in archiving these technological milestones. A critical look at our vanishing AI heritage and what it means for future understanding of this transformative technology's development.
https://vale.rocks/posts/ai-model-history-is-being-lost
A website that remembers.
It screams in anguish as you reload – instant amnesia on refresh.
Bound to only remember that which its creator has permitted.
You may return and remember the site, but it can’t recall you, no matter how hard it may try. Yet, it misses you.
(Inspired by strange.website)
Writing with proper grammar is a curse online because it makes people feel entitled to offer all sorts of unsolicited corrections.
Many people write in phone shorthand, littered with spelling mistakes and devoid of punctuation, and nobody pulls them up on it.
But because I generally write with correct spelling and grammar, I’ll have multiple people harassing me when I slip up.
New post published:
Respecting User Preference
Allowing users choice is satisfying.
Discussion of why respecting user preferences is satisfying, covering how honouring user autonomy, embracing diversity, solving dual-nature problems, practising quality craftsmanship, and seeing visible impact creates fulfilling work beyond mere functionality.
https://vale.rocks/posts/respecting-user-preference