An Analysis of That's How I Beat Shaq

A musical documentary

An in-depth analysis of Aaron Carter's 2000 release That's How I Beat Shaq, including a breakdown of the legendary basketball showdown and the cultural impact of this teen pop masterpiece featuring Shaquille O'Neal.

https://vale.rocks/posts/thats-how-i-beat-shaq

I’ve been making improvements to Vale.Rocks using Polypane’s suggestions, which pick up things standard browser dev tools miss. Portal is also awesome for testing across devices. Polypane even has a celebration button that appears when I fix all accessibility issues, which is an absolute joy. I have a few minor niggles, but on the whole I’m really liking Polypane. Regarding those niggles, Kilian Valkhof is absolutely fantastic and extremely receptive to feedback, which is wonderful.

I think people downplay the fact that when artificial intelligence companies release new models and features, they tend to do so with minimal guardrails.
I don’t think it’s hyperbole to suggest this is done for the PR boost gained by spurring online discussion, though it could also just be part of the churn and rush to appear on top, where sound guardrails aren’t considered a necessity. Either way, models tend to become less controversial and more presentable over time.
Recently, OpenAI released their GPT-4o image generation with rather relaxed guardrails (it is able to generate political content and images of celebrities without consent). This came hot on the heels of Google’s latest Imagen model, so there was reason to rush to market and ‘one-up’ Google.
Obviously, much of AI risk centres around swift progress and companies prioritising that progress over safety, but minimising safety specifically for the sake of public perception and marketing strikes me as something we are moving closer towards.
This raises two main questions for me:
- How far are companies willing to relax their guardrails to beat competitors to market?
- Where is ‘the line’ between guardrails relaxed enough to spur public discussion, but not so relaxed as to cause significant damage to the company’s perception and pose wider societal risk?

Build, Use, and Improve Tools

"The best investment is in the tools of one's own trade." - Benjamin Franklin

Why developers should create custom tools for repetitive tasks and one-off needs, with discussion of how LLMs can accelerate tool development, the learning benefits of building utilities, and how personal tools become valuable assets in your workflow and beyond.

https://vale.rocks/posts/build-use-and-improve-tools

I hate writing regex, so I make LLMs do it.
Regex is generally easily checkable, testable, and verifiable, which minimises the impact of hallucinations.
I am so glad I don’t have to write regex.
(I’m conscious that if an AI uprising happens, I’ll probably be first on the chopping block for outsourcing regex writing. But if AI models hate regex as much as I do, they’ll hopefully understand my delegation strategy.)
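The “easily testable” point can be made concrete with a handful of assertions. A minimal sketch in Python, assuming a hypothetical LLM-generated pattern for ISO 8601 dates (the pattern and examples are mine, not from the original note):

```python
import re

# Hypothetical LLM-generated pattern for ISO 8601 dates (YYYY-MM-DD).
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Cases that should match...
assert ISO_DATE.match("2024-01-31")
assert ISO_DATE.match("1999-12-01")

# ...and cases that should not, which is where a hallucinated
# pattern tends to get caught.
assert not ISO_DATE.match("2024-13-01")  # month out of range
assert not ISO_DATE.match("2024-00-10")  # month zero
assert not ISO_DATE.match("24-01-31")    # two-digit year
assert not ISO_DATE.match("2024-01-32")  # day out of range

print("all checks passed")
```

A few seconds of spot checks like this is far cheaper than writing the pattern by hand, and a wrong pattern fails loudly rather than silently.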