You are here, on my personal web log
A note of warning, before you proceed: this is a journal entry, written at a difficult time.
It’s very hard to think or act when you can’t tell if you’re about to lose your job, have your research killed off, have your healthcare terminated, witness unstoppable crimes, or just experience extended and apparently inescapable moral injury.
—Erin Kissane, Against Entropy
TL;DR – This is a post about billionaires who love eugenics, support a pro-eugenics government, and sell us a product they promise will advance their long-term eugenic goals. But really this is a post about how I feel when colleagues treat that product as though it might have merit, if we just give it a chance.
For some reason, I find that opinion to be in bad taste. I know I shouldn’t yuck your yum, or whatever, but I don’t like eugenics.
Reader, it fucks me up.
Chill out, it’s just a tool
For years we’ve been saying that tech is political, and that tech is not neutral. But I don’t know if we’re communicating the full nuance of that adage. It’s not just a warning about bad Apples (or Palantirs) who might use code to dabble in evil extracurriculars. More important to me is the understanding that technologies often carry an ideology inside them:
It is something of an amusing curiosity that some AI models were perplexed by a giraffe without spots. But it’s these same tools and paradigms that enshrine normativity of all kinds, “sanding away the unusual.”
—Ben Myers, I’m a Spotless Giraffe
Tools tend to exist between us and a goal, and the shape of the tool tells us something about how to proceed, and what outcomes are desirable. Tech enacts and shapes our world, our lives, and our politics.
Guns don’t kill people, guns are designed to help people kill people.
Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!
Every tool is a hammer,
with context lost to history –
and it’s up to us
to determine individually
what looks like a nail.
There is no system, no society,
no marketing department, no regulation.
Each of us is an island of isolated trolley conductors
hammer enthusiasts.
Once we’ve established some useful norms – a ‘best practice’ or two – I can’t imagine anyone [crowd cheers for CEO giving Sieg Heil salute].
Meanwhile, back at the hammer factory…
The AI projects currently mid-hype are being developed and sold by billionaires and VCs with companies explicitly pursuing surveillance, exploitation, and weaponry. They fired their ethics teams at the start of the cycle, and diverted our attention to a long-term sci-fi narrative about the coming age of machines – a “General Intelligence” that will soon “surpass” human ability.
Be it god or demon, only the high priests of venture capital can summon and tame such a powerful being for the good of humanity! It will only cost you all your labor (past and present), a reversal on climate policy, and a rather large fortune.
What does that mean? Hand-waving eugenics. We have no way to measure intelligence, no idea what it means to surpass humans, and no reason to believe that ‘intelligence’ might be exponential. Unless you rely on debunked race science, which many of these CEOs seem obsessed with. Now they are eager to jump on board an authoritarian movement that wants to exterminate trans and disabled people, fire Black people, and deport all my immigrant friends and colleagues.
It’s wild to see major tech companies throwing out all pretense – giddy to abandon previous commitments around diversity, equity, inclusion, or accessibility. Run free, little mega-corps! Be the evil you’ve always dreamed for the world!
Surely this has nothing to do with their products, though.
But her use-cases
I know that ‘AI’ broadly has a long history, with ‘language models’ and ‘neural nets’ developing real use-cases in science and other fields. I’m not new here. But this background level of validity-by-association is used to prop up absolute garbage. The chatSlop we’re drowning in now is clearly designed and deployed for a different purpose.
Haven’t you heard? They’re building a digital god who will lead us to salvation, uploaded into the Virgo Supercluster where we can expand the light of exponential profit throughout the cosmos! This is the actual narrative of several AI CEOs, despite being easy to dismiss as hyperbolic nonsense. Why won’t I focus on the actual use-cases?
Why won’t you focus on the actual documented harms? Somehow there is always room for people to dismiss concerns as “overblown and unfounded” past the first attempted coup, and well into an authoritarian power grab.
But the bigger issue is that they don’t have to be successful to be dangerous. Because along the way, these companies get to steal our work and sell it back to us, lower our wages, de-skill our field, bury us in slop, and mire us in algorithmic bureaucracy. If the long-term space god thing doesn’t work out, at least they can make a profit in the short-term.
The beliefs of these CEOs aren’t incidental to the AI product they’re selling us. These are not tools designed for us to benefit from, but tools designed to exploit us. To poison our access to jobs, and our access to information at the same time.
I said on social media that people believe what chatbots tell them, and I was laughed at. No one would trust a chatbot, silly! That same day, several different friends and colleagues quoted the output of an ‘AI’ to me in unrelated situations, as though quoting reliable facts.
So now a select few companies run by billionaires control much of the information that people see – “summarized” without sources. Meanwhile, there’s an oligarchy taking power in the US. Meanwhile, Grok’s entire purpose is to be ‘anti-woke’ and anti-trans, ChatGPT’s political views are shifting right, and Anthropic is partnering with Palantir.
Seems chill. I bet ‘agents’ are cool.
Wouldn’t want to eat a shrimp cocktail in the rain.
Tech workers seem to like tech actually
There’s a meme that goes around regularly, about the attitudes of tech enthusiasts vs tech workers…
Tech enthusiasts: My entire house is smart.
Tech workers: The only piece of technology in my house is a printer and I keep a gun next to it so I can shoot it if it makes a noise I don’t recognize.
I can relate to that sentiment, but many in our community seem unfazed or even excited about ‘AI’ and ‘agents’ and ‘codegen’ and all the rest of it. As far as I can tell, most of our industry is still on board with the project, even while protesting the changes in corporate politics, or occasionally complaining about the most obvious over-use. There are certainly a number of people raising alarms or expressing frustration, but we’re often dismissed as uninformed.
Based on every conference I’ve attended over the last year, I can absolutely say we’re a fringe minority. And it’s wearing me out. I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?
This is your opportunity to get in at the ground floor!
I don’t know how to attend conferences full of gushing talks about the tools that were designed to negate me. That feels so absurd to say. I don’t have any interest in trying to reverse-engineer use-cases for it, or improve the flaws to make it “better”, or help sell it by bending it to new uses.
When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.
I don’t care how well their ‘AI’ works – or if you found a fancy fun use-case. It fucks me up watching peers treat this tech – from people who want to eradicate me – as a future worth considering. I don’t want any of this.
I don’t need an agent, I want to maintain my own agency.
I don’t know
I used to see the AI bubble and trans rights as distinct issues. I no longer do. The fascist movement in tech has truly metastasized, as evidenced by Elon Musk’s personal coup, his endless supply of techbro supporters, tech companies’ eagerness to axe DEI programs once Trump gave them an excuse, erasure of queer lives from tech products, etc.
To the extent that AI marketing is an attempt to enclose and commodify culture, and thus to concentrate political power, I see it as a kind of fascism.
I know the anti-DEI(A) sea-change in mega-corp C-suites doesn’t reflect the desires of my friends and colleagues now working for (surprise!) AI arms dealers, who are just trying to do their best for open web standards. I don’t know what I would do in that situation. Labor and capital are often at odds. I imagine we all deserve a tech union. But I worry about how few people seem to see the need for it.
Every time I log on I feel like I’m being gaslit – asked to train my shitty replacement, and then step aside. The future is not women, I’m learning now. You can be sued in the US for intentionally hiring women. The future is actually inhuman word synthesizers.
Oh no, I was tricked by the genders and their sneaky ideology! Now I’m a crime! Haha, oops!
Work is already harder to find, and companies mostly want help slopping more slop into the slop machine. Because it will help users, you ask? Of course not! Because everyone now has slop on-tap, and needs to turn that flow of garbage into a cash-flow!
That’s the trouble with tribbles.
Money, gain, profit!
What are we doing here? What am I doing here? How do I stay engaged in this field, and keep paying my bills, without feeling like a constant outsider – about to be dismissed from my career? I know I’m not the only one feeling this way, but the layered threats and betrayals add up. It feels so isolating.
It’s probably good to get this clarity
“Tech” was always a vague and hand-waving field – a way to side-step regulations while starting an unlicensed taxi company or hotel chain. That was never my interest.
But I got curious about the web, a weird little project built for sharing research between scientists. And I still think this web could be pretty cool, actually, if it wasn’t trapped in the clutches of big tech. If we can focus on the bits that make it special – the bits that make it unwieldy for capitalism:
Large companies find HTML & CSS frustrating “at scale” because the web is a fundamentally anti-capitalist mashup art experiment, designed to give consumers all the power.
—Me, before all this
What are we going to build now – those of us who still care about diversity, equity, inclusion, accessibility, and giving consumers the power? Can we still put our HTML & CSS to good use? Can we get back to building a web where people have agency instead of inhuman agents?
Where are you looking to put your energy next?
Addendum, 2025-02-16
I’ve been spending a lot of time in the pottery studio instead of keeping up with my RSS feed – so I wasn’t aware of the most recent AI discourse. Jeremy Keith does a great job putting my thoughts in context of a larger blogging conversation. I recommend reading that summary, and the excellent linked posts by Baldur Bjarnason and Michelle Barker.
I find it particularly troubling the way we talk about the current harms of current technology as temporary and therefore insignificant – as though something being “solvable” means that it’s basically solved already, and we shouldn’t worry about it. The logic seems so obviously backwards to me. Solve the problems first, if they are so easily solvable.
This is often used to dismiss the current energy use of LLMs, but also a common rhetorical trick of CEOs as they lay off their workforce. Don’t worry, your current unemployment could someday be solved with a universal basic income. Please ignore the harms of capitalism as we weaponize it against you – because socialism could eventually make it better!
And yet (surprise!) when the tech titans take over government institutions, they don’t seem to have much interest in improving social safety nets. It’s almost (almost) like their goal is to weaken the bargaining power of labor, and they don’t consider this a flaw in the first place.
In our marketing-department-imagined future of a new technology, all harms will somehow disappear (details TBD), but the potential benefits are endless and extraordinary. We could cure cancer! But are any of the AI companies trying to cure cancer, as a primary goal of their work? Well, no…
Step 2 may be actively harmful, and step 3 might be perpetually absent, but the profit described by step 4 is undeniable. Critics always lack the proper imagination.