i'm noticing a disturbing trend: people i care about on the left are sliding into what i can only call ai phobia.
to be clear: i agree, completely, that ai is dangerous. i agree, completely, that ai companies cannot be trusted to do the right thing.
but i disagree, strongly, that the answer is to avoid ai.
like it or not, this stuff is already power.
right now i can spin up a law-focused llm and have it draft legal arguments at a pace no human can touch. it runs maybe 100x faster than a person and costs maybe 10,000x less. multiply those two numbers together and, even if each is off by an order of magnitude, the shape of the conclusion doesn't change: you're looking at absurd leverage, on the order of a million times more work product per dollar.
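if you want that spelled out, here's the back-of-envelope version. the multipliers are illustrative guesses, not benchmarks; the only point is how fast they compound.

```python
# rough back-of-envelope sketch; the multipliers are illustrative guesses, not benchmarks
speed_multiplier = 100      # drafts roughly 100x faster than a person (assumed)
cost_multiplier = 10_000    # at roughly 1/10,000th the cost per draft (assumed)

# leverage compounds: more drafts per hour, and each one cheaper
leverage = speed_multiplier * cost_multiplier
print(f"leverage: ~{leverage:,}x")        # ~1,000,000x

# even if both guesses are off by 10x, you're still left with ~10,000x
pessimistic = (speed_multiplier // 10) * (cost_multiplier // 10)
print(f"pessimistic: ~{pessimistic:,}x")  # ~10,000x
```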
you can argue that llms "suck." fine. but for avoidance to be rational, they'd need to be so bad that the leverage doesn't matter, something like a million times worse than a human. they're not. not even close. they're uneven, they're fallible, they hallucinate, they need supervision, yes. and still: the output, the iteration speed, the sheer volume you can generate and refine add up to a new kind of force.
and here's the part i don't think we're saying out loud:
if you're afraid to use ai, people who aren't afraid will outwork you, out-litigate you, out-organize you, out-propagandize you, and outspend you. not politely. not gradually. they will crush you.
so no, i'm not asking anyone to trust the companies. i'm not asking anyone to stop being worried. i'm asking you to stop confusing "dangerous" with "untouchable."
learn it. use it. instrument it. sandbox it. audit it. build guardrails around it. make it answerable to human values and democratic control.
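none of that has to be exotic. here's a minimal sketch of what "instrument it" and "audit it" can mean in practice: a thin wrapper that logs every call to an append-only file and names the human responsible before anything ships. `call_model` is a stand-in, not a real api; wire it to whatever provider or local model you actually use.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("llm_audit.jsonl")  # append-only record of every call

def call_model(prompt: str) -> str:
    """stand-in for whatever provider or local model you actually use (assumption)."""
    raise NotImplementedError("wire this to your own model")

def audited_call(prompt: str, reviewer: str) -> str:
    """instrument + audit: log the prompt and response, and name the human on the hook."""
    response = call_model(prompt)
    record = {
        "ts": time.time(),
        "reviewer": reviewer,   # a person is accountable for this output
        "prompt": prompt,
        "response": response,
        "approved": False,      # nothing ships until a human flips this after review
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```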
but don't look away.
please. listen.