There is a tension between the totalitarian designs of the progressive order and the fact that the resources providing material security to the regime can only be generated by the kind of “creative destruction” and entrepreneurial activity that ends up giving rise to competing power centers. The USSR famously lost control of the process, while China has been more or less successfully managing its economic growth to secure state power. In the US, things work a bit differently.
Beyond a certain point in the growth of your company, you will be asked to hire some Serious People. Congratulations, you’ve made it – make sure Condi gets paid, and thank her for her service. If your company actually has no there there, it might have to be entirely constituted of such people to keep the grift alive. On the other hand, if you lucked into a money spigot, you’ll be asked to take on certain projects instead of just handing out sinecures.
But the regime constituency isn’t just made of retired war criminals and Rhodes Scholar go-getters; by mass, it’s mostly shitty people. We understand, guys, you’re trying to go for that “grad school without the poverty” vibe and you don’t want the office to have the atmosphere of a city bus. But still, see what you can do for DePrecious? Pick one of them programs and go at it. If you do it right you can turn it into another avenue to advance your own political power as a trusted regime affiliate, manipulate public policy, hand out salaries and off-the-books kickbacks, etc. Accommodations can be made.
The real problem is when your entire business model is predicated on heightening the contradictions within regime ideology. Lockheed Martin agrees, diversity is our strength, and promises to hire at least 5% LGBTP+ drone operators per year. If they’re too busy dilating to hit their wedding quota, well, that’s built into the contract bid. If your recidivism predictor works a bit too well, though, that’s gonna be an issue. I hope the AI doesn’t figure out who’s just down on their luck vs perpetually unhelpable, or who’s actually likeliest to pay their mortgage; I’m not racist or anything but it seems like that might reify some unfortunate stereotypes.
“So can’t you just do the AI thing, but, like, not racist?” Why isn’t your AI sufficiently gay? Next June is just around the corner, and we want to ensure we are a safe space for everyone’s whole self by then. We can’t let those bastards at Goldman upstage us again.
What you really need is a priest to bless the algorithm – someone with impeccable credentials, who can with a straight face write a headline calling for social credit scores and free tampons. There’s just no way around it – any algorithm that works is going to be bad for some good people who didn’t do anything wrong. The key thing though is that we’re not responsible. We blessed the inputs, only clean, verifiably non-racist hands touched it, no one knows data kashrut better than the consultants we hired. We’re always striving for improvement, but you know, we live in a society.
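The blessed-inputs ritual can be sketched in a few lines. Everything below is synthetic and hypothetical (invented group labels, a made-up zip proxy): even when the “verifiably non-racist hands” remove the protected attribute entirely, any model that works will reconstruct the pattern through whatever proxies remain in the data.

```python
# Minimal sketch with synthetic, hypothetical data: "fairness through
# unawareness" fails when a proxy variable encodes the protected attribute.
import random

random.seed(0)

# Each record: (group, zip_code, repaid). The model never sees `group`.
data = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Group correlates with neighborhood: A lives mostly in zip 1, B in zip 0.
    zip_code = 1 if random.random() < (0.8 if group == "A" else 0.2) else 0
    # Repayment depends only on zip (say, the local economy).
    repaid = 1 if random.random() < (0.7 if zip_code == 1 else 0.3) else 0
    data.append((group, zip_code, repaid))

# "Blessed" model: predicts repayment from zip alone; only clean inputs touched.
def predict(zip_code):
    return 1 if zip_code == 1 else 0

# Approval rates still diverge by group, because zip encodes group.
rates = {}
for g in ("A", "B"):
    rows = [r for r in data if r[0] == g]
    rates[g] = sum(predict(z) for _, z, _ in rows) / len(rows)
print(rates)
```

The point of the sketch is that the divergence is not a bug in the model; it is the model doing its job on the inputs it was given, which is exactly why the indulgence-selling described above is about responsibility, not accuracy.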
This has the nice effect of enforcing political control over the fundamental currency of machine learning algorithms: data. Contra Elon, AI algorithms are not generally self-improving; there is approximately zero (not exactly, but close) research into code that writes code, algorithms that design algorithms, etc. What has driven improvements in AI tech is the availability of cultivated datasets, and increased investment of human and physical capital by researchers in algorithm development and experimentation once it became clear the payoffs were positive (but, again, not compounding).
But as social and business data is analyzed, legibility works both ways. Most of American governance currently consists of an extended sequence of buck-passing, as we for instance borrow money from the Chinese to subsidize Latinos to push American blacks from core downtowns to boost property taxes to fund pensions, and so on, with massive externalities and tremendous opacity at every step. The less the specific winners and losers can be quantified, the better for regime legitimacy. Thus, it is important above all to know what questions must not be looked into, and how to obscure the costs of not only your algorithm, but the indulgences you buy to “correct” its “bias”.
Fortunately for the prospects of American workers, removing clarity, obscuring responsibility, and preventing correct inferences is work the modern education system well prepares them to do.