Elon Musk is hyping the imminent release of his ChatGPT competitor Grok, yet another example of how his entire personality is itself a biological LLM made by ingesting all of Reddit and 4chan. Grok already seems patterned in many ways after the worst of Elon’s indulgences, with the sense of humor of a desperately unfunny and regressive internet troll, and biases informed by a man whose horrible, dangerous biases are fully invisible to himself.

There are all kinds of reasons to be wary of Grok, including the standard reasons to be wary of any current LLM-based AI technology, like hallucinations and inaccuracies. Layer on Elon Musk’s recent track record of disastrous social insensitivity and a generally harmful approach to world-shaping issues, and we’re already looking at even more reason for concern. But the real risk probably isn’t yet so easy to grok, simply because we still have little understanding of the extent of the impact that widespread use of LLMs across our daily and online lives will have.

One key area where they’re already having an impact, and are bound to have much more of one, is user-generated content. We’ve already seen companies deploying first-party integrations that start to embrace some of these uses, like Artifact with its AI-generated thumbnails for shared posts, and Meta adding chatbots to basically everything. Musk is debuting Grok on X as a feature initially reserved for Premium+ subscribers, with a rollout supposedly beginning this week.

Clearly, Musk aims to dangle Grok as a carrot to attract more subscribers to his more expensive subscription tier for dumpster fire X. X will also inform Grok, which is its own cause for concern given how far the network has fallen as a source of reliable information, even compared to its not particularly stellar days as Twitter.

The real threat isn’t even a woefully misinformation-laden chatbot deployed at scale on a network that was once roundly criticized by its current owner for being rife with bots; no, the real danger is what happens if Musk’s AI mini-me starts to drown out the rest of the discourse, and what happens if other moneyed megalomaniacs see it working out for him.

Musk’s special ability, as it stands, is a sycophantic following with a willingness to accept his interpretation of reality, no matter how twisted or demonstrably untrue. But that’s nothing compared to an LLM trained on basically the same drivel, capable of not only parroting but also generating highly accurate novel ‘opinions’ and ‘perspectives’ made from the clay of its maker, and then of spreading them quickly with unfettered access to a network that still spans tens of millions of influential users.

The only advantage that the many, many people who aren’t billionaires currently have over those who are is their very abundance – the masses are the masses because of, well, their mass. Despite what I’m sure are many clandestine offshore attempts to clone billionaires as an answer to this sticky math problem, it remains somewhat intractable. Yet the advent of LLM-based chatbots threatens to destabilize the very notion of popular opinion, by drowning out the voices of real users with countless artificial ones that are nigh-indistinguishable from authentic human beings.

We’ve already seen the demagogues of late-stage capitalism try to claim a ‘populist’ perspective that has no footing in actually being the will of the people. So far, though, they’ve mostly done that via bald-faced braggadocio; within a few iterations of the coming onslaught of LLM-powered chatbots, they could have the artificial, but convincing, numbers to back up their malign interpretations of the world.


