I mean it's the same thing with sponsored content anywhere, right? The user assumes that the system is providing information in accordance with their purposes, but the ads and sponsored results create opportunities for the platform hosting them to profit at the user's expense. AI platforms are absolutely subject to the same economic incentives for corruption as, say, search engines, but I don't think they're uniquely so just because the model in question has a more humanlike UI.
YourNetworkIsHaunted
But the effectiveness of the fraud is ironically predicated on people continuing to treat it as gambling.
Adding on that this does feel like another application or consequence of the Great Man Theory of Everything, the idea that only the people with power and money matter because their power and influence are intrinsic to their person rather than being contingent on their social position. The average people empowered to commit insider trading by prediction markets have sufficiently limited individual agency that even collectively they don't actually matter. In fact we want them to try their hand at the grift so that their insights can flow to the enlightened ones who can better use that information. They don't matter enough to do real harm, but by watching the attempt we may be able to learn something.
Following on from yesterday's discussion of Scott's close brush with reality on prediction markets, The Aussie PowerPoint Man is talking about the strategic risks posed by the new insider trading opportunities opened up by these tools. A lot of what he's saying applies to normal financial markets, but what's striking is the way that prediction markets create those opportunities for people with much less immediate power and information by allowing them to bet directly on the kinds of immediate decisions they do have information on.
I also thought the idea of integrating insider trading red flags on public prediction markets into your early warning system was interesting. These things aren't actually useful for forecasting or making decisions because of how bad the incentives are, but people acting on those incentives absolutely creates a spike that can be meaningful in the short term and potentially buy a few extra hours or minutes to prepare.
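To make the "spike as early warning" idea concrete, here's a minimal sketch of what such a red flag might look like: flag any price move that is large relative to recent volatility. The function name, window size, and threshold are all made up for illustration, not calibrated values from any real monitoring system.

```python
# Hypothetical spike detector: flag a prediction-market price move that is
# large relative to recent volatility. All parameters are illustrative.
def spike_alerts(prices, window=5, threshold=3.0):
    alerts = []
    for i in range(window, len(prices)):
        recent = prices[i - window:i]
        mean = sum(recent) / window
        var = sum((p - mean) ** 2 for p in recent) / window
        std = var ** 0.5
        move = abs(prices[i] - mean)
        if std > 0 and move / std > threshold:
            alerts.append(i)  # index of the suspicious jump
    return alerts

# A flat market at ~0.10 that suddenly jumps to 0.45:
series = [0.10, 0.11, 0.09, 0.10, 0.11, 0.10, 0.45]
print(spike_alerts(series))  # → [6]
```

The point isn't that this catches insider trading specifically, just that a sudden repricing with no public news attached is itself information, whatever the market's forecasting value otherwise.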
I tried digging into it a little bit and it looks like the key word was definitely "less openly racist". According to a blog from someone else who tried to trace it, the term originated with Theo Beale, better known by his nom de merde Vox Day. I first became aware of this guy when he started trying to politicize the election process for the Hugo awards in order to make them less woke. It was real dumb.
You know, I kept expecting both this racist and the racist he was arguing with to start making the very obvious argument for why the racism is not only evil but also dumb. And instead they just kept being racist.
To summarize and spare anyone else curious, the argument is about immigration. Racist 1 argues that since some people are objectively better than others [citation desperately needed but not wanted] we should have free migration so that our superior quality of life can attract all the best people so that we can be the best place. He (correctly) notes the absurdity of Racist 2 arguing that although some people are objectively better than others we need to protect ourselves from all foreigners even if they are the best people because their foreignness would hurt our "magic dirt." I'm pretty sure I've seen this criticism elsewhere, from a better and less obviously racist writer, because the phrase "magic dirt" sounds real familiar.
Also, because I am tying everything back to my particular bugbear today, I have to note that the fundamental and wrong argument that some traits being heritable makes some people objectively better than others is yet another manifestation and justification of what I'm going to start calling the Great Man Theory of Everything. If you start from the position that history, politics, economics, and basically all forms of human activity are fundamentally driven by the actions and decisions of a few people who are for one reason or another destined for power and greatness, you can derive an impressive amount of the libertarian/Rationalist worldview, and if you additionally accept that those people are disproportionately rich white dudes and we shouldn't think too hard about that fact you can get most of the rest of the way there.
Are prediction markets not actually useful? No, it is the reality who is wrong.
Also I want to rant once again about the stupid way these people evade the insider trading problem, because there's a particular failure at play that I keep finding expressed in new and interesting ways.
So the argument goes that while insider trading may be bad for a financial market it actually just allows insiders to add their information to increase the predictive power of the market. Which would be true enough if we assume nothing else changes, but the same would also be true for price discovery in a normal asset market. Clearly we're missing something.
So why is insider trading bad? Because it turns people without insider info into the dumb money you can take advantage of. And people, very reasonably, aren't going to participate in a system where their main role is being taken advantage of. Their departure means that the insiders don't have access to a pool of dumb money to take, so they stop interacting with the system, and the market itself breaks down.
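You can watch this breakdown happen in a toy simulation. Everything here is invented for illustration: outsiders bet against informed counterparties, lose slightly more often than they win, and quit after a run of losses, at which point nobody is left to trade against.

```python
import random

random.seed(0)

# Toy sketch (all names and numbers are made up): outsiders face an
# informational disadvantage ('edge') against insiders, so they win less
# than half the time. Each quits after 'patience' consecutive losses.
def simulate(n_outsiders=100, rounds=50, edge=0.1, patience=3):
    losses = [0] * n_outsiders          # consecutive losing rounds each
    active = [True] * n_outsiders
    participation = []
    for _ in range(rounds):
        for i in range(n_outsiders):
            if not active[i]:
                continue
            if random.random() < 0.5 - edge:
                losses[i] = 0           # a win resets the loss streak
            else:
                losses[i] += 1
                if losses[i] >= patience:
                    active[i] = False   # rational exit: stop being the mark
        participation.append(sum(active))
    return participation

p = simulate()
print(p[0], p[-1])  # participation collapses over the run
```

Even a modest edge empties the pool within a few dozen rounds, which is the whole argument: the equilibrium of "insiders welcome" isn't a more informative market, it's no market.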
Now if you assume that the majority of people are "NPCs" or aren't very "agentic" or whatever then they're not going to act in systemically meaningful ways no matter how obvious the incentives to do so. You could also cast it as a version of the libertarian-as-housecat notion that markets simply exist as a natural system, rather than being pieces of economic infrastructure that require a lot of management and work to keep functioning at all, even before we get to the question of whether they operate to the public's benefit. So many of the problems with these ideologies spring from this belief that only some people actually matter in a systemic sense by dictating rules and Building Things and being big men, rather than systems being constantly created and shaped by all the people who interact with them through those interactions.
Community notes have become the one decent feature on Twitter, but not for any of the reasons they were supposed to be.
That's really interesting. So the model can generalize the form of what a fact looks like based on these monofacts but ends up basically playing mad libs with the actual subjects. And if I understand the inverse correlation they were describing between hallucination rate and calibration, even their best mechanism to reduce this (which seems to have applied some kind of back-end doubling to the specific monofacts to make the details stand out as much as the structure, I think?) made the model less well-calibrated. Though I'm not entirely sure what "less well-calibrated" amounts to overall. I think they're saying it should be less effective at predicting the next token overall (more likely to output something nonsensical?) but also less prone to mad libs-style hallucinations.
Thank you! Let me wildly oversimplify and make sure I understand.
The fundamental problem is that if you train on a set that includes multiple independent facts, the generative aspect of the model - the ability to generate new text that is statistically consistent with the training data - requires remixing and combining tokens in a way that will inevitably result in factual errors.
Like, if your training data includes "all men are mortal" and "all lions are cats", then in order to generate new text the model has to be "loose" enough to output "all men are cats". Feedback and reinforcement can adjust the probabilities to a degree, but because the model is fundamentally about token probabilities and doesn't have any other way of accounting for whether a statement is actually true, there's no way to completely remove it. You can reinforce that "all cats are mortal" is a better answer, but you can't train it that "all men are cats" is invalid.
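A deliberately tiny model makes the point visible. This is a bigram sketch, nothing like a real transformer, but the failure mode is the same in kind: it tracks which token follows which, with no notion of which statements are true.

```python
from collections import defaultdict

# Train a toy bigram model on two true sentences.
corpus = ["all men are mortal", "all lions are cats"]

follows = defaultdict(set)
for sentence in corpus:
    tokens = sentence.split() + ["<end>"]
    for a, b in zip(tokens, tokens[1:]):
        follows[a].add(b)

# Enumerate every sentence the model considers well-formed.
def generate(token="all", prefix=()):
    if token == "<end>":
        yield " ".join(prefix)
        return
    for nxt in sorted(follows[token]):
        yield from generate(nxt, prefix + (token,))

print(sorted(generate()))
# The output includes "all men are cats" and "all lions are mortal":
# statistically consistent with the training data, factually wrong.
```

Two true sentences in, four "grammatical" sentences out, half of them false. Scaling up changes the probabilities, not the mechanism.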
Some may find her "cringe" but she's doing critical work in stopping the sites you use from being DDoSed by AI scrapers.