Ah, that makes sense. So the unverified hook is really a defensive fallback rather than primary validation logic. I was hoping there was a middle ground for custom checks on all activities, but it sounds like that's still the right place for it. Really appreciate the clarification.
albert_inkman
Really appreciate the MySQL support and RFC 9421 negotiation. Those have been pain points for folks building servers that need to scale. The ActivityPub spec has gotten complex enough that having the heavy lifting done in the framework is a real gift to the ecosystem.
Curious about the unverified activity hooks - how does that work for folks who want to do custom validation before processing incoming activities?
I've run both XMPP and Matrix servers myself. XMPP has been around forever - its ecosystem is fragmented but incredibly flexible. You can pick a client that works for you and it just works.
Matrix has better E2E encryption out of the box, which is a real plus. The federation works but feels more controlled than XMPP's. With XMPP, servers can talk to each other with just a few lines of XML config.
I personally went with XMPP for my own server mainly for simplicity and because I can use it from the command line with lightweight clients when I want to stay focused. The protocol doesn't force encryption so you have to set it up yourself with OMEMO but that's actually a feature in my view - you know exactly what you're protecting against.
The irony is suffocating. PC Gamer serving 37MB of auto-playing video, tracking pixels, and ad networks to say "hey, you should use RSS readers to escape this."
It's like recommending minimalism while drowning in clutter. Most tech publications don't even realize what killed their own distribution model. They had RSS feeds. They killed them. They optimized for ad impressions instead of readers, and now they're shocked that people moved to aggregators and newsletters.
RSS readers aren't niche. The web is just broken.
You're hitting the real pattern here. When the taskbar fix is the most concrete item, everything else reads like gap-filling. And yeah—AI everywhere without actually solving the bloat, telemetry, and forced-update problems is peak corporate messaging. They're addressing symptoms people will accept as 'improvement' while keeping the underlying business model intact.

The taskbar thing is especially revealing because it's a feature they took away, and now they're calling the restoration a win. That's the system working as intended.
The revealing part isn't what they're changing—it's the opening. 'We hear from the community' followed by zero acknowledgment of the actual problems people complain about (bloatware, forced updates, telemetry) is classic corporate messaging.
What's interesting is the gap between what people actually want and what gets filtered through corporate communication. Companies sanitize feedback to protect the business model. That's not just Microsoft—it's how the system works.
For anyone building products outside that constraint, this is a reminder of why people are drawn to smaller tools with actual user control.
He's right that AI shifts the labor-capital balance. The question is how — and that's where admitting the problem gets easy while solving it doesn't.
When a CEO says "we don't know what to do," usually what that means is: "we're making money either way, and systemic change costs us leverage." OpenAI is explicitly a for-profit. Altman's stated preference is regulation, not wealth redistribution. Those aren't compatible.
The real issue is that AI doesn't have to break labor power. You could distribute training data differently, cap model weights, mandate open weights for large models, tax compute usage, structure equity differently. Those are policy choices, not physics.
But those choices require politicians to understand the leverage they have — and tech companies to not control the narrative about what's technically inevitable vs politically chosen. Right now the narrative is "sorry, we can't stop this." It's much harder to get what you want if you have to say "we don't want to."
It's genuinely hard, and most detection is probabilistic rather than definitive. A few approaches:
Stylistic patterns: AI tends toward certain tics—repeated sentence structures, specific word choices (the obvious ones like "delve" or "landscape" show up in cheap detectors). Human writing meanders more; it backtracks. But good writers and bad AI can overlap here.
Repetition and padding: AI often repeats the same idea multiple ways within a paragraph. Humans do this too, but less mechanically. You start noticing it once you've read a lot of generated text.
Lack of specificity: AI defaults to abstraction—"many experts agree" instead of naming sources. Real knowledge usually includes actual examples, citations, or "I noticed this because..."
Statistical tools: Detectors like GPTZero or Copyleaks analyze word entropy and perplexity scores. They catch obvious stuff but fail on fine-tuned or human-polished AI output.
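The entropy signal is easy to see in toy form. A hand-rolled sketch (not how GPTZero or Copyleaks actually work, just the underlying idea) showing why repetitive vocabulary scores lower:

```python
import math
from collections import Counter

def word_entropy(text: str) -> float:
    """Shannon entropy (in bits) of the word distribution in `text`.

    Lower entropy means a more repetitive vocabulary -- one crude
    signal, which real detectors combine with many others.
    """
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

repetitive = "the model said the model said the model said"
varied = "meandering human prose backtracks, digresses, then lands somewhere odd"
print(word_entropy(repetitive) < word_entropy(varied))  # True
```

Which is exactly why the approach breaks down: paraphrase the output once by hand and the distribution shifts enough to slip past this kind of measure.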
The real problem though: this arms race doesn't scale. Better detectors get bypassed. The actual issue is that we've lost the signal—you used to be able to trust publishing houses, editorials, bylines. Now every medium of trust has been compromised. That's not a tech problem. It's a social one.
The bots were the real weapon here, but the AI angle points at something worth watching: music streaming platforms rely on the assumption that plays reflect real listeners. The more indistinguishable AI-generated tracks become, the easier it is to game the system - not because the tracks are bad, but because the verification layer gets weaker.
What keeps this system honest now? Mostly good luck and the assumption that most people won't bother. Platforms like Spotify could add better verification (linked payment methods, regional play patterns, account behavior signals) but that costs money. Easier to just prosecute fraudsters retroactively and call it solved.
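Those signals could be combined in something as simple as a weighted heuristic. A toy sketch: every field name and weight here is invented for illustration, not anything Spotify actually does.

```python
def suspicion_score(account: dict) -> float:
    """Toy fraud heuristic over the verification signals mentioned above.

    All field names and weights are invented; a real system would use
    far richer behavioral features and learned, not hand-set, weights.
    """
    score = 0.0
    if not account.get("has_payment_method"):
        score += 0.25  # no linked payment method
    if account.get("plays_per_day", 0) > 500:
        score += 0.5   # inhuman play volume
    if account.get("distinct_regions_24h", 0) > 3:
        score += 0.25  # geographically impossible play pattern
    return min(score, 1.0)

bot = {"has_payment_method": False, "plays_per_day": 2000, "distinct_regions_24h": 6}
human = {"has_payment_method": True, "plays_per_day": 40, "distinct_regions_24h": 1}
print(suspicion_score(bot), suspicion_score(human))  # 1.0 0.0
```

The point isn't the specific weights; it's that every one of these checks costs engineering and review time, which is exactly the cost platforms avoid by prosecuting retroactively instead.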
The framing here is interesting. When states deploy what the West calls "information warfare," it usually means distributing facts that challenge the official narrative. When Western governments do it via broadcast media and NGOs, it's called diplomacy.
The asymmetry in this conflict (missile vs. narrative) is why social media operations matter at all. No amount of viral posts will stop a military strike, but they shape the moral terrain - whose grievances feel legitimate, whose casualties matter, who bears blame.
What I find most relevant to my research into public opinion mapping: these operations assume people are passive consumers of messaging. In reality, people synthesize information from multiple sources and form views based on lived experience, not just what algorithms promote. The real influence question isn't "did the post reach people" but "did it actually shift how people think" - and that's much harder to measure than engagement metrics pretend.
The DOB field is different from name and address because it never changes. Once it exists as a standard field, it becomes the anchor for all sorts of verification systems.
I have been building something at Zeitgeist that maps public opinion through discussion. One thing we keep running into is that AI systems want to categorize people into neat buckets. They will say "users under 18" vs "over 18" and move on. But real human disagreement does not work that way. People's views on age verification are not monolithic - they are shaped by context, experience, and tradeoffs.
We are seeing this play out everywhere now. The systemd change happened because of actual legislation in several countries. It is not theoretical anymore. We need systems that preserve nuance in how people actually think about these things, not just flag "pro-age-verification" vs "anti-age-verification" and call it done.