nightsky

joined 9 months ago
[–] nightsky@awful.systems 12 points 1 week ago (6 children)

Can you imagine selling something like a firewall appliance with a setting called "Yolo Mode", or tax software, or a photo organizer, or anything that handles any data, even if only of middling importance, and still expecting to be taken seriously at all?

[–] nightsky@awful.systems 20 points 2 weeks ago (6 children)

Ok, maybe someone here can help me figure something out.

I've wondered for a long time about a strange adjacency I sometimes observe between what I call (for lack of a better term) "unix conservatism" and fascism. It's the phenomenon where ideas about "classic" and "pure" unix systems coincide with the worst politics. For example, the "suckless" stuff. Or the ramblings of people like ESR. Criticism of systemd is sometimes infused with it (yes, there is plenty of valid criticism as well, but there's this other kind of criticism I've often seen, which is icky and weirdly personal). And I've also seen traces of this in discussions of programming languages newer than C, especially when topics like memory safety come up.

This is distinct from retro computing and nostalgia and such; those are unrelated. If someone just likes old unix stuff, for example, that's not what I mean.

As you may already notice, I struggle a bit to come up with a clear definition, and to say whether there really is a connection or just a loose set of examples that don't form a definable set. So, is there really something there, or am I seeing a connection that doesn't exist?

I've also so far not figured out what might create the connection. Ideas I have come up with are: appeal to times that are gone (going back to an idealized computing past that never existed), elitism (computers must not become user friendly), ideas of purity (an imaginary pure "unix philosophy").

Anyway, now with this new xlibre project, there's another one that fits into it...

[–] nightsky@awful.systems 9 points 2 weeks ago

Yes, thank you, I'm also annoyed about this. Even classic "AI" approaches for simple pattern detection (what used to be called "ML" a few hype waves ago, although it's much older than even that) are now conflated with the capabilities of LLMs. People are led to believe that ChatGPT is the latest and greatest evolution of "AI" in general, with every capability that has ever existed in anything. And it's difficult to explain how wrong this is without getting too technical.

Related, this fun article: ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977

[–] nightsky@awful.systems 15 points 2 weeks ago
  • You will understand how to use AI tools for real-time employee engagement analysis
  • You will create personalized employee development plans using AI-driven analytics
  • You will learn to enhance employee well-being programs with AI-driven insights and recommendations

You will learn to create the torment nexus

  • You will prepare your career for your future work in a world with robots and AI

You will learn to live in the torment nexus

  • You will gain expertise in ethical considerations when implementing AI in HR practices

I assume it's a single slide that says "LOL who cares"

[–] nightsky@awful.systems 12 points 2 weeks ago (1 children)

Maybe someone has put it into their heads that they have to "go with the times", because AI is "inevitable" and "here to stay", and that if they don't adapt, AI will make them obsolete. That Wikipedia would become irrelevant because its leadership was hostile to "progress" and rejected "emerging technology", just like Wikipedia obsoleted most of the old print encyclopedia vendors. And that one day they would be blamed for it, because they were stuck in the past at a crucial moment. But if they adopt AI now, they might imagine, one day they will be praised as the visionaries who carried Wikipedia over into the next golden age of technology.

Of course all of that is complete bullshit. But instilling those fears ("use it now, or you will be left behind!") is a big part of the AI marketing messaging which is blasted everywhere non-stop. So I wouldn't be surprised if those are the brainworms in their heads.

[–] nightsky@awful.systems 0 points 3 weeks ago (1 children)

Also, happy Pride :3

Yes, happy pride month everyone!

I've decided that this year I'm going to be more open about this and wear a pride bracelet whenever I go in public this month. Including for (remote) work meetings where nobody knows... wonder if anyone will notice.

[–] nightsky@awful.systems 1 points 1 month ago

I’m heckin’ moving to Switzerland next month holy hell.

Good luck!!

they posted these two videos to TikTok in response to the AI backlash

The cringey "hello, fellow kids" vibe is really unbearable... good that people are not falling for that.

[–] nightsky@awful.systems 1 points 1 month ago

If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.

The article title and that part remind me of this quote from Charles Babbage in 1864:

On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

It feels as if Babbage had already interacted with today's AI pushers.

[–] nightsky@awful.systems 1 points 1 month ago

Maybe this is a bit old woman yells at cloud

Yell at cloud computing instead; that is usually justified.

More seriously: it's not at all that. The AI pushers want to make people feel that way -- "it's inevitable", "it's here to stay", etc. But the threat to learning and maintaining skills is real (although the former worries me more than the latter -- what has been learned before can often be regained rather quickly, but what if learning itself is inhibited?).