hrrrngh

joined 2 years ago
[–] hrrrngh@awful.systems 7 points 1 month ago (2 children)

context: I wanted to know whether the open source projects currently being spammed with PRs would be safe from people running slop models on their own computers if they weren't able to use Claude or whatever. Answer: yes, these things are still terrible

but while I was searching I found this comment, and the fact that people hated it is so funny to me. It's literally from the person who posted the thread. Less thinking and words, more hype links, please.

conversation: https://www.reddit.com/r/LocalLLaMA/comments/1qvjonm/first_qwen3codernext_reap_is_out/o3jn5db/

32k context? is that usable for coding?

(OP's response, sitting at a steady -7 points)

LLMs are useless anyway so, okay-ish, depends on your task obviously

If LLMs were actually capable of solving actual hard tasks, you'd want as much context as possible

A good way to think about is that tokens compress text roughly 1:4. If you have a 4MB codebase, it would need 1M tokens theoretically.

That's one way to start, then we get into the more debatable stuff...

Obviously text repeats a lot and doesn't always encode new information each token. In fact, it's worse than that, as adding tokens can _reduce_ information contained in text, think inserting random stuff into a string representing dna. So to estimate how much ctx you need, think how much compressed information is in your codebase. That includes stuff like decisions (which LLMs are incapable of making), domain knowledge, or even stuff like why does double click have 33ms debounce and not 3ms or 100ms in your codebase which nobody ever wrote down. So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation

*emphasis added by me
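For what it's worth, the 1:4 figure in the quote is just a back-of-envelope calculation. A minimal sketch, assuming roughly 4 bytes of source text per token (a rough average for common BPE tokenizers on English text and code; the exact ratio varies by tokenizer and content):

```python
# Back-of-envelope context estimate from the quoted comment.
# Assumption: ~4 bytes of text per token, a rough average for
# common BPE tokenizers; real ratios vary by tokenizer and content.
BYTES_PER_TOKEN = 4

def estimated_tokens(codebase_bytes: int) -> int:
    """Naive token count for stuffing a whole codebase into context."""
    return codebase_bytes // BYTES_PER_TOKEN

# A 4 MB codebase works out to ~1M tokens, which is why a 32k
# window only covers about 128 KB of text.
print(estimated_tokens(4 * 1024 * 1024))  # 1048576
```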

[–] hrrrngh@awful.systems 11 points 4 months ago* (last edited 4 months ago)

https://superuser.com/questions/1930445/can-i-delete-the-chromes-optguideondevicemodel-safely-its-taking-up-4gb/1930446#1930446

Can I delete the Chrome's OptGuideOnDeviceModel safely? It's taking up 4GB

. . .

I also founds mentions of bunch of various flags you can potentially disable to turn the whole feature off, e.g. chrome://flags/#optimization-guide-on-device-model - but I've seen at least 5 other ones mentioned in several sources, with various people claiming for each that they don't work . . .

Now Chrome can hog your VRAM too. Yay

Don't worry if you only have 8GB and need the other half for anything, Chrome will probably relinquish it. This is very intelligent, as all the browser has to do is simply load another 4GB file from disk the next time you do anything.

[–] hrrrngh@awful.systems 7 points 4 months ago

https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202511/the-psychology-of-collective-abandonment

Article I found randomly because... I was trying to add the Psychology Today blog to uBlacklist so I'd stop seeing their articles lol

It lost me a little towards the end, but it's heartwarming to imagine a world where tech fascists screaming about the Antichrist have a few* billion dollars less and actual charities have a few more.

*where few = [3, ∞)

[–] hrrrngh@awful.systems 7 points 4 months ago

Many of these tools are useful, and don’t use generative AI – that is, AI that creates – but use AI to summarize texts or alter images.

Oh no, has this become the common definition of generative AI? I'm guessing some AI company must have tried to launder the name and make it seem less bad. Both of those examples are clear-cut generative AI.

[–] hrrrngh@awful.systems 7 points 4 months ago

I finally became fed up with it and got around to writing a uBlock Origin filter that removes the AI overview, the AI results in the "People also ask" section, and especially the AI results in the "Things to know" section that usually covers health and drug information. There is literally so much AI bloat taking up the search page it's crazy.
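A minimal sketch of what such a filter list can look like. The selector names below are hypothetical placeholders, not Google's actual class names (Google changes its markup constantly), so you'd use uBlock's element picker to find the real elements and substitute them:

```
! uBlock Origin cosmetic filters -- selectors are placeholders,
! not Google's real markup; find the actual elements with the picker
google.com##div.ai-overview-container:upward(1)
google.com##.related-question-pair:has-text(AI Overview)
google.com##div.things-to-know-block
```

The `:has-text()` and `:upward()` operators are uBlock Origin procedural filters, handy here because the AI blocks often lack stable classes and are easiest to match by their visible headings.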

[–] hrrrngh@awful.systems 12 points 5 months ago (15 children)

Oh no, not another cult. The Spiralists????

https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/

It's funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. And wasn't there another one in California that was very straightforward about being an AI sci-fi cult, and they were kinda space themed? I've heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up.

some of their communities that somebody collated (I don't think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/

[–] hrrrngh@awful.systems 5 points 6 months ago

ah seems the site doesnt show the comments, change the ones it shows and they turn up

Oh man, I've found the old LW accounts of a few weird people and they didn't have any comments. Now I'm wondering if they did and I just didn't sort it

[–] hrrrngh@awful.systems 8 points 6 months ago* (last edited 6 months ago)

Gotta love forgetting why games have these features in the first place, so accessibility features get viewed as boring stuff you need to subvert and spice up. Also reminds me of how many games used to (and continue to) include filters for simulating colorblindness as actual accessibility settings, because all the other games did that. Like adding a "Deaf Accessibility" setting that mutes the audio.

Demon's Souls didn't have a pause mechanic (maybe because of technical or matchmaking problems, who knows), so clearly hard games must lack a functioning pause feature to be good. Simple. The less pause that you button, the more Soulsier it that Elden when Demon the it you Ring. Our epic new boss is so hard he actually reads the state of the tinnitus filter in your accessibility settings, and then he

[–] hrrrngh@awful.systems 9 points 6 months ago

Sadly I misremembered and this one wasn't from LW but I'll share it anyway. I think I had just finished reading a bunch of the "Most effective aid for Gaza?" reddit drama which was like a nuclear bomb going off, and then stumbled into this shrimp thing and it physically broke me.

If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It's not just because they'd be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn't be smart later, or, in the case of the cognitively enfeebled who'd be permanently mentally stunted.

source: https://benthams.substack.com/p/the-best-charity-isnt-what-you-think

Discussion here (special mention to the comment that says "Did the human pet guy write this"): https://awful.systems/comment/5412818

[–] hrrrngh@awful.systems 20 points 9 months ago (1 children)

Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611

Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.

. . .

Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

Taking a wild guess, it's Yudkowsky. "Very knowledgeable people" and "many/most experts" are staying on my AI apocalypse bingo sheet.

Even among people critical of AI (who don't otherwise talk about it that much), the AI apocalypse angle seems really common, and it's frustrating to see it normalized everywhere. Though I think I'm more nitpicking than anything, because it's not usually their most important issue, and maybe it's useful as a wedge issue just to bring attention to other criticisms of AI? I'm not really familiar with Bernie Sanders' takes on AI or with how other politicians talk about this. I don't know if that makes sense; I'm very tired.

[–] hrrrngh@awful.systems 1 points 2 years ago* (last edited 2 years ago)

https://www.reddit.com/r/CharacterAI/comments/1eqsoom/guys_we_have_to_do_somthing_about_this_fi%D3%8Ft%D0%B5r/

This community pops up on /r/all every so often and each time it scares me.

Sometimes I see kids' games (and all games, really) have ultra-niche, super-online protests that are like "STOP Zooshacorp from DESTROYING K-Smog vs. Batboy Online", and when I look closer it's either even more confusing or it's about something people didn't like in the latest update. This is like that, but with an awful twist where it's about people getting really attached to these AI girlfriend/sex roleplay apps. The spelling and sentences make it seem like it's mostly kids, too.

edit: here's a terrible example!
