antifuchs

joined 1 year ago
antifuchs@awful.systems 7 points 5 hours ago

If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.

https://unstable.systems/@sop/114898566686215926

antifuchs@awful.systems 6 points 5 hours ago

Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events that aren't in their training data

antifuchs@awful.systems 10 points 1 day ago

The Whisper model has always been pretty crappy at these things: I use a speech-to-text system as an assistive input method when my RSI gets bad, and it has supported Whisper (because it covers more languages than the developer could train on their own infrastructure/time) since maybe 2022 or so. Every time someone tries to use it, they run into hallucinated inputs in pauses, even with very good silence detection and noise filtering.

This is just not a use case of interest to the people making Whisper, imagine that.
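For context, the "very good silence detection" mentioned above usually means something like an energy gate in front of the transcription model. Here is a minimal sketch of that idea (not the actual tool's code; the chunk size and RMS threshold are arbitrary assumptions) — and the point of the bug is that even after gating like this, any residual silence Whisper does see can come back as hallucinated text:

```python
import numpy as np

def drop_silent_chunks(samples, rate=16000, chunk_ms=30, rms_threshold=0.01):
    """Crude energy-based gate: keep only chunks whose RMS energy exceeds a
    threshold, so the transcription model sees as little pure silence as
    possible. Real voice-activity detectors are considerably smarter."""
    chunk = int(rate * chunk_ms / 1000)
    kept = []
    for start in range(0, len(samples) - chunk + 1, chunk):
        frame = samples[start:start + chunk]
        if np.sqrt(np.mean(frame ** 2)) > rms_threshold:
            kept.append(frame)
    return np.concatenate(kept) if kept else np.array([], dtype=samples.dtype)

# Synthetic example: 1 s silence, 1 s of a 440 Hz tone, 1 s silence.
rate = 16000
t = np.linspace(0, 1, rate, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
silence = np.zeros(rate)
audio = np.concatenate([silence, tone, silence])

voiced = drop_silent_chunks(audio, rate)  # roughly the 1 s tone survives
```

Even with a gate like this, pauses *within* speech (which the gate must let through, or words get clipped) still give the model silent spans to hallucinate over.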

antifuchs@awful.systems 19 points 1 day ago

This incredible banger of a bug against Whisper, the OpenAI speech-to-text engine:

Complete silence is always hallucinated as "ترجمة نانسي قنقر" in Arabic which translates as "Translation by Nancy Qunqar"

antifuchs@awful.systems 13 points 2 days ago

Here’s Dave Barry, still-alive humorist, sneering at Google AI summaries, one of the most embarrassing features Google ever shipped.

antifuchs@awful.systems 6 points 1 week ago

Ooooh, that would explain a similarly weird interaction I had on a ticket-selling website while buying a streaming ticket to a live show of the German retro game discussion podcast Stay Forever: they translated the title of the event as “Bleib für immer am Leben” (“Stay alive forever”), so I guess they named it “Stay Forever Live”? No way to know for sure, of course.

antifuchs@awful.systems 9 points 1 week ago

It’s distressingly pervasive: autocorrect, speech recognition (not just in voice assistants, but in accessibility tools), image correction in mobile cameras; so many things that are on by default and “helpful”.

antifuchs@awful.systems 8 points 2 weeks ago

Poor rich guy, forced by the leftmost party available to support the party that is now constructing concentration camps.

antifuchs@awful.systems 11 points 2 weeks ago

Geordi, disgusted: being a _con_tent creator

Geordi, interested: being a con_tent_ creator

antifuchs@awful.systems 14 points 3 weeks ago

Now I’m curious what a protected-class-question% speedrun of one of these interviews would look like. Get the bot to ask you about your age, number of children, sexual orientation, etc.

antifuchs@awful.systems 9 points 3 weeks ago

Fucking rude to drag lisp into this. How dare they.


Got the pointer to this from Allison Parrish, who says it better than I could:

it's a very compelling paper, with a super clever methodology, and (i'm paraphrasing/extrapolating) shows that "alignment" strategies like RLHF only work to ensure that it never seems like a white person is saying something overtly racist, rather than addressing the actual prejudice baked into the model.
