Bibip

joined 3 weeks ago
[–] Bibip@programming.dev 20 points 2 days ago

i learned something fun: goodwill sells kitchen knives for a dollar a pop. they're gonna be scuffed. if you get a reversible coarse/fine whetstone, you can practice sharpening your knife. if you do a good job of sharpening your knife, it will be sharp. if you fuck it up, it was a dollar.

hope this helps.

[–] Bibip@programming.dev 5 points 4 days ago

a little bit of knowledge is a dangerous thing

[–] Bibip@programming.dev 4 points 4 days ago (1 children)

hi, i have strong feelings about the use of genai but i come at it from a very different direction (story writing). it's possible for someone to throw together a 300-page storybook in an afternoon - in the style of lovecraft if they want, or brandon sanderson, or dan brown (dan brown always sounds the same, so we might not even notice). now, my assumption about said 300-pager is that it will be dogshit, but art is subjective and someone out there has been beside themselves pining for it.

but this has always been true. there have always been people churning out trash hoping to turn a buck. the fact that they can do it faster now doesn't change that they're still in the trash market.

so: i keep writing. i know that my projects will be plagiarized by tech companies. i tell myself that my work is "better" than ai slop.

for you, things are different. writing code is a goal-oriented creative endeavor, but the bar for literature is enjoyment, and the bar for code is functionality. with that in mind, i have some questions:

if someone used genai to generate code snippets and they were able to verify the output, what's the problem? they used an ersatz gnome to save them some typing. if generated code is indistinguishable from human code, how does this policy work?

for code that's been flagged as ai generated - and let's assume it's obvious, they left a bunch of GPT comments all over the place - is the code bad because it's genai or is it bad because it doesn't work?

i'm interested to hear your thoughts

[–] Bibip@programming.dev 4 points 4 days ago

Same. I think it's ghastly that an internet where individuals are behaving safely is an internet where every single comment is the same sterile clank.

[–] Bibip@programming.dev 1 points 1 week ago

In the words of diablo 2's carvers, "rock in the shoe"

[–] Bibip@programming.dev 3 points 1 week ago (1 children)

Ketchup

Ingredients: ketchup

Duh

[–] Bibip@programming.dev 1 points 1 week ago

That may be the case, GiantChickDicks, but I would really appreciate said step-by-step pictorial diagram. Hand holding optional.

[–] Bibip@programming.dev 1 points 1 week ago

i didn't like ∆V, but i almost loved it. i thought it was neat that you could hire crew that gave different bonuses depending upon their specialty and experience. i thought it was neat that your crew can recognize other folks you come across in the rings. i thought it was really neat when one of those strangers told my crewmate about an anomalous lidar contact they made further in.

as i got closer to the coordinates they shared, tracking this enormous lidar contact, it occurred to me that if the developers had seized upon this moment with eldritch horror, it would be my favorite game ever. i felt scared. if that rock had turned around and had an eye, and then a dozen rock tentacles busted into my ship and squeezed all the blood out of my crew -- it would have been my favorite game.

[–] Bibip@programming.dev 4 points 2 weeks ago

there are many use-cases, and you've neglected one: linguistic analysis can be used to identify a person and to link them to other accounts. i'm not saying it's likely or apocalyptic, but it is true and present. using an LLM to "sanitize" your outputs can prevent this.

from a privacy perspective, everyone should do this using a locally hosted LLM. from a person-that-uses-the-internet perspective, i would absolutely hate it if every article and every comment looked like an identical brand of ai slop.
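to illustrate the linguistic-analysis point above (this is a toy sketch of the general idea, not any particular deanonymization tool - the function-word list and feature choice are my own assumptions, and real stylometry uses far richer features):

```python
# toy stylometry sketch: compare function-word frequencies between texts.
# usage rates of common "function words" are hard to consciously control,
# which is what makes them useful as a stylistic fingerprint.
from collections import Counter
import math

# small illustrative set of function words (an assumption, not a standard list)
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "i", "for", "on", "you", "but"]

def fingerprint(text: str) -> list[float]:
    """relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    """cosine similarity between two fingerprints (1.0 = identical style)."""
    va, vb = fingerprint(a), fingerprint(b)
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)
```

two accounts whose comments score consistently high against each other are candidates for linking - which is exactly the correlation that rewriting your text through a local LLM would disrupt.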

[–] Bibip@programming.dev 5 points 2 weeks ago

a layperson cannot be relied upon to draw meaningful conclusions from a scholarly article. i learned this when i tried to do it. have you ever tried to read a spanish book, without knowing spanish, with nothing but an english-spanish dictionary? it's very slow going and it works alright until someone speaks in idiom or metaphor, but even then you can mostly still get it. this is not always the case with scholarly articles.

moreover, it's a waste of time. if it takes you 30 hours to look up every term and graph, but it would have taken your biology friend 20 minutes to synthesize it for you, there's an obvious solution here. if an LLM can save you 30 hours, and your biology friend 20 minutes, it's a useful tool.

[–] Bibip@programming.dev 12 points 3 weeks ago

hi friends i hope you're well.

i worked a laborious job and experienced a phenomenon i call "parasitic thought": someone provides you all of the information required to reach the correct conclusion, and then stares at you. they want you to crunch the info for them.

i feel like one of those parasites in my agent interactions. i know i COULD think, but you can do it too, lil buddy. go on. do it for me.

i don't know about "reasonable" or "ethical" or "polite," but in my experience: if someone just regurgitates some clank clank slop slop, it reads as hostile. "i can't be bothered to communicate with you, here, read this wall of gpt-vomit"

my instinct is to copy and paste, "LLM agent of my choice, what's this person trying to say to me?" and then skim the ai synthesized summary of the ai composed body text generated from some idiot's faint echoes of thought.

in the words of your highschool biology teacher, the human is the powerhouse of the agentic loop. in my unimportant opinion, responsible use of genai agents means the output should be indistinguishable from, if not better than, something you wrote by hand.

there are privacy implications. linguistic assessment can be used to identify you. from a privacy perspective, the internet would be preferable if everyone fed their carefully formed thoughts to an LLM and said "make this look like chatgpt 3 wrote it."
