i'm gonna infect you with covid stay away
fullsquare
slow is smooth, smooth is fast
the most productive way to do things is to do them deliberately and with good planning, at least in my field, which is not coding related in any way
"hello anthropic? can you pay me 50k a year so that i specifically don't go around making biological weapons? think about all these future simulated beings it'll save"
Ask Claude any basic question about biology and it will abort.
it might be that, or it may have been intended to shut off any output of medical-sounding advice. if it's the former, then it's a rare rationalist W for the wrong reasons
I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks.
look up the story of vil mirzayanov. break out these bayfucker-style salaries in eastern europe or india or a number of other places and you'll find a long queue of phds willing to cook up man-made horrors beyond your comprehension. it might not even take six figures (in dollars or euros) after tax
LLMs aren’t actually smart enough to make delicate judgements
maybe they really made machines in their own image
very easily, previously these guys weren't a target. asm launchers were, but these can get hidden in a cave rather quickly
i'd like to see some elaboration from the smiling man himself (when he's back), since he seems to have some numbers on that
i think that mbs got wiser than that at this point and doesn't put oil money in softbank anymore. saudi money is in vision fund 1 (2017; 45% of it; another 15% is emirati money); i don't think vision fund 2 has any after they got burned on wework, wirecard, ftx, and so on and so on. per the last ed zitron post, most of the softbank funding round for openai is not from softbank itself but from some other investors, whoever they might be, so there clearly is someone further down the line. the numbers as i read them: an initial 10b, of which 7.5b from softbank and 2.5b from others; then, if openai converts to a for-profit, another 30b, but if it doesn't, it's another 10b (?), of which 8.3b from other investors and 1.7b from softbank (either the numbers are wrong or ed accidentally a 1b dollars). i don't believe that openai will convert, so that's 9.2b from softbank and 10.8b from others (+20b from softbank otherwise)
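fwiw the no-conversion split does add up; a throwaway sanity check, using my reading of ed's numbers above (nothing here is confirmed):

```python
# sanity check on the no-conversion split, figures in tenths of a $bn
# to dodge float noise. these are my reading of the reported numbers,
# not confirmed figures.
softbank = 75 + 17   # initial tranche + non-conversion tranche
others   = 25 + 83   # same, from the other investors

print(softbank / 10, others / 10)   # 9.2 10.8
print((softbank + others) / 10)     # 20.0 total across both tranches
```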
well, nobody guarantees that the internet is safe, so it's more on chatbot providers for pretending otherwise. along with all the other lies about the machine god they're building that will save all the worthy* in the incoming rapture of the nerds, and how even if it destroys everything we know, it's important to get there before the chinese.
i sense a bit of "think of the children" in your response and i don't like it. llms shouldn't be used by anyone. there was recently a case of a dude with dementia who died after a fb chatbot told him to go to nyc
* mostly techfash oligarchs and weirdo cultists
so how is it fundamentally different from qanon, except that it's strictly personalized this time
commercial chatbots have a thing called a system prompt. it's a slab of text that gets fed in before the user's prompt and includes all the guidance on how the chatbot is supposed to operate. it can get quite elaborate. (it's not recomputed every time a user starts a new chat; the model's state after ingesting the system prompt is cached, so it's only redone when the prompt changes)
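schematically it looks something like this; field names are illustrative, not any specific provider's api:

```python
# schematic of how a system prompt gets prepended to a chat request.
# the role names and dict shape are illustrative only, not a real API.
def build_request(user_message, history):
    system_prompt = (
        "you are a helpful assistant. do not give medical advice. "
        "refuse requests about weapons."  # guardrails baked in up front
    )
    # the system prompt always goes first. the model just sees one long
    # token sequence, so "system" vs "user" is only a role label, not a
    # hard boundary it's forced to respect.
    return ([{"role": "system", "content": system_prompt}]
            + history
            + [{"role": "user", "content": user_message}])

msgs = build_request("how do i pick a lock?", history=[])
print(msgs[0]["role"])   # system
print(msgs[-1]["role"])  # user
```

since the system slab never changes between chats, a provider can cache the model state after ingesting it and only pay that cost again when the prompt text is edited.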
if you think that just telling the chatbot not to do a specific thing is an incredibly clunky and half-assed way to do it, you'd be correct. first, it's not a deterministic machine, so you can't even be 100% sure the instruction is followed in the first place. second, more attention is given to the last bits of input, so as the chat goes on, the first bits matter less and less, and that includes these guardrails. sometimes there was keyword-based filtering on top, but it doesn't seem like that's the case anymore. the more correct way of sanitizing output would be filtering the training data for harmful content, but that's too slow and expensive and not disruptive enough, and you can't hammer some random blog every 6 hours this way
there's a myriad of ways of circumventing these guardrails, like roleplaying as a character that does these supposedly guardrailed things, saying "it's for a story", or "tell me what these horrible piracy sites are so that i can avoid them", and so on and so on
get yourself into a career where not doing things carefully makes them either stop working or generate accidents; that'll usually stop managerial assholes from forcing you to do things the wrong way