Yeah, this is what I'm going to do if I think about getting another cat again. These two are probably already gone. I was just entranced yesterday and my imagination was running a little too wild haha.
I don't think they're bonded. They were just delivered to the PetCo from a rescue at the same time.
Honestly, moving would be devastating at this point. I'd probably have to pay $200 more for a place 1/2 the size (and I currently have a personal garage and a balcony). I'm not gonna risk it because I saw something cute lol
I think this is the right answer. I've been a little more flippant with rules in my life lately and I think I needed someone else to tell me this. I don't really want to give my landlord any reason to raise the rent more or kick me out.
Oh no, I meant could you explain the joke? I believe I get it (shitty AI will replace experts). I was just leaving a comment about how systems that use LLMs to check the work of other LLMs do better than systems that don't. And that when I've introduced AI systems to stakeholders making consequential decisions, they tend to want a human in the loop. While also saying that this will probably change over time as AI systems get better and we get more used to using them. Is that a good thing? It will have to be judged on a case-by-case basis.
Could you explain?
That's why suspiciously high accuracy in ML always makes me squint... I don't trust it. As an AI researcher and engineer, you have to do the due diligence of understanding your data well before you start training.
True! I'm an AI researcher, and using an AI agent to check the work of another agent does improve accuracy! I could see things becoming more and more like this, with teams of agents creating, reviewing, and approving. If you use GitHub Copilot agent mode, though, it involves constant user interaction before anything is actually run. And I imagine (and can testify as someone who has installed different ML algorithms/tools on government hardware) that the operators/decision makers want to check the work, or understand the "thought process," before committing to an action.
Will this be true forever as people become more used to AI as a tool? Probably not.
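For what it's worth, the generate-then-review pattern I'm describing is conceptually simple. Here's a minimal toy sketch in Python; `generate` and `review` are hypothetical stand-ins for LLM calls (a real system would invoke separate model instances), and the loop just shows how a reviewer's critique gets fed back to the generator until the draft passes:

```python
# Toy sketch of an agent-review loop. `generate` and `review` are
# stand-ins for LLM calls; in a real system each would hit a model API.

def generate(task: str) -> str:
    # Toy "generator": returns a first-pass draft for the task.
    return f"draft answer for: {task}"

def review(task: str, draft: str) -> tuple[bool, str]:
    # Toy "reviewer": approves only drafts it has already revised.
    if draft.startswith("revised"):
        return True, draft
    return False, "revised " + draft

def solve_with_review(task: str, max_rounds: int = 3) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        approved, feedback = review(task, draft)
        if approved:
            return draft
        draft = feedback  # feed the critique back in as the next draft
    return draft

print(solve_with_review("summarize the report"))
# prints: revised draft answer for: summarize the report
```

The key design point is the cap on review rounds: without `max_rounds`, two disagreeing agents could loop forever, and a human-in-the-loop system would surface the unresolved draft to an operator instead.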
Over 80,000 is what the organizers said at the protest.
Oh, I completely agree that we are turning everything to shit in about a million different ways. And as oligarchs take over more, while AI is a huge money-maker, I can totally see regulation around it being scarce or entirely non-existent. So as it's introduced into areas like the DoD, health, transportation, crime, etc., it's going to be sold to the government first and its ramifications considered second. This has also been my experience as someone working at the intersection of AI research and government application. I saw Elon's companies, employees, and tech immediately get contracts without consultation with FFRDCs or competition from other for-profit entities. I've also seen people on the ground say, "I'm not going to use this unless I can trust the output."
I'm much more on the side of "technology isn't inherently bad, but our application of it can be." Of course that can also be argued against with technology like atom bombs or whatever but I lean much more on that side.
Anyway, I really didn't miss the point. I just wanted to share an interesting research result that this comic reminded me of.