Sandbar_Trekker

joined 7 months ago
[–] Sandbar_Trekker@lemmy.today 11 points 2 weeks ago

Why not create a comparison like "generating 1000 words of your fanfiction consumes as much energy as you do all day" or something that's easier to compare?

Considering that you can generate 1000 words in a single prompt to ChatGPT, the energy to do that would be about 0.3 Wh.

That's about as much energy as a typical desktop uses in roughly 8 seconds of browsing the fediverse (assuming the desktop draws ~150 W).

Or, on the other end of the spectrum, if you're browsing the fediverse on Voyager with a smartphone drawing ~2 W, that's about 9 minutes of browsing (4.5 minutes if you're using a regular browser app, which in my case bumped the draw up to ~4 W).
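If you want to sanity-check that, here's the same back-of-the-envelope math as a quick Python sketch (the ~0.3 Wh per prompt and the device wattages are the estimates above, so the exact outputs land a little under the rounded figures):

```python
# Rough comparison: how long each device can browse before it has used
# as much energy as one ~0.3 Wh ChatGPT prompt. All figures are estimates.
PROMPT_WH = 0.3
devices_w = {"desktop": 150, "phone (Voyager)": 2, "phone (browser)": 4}

for name, watts in devices_w.items():
    seconds = PROMPT_WH / watts * 3600  # Wh / W = hours; x3600 = seconds
    print(f"{name}: ~{seconds:g} s of browsing per prompt")
```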

[–] Sandbar_Trekker@lemmy.today 7 points 2 weeks ago (3 children)

I agree with your comment except that I think you've got the privacy part wrong there. Any company can come in and scrape all the information they want, including upvote and downvote info.

In addition, if you try to delete a comment, it's very likely that it won't be deleted by every instance that federates with yours.

[–] Sandbar_Trekker@lemmy.today 4 points 2 weeks ago (1 children)

I think you mean that you can choose a project that doesn't have an "algorithm" (in the sense that you're conveying).

Anyone can create a project with ActivityPub that has an algorithm for feeding content to you.

[–] Sandbar_Trekker@lemmy.today 5 points 2 weeks ago

I think this would only be acceptable if the "AI-assisted" system kicks in when call volumes are high (when dispatchers are overburdened with calls).

For anyone who's been in a situation where you're frantically trying to get hold of 911 and have to make 10 calls to get through, a system like this would have been really useful to help relieve whatever call volume situation was going on at the time. At least in my experience it didn't matter too much, because the guy had already been dead for a bit.

And for those of you who are dispatchers, I get it: it can be frustrating to get 911 calls all the time for the most ridiculous of reasons. But I still think it would be best if a system like this only kicked in when necessary.

Being able to talk to a human right away is way better than essentially being asked to "press 1 if this is really an emergency, press 2 if this is not an emergency".

[–] Sandbar_Trekker@lemmy.today 14 points 3 weeks ago

I had to click to figure out just what an "AI Browser" is.

It's basically Copilot/Recall, but only for your browser. If the models run locally, the information is protected, and none of it is transmitted, then I don't see a problem with this (although they would have to prove it by being open source). But, as it is, this just looks like a browser with major privacy/security flaws.

At launch, Dia’s core feature is its AI assistant, which you can invoke at any time. It’s not just a chatbot floating on top of your browser, but rather a context-aware assistant that sees your tabs, your open sessions, and your digital patterns. You can use it to summarize web pages, compare info across tabs, draft emails based on your writing style, or even reference past searches.

Reading into it a bit more:

Agrawal is also careful to note that all your data is stored and encrypted on your computer. “Whenever stuff is sent up to our service for processing,” he says, “it stays up there for milliseconds and then it’s wiped.” Arc has had a few security issues over time, and Agrawal says repeatedly that privacy and security have been core to Dia’s development from the very beginning. Over time, he hopes almost everything in Dia can happen locally.

Yeah, the part about sending everything that appears in my browser window (passwords, banking info, etc.) to some other computer for processing makes the other assurances worthless. At least they have plans to get everything running locally eventually, but this is a hard pass for me.

[–] Sandbar_Trekker@lemmy.today 2 points 3 weeks ago

I didn't factor in mobile power usage as much in the equation before because it's fairly negligible. However, I downloaded an app to track my phone's energy use just for fun.

A mobile user browsing the fediverse draws around ~1 W (it depends on the phone, of course, and whether you're on WiFi or LTE, etc.).

For a mobile user on WiFi:
In the 16 seconds it takes a desktop user to burn through the energy of those 2 prompts to ChatGPT, that same mobile user would only use ~0.00444 Wh.

Looking at it another way, a mobile user could browse the fediverse for 18 min before they match the 0.3 Wh that a single prompt to ChatGPT would use.

For a mobile user on LTE:
With Voyager I was getting a rate of ~2 Watts.
With a browser I was getting a rate of ~4 Watts.

So to match the energy of a single prompt to ChatGPT, you could browse the fediverse on Voyager for ~9 minutes, or with a regular browser for ~4.5 minutes.

I'm not sure how accurate this app is, and I didn't test extensively to really nail down exact values, but those numbers sound about right.
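If anyone wants to reproduce those numbers, here's the same arithmetic as a quick sketch (still assuming ~0.3 Wh per prompt; the wattages are just my app's rough readings):

```python
# Minutes of fediverse browsing that add up to one ~0.3 Wh ChatGPT prompt,
# using the rough power draws measured above.
PROMPT_WH = 0.3
draws_w = {"WiFi": 1, "LTE + Voyager": 2, "LTE + browser": 4}

for name, watts in draws_w.items():
    minutes = PROMPT_WH / watts * 60  # Wh / W = hours; x60 = minutes
    print(f"{name}: ~{minutes:g} min per prompt")

# Energy a 1 W phone uses during the ~16 second desktop window above:
print(1 * 16 / 3600, "Wh")  # ≈ 0.0044 Wh
```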

[–] Sandbar_Trekker@lemmy.today 33 points 3 weeks ago

My question simply relates to whether I can support the software development without supporting lemmy.ml.

No. You can't support Lemmy without supporting lemmy.ml because the developers use lemmy.ml for testing. They haven't provided a way for users to direct their donations to one or the other.

That's why others are suggesting you should just support a different but similar fediverse project like PieFed or Mbin instead.

[–] Sandbar_Trekker@lemmy.today 2 points 3 weeks ago

Yeah, if you're relying on them to be right about anything, you're using it wrong.

A fine-tuned model will go a lot further if you're looking for something specific, but they mostly excel at summarizing text or brainstorming ideas.

For instance, if you're a Dungeon Master in D&D and the group goes off script, you can quickly generate the back story of some random character that you didn't expect the players to do a deep dive on.

[–] Sandbar_Trekker@lemmy.today 2 points 3 weeks ago

Yeah, ~100-133 depending on how much energy your electric kettle uses.

[–] Sandbar_Trekker@lemmy.today 6 points 3 weeks ago (2 children)

Depends on the electric kettle; the first few I looked at on Amazon run at ~600-800 W.

So, on the lower end there, you're looking at about 0.166 Wh every second.

So a single prompt to ChatGPT (0.3 Wh) uses about the same energy as an electric kettle does in less than 2 seconds.
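Here's that arithmetic as a quick sketch, taking the 600 W low end and the usual ~0.3 Wh-per-prompt estimate:

```python
# How quickly a 600 W kettle burns through the energy of one ~0.3 Wh prompt.
KETTLE_W = 600
PROMPT_WH = 0.3

wh_per_second = KETTLE_W / 3600                 # ≈ 0.166 Wh every second
seconds_per_prompt = PROMPT_WH / wh_per_second  # ≈ 1.8 seconds
print(f"{wh_per_second:.3f} Wh/s, {seconds_per_prompt:.1f} s per prompt")
```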

[–] Sandbar_Trekker@lemmy.today 3 points 3 weeks ago (8 children)

While I agree that their comment didn't add much to the discussion, it's possible that you used more electricity to type out your response than it did for them to post theirs.

It's estimated that a single ChatGPT prompt uses up ~0.3 Wh of electricity.

If @Empricorn@feddit.nl is on a desktop computer browsing the internet using electricity at a rate of ~150 W, and @TropicalDingdong@lemmy.world is on a smartphone, then you would only have ~16 seconds to type up a response before you begin using more electricity than they did.

Some math:
150 Wh / 60 min / 60 sec = 0.041666 Wh every second

Or about 2.5 Wh every minute.
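Or as a tiny script, using the ~150 W desktop and ~0.3 Wh-per-prompt figures from above:

```python
# Energy rate of a ~150 W desktop, and how long it can run before matching
# the estimated ~0.3 Wh per ChatGPT prompt.
DESKTOP_W = 150
PROMPT_WH = 0.3

wh_per_second = DESKTOP_W / 3600    # ≈ 0.0417 Wh every second
wh_per_minute = wh_per_second * 60  # ≈ 2.5 Wh every minute
seconds_per_prompt = PROMPT_WH / wh_per_second  # ≈ 7 s per prompt, ~14-16 s for two
print(wh_per_second, wh_per_minute, seconds_per_prompt)
```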

[–] Sandbar_Trekker@lemmy.today 22 points 3 weeks ago (1 children)

I think you missed the timeline at the very end of the page: they reported the vulnerability back in April, were rewarded for finding it, it was patched in May, and they were allowed to publicize it as of today.

 

A video that goes over some of the issues with AI-generated content today, and some attempts to prove that something is real.
