this post was submitted on 08 Jun 2025
17 points (94.7% liked)

Artificial Intelligence

1648 readers
35 users here now

Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow these guidelines for a vibrant and respectful community:

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

founded 2 years ago

Original question by @SpiderUnderUrBed@lemmy.zip

Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic good AI-generated videos are getting, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. All big tech will comply immediately (at least on the surface, for consumer-facing products), and then we will probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better. I don't see many downsides.
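To make the "metadata can be removed" point concrete: a minimal, stdlib-only Python sketch of how trivially a PNG metadata label could be stripped. The `ai_generator` key is a hypothetical label, not any real standard; the chunk layout (length, type, data, CRC) and the `tEXt`/`iTXt`/`zTXt` text-chunk names follow the PNG specification.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_text(key: bytes, value: bytes) -> bytes:
    """Create a 1x1 grayscale PNG carrying a tEXt metadata label."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n"
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"tEXt", key + b"\x00" + value)
            + png_chunk(b"IDAT", idat)
            + png_chunk(b"IEND", b""))

def strip_text_chunks(png: bytes) -> bytes:
    """Copy the PNG, dropping every textual metadata chunk."""
    out, pos = png[:8], 8  # keep the 8-byte signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out += png[pos:end]
        pos = end
    return out
```

A few lines of byte shuffling, no image library needed, and the label is gone while the image itself is untouched — which is why the question frames the watermark as deterrence rather than a hard guarantee.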

top 7 comments
[–] Genius@lemmy.zip 4 points 3 days ago

Yes please. I want labels on my food saying what it contains, and I want labels on my art too.

[–] j4k3@lemmy.world 3 points 3 days ago

It won't change the scenario at all and will goad people into creating worse stuff. The best thing possible is to normalize both the positive and negative aspects of this, as happened (much more slowly) with Photoshop and digital-editing fakes. Those were actually pretty high quality even before AI, but were relegated to the weirder corners of the internet.

The more normal it is to be skeptical of recorded media, the better. AI can be tweaked and tuned in private on enthusiast-level hardware. I can and have done fine-tuning with an LLM and a CNN at home and offline; it is not all that hard to do. If the capabilities are ostracized, the potential to cause far larger problems grows. A person is far less likely to share their stories and creations when they get a poor reception, and may withdraw further and further into themselves until they make use of what they have created.

[–] Sandbar_Trekker@lemmy.today 2 points 3 days ago* (last edited 3 days ago)

What does the watermark really give you?

It gives a false sense that you can tell what's AI and what's not, especially when anything created maliciously is likely going to have that watermark removed anyway. Pandora's box is already open on those abilities and there's no putting the lid back.

And, even in the case of non-maliciously generated work, if you suspect that something is AI but it doesn't have a watermark, do you start investigating how a video/image/story(text) was created? Doesn't that mean any artist or author now has to prove their innocence just because someone suspects their work had some form of AI involved at some point?

It's bad enough that they have to worry about those accusations from average people to begin with, but now you're giving ammunition for anyone (or any corporation) to drag them through the legal system based on what "appears" to be AI-generated.

Edit: typo

Sure. But when does it count as AI-generated?

If I used AI to brainstorm something, then wrote it myself? If I wrote a poorly phrased text myself and asked AI to rewrite it with proper phrasing?

[–] wabafee@lemmy.world 1 point 3 days ago

I think it should be the other way around: videos should carry some mark that they're real.

[–] freshcow@lemmy.world 1 point 3 days ago

I think you probably bump into First Amendment issues if you try to mandate how and what can be shared in general. You may be able to regulate it in specific contexts, for example in advertisements.

I think the only good way to do it would be to have something enforceable on companies who publish AI models, rather than on individuals who share images.

[–] NegentropicBoy@lemmy.world 1 point 3 days ago

We could have "source: ..."