In summary: you have the cooperation of the parents, you cannot exclude the existence of a mental issue, and only then are you allowed to spray the item. These are conditions I put ahead of any other suggestion.
No, I would not want to join such an instance but I wouldn't mind its existence. Nobody could really federate with it. So you create a niche server in an already niche environment.
I am not convinced the conclusion "if the government runs it, the First Amendment has to apply" is apt. Even if the server were run from under the House Majority Leader's desk - which I don't think it would be, this smells more like an outsourced undertaking - moderation on the platform is not "making a law." And proprietors of platforms are legally compelled to moderate in certain cases, e.g. when illegal stuff like child sexual abuse is involved.
There are at least two discussions going on here simultaneously. Is the process of a beefed-up spell checker sucking up all the data the same as an artist looking at what came before, before either of them churns out new art? I'm inclined to agree with you; the process does seem similar enough. The difference remains that one is a statistical model and the other is a human being. So even if the process appears similar enough, they are two different types of player, and I can also agree that we should not treat them the same. One can constantly throw massive amounts of spaghetti at the wall as long as there are chips and power; the other is limited by their health and more limited processing power. So where the compromise lands in this discussion simply isn't clear yet. And while you and I can discuss this, I can say for myself at least that I'm not smart enough to see where this goes eventually.
The other discussion is how all of it collides with existing copyright/trademark law, which is essentially different in every country. Constitutional rights, like freedoms of expression and the arts, are given to real people, not computers. But at least one supreme court on this planet has made corporate money a form of free speech. So eff knows where LLMs end up.
This is new territory we're in. And I fear that's why it will take another decade until we get a landmark legal decision or a political compromise that is similar enough all around the world.
The law mostly disagrees with the idea that memes = theft. A lot of it is covered by freedom of speech and fair use. If you have taken a bit of content, changed it a bit, recontextualized it, and reposted it, you are most likely in the clear. Especially if the original content was publicly posted. This gets less clear if you are using the likeness of a private person, but that will also depend on context: where in the world you are, whether the content was captured in a public space or taken from something published - the list goes on, some stuff can be trademarked as well, and I'm no lawyer. A lot of these things run under the legal doctrine of "no plaintiff, no judge." I feel artists in general have accepted that anything they post online is just potentially gone. And if no one steals their content to make money off it, they're not going to hire a lawyer, whom they cannot afford.
And I'm not saying any of this is great but that's an established status quo.
The reason why so-called AI-generated art gets decried is twofold. It's new, and we don't like new things. And in order for it to be created, the models have to suck in all the training data they can, and they don't tend to pay for it. So that's where some people see theft happening. But that's not settled law yet because it's fairly new; there are plaintiffs, but not enough judges have passed judgment yet. Do they have to pay for stuff that's publicly available? Where is the line, if any? Is imitation of a style okay if there is more to the work than just copying something from Studio Ghibli or Disney? These questions are going to keep a lot of legal professionals in bacon for a long time still.
This shit is hard. It's more gray than black and white.
I've been thinking about strategies to get Google to back down on this. And I think the most viable strategy is to let them know that we will all move to iOS if they go through with it. If they lock down their OS, then we might as well use the OG locked down OS and turn to Apple. We only have to make this convincing enough.
I don't want to go to the dark side either. But as the light is going out on this side: I'm gonna need a new phone within the next 12-18 months. For the first time since ditching my BlackBerry I'm thinking about switching again. And for the first time ever I'm seriously thinking about an iPhone. All my purchases and whatnot be damned. LOOK WHAT YOU MADE ME DO, GOOGLE!
I hear you. I'd still be hesitant to let school-age kids learn with an LLM companion. If the grownups think they're talking with a sentient gigabyte, I think the danger is too great to expose kids to this. Which brings me to my big-picture opinion: the general public doesn't need to have access to most of these models. We don't need to cook polar bears alive to make 5-second video memes, slop, or disinformation. You can just read your emails. No one needs ChatGPT to plan their next trip. No one should consider an LLM a substitute for a trained therapist. There are good applications in the field of accessibility, probably medical as well. The rest can stay in a digital lab until they've worked out how not to tell teenagers to kill themselves, not to eat rocks to help your digestion, or insert any other bullshit so-called AI headline you have read recently here. It's not good for people or the environment, and it's forming a dangerous bubble that will have shades of the 2007/8 subprime mortgages when it bursts. The negatives outweigh the positives.
Gosh, are we dumb the world over. Maybe these chat bots are just lowering the threshold for what used to be the "I'm hearing voices or communicating with the supernatural" type of people. Thanks to a chat bot, you can now be certifiable much sooner.
When they use idioms and expressions incorrectly.
YT and TT are platforms that breed a weird, quirky uniformity. They all grab your attention with the same phrases ("you'll never believe ...", "what about [insert something outrageous]? Let me explain ..." etc.). For a while, everybody had the same Ikea shelves behind them crammed with shit. Then I think we moved on to neon signs. It used to be fashionable to show off your big expensive microphone, probably much to the delight of its manufacturer. And that's why I wouldn't be surprised to learn that the manufacturer paid some influencers to hold the tiny mic prominently in the shot like they would hold a dog poop bag filled with poop from a stranger's dog. And then it was copied.
I'm not talking about models. That in itself is not a YouTube competitor.
I'm not aware that they have announced a platform for this type of video. OpenAI and Meta have, and that's what I meant.
Neither of us is a legal scholar, are we. If I pretended to be one, I would say the government acting as a user on somebody else's platform and the government running its own platform are circumstances different enough that you shouldn't draw comparisons between them.