Technology

267 readers
542 users here now

Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles have to be recent, not older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as I can, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:

founded 2 months ago
  • As nickel prices plunge, Indonesia’s nickel processors are considering layoffs.
  • Tens of thousands work in the world’s largest nickel processing zone.
  • The risky jobs entail trade-offs between income and safety.
  • Government must regulate age assurance providers to protect users’ privacy and security as digital platforms start to implement the Online Safety Act.
  • Users are given no choice over how they verify age, with platforms such as Reddit, Bluesky and Grindr choosing providers with problematic privacy policies.
  • Data protection law is not enough to protect users.
  • There are growing threats to free expression, as platforms increasingly place features and content, such as direct messaging, behind age gates, going beyond the Act’s intended focus on restricting access to adult content.

Open Rights Group has warned of serious privacy and security risks for people in the UK as online platforms start to ask users to verify their age, as required by the Online Safety Act. There are also freedom of expression harms as platforms require age verification to access features and content.


As they become increasingly isolated, people are treating AI chatbots as friends and even lovers. We have to fix the broken society that made this possible.


Gustafsson was the CEO of Escobar Inc., a corporation registered in Puerto Rico that held successor-in-interest rights to the persona and legacy of Pablo Escobar, the deceased Colombian narco-terrorist and late head of the Medellín Cartel. Escobar Inc. used Pablo Escobar’s likeness and persona to market and sell purported consumer products to the public.

From July 2019 to November 2023, Gustafsson identified existing products in the marketplace that were being manufactured and sold to the public. He then used the Escobar persona to market similar and competing products purportedly sold by Escobar Inc., advertising them at prices substantially lower than the existing counterparts sold by other companies.

Gustafsson then purportedly sold the products – including an Escobar Flamethrower, an Escobar Fold Phone, an Escobar Gold 11 Pro Phone, and Escobar Cash (marketed as a “physical cryptocurrency”) – to customers, receiving payments via PayPal, Stripe, and Coinbase, among other payment processors, as well as by bank and wire transfers.

Despite receiving customer payments, Gustafsson did not deliver the Escobar Inc. products to paying customers because the products did not exist.

In furtherance of the scheme, Gustafsson sent crudely made samples of the purported Escobar Inc. products to online technology reviewers and social media influencers to attempt to increase the public’s demand for them. For example, Gustafsson sent Samsung Galaxy Fold Phones wrapped in gold foil and disguised as Escobar Inc. phones to online technology reviewers to attempt to induce victims who watched the online reviews into buying the products that never would be delivered.

Also, rather than sending paying customers the actual products, Gustafsson mailed them a “Certificate of Ownership,” a book, or other Escobar Inc. promotional materials so there was a record of mailing from the company to the customer. When a paying customer sought a refund because the product was never delivered, Gustafsson fraudulently referred the payment processor to the proof of mailing for the Certificate of Ownership or other material as evidence that the product itself had been shipped and received, so the refund request would be denied.

Gustafsson also caused bank accounts to be opened under his name and entities he controlled to be used as funnel accounts – bank accounts into which he deposited and withdrew proceeds derived from his criminal activities. The purpose was to conceal and disguise the nature, location, source, ownership, and control of the proceeds. The bank accounts were located in the United States, Sweden, and the United Arab Emirates.


Sweden has quietly taken a radical step: it is now illegal to purchase online sexual acts. This move advances Sweden’s long-standing “end demand” policy model for tackling sexual services from the physical realm, into the digital. Yet it seems to overlook the significant differences between the two spheres – in terms of behaviour models, profiles, and market dynamics – and how such differences may be taken into account when determining the persuasiveness of the law’s rationale. This becomes especially clear when measured against the protections enshrined under Article 8 of the European Convention on Human Rights (ECHR) and recent Strasbourg case law.

While the criminalisation of the purchase of in-person sexual services has been judged to be compatible with Article 8, the underlying reasoning rests on factors that do not translate to the online sphere: combatting prostitution and human trafficking, a lack of consensus on sex work policy across Europe, and an inability to parse the harms caused by the law from the harms caused by sex work itself. Sweden’s extension of its “end demand” policy into digital sex work thus risks overstepping the boundaries of Article 8 of the ECHR and reveals how laws that are directly transplanted from the offline to the online sphere without due thought may lead to the erosion of private digital rights.


For over a decade, MEGA has been the trusted choice for secure, encrypted file sharing. But not every file transfer needs end-to-end encryption. Sometimes, simplicity and speed matter more, especially when dealing with large files or with recipients unfamiliar with the limitations of having their browsers decrypt their downloads.

That’s why we created Transfer.it, a new service from MEGA designed for effortless file transfers, without end-to-end encryption.



Good journalism is making sure that history is actively captured and appropriately described and assessed, and it's accurate to describe things as they currently are as alarming.

And I am alarmed.

Alarm is not a state of weakness, or belligerence, or myopia. My concern does not dull my vision, even though it's convenient to frame it as somehow alarmist, like I have some hidden agenda or bias toward doom. I profoundly dislike the financial waste, the environmental destruction, and, fundamentally, I dislike the attempt to gaslight people into swearing fealty to a sickly and frail pseudo-industry where everybody but NVIDIA and consultancies loses money.

I also dislike the fact that I, and others like me, are held to a remarkably different standard to those who paint themselves as "optimists," which typically means "people that agree with what the market wishes were true." Critics are continually badgered, prodded, poked, mocked, and jeered at for not automatically aligning with the idea that generative AI will be this massive industry, constantly having to prove themselves, as if somehow there's something malevolent or craven about criticism, that critics "do this for clicks" or "to be a contrarian."

I don't do anything for clicks. I don't have any stocks or short positions. My agenda is simple: I like writing, it comes to me naturally, I have a podcast, and it is, on some level, my job to try and understand what the tech industry is doing on a day-to-day basis. It is easy to try and dismiss what I say as going against the grain because "AI is big," but I've been railing against bullshit bubbles since 2021 — the anti-remote work push (and the people behind it), the Clubhouse and audio social networks bubble, the NFT bubble, the made-up quiet quitting panic, and I even, though not as clearly as I wished, called that something was up with FTX several months before it imploded.

This isn't "contrarianism." It's the kind of skepticism of power and capital that's necessary to meet these moments, and if it's necessary to dismiss my work because it makes you feel icky inside, get a therapist or see a priest.

Nevertheless, I am alarmed, and while I have said some of these things separately, based on recent developments, I think it's necessary to say why.

In short, I believe the AI bubble is deeply unstable, built on vibes and blind faith, and when I say "the AI bubble," I mean the entirety of the AI trade.

And it's alarmingly simple, too.

But this isn’t going to be saccharine, or whiny, or simply worrisome. I think at this point it’s become a little ridiculous to not see that we’re in a bubble. We’re in a god damn bubble, it is so obvious we’re in a bubble, it’s been so obvious we’re in a bubble, a bubble that seems strong but is actually very weak, with a central point of failure.

I may not be a contrarian, but I am a hater. I hate the waste, the loss, the destruction, the theft, the damage to our planet and the sheer excitement that some executives and writers have that workers may be replaced by AI — and the bald-faced fucking lie that it’s happening, and that generative AI is capable of doing so.

And so I present to you — the Hater’s Guide to the AI bubble, a comprehensive rundown of arguments I have against the current AI boom’s existence. Send it to your friends, your loved ones, or print it out and eat it.

No, this isn’t gonna be a traditional guide, but something you can look at and say “oh that’s why the AI bubble is so bad.” And at this point, I know I’m tired of being gaslit by guys in gingham shirts who desperately want to curry favour with other guys in gingham shirts but who also have PhDs. I’m tired of reading people talk about how we’re “in the era of agents” that don’t fucking work and will never fucking work. I’m tired of hearing about “powerful AI” that is actually crap, and I’m tired of being told the future is here while having the world’s least-useful, most-expensive cloud software shoved down my throat.

Look, the generative AI boom is a mirage, it hasn’t got the revenue or the returns or the product efficacy for it to matter, everything you’re seeing is ridiculous and wasteful, and when it all goes tits up I want you to remember that I wrote this and tried to say something.


Benyamin Cohen, The Forward.

This story was originally published in the Forward.

Just weeks after Grok echoed neo-Nazi rhetoric and Holocaust denial, Musk unveiled “Baby Grok” — an AI app for children with no clear safeguards

Two weeks after Elon Musk’s Grok chatbot praised Adolf Hitler, suggested Jews control Hollywood, and spewed Holocaust denial, the billionaire entrepreneur announced plans to release a version for children.

It’s called “Baby Grok.”

“We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content,” Musk posted Saturday night on X, the platform he owns. By Sunday afternoon, the tweet had racked up more than 17 million views.

At the moment, Grok is mainly used on X, where users must be at least 13 years old.

It’s a head-spinning move for the world’s richest person, who earlier this month was under fire for allowing his company’s AI system to generate Holocaust denialism and white nationalist talking points.

Musk’s startup, xAI, released the latest version of Grok on July 9. The update — dubbed Grok 4 — was designed to compete with OpenAI’s ChatGPT and Google’s Gemini. Instead, it became the latest flashpoint in the ongoing struggle to put guardrails on generative AI.

Musk’s AI responded to user prompts with far-right tropes. When asked about Jews, Grok claimed they promote hatred toward white people. It echoed neo-Nazi rhetoric. It called for imprisoning Jews in camps. Other answers suggested the Holocaust may have been exaggerated. Some responses have since been deleted, but many remain archived online.

The chatbot’s responses didn’t emerge in a vacuum.

Grok is trained on a wide swath of online content — including posts from X — and like many generative AI systems, it mimics patterns in that data. Grok is the latest in a long line of machines built to “understand” humans — and perhaps the most willing to echo their ugliest impulses.

Just days after Grok’s stream of antisemitic posts, xAI signed a deal with the Department of Defense, worth up to $200 million, to provide the technology to the U.S. military. The company has not publicly stated whether the children’s version will be trained separately or filtered differently from Grok 4.

Musk has faced repeated criticism for amplifying antisemitic content on X, including a post agreeing with the “Great Replacement” theory, a baseless claim that Jews conspire to replace whites in the West.

In January, he posted Holocaust-themed jokes after appearing to perform a Nazi-style salute at an inaugural rally for President Donald Trump. Last year, he visited Auschwitz with right-wing commentator Ben Shapiro and suggested that social media might have helped prevent the Holocaust.

Now, Musk is touting Baby Grok — even as experts warn the industry isn’t ready for such a product. Generative AI models are notoriously difficult to moderate, and child safety advocates have flagged concerns about disinformation, bias and exposure to harmful content.

The announcement comes amid growing concern about the use of generative AI with minors. No federal guidelines currently exist for how child-targeted AI tools should be trained, moderated, or deployed — leaving companies to set their own rules, often without transparency.
