Technology


Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles must be recent, no older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:

founded 2 months ago

As they become increasingly isolated, people are treating AI chatbots as friends and even lovers. We have to fix the broken society that made this possible.


Gustafsson was the CEO of Escobar Inc., a corporation registered in Puerto Rico that held successor-in-interest rights to the persona and legacy of Pablo Escobar, the deceased Colombian narco-terrorist and late head of the Medellín Cartel. Escobar Inc. used Pablo Escobar’s likeness and persona to market and sell purported consumer products to the public.

From July 2019 to November 2023, Gustafsson identified existing products in the marketplace that were being manufactured and sold to the public. He then used the Escobar persona to market and advertise similar and competing products purportedly being sold by Escobar Inc., advertising them at a price substantially lower than existing counterparts being sold by other companies.

Gustafsson then purportedly sold the products – including an Escobar Flamethrower, an Escobar Fold Phone, an Escobar Gold 11 Pro Phone, and Escobar Cash (marketed as a “physical cryptocurrency”) – to customers, receiving payments via PayPal, Stripe, and Coinbase, among other payment processors, as well as by bank and wire transfer.

Despite receiving customer payments, Gustafsson did not deliver the Escobar Inc. products to paying customers because the products did not exist.

In furtherance of the scheme, Gustafsson sent crudely made samples of the purported Escobar Inc. products to online technology reviewers and social media influencers to attempt to increase the public’s demand for them. For example, Gustafsson sent Samsung Galaxy Fold Phones wrapped in gold foil and disguised as Escobar Inc. phones to online technology reviewers to attempt to induce victims who watched the online reviews into buying the products that never would be delivered.

Also, rather than sending paying customers the actual products, Gustafsson mailed them a “Certificate of Ownership,” a book, or other Escobar Inc. promotional materials so there was a record of mailing from the company to the customer. When a paying customer attempted to obtain a refund when the product was never delivered, Gustafsson fraudulently referred the payment processor to the proof of mailing for the Certificate of Ownership or other material as proof that the product itself was shipped and that the customer had received it so the refund requests would be denied.

Gustafsson also caused bank accounts to be opened under his name and entities he controlled to be used as funnel accounts – bank accounts into which he deposited and withdrew proceeds derived from his criminal activities. The purpose was to conceal and disguise the nature, location, source, ownership, and control of the proceeds. The bank accounts were located in the United States, Sweden, and the United Arab Emirates.


Sweden has quietly taken a radical step: it is now illegal to purchase online sexual acts. This move extends Sweden’s long-standing “end demand” policy model for tackling sexual services from the physical realm into the digital. Yet it seems to overlook the significant differences between the two spheres – in terms of behaviour models, profiles, and market dynamics – and how such differences should be taken into account when assessing the persuasiveness of the law’s rationale. This becomes especially clear when measured against the protections enshrined under Article 8 of the European Convention on Human Rights (ECHR) and recent Strasbourg case law.

While the criminalisation of the purchase of in-person sexual services has been judged to be compatible with Article 8, the underlying reasoning rests on factors that do not translate to the online sphere: combatting prostitution and human trafficking, a lack of consensus on sex work policy across Europe, and an inability to parse the harms caused by the law from the harms caused by sex work itself. Sweden’s extension of its “end demand” policy into digital sex work thus risks overstepping the boundaries of Article 8 of the ECHR and reveals how laws that are directly transplanted from the offline to the online sphere without due thought may lead to the erosion of private digital rights.


For over a decade, MEGA has been the trusted choice for secure, encrypted file sharing. But not every file transfer needs end-to-end encryption. Sometimes, simplicity and speed matter more, especially when dealing with large files or with recipients unfamiliar with the limitations of in-browser decryption of downloads.

That’s why we created Transfer.it, a new service from MEGA designed for effortless file transfers, without end-to-end encryption.



Good journalism is making sure that history is actively captured and appropriately described and assessed, and it's accurate to describe things as they currently are as alarming.

And I am alarmed.

Alarm is not a state of weakness, or belligerence, or myopia. My concern does not dull my vision, even though it's convenient to frame it as somehow alarmist, like I have some hidden agenda or bias toward doom. I profoundly dislike the financial waste, the environmental destruction, and, fundamentally, I dislike the attempt to gaslight people into swearing fealty to a sickly and frail pseudo-industry where everybody but NVIDIA and the consultancies loses money.

I also dislike the fact that I, and others like me, are held to a remarkably different standard than those who paint themselves as "optimists," which typically means "people who agree with what the market wishes were true." Critics are continually badgered, prodded, poked, mocked, and jeered at for not automatically aligning with the idea that generative AI will be this massive industry, constantly having to prove themselves, as if somehow there's something malevolent or craven about criticism, that critics "do this for clicks" or "to be a contrarian."

I don't do anything for clicks. I don't have any stocks or short positions. My agenda is simple: I like writing, it comes to me naturally, I have a podcast, and it is, on some level, my job to try and understand what the tech industry is doing on a day-to-day basis. It is easy to try and dismiss what I say as going against the grain because "AI is big," but I've been railing against bullshit bubbles since 2021 — the anti-remote work push (and the people behind it), the Clubhouse and audio social networks bubble, the NFT bubble, the made-up quiet quitting panic, and I even, though not as clearly as I wished, called that something was up with FTX several months before it imploded.

This isn't "contrarianism." It's the kind of skepticism of power and capital that's necessary to meet these moments, and if it's necessary to dismiss my work because it makes you feel icky inside, get a therapist or see a priest.

Nevertheless, I am alarmed, and while I have said some of these things separately, based on recent developments, I think it's necessary to say why.

In short, I believe the AI bubble is deeply unstable, built on vibes and blind faith, and when I say "the AI bubble," I mean the entirety of the AI trade.

And it's alarmingly simple, too.

But this isn’t going to be saccharine, or whiny, or simply worrisome. I think at this point it’s become a little ridiculous to not see that we’re in a bubble. We’re in a god damn bubble, it is so obvious we’re in a bubble, it’s been so obvious we’re in a bubble, a bubble that seems strong but is actually very weak, with a central point of failure.

I may not be a contrarian, but I am a hater. I hate the waste, the loss, the destruction, the theft, the damage to our planet and the sheer excitement that some executives and writers have that workers may be replaced by AI — and the bald-faced fucking lie that it’s happening, and that generative AI is capable of doing so.

And so I present to you — the Hater’s Guide to the AI bubble, a comprehensive rundown of arguments I have against the current AI boom’s existence. Send it to your friends, your loved ones, or print it out and eat it.

No, this isn’t gonna be a traditional guide, but something you can look at and say “oh, that’s why the AI bubble is so bad.” And at this point, I know I’m tired of being gaslit by guys in gingham shirts who desperately want to curry favour with other guys in gingham shirts who also have PhDs. I’m tired of reading people talk about how we’re “in the era of agents” that don’t fucking work and will never fucking work. I’m tired of hearing about “powerful AI” that is actually crap, and I’m tired of being told the future is here while having the world’s least-useful, most-expensive cloud software shoved down my throat.

Look, the generative AI boom is a mirage, it hasn’t got the revenue or the returns or the product efficacy for it to matter, everything you’re seeing is ridiculous and wasteful, and when it all goes tits up I want you to remember that I wrote this and tried to say something.


Ofcom warns traditional public-service TV is endangered:

  - Recommendation for prominence on third-party platforms part of six-point action plan
  - Urgent clarity needed from Government on how TV will be distributed to reach audiences in future
  - Broadcasters must work more together, and with global tech firms, to survive

Urgent steps must be taken to ensure that public service media content is easy to find and discover on third-party platforms, under new Ofcom recommendations to secure the system’s survival.


Benyamin Cohen, The Forward.

This story was originally published in the Forward.

Just weeks after Grok echoed neo-Nazi rhetoric and Holocaust denial, Musk unveiled “Baby Grok” — an AI app for children with no clear safeguards

Two weeks after Elon Musk’s Grok chatbot praised Adolf Hitler, suggested Jews control Hollywood, and spewed Holocaust denial, the billionaire entrepreneur announced plans to release a version for children.

It’s called “Baby Grok.”

“We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content,” Musk posted Saturday night on X, the platform he owns. By Sunday afternoon, the tweet had racked up more than 17 million views.

At the moment, Grok is mainly used on X, where users must be at least 13 years old.

It’s a head-spinning move for the world’s richest person, who earlier this month was under fire for allowing his company’s AI system to generate Holocaust denialism and white nationalist talking points.

Musk’s startup, xAI, released the latest version of Grok on July 9. The update — dubbed Grok 4 — was designed to compete with OpenAI’s ChatGPT and Google’s Gemini. Instead, it became the latest flashpoint in the ongoing struggle to put guardrails on generative AI.

Musk’s AI responded to user prompts with far-right tropes. When asked about Jews, Grok claimed they promote hatred toward white people. It echoed neo-Nazi rhetoric. It called for imprisoning Jews in camps. Other answers suggested the Holocaust may have been exaggerated. Some responses have since been deleted, but many remain archived online.

The chatbot’s responses didn’t emerge in a vacuum.

Grok is trained on a wide swath of online content — including posts from X — and like many generative AI systems, it mimics patterns in that data. Grok is the latest in a long line of machines built to “understand” humans — and perhaps the most willing to echo their ugliest impulses.

Just days after Grok’s stream of antisemitic posts, xAI signed a deal with the Department of Defense, worth up to $200 million, to provide the technology to the U.S. military. The company has not publicly stated whether the children’s version will be trained separately or filtered differently from Grok 4.

Musk has faced repeated criticism for amplifying antisemitic content on X, including a post agreeing with the “Great Replacement” theory, a baseless claim that Jews conspire to replace whites in the West.

In January, he posted Holocaust-themed jokes after appearing to perform a Nazi-style salute at an inaugural rally for President Donald Trump. Last year, he visited Auschwitz with right-wing commentator Ben Shapiro and suggested that social media might have helped prevent the Holocaust.

Now, Musk is touting Baby Grok — even as experts warn the industry isn’t ready for such a product. Generative AI models are notoriously difficult to moderate, and child safety advocates have flagged concerns about disinformation, bias and exposure to harmful content.

The announcement comes amid growing concern about the use of generative AI with minors. No federal guidelines currently exist for how child-targeted AI tools should be trained, moderated, or deployed — leaving companies to set their own rules, often without transparency.


As Silicon Valley’s influence expands, a new belief system is quietly reshaping society. This piece explores how tech elites are redefining power, the risks to human agency, and what it will take to reclaim our collective future


Imagine you live in the western United States and are planning a vacation to Europe, returning with a connecting flight somewhere on the east coast. When you arrive in the U.S., the government may invoke the Border Search Exception to search — and even fully copy — your electronic devices, all without a warrant. But because of the chaotic state of Fourth Amendment law for border searches, you’ll face one rule if you fly into Logan International Airport in Boston, an entirely different rule if you arrive at Hartsfield Airport in Atlanta, and a third rule if you land in Dulles Airport outside Washington DC. A fourth rule will govern searches if you land at JFK or LaGuardia Airport in New York City, but if you land just outside New York at Newark International Airport, a fifth rule applies. And if you opt to avoid a connecting flight and land directly on the west coast, a sixth rule will be used.

With the stakes as high as the government being able to copy every sensitive email, photo, and document on your phone — without a warrant — how has the law become so convoluted? It is because each of those airports is located in a different appellate court’s jurisdiction, and those courts have disagreed on the scope of the Border Search Exception to the Fourth Amendment’s warrant requirement.

Warrantless border searches became a feature of U.S. law long ago, well before the digital age. The power of Customs agents to search property entering the United States was established in the late 1700s, and the Supreme Court acknowledged warrantless border search authority in cases in the late 19th century and early 20th century. It formally recognized border searches by Customs agents as an exception to the Fourth Amendment’s warrant requirement in the 1977 case U.S. v. Ramsey.

This out-of-date rule, created to help detect dangerous contraband as it is smuggled into the country, is a poor fit for the digital age and dangerously broad when applied to personal electronic devices like smartphones. Now that individuals carry as much sensitive information in their pocket as they could possibly store in their entire home, the Border Search Exception needs an update.

In 2014 the Supreme Court addressed this precise problem for another exception to the Fourth Amendment’s warrant requirement: searches conducted during arrests. The Court refined the Search Incident To Arrest Exception to the warrant requirement, blocking its application to electronic devices. It noted that “Cell phones differ in both a quantitative and a qualitative sense from other objects” individuals carry and that “[p]rior to the digital age, people did not typically carry a cache of sensitive personal information with them as they went about their day.” Though these same considerations apply at the border, the Supreme Court has not yet stepped in to similarly limit the Border Search Exception to the Fourth Amendment’s warrant requirement. Instead, the law has become a complex patchwork, with appellate courts setting out a range of rules.


Simply using extra electricity to power some Christmas lights or a big fish tank shouldn’t bring the police to your door. In fact, in California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise “smart” meter data in most cases.

Despite this, Sacramento’s power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. The Electronic Frontier Foundation (EFF) is seeking to end Sacramento’s dragnet surveillance of energy customers and has asked for a court order to stop this practice for good.


Republished from DailyKos under their terms.

Donald Trump put up a video of President Barack Obama being arrested by the FBI in the Oval Office as you can see in the screen grab. Trump is exceedingly happy about it, of course.

The video starts with Obama and a number of Democrats all saying "No One Is Above The Law," then a clown picture, and then footage of the FBI grabbing Obama. Then Obama is shown in an orange prison jumpsuit in a prison hallway, and finally in a jail cell.

Here it is on Trump's Truth Social account.

Smaller versions without the No One Is Above The Law intro:

Mario Nawfal has a link to it up on his X account.

Here's the direct link to it, still on X.

It's really a small screen video, and the soundtrack sounds like something from the Village People.

This is also up on Trump's Truth Social.

Samantha Power was Obama's Ambassador to the United Nations. Under Joe Biden, she was the Administrator of the United States Agency for International Development (USAID).

What money? Besides her political and public service career, she wrote four books. She had $20 million before she became the USAID Administrator and $30 million when she left. Elon Musk wondered how she went from $6.7 million to $30 million in three years; he obviously had the wrong starting point. Trump and Musk were doing everything they could to discredit her and USAID.


Many people sense that the United States is undergoing an epistemic crisis, a breakdown in the country’s collective capacity to agree on basic facts, distinguish truth from falsehood, and adhere to norms of rational debate.

This crisis encompasses many things: rampant political lies, misinformation, and conspiracy theories; widespread belief in demonstrable falsehoods (“misperceptions”); intense polarization in preferred information sources; and collapsing trust in institutions meant to uphold basic standards of truth and evidence (such as science, universities, professional journalism, and public health agencies).

According to survey data, over 60% of Republicans believe Joe Biden’s presidency was illegitimate. 20% of Americans think vaccines are more dangerous than the diseases they prevent, and 36% think the specific risks of COVID-19 vaccines outweigh their benefits. Only 31% of Americans have at least a “fair amount” of confidence in mainstream media, while a record-high 36% have no trust at all.

What is driving these problems? One influential narrative blames social media platforms like Facebook, Twitter (now X), and YouTube. In the most extreme form of this narrative, such platforms are depicted as technological wrecking balls responsible for shattering the norms and institutions that kept citizens tethered to a shared reality, creating an informational Wild West dominated by viral falsehoods, bias-confirming echo chambers, and know-nothing punditry.

The timing is certainly suspicious. Facebook launched in 2004, YouTube in 2005, and Twitter in 2006. As they and other platforms acquired hundreds of millions of users over the next decade, the health of American democracy and its public sphere deteriorated. By 2016, when Donald Trump was first elected president, many experts were writing about a new “post-truth” or “misinformation” age.

Moreover, the fundamental architecture of social media platforms seems hostile to rational discourse. Algorithms that recommend content prioritize engagement over accuracy. This can amplify sensational and polarizing material or bias-confirming content, which can drag users into filter bubbles. Meanwhile, the absence of traditional gatekeepers means that influencers with no expertise or ethical scruples can reach vast audiences.

The dangerous consequences of these problems seem obvious to many casual observers of social media. And some scientific research corroborates this widespread impression. For example, a systematic review of nearly five hundred studies finds suggestive evidence for a link between digital media use and declining political trust, increasing populism, and growing polarization. Evidence also consistently shows an association between social media use and beliefs in conspiracy theories and misinformation.

But there are compelling reasons to be skeptical that social media is a leading cause of America’s epistemic challenges. The “wrecking ball” narrative exaggerates the novelty of these challenges, overstates social media’s responsibility for them, and overlooks deeper political and institutional problems that are reflected on social media, not created by it.

The platforms are not harmless. They may accelerate worrying trends, amplify fringe voices, and facilitate radicalization. However, the current balance of evidence suggests that the most consequential drivers of America’s large-scale epistemic challenges run much deeper than algorithms.
