Technology

106 readers
69 users here now

Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles have to be recent, no older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as I can, the following websites are blacklisted:

Encouraged:

founded 2 weeks ago
MODERATORS

“Hardmaxxing is NECESSARY. Softmaxxing alone will NEVER mog you into viability — it’s like putting a fresh coat of paint on a crumbling building,” declares a chatbot featured prominently on OpenAI’s GPTs page. It has just analyzed a photograph of a man and deemed him “subhuman”. The page, linked from the sidebar of the ChatGPT interface, lists “Looksmaxxing GPT” as #6 in the “Lifestyle” section, behind bots promising astrological analysis, color analysis, and “fictional not-real therapy”.


For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?

Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.

But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.

In practice, this means things like critical updates to Meta's algorithms, new safety features and changes to how content is allowed to be shared across the company's platforms will be mostly approved by a system powered by artificial intelligence — no longer subject to scrutiny by staffers tasked with debating how a platform change could have unforeseen repercussions or be misused.

Inside Meta, the change is being viewed as a win for product developers, who will now be able to release app updates and features more quickly. But current and former Meta employees fear the automation push comes at a cost: allowing AI to make tricky determinations about how Meta's apps could lead to real-world harm.

"Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks," said a former Meta executive who requested anonymity out of fear of retaliation from the company. "Negative externalities of product changes are less likely to be prevented before they start causing problems in the world."

Meta said in a statement that it has invested billions of dollars to support user privacy.

Since 2012, Meta has been under the watch of the Federal Trade Commission after the agency reached an agreement with the company over how it handles users' personal information. As a result, privacy reviews for products have been required, according to current and former Meta employees.

In its statement, Meta said the product risk review changes are intended to streamline decision-making, adding that "human expertise" is still being used for "novel and complex issues," and that only "low-risk decisions" are being automated.

But internal documents reviewed by NPR show that Meta is considering automating reviews for sensitive areas including AI safety, youth risk and a category known as integrity that encompasses things like violent content and the spread of falsehoods.


An alternate reality is unfolding across social media platforms, including China’s Douyin and Weibo, where a surge of falsehoods is fueling anti-American sentiment that could undermine the fragile truce.

  • An internal strategy paper reveals that OpenAI aims to develop ChatGPT into a personalized assistant by mid-2025, helping users with daily tasks, providing expertise such as programming knowledge, and performing actions online.
  • To achieve this, OpenAI plans to use its own reasoning models and introduce new tools like "Computer Use," expanding ChatGPT's features with multimodal capabilities and generative user interfaces; a proprietary search index could serve as a key component.
  • OpenAI wants ChatGPT to stand apart from search engines and operating systems as its own product category, identifying short-term competition from other AI chatbots and long-term competition from search services, browsers, and even human interactions.

"The phrase artificial intelligence is a marketing term that is used to sprinkle some magic fairy dust that brings the venture capital dollars."


Of course, power demand is set to continue expanding rapidly as the supply chain increases its production capacity while demand remains high. TSMC has already confirmed its target to double its CoWoS capacity again in 2025 (see Data S1, sheet 2). This could mean that the total power demand associated with devices produced using TSMC’s CoWoS capacity will also double from 2024 to 2025, just as it did from 2023 to 2024 (Figure 1), when TSMC similarly doubled its CoWoS capacity. At this rate, the cumulative power demand of AI accelerator modules produced in 2023, 2024, and 2025 could reach 12.8 GW by the end of 2025. For full AI systems, this figure would rise to 23 GW, surpassing the electricity consumption of Bitcoin mining and approaching half of total data center electricity consumption (excluding crypto mining) in 2024.

However, with the industry transitioning from CoWoS-S to CoWoS-L as the main packaging technology for AI accelerators, continued suboptimal yield rates for the new packaging technology may slow down both device production and the associated power demand. Moreover, although demand for TSMC’s CoWoS capacity exceeded supply in both 2023 and 2024, it is not guaranteed that this trend will persist throughout 2025. Several factors could slow AI hardware demand, such as waning enthusiasm for AI applications. AI hardware may also face new bottlenecks in manufacturing and deployment: while limited CoWoS capacity has constrained AI accelerator production and power demand over the past two years, export controls and sanctions driven by geopolitical tensions could introduce new disruptions in the AI hardware supply chain.

Chinese companies have already faced restrictions on the type of AI hardware they can import, a constraint that contributed to the notable release of Chinese tech company DeepSeek’s R1 model. This large language model reportedly achieves performance comparable to that of OpenAI’s ChatGPT, but was claimed to do so using less advanced hardware and innovative software. Such innovations can reduce the computational and energy costs of AI. At the same time, they do not necessarily change the “bigger is better” dynamic that has driven AI models to unprecedented sizes in recent years: any reduction in AI power demand from efficiency gains may be negated by rebound effects, such as incentivizing greater use or the use of more computational resources to improve performance.

Furthermore, multiple regions attempting to develop their own AI solutions may, paradoxically, increase overall AI hardware demand. Tech companies may also struggle to deploy AI hardware, given that Google already faced a “power capacity crisis” while attempting to expand data center capacity. For now, researchers will have to continue navigating limited data availability to determine what TSMC’s expanding CoWoS capacity means for the future power demand of AI.
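
The doubling arithmetic above can be made explicit. Here is a minimal back-of-envelope sketch in Python, assuming the excerpt's year-over-year doubling holds exactly; the ~1.83 GW 2023 baseline is inferred by solving the geometric sum and is not a figure stated in the excerpt:

    # Back-of-envelope sketch of the doubling arithmetic described above.
    # Assumption (inferred, not stated in the excerpt): yearly module power
    # demand doubles each year, so baseline * (1 + 2 + 4) = 12.8 GW.
    CUMULATIVE_MODULES_GW = 12.8  # excerpt: cumulative 2023-2025 module demand
    SYSTEM_CUMULATIVE_GW = 23.0   # excerpt: same figure at the AI-system level

    baseline_2023_gw = CUMULATIVE_MODULES_GW / (1 + 2 + 4)  # ~1.83 GW in 2023
    yearly_gw = {2023 + i: baseline_2023_gw * 2**i for i in range(3)}
    print(yearly_gw)  # ~1.83 GW (2023), ~3.66 GW (2024), ~7.31 GW (2025)

    # Implied module-to-system overhead factor (cooling, host servers, and
    # networking are a plausible interpretation, not spelled out above): ~1.8x.
    print(f"system overhead factor: {SYSTEM_CUMULATIVE_GW / CUMULATIVE_MODULES_GW:.2f}x")

Shrinking the 2025 multiplier below 2 in this sketch shows how the CoWoS-L yield issues mentioned above would pull the cumulative figure under 12.8 GW, which is why the excerpt frames that number as a "could reach" trajectory rather than a forecast.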

  • Brazil is testing a digital wallet program that allows users to monetize their data.
  • A federal bill, if passed, would turn data into a commercial asset for citizens, the first such proposal in the world.
  • The pilot, a partnership between the public and private sectors, is ahead of similar initiatives in some U.S. states.