Artificial Intelligence

1638 readers
82 users here now

Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow these guidelines for a vibrant and respectful community:

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

founded 2 years ago

Original question by @SpiderUnderUrBed@lemmy.zip

Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic good AI-generated videos are becoming, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. All of big tech would immediately comply (at least on the surface, for consumer-facing products), and then we would probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better? I don't see many downsides.


It's a little disconcerting that the company that's trying to make an "ethical" GenAI does the best at deception and backstabbing.

Opus was lured in by the hope of a non-violent resolution. It was quickly betrayed and eliminated by o3, which went on to win.


I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.

One thing is clear: teachers are not OK.

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know what the words mean even in English, and students who use AI in the middle of a conversation. They describe spending hours grading papers that took their students seconds to generate: “I’ve been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”


The project implements sparse multiplication and fuses the up/down projections in the MLP layers, using low-rank predictors of weight activations. The work is based on Deja Vu and Apple's LLM in a Flash.

This approach avoids loading and computing activations for feed-forward layer weights whose outputs will eventually be zeroed out.

It's a lossless approach, since those weights do not contribute to the current token's prediction anyway. It does, however, need the predictors to be accurate in clustering the weights.
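
For intuition, here is a minimal PyTorch sketch of the predictor-gated pattern described above. The class name, the SwiGLU-style MLP shape, and the top-k fraction are illustrative assumptions, not the project's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictorGatedMLP(nn.Module):
    """Sketch of a Deja Vu-style sparse MLP for single-token decode.

    A cheap low-rank predictor forecasts which intermediate neurons will
    survive the activation, so only those rows of the up/gate projections
    (and the matching columns of the down projection) are ever touched.
    """

    def __init__(self, d_model: int, d_ff: int, rank: int = 64, top_frac: float = 0.2):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)   # SwiGLU gate projection
        self.up = nn.Linear(d_model, d_ff, bias=False)     # up projection
        self.down = nn.Linear(d_ff, d_model, bias=False)   # down projection
        # Low-rank predictor: d_model -> rank -> d_ff importance scores.
        self.predictor = nn.Sequential(
            nn.Linear(d_model, rank, bias=False),
            nn.Linear(rank, d_ff, bias=False),
        )
        self.k = max(1, int(top_frac * d_ff))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (1, d_model) -- one decode step.
        scores = self.predictor(x)                             # (1, d_ff) forecast
        idx = scores.topk(self.k, dim=-1).indices.squeeze(0)   # active neuron ids
        gate_w = self.gate.weight[idx]                         # (k, d_model)
        up_w = self.up.weight[idx]                             # (k, d_model)
        h = F.silu(x @ gate_w.T) * (x @ up_w.T)                # (1, k)
        # Only the matching k columns of the down projection contribute;
        # the other d_ff - k neurons are the "sleeping" ones skipped above.
        return h @ self.down.weight[:, idx].T                  # (1, d_model)
```

In a real implementation the gather would be fused into the kernel rather than done with fancy indexing, which is where the speed and memory wins come from; this sketch only shows the selection logic.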

The result? We are seeing 5× faster MLP layer performance in transformers, with 50% lower memory consumption, by skipping the "sleeping" neurons in every token prediction. Since feed-forward layers account for about 30% of Llama 3.2's total weights and forward-pass computation, this translates to a 1.6–1.8× increase in end-to-end throughput:

Sparse LLaMA 3.2 3B vs. LLaMA 3.2 3B (Hugging Face implementation):

- Time to First Token (TTFT):  1.51× faster (1.209s → 0.803s)
- Output Generation Speed:     1.79× faster (0.7 → 1.2 tokens/sec)  
- Total Throughput:            1.78× faster (0.7 → 1.3 tokens/sec)
- Memory Usage:                26.4% reduction (6.125GB → 4.15GB)
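
For context, TTFT and generation-speed numbers like these are typically measured along the following lines. This is a rough sketch against the stock Hugging Face generate API; the checkpoint id and prompt are placeholders, and you would run it once per variant (dense vs. sparse) to compare:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-3B"   # placeholder checkpoint id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tok("The capital of France is", return_tensors="pt")

with torch.no_grad():
    # Time to first token: prefill plus a single decode step.
    start = time.perf_counter()
    model.generate(**inputs, max_new_tokens=1)
    ttft = time.perf_counter() - start

    # Steady-state throughput over a longer generation.
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=128)
    elapsed = time.perf_counter() - start

# Count tokens actually produced (generation may stop early at EOS).
new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"TTFT: {ttft:.3f}s  throughput: {new_tokens / elapsed:.2f} tok/s")
```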

fuck yeah, here it goes off! lets goo! :D

submitted 1 week ago* (last edited 1 week ago) by Goten@piefed.social to c/ai_@lemmy.world

nice! it actually follows me now!


Dramatic advances in artificial intelligence over the past decade (for narrow-purpose AI) and the last several years (for general-purpose AI) have transformed AI from a niche academic field to the core business strategy of many of the world’s largest companies, with hundreds of billions of dollars in annual investment in the techniques and technologies for advancing AI’s capabilities.

We now come to a critical juncture. As the capabilities of new AI systems begin to match and exceed those of humans across many cognitive domains, humanity must decide: how far do we go, and in what direction?

AI, like every technology, started with the goal of improving things for its creator. But our current trajectory, and implicit choice, is an unchecked race toward ever-more powerful systems, driven by economic incentives of a few huge technology companies seeking to automate large swathes of current economic activity and human labor. If this race continues much longer, there is an inevitable winner: AI itself – a faster, smarter, cheaper alternative to people in our economy, our thinking, our decisions, and eventually in control of our civilization.

But we can make another choice: via our governments, we can take control of the AI development process to impose clear limits, lines we won’t cross, and things we simply won’t do – as we have for nuclear technologies, weapons of mass destruction, space weapons, environmentally destructive processes, the bioengineering of humans, and eugenics. Most importantly, we can ensure that AI remains a tool to empower humans, rather than a new species that replaces and eventually supplants us.

This essay argues that we should keep the future human by closing the “gates” to smarter-than-human, autonomous, general-purpose AI – sometimes called “AGI” – and especially to the highly superhuman version sometimes called “superintelligence.” Instead, we should focus on powerful, trustworthy AI tools that can empower individuals and transformatively improve human societies’ abilities to do what they do best. The structure of this argument follows in brief.
