this post was submitted on 25 Mar 2026
24 points (100.0% liked)


Archive link: https://archive.ph/gsvf3

BEIJING, May 12 (Xinhua) -- China will establish a tiered AI education system spanning primary, junior high, and senior high schools to guide students from foundational cognitive awareness to practical technological innovation, according to policy documents unveiled Monday.

At the primary school level, the Ministry of Education (MOE) prioritizes AI literacy through exposure to basic technologies, such as voice recognition and image classification. Building on this foundation, junior high school students will deepen their understanding of AI logic, examine machine learning processes, and develop critical thinking to identify misinformation in generative AI outputs.

Progressing to senior secondary education, the focus shifts toward applied innovation. Students will use accumulated AI knowledge to design and refine AI algorithm models, while cultivating interdisciplinary systems thinking.

To achieve these goals, the MOE will integrate AI-enabled teaching competencies into the teacher training framework. Additionally, it requires schools to develop age-appropriate curricula with tiered instructional practices that align with cognitive development stages.

Notably, the MOE underscores generative AI's pedagogical potential. "Teachers can empower generative AI tools to construct interactive teaching and create immersive learning experiences," said an official overseeing basic education.

The official also called for strengthening students' logical and innovative thinking through generative AI-powered interactive learning ecosystems.

Meanwhile, the MOE prohibits students from submitting AI-generated content as academic work or examination responses. At the same time, it demands that teachers cultivate learners' capacity for critical thinking about AI outputs, thereby fostering authentic engagement in information processing.

On r/Sino: https://www.reddit.com/r/Sino/comments/1krrjki/china_will_establish_a_tiered_ai_education_system/

[–] CriticalResist8@lemmygrad.ml 9 points 1 week ago (1 children)

There are technically different ways to train models, and they work differently, but in the end they're all neural networks operating on layers. What I mean is that 'genAI' isn't really a thing beyond a vague boogeyman, singling it out as some uniquely 'evil' technology because detractors have to concede there are actual uses for AI while still wanting to retain their apprehension toward it. It doesn't name the actual problem they have: either with big tech companies, or with the loss of their sense of superiority for not using AI. But if we have a problem with OpenAI, Anthropic, Amazon etc., then we should be able to name them and study them without lumping all of it into the 'genAI' label.

As an example, when you use a sentence-transformer to turn a sentence into a tensor (in this case a single vector in N dimensions, which encodes the sentence's semantic meaning in pure numbers), you're using genAI... if genAI had an objective, measurable definition. The sentence transformer generates that vector from your input text, based on how the model was trained.

Yet you can use sentence-transformers for a lot of stuff that is not necessarily 'generative'. Making a search engine, for example, which I did for a hobby project. I wouldn't say Google is 'generative' though.
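The search-engine use case above can be sketched in a few lines. This is a minimal, illustrative sketch: in a real project the vectors would come from a sentence-transformer model (e.g. `model.encode(docs)` from the `sentence_transformers` library), but here the embeddings are tiny hand-made toy vectors so the ranking logic stands on its own. All document texts and vector values are made up for illustration.

```python
# Minimal semantic-search sketch. In practice the embeddings would come from a
# sentence-transformer model, roughly:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   doc_vecs = model.encode(docs)
# Here we use tiny hand-made toy vectors so the ranking logic is self-contained.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, doc_vecs, docs, top_k=2):
    """Rank documents by similarity to the query embedding."""
    scores = [cosine_sim(query_vec, v) for v in doc_vecs]
    ranked = sorted(zip(scores, docs), key=lambda p: p[0], reverse=True)
    return ranked[:top_k]

# Hypothetical corpus and embeddings (not real model output).
docs = ["solar power in China", "beef production emissions", "gas turbines in Memphis"]
doc_vecs = [np.array([0.9, 0.1, 0.0]),
            np.array([0.1, 0.9, 0.2]),
            np.array([0.2, 0.1, 0.9])]
query_vec = np.array([0.85, 0.15, 0.05])  # pretend embedding of "renewable energy"

results = search(query_vec, doc_vecs, docs)
```

Nothing in this is 'generative' in the colloquial sense: the model only maps text to numbers, and the ranking is plain linear algebra.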

So what is genAI? It's whatever one doesn't like. That way they can distance themselves from 'genAI' while conceding the actual use cases of AI, because there are indeed objectively beneficial uses for it, and they can't keep denying that reality forever, lest they look like fools (like when Twitter didn't understand how image generation worked early on and tried to claim it was pasting together pieces from thousands of different pictures; they moved on from that very quickly once they learned about noise diffusion).

I know I'm a bit all over the place because I haven't synthesized this on paper yet, but basically I don't like the distinction because it creates a divide between socially acceptable AI use and socially unacceptable AI use. But the difference doesn't exist; bullying people into compliance is idealist and will not lead to lasting change, material conditions will.

This leads us to being able to talk about electricity and water consumption. I don't doubt MIT's findings, though I will say estimates are always just estimates, and calculating actual, final energy use is difficult even when you have all the data available.

However, as I often say, if we united all the countries of the world, we could have the largest GDP in the universe. What I mean is that we must not miss the forest for the trees. One hour of running a microwave seems like a lot because we usually don't run the microwave for more than 3 minutes at home, but you know who runs microwaves all day long without a care in the world? The fossil fuel industry. Golf courses. The meat industry. A single grocery store throwing away hundreds of kilograms of perishable food has done more environmental harm than my microwave ever could over its lifespan of heating up my food.

Even gaming takes more power than running a local neural network, whether an LLM or an image diffusion model. YouTube is hosted in datacenters too, and some years back it was all the rage on LinkedIn to try to shame proles for watching too much YouTube, because "watching one hour of YouTube consumes as much power as leaving the lights on when you don't use them! So think about that when you leave it on as background noise!"
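The microwave/gaming/local-model comparison is easy to make concrete with back-of-envelope arithmetic. Every wattage below is an assumption for illustration, not a measurement: roughly typical figures for a countertop microwave, a mid-range gaming PC under load, and a consumer GPU running a local model.

```python
# Back-of-envelope energy comparison. All wattages are illustrative
# assumptions; adjust for your own hardware.
MICROWAVE_W = 1100   # assumed countertop microwave draw
GAMING_PC_W = 400    # assumed gaming PC under load
LOCAL_LLM_W = 250    # assumed consumer GPU running a local model

def kwh(watts, minutes):
    """Energy in kilowatt-hours for a device drawing `watts` for `minutes`."""
    return watts * (minutes / 60) / 1000

usage = {
    "microwave, 3 min": kwh(MICROWAVE_W, 3),     # 0.055 kWh
    "gaming, 2 h": kwh(GAMING_PC_W, 120),        # 0.8 kWh
    "local LLM, 10 min": kwh(LOCAL_LLM_W, 10),   # ~0.042 kWh
}
```

Under these assumed figures, a two-hour gaming session uses more than ten times the energy of a ten-minute local inference run, which is the point: per-person consumption numbers only mean something relative to everything else we already do.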

We have to move away from individual citizen responsibility (i.e. instilling a sense of moral failure in people for not living up to some standard we impose on ourselves and each other) and towards systemic structural change. There is no ethical consumption under capitalism; people are allowed to watch Netflix and drive cars, and they will do it regardless of how many managers on LinkedIn disapprove. That's nothing compared to a billionaire flying a private jet for a 15-minute trip or the meat industry making a beef patty.

That's not to say there aren't issues with the way AI is treated in the West. The US, in its usual way, has given AI companies carte blanche to do whatever they want regardless of the law. This is why datacenters pollute: people like Elon Musk buy gas turbines to power their datacenters because the US grid could not power them even if it wanted to. They normally need EPA approval for gas turbines, but they just don't care, because they can absorb the fine and figure they won't even be hit with one. So far that's been true: Musk's datacenter in Memphis has 12 turbines when deploying even one is already a huge deal.

But that's the US; it's not new and it's not the only way of doing things, it's just theirs. China is installing the US grid's equivalent of solar every 18 months, so it's very likely that a substantial portion of DeepSeek or GLM (z.ai) is powered by solar (I tried to look for more information once, but it doesn't really seem to exist). If we limited ourselves to saying "oh, that's just how genAI is, genAI is bad for the environment", we would miss all of that and never study the problem deeper.

We agree overall, though: education in anything to do with AI is lacking, and it's going to be important to teach people (both in school and outside it) about AI. I wanted to add this comment to answer @burlemarx@lemmygrad.ml's comment as well.

[–] burlemarx@lemmygrad.ml 4 points 1 week ago

Just to note, I am not part of the Luddite group that thinks using AI is immoral. The problem with AI (and genAI more specifically) is basically the appropriation of the labor of millions of people (both the datasets created by humans and the people working on the supervised-learning portions) in order to create a product. This is coupled with misleading marketing campaigns that use tech-apocalypse language in order to inflate a financial bubble. So the punchline is: I don't think using AI to create art or code is immoral. The problem lies in the production, reproduction, and accumulation of capital.

That said, I do think any curriculum that addresses AI needs to cover all the types of AI and explain how AI actually works (as researchers like Miguel Nicolelis like to say, it is neither artificial nor intelligent), as opposed to how people are promoting AI as some kind of magic wand that solves all problems. I think that's better than the current mainstream market approach, which is replacing AI knowledge with prompt engineering.