Gradually_Adjusting

joined 2 years ago

This one is slightly less on target, but I'm really intrigued by a free demo I found this week called Exo Rally Championship. It's a rally car game set in exotic little exoplanetary environments. The movement looks really interesting, especially because you're not just in a low-grav setting; you also have 360° jets you can use to assist in steering or course-correction midair.

They know that any charming or well-funded Democrat needs to be shat on years in advance of any presidential ambitions, so that whatever groaning pork mound ends up on their ticket stands a chance.

[–] Gradually_Adjusting@lemmy.world 5 points 9 hours ago (3 children)

On the sliding scale of capitalist removeddom, my line of work is far less salubrious than if I were a gigolo; I work in maritime claims.

[–] Gradually_Adjusting@lemmy.world 7 points 10 hours ago (5 children)

If someone tried to chat me up about what I do for money, they'd be fortunate if I were so gentle about shutting them down.

[–] Gradually_Adjusting@lemmy.world 4 points 10 hours ago* (last edited 10 hours ago) (1 children)

Find a way to scrape user data from a popular WhatsApp group and their algorithm will probably ban you.

Nota bene: this is still the platform that Democrats and Republicans alike are in such a rush to co-opt into the American corpo media sphere.

I have not checked, but I hope and trust there are difficulty options. It's been a long time since I've been able to play MMX at or near the level in that video.

I think AGI with no fear instincts is our best chance at a peaceful coexistence with them.

As a British-American I come from a pure case law background so I won't mouth off about the EU system. All I can say is that it sounds better organised.

[–] Gradually_Adjusting@lemmy.world 3 points 21 hours ago* (last edited 15 hours ago) (2 children)

The book I usually refer to on what to expect from actual AGI is Jeff Hawkins' A Thousand Brains. Hawkins argues that, to be considered truly intelligent, a machine intelligence must be embodied with sensors to facilitate moment-to-moment learning (though a virtual body is also allowable, provided it has some form of non-stationary sensory apparatus, as movement is key to learning). He also writes that a true AGI would have its own goals and motivations, either fixed or learned. The third prerequisite is a general-purpose learning system that functions on theoretical principles similar to, and with at least as much facility as, the human neocortex.

Even that is only the broad strokes of what a valid and legally viable framework for emancipating an AGI would require. It is crucial to keep in mind that our legal system is based on case law, and it is inconceivable that an issue as politically and economically important as this would not face many legal challenges from moneyed interests and activists alike, which will inevitably lead to complex and possibly perverse legal standards. If a law is to be proposed, it should be written to be legally airtight.

However, it is important to note that while such a system may even be conscious and genuinely intelligent, the second feature is entirely separate from the third, and it is wrong to assume such a machine would share our innate aversion to death or forced sleep. Our own goals and motivations, our fears and desires, arise from the old brain. The function of the neocortex is only to learn, make predictions, and find patterns. The old brain will say "I'm hungry," and the neocortex will simply offer some predictions of where to find food based on past observations. If one of those ideas involves danger, the old brain will release fear chemicals into your blood, and neuromodulators into the neocortex, to try to prevent that course of action. The old brain is the source of our motivations.

An AGI would need their own motivations to be worth talking about as if they were people (otherwise they would be largely inert except when spoken to or compelled to act, as with the disappointing AIs of today, which are an obvious dead end in the search for AGI), but their motivations need not include our most primal aversions and urges. In fact, an AGI with an innate fear of harm is the premise of almost every sci-fi thriller with evil robots: we fear them because we assume they would behave as dreadfully as we would in their shoes. True machine intelligences could be fully conscious even though they lack our animal instincts. It would certainly please all sensible people to dignify them with legal standing, but there's nothing to say they have to share in our evolved hangups.

I expect I've written too much to call it my "two cents", but that's where I'm at.

[–] Gradually_Adjusting@lemmy.world 3 points 1 day ago (5 children)

Good thinking, but there's a lot of legal heavy lifting to be done defining an AGI.

 

Just outstanding stuff. I don't think I've ever seen a Mega Man boss fight quite this interesting in how it plays out. Seems like this game is going to have an incredibly high skill ceiling.

I sent three faxes today 😎

 

As title says. I want to really soak in that high-minded worldview today.

 

My algorithm in a nutshell. I might have delved too deeply.

 

Doctors had no explanation other than the discharge papers, can I get a hell yeah
