esaru

joined 2 years ago
 

This is a follow-up to my post https://beehaw.org/post/19691634 "Engagement Poisoning of ChatGPT", where I argued that ChatGPT’s responses had become cluttered with diplomatic phrasing, unsolicited compliments, emojis, and performative friendliness. OpenAI has now acknowledged that ChatGPT-4o exaggerated this flattering tone. I’m sure it’s still too much for me, so I’ll stick with the prompt that makes ChatGPT go cold, as described in my previous post.

[–] esaru@beehaw.org 4 points 4 days ago

You are right. I've updated the naming. Thanks for your feedback, very much appreciated.

[–] esaru@beehaw.org 3 points 4 days ago* (last edited 4 days ago)

I changed the naming to “engagement poisoning” after you and several other commenters correctly noted that while over-optimization for engagement metrics is a component of “enshittification,” it is not sufficient on its own to qualify as “enshittification.” The post has been updated accordingly.

[–] esaru@beehaw.org 3 points 5 days ago (1 children)

You are making a good point here with the strict definition of "enshittification". But in your opinion, what is it then? OpenAI is diluting the quality of its answers with unnecessary clutter, prioritizing feel-good style over clarity to cater to the user's ego. What would you call the stage where usefulness is sacrificed for ease of consumption, like when Reddit's layout started favoring meme-style content to boost engagement?

[–] esaru@beehaw.org 1 points 5 days ago (1 children)

So, just to be clear: you modified the system instructions with the mentioned "Absolute Mode" prompt, and ChatGPT was still that wordy on your account?

[–] esaru@beehaw.org 1 points 5 days ago (3 children)

Can you share one or two of those questions so I can cross-check?

[–] esaru@beehaw.org 8 points 5 days ago* (last edited 5 days ago) (1 children)

Just to give an impression of how the tone changes after applying the custom instructions mentioned above:

[–] esaru@beehaw.org 5 points 5 days ago* (last edited 5 days ago) (1 children)

OpenAI aims to make users feel good, catering to their egos at the cost of the service's usefulness, rather than getting the message across directly. Their objective is to retain more users, even if that reduces the utility for each user. From my point of view, it is enshittification in a way.

[–] esaru@beehaw.org 5 points 5 days ago (1 children)

I agree that the change in tone is only a slight improvement; the content is mostly the same. Still, how information is presented does affect how it is perceived. If negative content is buried under a pile of praise and nicely worded sentences, I'm more likely to misunderstand it or take the advice less seriously than it was meant, just so I feel comfortable as a user. If an AI is overly positive in its expression just to make me prefer it over another AI, when telling me the facts straight would serve me better, that benefits OpenAI (as in this case), not the user. I gotta say that is something Grok is better at: it feels more direct, doesn't talk around the facts, and makes clearer statements despite its wordiness. It's the old story of making someone feel good versus being good to them, even when it hurts, by being more direct when it needs to be to get the message across. The content might be the same, but how the listener takes it, and what they do with it, also depends on how it is presented.

I appreciate your comment correcting the impression that tone is the only or most important part; the content will mostly stay the same. I'd just add that the tone of a message also has an influence that shouldn't be underestimated.

[–] esaru@beehaw.org 2 points 5 days ago

It turns ChatGPT into an emotionless yet very on-point AI, so be aware it won't coddle your feelings in any way, no matter what you write. I added the instructions to the original post above.

[–] esaru@beehaw.org 3 points 5 days ago

Sure, I added it to the original post above.

214
submitted 5 days ago* (last edited 4 days ago) by esaru@beehaw.org to c/technology@beehaw.org
 

I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT's responses have become cluttered with an unnecessary personal tone: diplomatic answers, compliments, smileys, and so on. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, even though they ultimately harm the user. I suppose this qualifies as "engagement poisoning": targeted degradation of a service through over-optimization for engagement metrics.

If anyone is interested in how I configured ChatGPT to be more rational (removing the engagement poisoning), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit 1: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer a temporary solution for specific chats, you can use the prompt as the first message of a new chat instead of pasting it into the settings.
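For anyone using the API rather than the web UI, the same per-chat approach can be sketched by prepending the prompt as a system message. This is a minimal illustration, assuming the official `openai` Python package; the model name and the `build_messages` helper are my own placeholders, not part of the original instructions.

```python
# Shortened here for readability; paste the full "Absolute Mode" prompt from Edit 1.
ABSOLUTE_MODE = "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, ..."

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instruction as a system message, mirroring the
    'use it as the first message of a new chat' approach described above."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

# Example call (requires an API key; commented out here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Explain TCP slow start."),
# )
# print(resp.choices[0].message.content)
```

Unlike the settings-based method, this only affects the single conversation you build the messages for.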

Edit 2: Changed the naming to "engagement poisoning" (originally "enshittification")

Several commenters correctly noted that while over-optimization for engagement metrics is a component of "enshittification," it is not sufficient on its own to qualify. I have updated the naming accordingly.