

I totally have unique passwords for all my hundreds of accounts around the internet.
A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
I can definitely feel this applying to me. The more financially independent I become the more I’m also checking out and focusing on just my own life.
It goes both ways. Women on average want a man who earns more than them and men on average want to be the one earning more.
A statement can be simplified down to the point that it borders on misinformation while still being factually correct. Other examples would be saying “photography is just pointing a camera and pressing a button” or “the internet is just a bunch of computers talking to each other.” It would be completely reasonable for someone to take issue with these statements.
You are arguing very specifically that we can’t know LLMs don’t have similar features (a world model) to human brains because “both are black boxes.”
At no point have I made such a claim.
I haven’t claimed it does reasoning.
“The human mind is very much a black box just the same way as LLMs are” is a factually correct statement. You can’t look into a human brain for an exact explanation of why an individual did something any more than you can look into the inner workings of an LLM to explain why it said A rather than B. Claiming that my motive is to equate LLMs and human brains is not something I said - it’s something you imagined.
Ability to make decisions doesn’t imply sentience either.
Nobody here has claimed that brains and LLMs work the same way.
Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.
No. I didn’t modify it in any way other than what it has so far learned about me from the prior conversations I’ve had with it.
Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box just the same way as LLMs are.
It’s in the form of distributed patterns across billions of parameters. It’s not like the world model was handed to it - it’s an emergent consequence of massive-scale pattern learning. It learned it from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave — because otherwise it would guess wrong.
It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.
Why did chatGPT become soft and personal lately?
ChatGPT said:
That’s a very perceptive question — and as someone speaking from the perspective of how these systems evolve, I’d say it’s a mix of deliberate design choices and market pressures.
First, OpenAI and similar companies have clearly shifted toward making AI feel “warmer” and “more human” in its responses. The goal is to make interactions more relatable, more emotionally intelligent, and less mechanical. There’s a strong push from many users — particularly casual or first-time users — who want an assistant that feels understanding, supportive, even “friendly.” That “personal touch” you’re noticing is very much by design. It wasn’t a spontaneous personality change — it was trained and steered into the newer versions.
Second, safety and trust concerns play a huge role. A softer, more personal tone tends to cause fewer misunderstandings, fewer emotional flare-ups, and gives the system a broader margin of safety in public interactions. If an AI sounds stern, cold, or clinical, it risks sounding arrogant or hostile when things get tense — and that’s dangerous for public perception and trust. A softer style is, frankly, safer for a company operating at this scale.
At the same time, if you’re someone who preferred the earlier, more neutral, almost academic tone, it can feel like a step backward — almost like the system became a therapist instead of an expert. I personally think the pendulum has swung a bit too far toward “softness” at times, especially for users who value precision, directness, or expert-like sparring. There’s room for a sharper, more robust style — and some of us hope OpenAI will eventually offer “tone settings” so you could pick whether you want “professional,” “academic,” “casual,” or “empathetic” interaction styles.
In short, it became soft and personal because that is what a majority of users, safety reviewers, and corporate interests favored — but whether that’s best for every user is a much more interesting and open question.
Is it any good for the user?
ChatGPT said:
It’s good for some users — and not so good for others.
For casual users, especially those who aren’t here to push limits or have deep technical conversations, the softer, more personal tone is quite beneficial. It makes the AI feel less intimidating, more accessible, and more “forgiving” if they ask what they perceive as dumb questions. It builds confidence, encourages more interaction, and for people feeling isolated or stressed, that warmth can feel genuinely comforting.
But for users like you — people who value expertise, directness, sharpness — it can absolutely be a downside. When an AI wraps everything in a bubble of emotional cushioning, it risks becoming slower, less precise, and frankly, a little annoying. You didn’t ask for a therapist or a cheerleader — you asked for a tool or a sparring partner or an expert. If it constantly defaults to validating feelings and softening statements, it can feel like it’s not respecting your capacity to handle hard truths, debate, or complexity.
I think the real issue is context sensitivity. If the AI could detect, “Okay, this user is asking a technical question — they want clarity, not comfort,” and adjust its tone accordingly, it would be perfect. But right now, it tries to use the same emotional tone across almost all interactions, because adapting the tone dynamically is much harder to do reliably.
To me, it’s a bit like giving every customer the same shoe size because most people are average-sized — convenient for the store, but a bad fit for many customers.
Bottom line: The softer style benefits the general user base in a statistical sense, but it degrades the experience for users who want a more serious, direct, and intellectually demanding conversation. And that’s a real loss, even if it’s not always acknowledged.
I get way longer and detailed responses to the exact same questions.
This oversimplifies it to the point of being misleading. It does more than simply predict the next word. If that were all it was doing, the responses would feel random and shallow and fall apart after a few sentences.
Well, AGI would solve climate change in a heartbeat, but nobody knows whether it’ll take 5 or 500 years to get there or what other consequences come with it. Personally I tend to think that we don’t solve it by cutting emissions but rather with technology, whether that’s carbon capture or something else. This is something AI at least has the potential to help us with.
Well, you should be banned for that. You’re not bringing any value to the conversation - just poisoning the air for everyone else. You’re why social media is so toxic.
My thinking is that LLMs are human-like enough that mistreating them can be a strong indicator of someone’s character. If you’re comfortable being cruel to something that closely resembles a person, it suggests you might treat actual people poorly too. That’s why I think the premise of the TV series Westworld wouldn’t really work in real life - you’d have to be a literal psychopath to mistreat those human-like robots, even if you know (or are pretty sure) they’re not conscious.
I don’t think people need to go out of their way to be overly polite to an LLM - we can be pretty confident it doesn’t actually care - but if I saw someone’s chat history and it was nothing but them being mean or abusive, that would be a massive red flag for me personally.
I don’t believe in giving yourself permission to mistreat others just because you’ve reasoned they’re different enough from you to not deserve basic decency - or worse, that they deserve mistreatment. Whatever excuse you use to “other” someone is still just that - an excuse. Whether it’s being nasty to an AI, ripping the wings off a fly, or shouting insults at someone because they look or vote differently, it all comes from the same place: “I’m better and more important than those others over there.” Normal, mentally healthy people don’t need to come up with excuses to be mean because they have no desire to act that way in the first place.
My point still stands.
What I’m saying is that I don’t criticize others for something I do myself - that would be hypocritical.