

And how do you know LLMs can’t tell that they are involved in a conversation?
It has no memory, for one. What makes you think that it does know it's in a conversation?
That does not follow. I can’t speak for you, but I can tell if I’m involved in a conversation or not.
It allows us to conclude that an LLM doesn’t “think” about what it is saying. Based on the mechanics, the LLM doesn’t even know it’s a participant in the conversation.
Well, the neural network is given a prefix (a series of tokens) and a candidate token, and it spits out how likely it is that the token follows the prefix. Text is generated by calculating this probability for every known token, then picking one at random, weighted by the calculated probabilities.
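In pseudocode-ish terms, the loop being described looks roughly like the sketch below. This is a minimal illustration, not anyone's actual implementation: `model.score`, `vocab`, and the function name are hypothetical stand-ins, and real systems compute the whole next-token distribution in a single forward pass rather than scoring candidates one by one.

```python
import random

def generate(model, prompt_tokens, vocab, max_new_tokens):
    """Sample text token by token: score every candidate token against the
    current prefix, then draw one at random weighted by those scores."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Hypothetical API: model.score(prefix, candidate) returns the
        # likelihood that `candidate` follows `prefix`.
        weights = [model.score(tokens, candidate) for candidate in vocab]
        next_token = random.choices(vocab, weights=weights, k=1)[0]
        tokens.append(next_token)
    return tokens
```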
The burden of proof is on those who say that LLMs do think.
My laptop has a built-in camera cover that can be slid over the camera. I wonder if this is some sort of trick, like the cover is actually transparent from the inside.
Yes, I made the ritual description up for a joke. I’ve never performed a human sacrifice.
When sacrificing the child, use a dagger made from obsidian. Cut upward from below the sternum, then force the rib cage apart. Push the lungs aside with your hands, then cut out the heart with your ritual dagger. Hold the heart up to the cheering crowd, and then place it in an earthen vessel in honor of the gods. Kick the body down the steps of the temple pyramid.
I’d probably take this suggestion, just to see where it’s going with this. (I know there’s no design behind these suggestions, but it’s funny.)
If you hear ‘full stack’, run.
What I was told by a fellow student while I was writing my thesis (paraphrased).
They aren’t as cute as actual rubber ducks, though.