• 0 Posts
  • 43 Comments
Joined 3 years ago
Cake day: January 20th, 2023

  • It is always hilarious and strange to see the buy-in on these things. We have a single coder in his late 60s who has bought in hard to spicy autocorrect. Meanwhile, the youngest on our team (like 22) won’t touch it with a 10 ft pole.

    The other issue is just the morality of it. Do I know people who got rich on Bitcoin? Yes. Do I feel like they’re still participating in a pyramid scheme? Also yes. And with spicy autocorrect, the way they got the training data for any and all of these models is so freaking morally bankrupt, and they’re desperate to paper over that and make it “ok” for businesses to use it.

  • I totally agree that both seem to imply intent, but IMHO “hallucinating” implies not only more agency than an LLM has, but also less culpability. Like, “Aw, it’s sick and hallucinating; otherwise it would tell us the truth.”

    Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but it at least skews the perception of that intent toward “It’s making stuff up,” which seems closer to the mechanisms behind an LLM to me.

    I also love that the researchers actually took the time to not only provide the technical definition of bullshit, but also sub-categorized it too, lol.