• 0 Posts
  • 209 Comments
Joined 3 years ago
Cake day: June 15th, 2023

  • Jesus Christ, what a dumb take. But at least they didn’t say that millennials are killing the cell phone industry. I guess that doesn’t make for good clickbait anymore.

    Reminds me of the parable of the broken window, in which French economist Frédéric Bastiat explains the painfully obvious truth that breaking windows is generally a bad thing, even though it drums up business for the glass maker.

    But if, on the other hand, you come to the conclusion, as is too often the case, that it is a good thing to break windows, that it causes money to circulate, and that the encouragement of industry in general will be the result of it, you will oblige me to call out, “Stop there! Your theory is confined to that which is seen; it takes no account of that which is not seen.”

    It is not seen that as our shopkeeper has spent six francs upon one thing, he cannot spend them upon another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented.


  • The actual paper presents the findings differently. To quote:

    Our results clearly indicate that the resolution limit of the eye is higher than broadly assumed in the industry

    They go on to use the iPhone 15 (461 ppi) as an example, saying that at 35 cm (1.15 feet) it has an effective “pixels per degree” of 65, compared to “individual values as high as 120 ppd” in their human perception measurements. You’d need the equivalent of an iPhone 15 at 850 ppi to hit that, which would be a tiny bit over 2160p/UHD.

    Honestly, that seems reasonable to me. It matches my intuition and experience that for smartphones, 8K would be overkill, and 4K is a marginal but noticeable upgrade from 1440p.

    If you’re sitting the average 2.5 meters away from a 44-inch set, a simple Quad HD (QHD) display already packs more detail than your eye can possibly distinguish

    Three paragraphs in and they’ve moved the goalposts from HD (1080p) to 1440p. :/ Anyway, I agree that 2.5 meters is generally too far from a 44" 4K TV. At that distance you should think about stepping up a size or two. Especially if you’re a gamer. You don’t want to deal with tiny UI text.
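
    If you want to sanity-check those numbers, here’s a rough back-of-the-envelope calculator (plain viewing geometry; the paper’s own ppd figures may be derived differently):

        import math

        def ppd(width_px, height_px, diagonal_in, distance_m):
            # Approximate pixels per degree of visual angle at the screen center.
            ppi = math.hypot(width_px, height_px) / diagonal_in  # pixel density
            distance_in = distance_m / 0.0254                    # meters -> inches
            # Span of one degree of visual angle at this distance, in inches.
            inches_per_degree = 2 * distance_in * math.tan(math.radians(0.5))
            return ppi * inches_per_degree

        print(ppd(2560, 1440, 44, 2.5))  # 44" QHD at 2.5 m -> ~115 ppd, near the 120 ppd peak
        print(ppd(3840, 2160, 44, 2.5))  # 44" 4K at 2.5 m  -> ~172 ppd, mostly beyond it
        print(ppd(3840, 2160, 65, 2.5))  # 65" 4K at 2.5 m  -> ~116 ppd, a better match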

    It’s also worth noting that for film, contrast is typically not that high, so the difference between resolutions will be less noticeable — if you are comparing videos with similar bitrates. If we’re talking about Netflix or YouTube or whatever, they compress the hell out of their streams, so you will definitely notice the difference if only by virtue of the different bitrates. You’d be much harder-pressed to spot the difference between a 1080p Blu-ray and a 4K Blu-ray, because 1080p Blu-rays already use a sufficiently high bitrate.


  • I very much enjoyed the start but steadily lost interest.

    There’s some good stuff in Discovery all the way through, don’t get me wrong. But they kind of flipped the script in a way I did not appreciate.

    Most of classic Trek showed us a future with a largely functional society, mostly full of good people who were ready and willing to deal with occasional corruption.

    Lots of newer Trek, and especially Discovery, showed us a future where society is largely dysfunctional and corruption is the norm. Almost everyone in the series who isn’t a main character (plus a couple who are) is a piece of shit. Even the “good guys” frequently encourage or at least tolerate clearly evil behavior as long as it serves their ends. But it’s okay because…friendship I guess?!?

    Their heart is in the right place but the writing is generally bad. I think this generation of writers is incapable of imagining a better world, which, sure, is understandable, given how thoroughly corrupt our current society is. But it’s deeply depressing. It lacks soul.

    SNW is better in this regard. But you’ll probably want to watch season 1 of Discovery first since there’s some crossover.


  • I actually did this a lot on classic Mac OS. Intentionally.

    The reason was that you could put a carriage return as the first character of a file’s name, and it would sort above everything else by name while otherwise being invisible. You just had to copy the carriage return from a text editor and then paste it into the rename field in the Finder.

    Since OS X / macOS can still read classic Mac HFS+ volumes, you can indeed still have carriage returns in file names on modern Macs. I don’t think you can create them on modern macOS, though. At least not in the Finder or with common Terminal commands.
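
    If you want to test that, though, a script may manage what the Finder won’t. Here’s a hedged sketch (unverified on current macOS; at the POSIX layer, APFS and HFS+ reserve only the slash and NUL):

        from pathlib import Path

        # Unverified assumption: "\r" is a legal filename byte at the POSIX
        # layer, so this may still create a name starting with a carriage return.
        Path("\rSort Me First.txt").touch()

        # List it back; repr() makes the invisible carriage return visible.
        print([repr(p.name) for p in Path(".").glob("*Sort Me First*")])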


  • That can’t be good. But I guess it was inevitable. It never seemed like Arc had a sustainable business model.

    It was obvious from the get-go that their ChatGPT integration was a money pit that would eventually need to be monetized, and…I just don’t see end users paying money for it. They’ve been giving it away for free hoping to get people hooked, I guess, but I know what the ChatGPT API costs, and it’s never going to be viable. If they built a local-only backend, then maybe. I mean, at least then they wouldn’t have costs that scale with usage.

    For Atlassian, though? Maybe. Their enterprise customers are already paying through the nose. Usage-based pricing is a much easier sell. And they’re entrenched deeply enough to enshittify successfully.


  • Yeah, that’s true for a subset of code. But for the rest, the hardest parts happen in the brain, not in the files. Writing readable code is very, very important, especially when you are working with larger teams. Lots of people cut corners here and elsewhere in coding, though. Including, like, every startup I’ve ever seen.

    There’s a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science and they’re not good at that at high levels (same with visual art and “real” science; think of the code equivalent of seven deformed fingers).

    I don’t mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it’s going to lead to monumental fuckups. I know that because it’s been true for my entire career.


  • If I’m verifying anyway, why am I using the LLM?

    Validating output should be much easier than generating it yourself. That’s the intuition behind P≠NP: checking a proposed solution is generally far easier than finding one.
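
    A toy illustration of that asymmetry, using subset-sum (a classic NP-complete problem; the numbers are just for illustration):

        from itertools import combinations

        def verify(subset, target):
            # Checking a proposed answer: one pass over the subset.
            # (A full verifier would also confirm the elements came from nums.)
            return sum(subset) == target

        def solve(nums, target):
            # Finding an answer by brute force: up to 2**len(nums) candidates.
            for r in range(len(nums) + 1):
                for combo in combinations(nums, r):
                    if sum(combo) == target:
                        return combo
            return None

        nums = [3, 34, 4, 12, 5, 2]
        answer = solve(nums, 9)    # slow: exponential search -> (4, 5)
        print(verify(answer, 9))   # fast: a one-line check -> True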

    This is especially true in contexts where the LLM provides citations. If the AI is good, then all you need to do is check the citations. (Most AI tools are shit, though; avoid any that can’t provide good, accurate citations when applicable.)

    Consider that all scientific papers go through peer review, and any decent-sized org will have regular code reviews as well.

    From the perspective of a senior software engineer, validating code that could very well be ruinously bad is nothing new. Validation and testing is required whether it was written by an LLM or some dude who spent two weeks at a coding “boot camp”.


  • I remember when some company started advertising “BURN-Proof” CD-R drives and thinking that was a really dumb phrase, because literally nobody shortened “buffer underrun” to “BURN”, and because, you know, “burning” was the entire point of a CD-R drive.

    It worked, though. Buffer underruns weren’t a problem on the later generations of drives. I still never burned at max speed on those, though. Felt like asking for trouble to burn a disc at 52x or whatever they maxed out at. At that point it was the difference between 1.5 minutes and 4 minutes or something like that. I was never in that big a rush.