  • It’s the most boring part of the technical side of the job, especially at the more senior levels, because it’s so mind-numbingly simple, it eats up a significant proportion of development time, and it’s usually what ends up having to be redone whenever there are small changes to things like input or output interfaces (i.e. adding, removing or changing data fields). That’s why it’s probably one of the main things that make maintaining and updating code already in Production a far less pleasant side of the job than actually creating the application/system (see the mapping sketch below).
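    To make that concrete, here’s a minimal sketch (Python, with made-up CustomerDto / Customer / map_customer names, since the original doesn’t name a specific codebase) of the kind of field-by-field mapping code that has to be touched every time a data field is added, removed or renamed:

    ```python
    from dataclasses import dataclass

    @dataclass
    class CustomerDto:      # shape dictated by the external input/output interface
        id: str
        name: str
        email: str

    @dataclass
    class Customer:         # shape used internally by the application
        customer_id: str
        display_name: str
        contact_email: str

    def map_customer(dto: CustomerDto) -> Customer:
        # Mind-numbingly simple, yet every added, removed or renamed field
        # means coming back to edit this function (and every mapper like it).
        return Customer(
            customer_id=dto.id,
            display_name=dto.name,
            contact_email=dto.email,
        )
    ```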


  • If the information never leaves the device then it doesn’t need a policy - privacy is not about what an app does on the device with data that never leaves it (and hence never gets shared), it’s about what it shares with a 3rd party.

    A clock doesn’t need to send system time settings to a server since that serves no purpose for it - managing those is all done at the OS level and the app just uses what’s there. That’s even more so for location data, since things like the timezone are set at the OS level, which also handles stuff like prompting the user to update the timezone if it detects the device is now in a different one (for example, after a long trip) - as in the sketch below.
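    A minimal sketch of that, assuming a Python clock app purely for illustration: the app just reads whatever time and timezone the OS already manages, with no network calls and no location access.

    ```python
    from datetime import datetime

    # The OS owns the time settings; the app only reads them.
    now = datetime.now().astimezone()     # local time with the OS-provided UTC offset
    print("Local time:", now.isoformat())
    print("Timezone:", now.tzname(), "- UTC offset:", now.utcoffset())
    # Nothing here ever leaves the device, so there's nothing for a privacy policy to cover.
    ```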


  • One of the first things they teach you in Experimental Physics is that you can’t derive a curve from just 2 data points.

    You can just as easily fit an exponential growth curve to 2 points like that, one 20% above the other, as you can a sinusoidal curve, a linear one, an inverse square curve (one that actually grows to a peak and then eventually goes down again), or any of the many curves where growth has ever-diminishing returns and can’t go beyond a certain point (literally “with a limit”).

    I think the point that many are making is that LLM growth in precision is the latter kind of curve: growing, but ever slower and tending to a limit which is much less than 100%. It might even be more like the inverse square one (in that it might actually go down) if the output of LLM models ends up polluting the training sets of later models, which is a real risk.

    Your showing that there was some growth between two versions of GPT (so, 2 data points, a before and an after) doesn’t disprove this hypothesis. It doesn’t prove it either: as I said, 2 data points aren’t enough to derive a curve (the little sketch below makes that concrete).

    If you do look at the past growth of precision for LLMs, whilst improvement is still happening, the rate of improvement has been going down, which does support the idea that there is a limit to how good they can get.
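    A minimal sketch of the “2 data points fit many curves” point, with made-up numbers purely for illustration: three very different curves all pass exactly through the same two points (the second 20% above the first), yet extrapolate to completely different places.

    ```python
    import math

    # Two data points: a "before" and an "after", the second 20% higher.
    x1, y1 = 1, 1.0
    x2, y2 = 2, 1.2

    # Exponential growth: y = a * b**x
    b = y2 / y1                  # 1.2
    a = y1 / b
    exponential = lambda x: a * b**x

    # Linear growth: y = m*x + c
    m = (y2 - y1) / (x2 - x1)    # 0.2
    c = y1 - m * x1              # 0.8
    linear = lambda x: m * x + c

    # Growth with a limit: y = L - k * r**x, approaching L = 1.25 from below.
    L = 1.25
    r = (L - y2) / (L - y1)      # 0.2
    k = (L - y1) / r             # 1.25
    limited = lambda x: L - k * r**x

    for name, f in [("exponential", exponential), ("linear", linear), ("limited", limited)]:
        # All three reproduce the two observed points exactly...
        assert math.isclose(f(x1), y1) and math.isclose(f(x2), y2)
        # ...but they disagree completely about what comes next.
        print(f"{name:12s} at x=10: {f(10):.2f}")

    # exponential at x=10: ~5.16, linear: 2.80, limited: ~1.25
    ```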


  • Above a certain level of seniority (in the sense of real breadth and depth of experience rather than merely a high count of work years), one’s increased productivity is mainly in making others more productive.

    You can only be so productive at writing code yourself, but you can certainly make others more productive: with better software design and architecture; with libraries properly designed for productivity, bug reduction and future extensibility; with development processes adjusted to the specifics of the business the software is being made for; with proper technical and requirements analysis done before time has been wasted on coding; with mentorship; and with the experience to foresee future needs and potential pitfalls at all levels, from requirements through systems design down to the code itself.

    Don’t pay for that and then be surprised at just how much work turns out to have been wasted doing the wrong things, how much trouble people have with integration, how many “unexpected” things delay the deliveries, how fast your code base ages and how brittle it seems, how often whole applications and systems have to be rewritten, how much the software mismatches the needs of its users, how mistrustful and even adversarial the developer-user relationship ends up being, and so on.

    From the outside (and also from having known people on the inside) it’s actually pretty easy to deduce that plenty of Tech companies (Google being a prime example) haven’t learned the lesson that there are more forms of value in the software development process than merely “works 14h/day, is young and intelligent (but clearly not wise)”.


  • Sounds like a critical race condition or a bad memory access (the latter only in languages with pointers).

    Since it’s HTTP(S), and judging by the average level of multi-threading experience I’ve seen even in people doing work that naturally involves multiple threads (such as serving multiple simultaneous network clients), my bet is on the former - the sketch below shows the classic shape of that kind of bug.

    PS: Yeah, I know it’s a joke, but I made the serious point anyways because it might be useful for somebody.
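    For anyone who hasn’t hit one before, here’s a minimal Python sketch (just an illustration, not the code from the post) of the classic lost-update race: several “request handler” threads do an unsynchronised read-modify-write on shared state, and the total comes out wrong depending on timing.

    ```python
    import threading

    counter = {"requests": 0}    # shared state touched by every "request handler"

    def handle_request():
        for _ in range(100_000):
            # Read-modify-write without a lock: two threads can read the same value,
            # both add 1, and one of the two updates is silently lost.
            current = counter["requests"]
            counter["requests"] = current + 1

    threads = [threading.Thread(target=handle_request) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000, but it will usually print less - and a different number each run.
    print("requests counted:", counter["requests"])

    # The fix is to make the update atomic, e.g. do the increment while holding a threading.Lock().
    ```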


  • Making a mistake once in a while on something one does all the time is to be expected - even somebody with a 0.1% rate of mistakes will fuck up once in a while if they do it frequently enough, especially if they’re too time-constrained to validate their work (the quick back-of-the-envelope calculation below shows how fast that adds up).

    Making a mistake on something you do just once, such as setting up the process for pushing virus definition files to millions of computers in such a way that they’re not checked in-house before they go into Production, is a 100% rate of mistakes.

    A rate of mistakes of 0.1% is generally not incompetence (it depends on how simple the process is and how much you’re paying for that person’s work), whilst a rate of 100% definitely is.

    The point being that those designing processes, who have lots of time to do it, check it and cross-check it, and who generally only do it once per place they work (maybe twice), really have no excuse for failing the one thing they had to do with all the time in the world, whilst those who do the same thing again and again under strict time constraints definitely have a valid excuse for making a mistake once in a blue moon.
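    A quick sketch of that arithmetic (the 0.1% figure is just the one from the comment above, not real data): with a per-task error rate p, the chance of at least one slip-up after n repetitions is 1 - (1 - p)^n, and that climbs towards certainty surprisingly fast.

    ```python
    # Chance of at least one mistake after n repetitions at a 0.1% per-task error rate.
    p = 0.001

    for n in (100, 1_000, 5_000):
        at_least_one = 1 - (1 - p) ** n
        print(f"after {n:>4} repetitions: {at_least_one:.1%} chance of at least one mistake")

    # after  100 repetitions: 9.5% chance of at least one mistake
    # after 1000 repetitions: 63.2% ...
    # after 5000 repetitions: 99.3% ...
    ```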


  • If your system depends on a human never making a mistake, your system is shit.

    It’s not by chance that, for example, accountants have since forever had something they call reconciliation, where the transaction data entered from invoices and the like gets cross-checked against something produced differently, for example bank account transactions - their system is designed with the expectation that humans make mistakes, hence there’s a cross-check process to catch those mistakes.

    Clearly Crowdstrike did not have a secondary part of the process designed to validate what’s produced by the primary (in software development that would usually be Integration Testing), so their process was shit.

    Blaming the human who made a mistake for essentially being human and hence making mistakes, rather than blaming the process around them for not being designed to catch human failure and stop it from having nasty consequences, is the kind of simplistic, ignorant “logic” that only somebody who has never worked on making anything that has to be reliable could have.

    My bet, from decades of working in the industry, is that some higher-up in Crowdstrike didn’t want to pay for the manpower needed for a secondary process checking the primary one before pushing stuff out to Production because “it’s never needed” - and then the one time it was needed, it wasn’t there, things really blew up massively, and here we are today (a tiny sketch of what such a pre-release check might look like is below).
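    Purely as an illustration of “a secondary part of the process that validates what the primary produced” - not Crowdstrike’s actual pipeline, and validate_definitions / push_to_production are hypothetical names - here’s a release gate that refuses to ship anything the independent check can’t even parse:

    ```python
    import json
    import sys

    def validate_definitions(path: str) -> list[str]:
        """Independent pre-release check: returns a list of problems found."""
        try:
            with open(path, "rb") as f:
                data = json.loads(f.read())   # assuming a JSON payload purely for the example
        except (OSError, ValueError) as exc:
            return [f"file does not even parse: {exc}"]
        if not data.get("signatures"):
            return ["no signatures present - looks empty or truncated"]
        return []

    def push_to_production(path: str) -> None:
        print(f"pushing {path} to production")   # hypothetical deployment step

    def release(path: str) -> None:
        problems = validate_definitions(path)
        if problems:
            # The gate exists precisely because the primary process is run by humans.
            sys.exit("NOT pushing to production:\n" + "\n".join(problems))
        push_to_production(path)

    if __name__ == "__main__":
        release(sys.argv[1])
    ```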


  • Yeah, the tools to figure out the low-level shit are still there, information on it has never been this easy to come by, and bright people who are interested will still get there.

    However, growing up during a time when you were forced to figure out the low-level details of tech merely to get stuff to work does mean that if you were into tech back then you definitely became a bit of a hacker (in the traditional sense of the word), whilst what people consider being into tech now is often mainly spending money on shiny toys where everything is already done for you.

    Most people who consider themselves “into Tech” don’t really understand it in any significant depth, because they never had to - only the few who actually want to understand it at that level, enough to invest time into learning it, do.

    I’m pretty sure the same effect happened in the early days vs later days of other tech, such as cars.