

Just get a drive from any old notebook of the last 15 years or so someone wants to throw out, and buy a USB to SATA slim cable.


That’d break git repos where files with the same name but different case exist.
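Easy to check for before it bites - a quick Python sketch; lowercasing is only a rough stand-in for what case-insensitive filesystems actually do:

```python
import subprocess
from collections import Counter

# Everything git tracks, grouped by lowercased path - more than one
# entry per group means the checkout will clobber itself on a
# case-insensitive filesystem.
files = subprocess.run(
    ["git", "ls-files"], capture_output=True, text=True, check=True
).stdout.splitlines()

counts = Counter(path.lower() for path in files)
for path in sorted(files):
    if counts[path.lower()] > 1:
        print("case collision:", path)
```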
If you can afford it, see if Eaton has a smaller tower UPS suitable for you.


About 20 years ago I made a script that converts pictures to HTML tables. Back then RAM was a severe constraint for this, and even on more powerful hardware browsers tended to just crash on larger pictures.
I checked it again a few years later, and things looked way better. I guess with CSS it’d be rather trivial nowadays to do the same with a short video by just cycling through showing/hiding a table per frame.
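The picture-to-table part is simple enough that a rough sketch fits here - this one assumes Pillow, and the file name and cell size are made up:

```python
from PIL import Image

def image_to_html_table(path, max_width=120):
    img = Image.open(path).convert("RGB")
    # Downscale first - the table blows up quadratically, which is
    # exactly what used to kill browsers back then.
    scale = max_width / img.width
    img = img.resize((max_width, max(1, round(img.height * scale))))
    rows = []
    for y in range(img.height):
        cells = "".join(
            '<td style="background:#{:02x}{:02x}{:02x}"></td>'.format(
                *img.getpixel((x, y))
            )
            for x in range(img.width)
        )
        rows.append(f"<tr>{cells}</tr>")
    return (
        "<style>td{width:4px;height:4px;padding:0}"
        "table{border-collapse:collapse}</style>"
        f"<table>{''.join(rows)}</table>"
    )

if __name__ == "__main__":
    print(image_to_html_table("picture.png"))
```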


I just mentioned that because google drive links are one of the very few things I’m opening in chrome - and they’re the only site I need a 3rd-party cookie exemption for.


They probably couldn’t get google drive to work without 3rd party cookies.
At the time of sending the mail I need the metadata - so offering an SMTP server implementation which keeps this in memory while forwarding is not hard. You’d lose a persistent spool in case of delivery errors - but we were already building relays 30 years ago that keep the client connection open while trying to deliver the mail, so errors get relayed directly to the client; that also isn’t an excuse.
For IMAP - if you don’t do server-side searching or similar it’ll work with fully encrypted mails.
They will have access to metadata - otherwise they wouldn’t be able to operate as an email service. That’s sufficient to implement those protocols.
The client would then have to bring its own crypto, and you’d probably want the SMTP server to reject mails if delivered unencrypted (though their FAQ says you can send unencrypted mails).
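A minimal sketch of that in-memory forwarding idea, using Python’s aiosmtpd - the next hop and the “looks encrypted” check are assumptions, not any particular provider’s setup:

```python
import smtplib
from aiosmtpd.controller import Controller

NEXT_HOP = ("relay.example.org", 25)  # hypothetical upstream relay

class EncryptedOnlyRelay:
    async def handle_DATA(self, server, session, envelope):
        body = envelope.content  # the metadata lives in the envelope
        # Crude check: accept only PGP/MIME or inline-PGP payloads.
        if (b"BEGIN PGP MESSAGE" not in body
                and b"multipart/encrypted" not in body):
            return "550 Plaintext mail not accepted here"
        try:
            # Keep the client connection open while trying the next hop,
            # so delivery errors go straight back to the sender - the
            # 30-year-old trick that makes a persistent spool optional.
            with smtplib.SMTP(*NEXT_HOP, timeout=30) as relay:
                relay.sendmail(envelope.mail_from, envelope.rcpt_tos, body)
        except smtplib.SMTPException as exc:
            return f"451 Upstream delivery failed: {exc}"
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    controller = Controller(EncryptedOnlyRelay(),
                            hostname="127.0.0.1", port=8025)
    controller.start()
    input("Relay running on 127.0.0.1:8025 - press enter to stop\n")
    controller.stop()
```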
The reason they claim they can’t is probably trying to keep full control over what users are doing, in which case I agree - fuck them, don’t use services like that.
Don’t have links anymore, but a few months ago I came across some startup trying to sell AI that watches your production environment and automatically optimizes queries for you.
It is just a matter of time until we see the first AI-induced large data loss.
Did pretty much the same with a new server recently - spent ages debugging why it didn’t find the SAS disks. Turns out, disks like to have power connected, and no amount of debugging on the software level will help you.


I was referring to work setups with the overengineering - if I had a cent for every time I had to argue with somebody at work to not make things more complex than we actually need, I’d have retired a long time ago.


Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that,
Even then, knowing when not to use k8s or similar things is often more valuable than having deep knowledge of them - a lot of the setups where I see k8s used don’t have the uptime requirements to warrant the complexity. If I have something that just needs to be up during working hours, with reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, putting a few additional layers that can blow up in between maybe isn’t the best idea.


Everything is deployed via ansible - including nameservices. So I already have the description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
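For illustration, a sketch of such a script, assuming a YAML inventory and an `admin_url` host var - all the names and paths here are made up:

```python
import yaml  # pip install pyyaml

with open("inventory/hosts.yml") as f:
    inventory = yaml.safe_load(f)

def walk(group, name="all"):
    # Standard ansible YAML inventory layout: hosts plus nested children.
    group = group or {}
    for host, hostvars in (group.get("hosts") or {}).items():
        url = (hostvars or {}).get("admin_url", "-")
        print(f"{name:20} {host:30} {url}")
    for child, sub in (group.get("children") or {}).items():
        walk(sub, child)

walk(inventory.get("all", inventory))
```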
Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools they have available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS with deployment to staging, and the option to deploy to production after tests, over 15 years ago.
I nowadays manage my private stuff with the ansible scripts I develop for work - so my own stuff is mostly a development environment for work, and therefore doesn’t need to be done on private time.
One thing I like about bluesky is that your identity doesn’t have to be tied to an instance domain - you’d still have issues if you want to change it later, but if you plan ahead and use your own domain you can just move it between instances.
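The mechanism behind that is simple - the handle is just a domain resolving to a stable DID, either via an _atproto TXT record or a well-known endpoint. A Python sketch with a placeholder domain:

```python
import urllib.request

handle = "example.com"  # placeholder - use your own domain as handle
# Alternative to this endpoint: a DNS TXT record at _atproto.<handle>
# containing "did=did:plc:...".
url = f"https://{handle}/.well-known/atproto-did"
with urllib.request.urlopen(url, timeout=10) as resp:
    did = resp.read().decode().strip()
# The DID stays the same when you move between instances - only the
# handle-to-DID and DID-to-server mappings get updated.
print(f"{handle} -> {did}")
```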


A lot of the Zen-based APUs don’t support ECC. The next thing is whether the platform supports registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts are registered); Epycs are registered.
That makes a huge difference in how much RAM you can add, and in how much you pay for it.
Funny timing - I’m currently going through a stack of Sun hardware in my garage to decide what to keep, and what I’ll try to find a good home for (or eventually dispose of).


It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.
Problem with ssmtp and related tools when I was testing them was the behaviour in error conditions - due to the lack of any kind of spool it doesn’t fail very gracefully, and if the sending software doesn’t expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also wouldn’t be able to write a spool) this can very easily lead to loss of mails.
I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just doing a simple sendmail program throwing whatever it receives into a Maildir, and a cronjob to send it onwards. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers behind Emacs to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, or if you don’t, just go with ssmtp or similar. Or if you do want to dig into that, message me, and I’ll help make things more user friendly.
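Not my actual code, but a rough Python sketch of the shape of that setup - the spool path and smarthost are placeholders, and the cronjob would call it with --flush:

```python
import email, mailbox, smtplib, sys

SPOOL = "/var/spool/outgoing"          # hypothetical Maildir path
SMARTHOST = ("smtp.example.org", 587)  # hypothetical relay

def enqueue():
    # Maildir delivery is atomic-ish: written to tmp/, renamed into new/.
    mailbox.Maildir(SPOOL, create=True).add(sys.stdin.buffer.read())

def flush():
    md = mailbox.Maildir(SPOOL)
    for key in md.keys():
        msg = email.message_from_bytes(md.get_bytes(key))
        with smtplib.SMTP(*SMARTHOST, timeout=60) as s:
            s.starttls()
            s.send_message(msg)  # recipients taken from the headers here
        md.remove(key)  # only drop the file once the relay accepted it

if __name__ == "__main__":
    flush() if "--flush" in sys.argv else enqueue()
```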
Note that those are deepseek, not chatgpt. I largely gave up on chatgpt a long time ago, as it has severe limitations on what you can ask without fighting its filters. You can make it go on hallucinated rants just as easily - I just do that nowadays on locally hostable models.