We’ve all been there, friend. The bit arrays can’t hurt you now.
Wrong direction!
Store only bits, using word-length ints (32 bits on most modern architectures), and program everything to do its math on numbers represented as arrays of 32 one-bit ints!
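In the spirit of the joke, a sketch of what that monstrosity might look like (hypothetical, obviously - please don't do this):

```go
package main

import "fmt"

// A "number" is 32 ints, each holding a single bit, least significant first.
type number [32]int

// add does schoolbook binary addition, one bit-int at a time.
func add(a, b number) number {
	var sum number
	carry := 0
	for i := 0; i < 32; i++ {
		s := a[i] + b[i] + carry
		sum[i] = s & 1
		carry = s >> 1
	}
	return sum
}

// fromInt unpacks a sensible uint32 into 32 wasteful ints.
func fromInt(n uint32) number {
	var x number
	for i := 0; i < 32; i++ {
		x[i] = int(n >> i & 1)
	}
	return x
}

// toInt packs it back so we can admire the result.
func toInt(x number) uint32 {
	var n uint32
	for i := 0; i < 32; i++ {
		n |= uint32(x[i]) << i
	}
	return n
}

func main() {
	fmt.Println(toInt(add(fromInt(20), fromInt(22)))) // 42
}
```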
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Programming@programming.dev • GitHub is introducing rate limits for unauthenticated pulls, API calls, and web access • 1 · 1 day ago
I should know this, but I think Go’s module metadata server also caches, and the compiler looks there first if you don’t override it. I remember Drew got pissed at Go because the package server was pounding on sr.ht for version information; I really should look into those details. It Just Works™, so I’ve never bothered to read up on how it works. A lamentable oversight I’ll have to correct with this new rate limit. It might be no issue after all.
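A minimal sketch of what I mean, with my understanding spelled out in the comments (worth verifying against the docs for your toolchain version):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Unless overridden, GOPROXY defaults to "https://proxy.golang.org,direct":
	// module metadata and archives come from the proxy's cache first, and the
	// origin host (e.g. github.com) is only contacted on a cache miss.
	out, err := exec.Command("go", "env", "GOPROXY").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("GOPROXY=%s", out)

	// To bypass the proxy and fetch straight from the source repositories:
	//   go env -w GOPROXY=direct
}
```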
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Programming@programming.dev • GitHub is introducing rate limits for unauthenticated pulls, API calls, and web access • 24 · 2 days ago
The Go module system pulls dependencies from their sources. This should be interesting.
Even if you host your project on a different provider, many libraries are on github. Think of all those unauthenticated Arch users trying to install Go-based software that pulls dependencies from github.
How does the Rust module system work? How does pip?
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Programming@programming.dev • Flattening Rust’s Learning Curve | corrode Rust Consulting • 3 · 2 days ago
“Have you tried not being depressed?”
“Section”. They just used the wrong word ¯\_(ツ)_/¯
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Plebbit Will Never Deliver, Apologies for the Hype, Lemmy is Where I’m Staying • English • 1 · 3 days ago
Man, I love a good nitpick.
> Lemmy is decentralized, but it’s not distributed. It’s decentralized because the source of truth for a community isn’t your instance.
It’s a source of truth for you. It’s locally centralized. Your admins have complete control over your account; they can log in as you, post as you, remove your content.
Compare this to git. Github may provide public hosting for you, but you can take your marbles and go somewhere else if you like, and there’s nothing they can do about it. But midwest.social owns my Lemmy identity, and everything that’s on it. If they propagate a “delete” on all my messages, any cooperating servers will delete those messages. For each and every one of us, Lemmy is effectively centralized to the Lemmy instance our account is on.
Now, I agree, this is different than, say, Reddit, where if the Brown Shirts shut it down, they shut it all down, and this can’t happen with Lemmy.
But it’s also not git, or bitcoin, or Nostr, where all they can do is squash nodes, which has no impact on user accounts (or wallets, or whatever your identity is) or content.
Those can be updated asynchronously, so if data is cached locally, latency shouldn’t be an issue.
They say they’re not using DHT ¯\_(ツ)_/¯
I don’t know. This post was the first I’ve heard of it, but since then I’ve seen a couple more “organic” posts asking if anyone thinks it’s good. It smells a tiny bit of astroturfing, but not a lot, so maybe it’s genuine interest. I’ll wait a bit and see, personally.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Plebbit Will Never Deliver, Apologies for the Hype, Lemmy is Where I’m Staying • English • 1 · 4 days ago
This is similar to Jami. Jami has http name servers for lookup, and (optional) http DHT proxy servers for NAT traversal. Beyond that, it’s peer-to-peer DHT. The DHT isn’t global; it’s shared between connected clients. DHTs are also key-value stores, and Jami’s issues are not with the name server, they’re with message synchronization between clients.
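For flavor, a toy of just the key-value aspect (not Jami’s actual implementation, which uses OpenDHT; this is the bare idea, with Kademlia-style XOR distance deciding which node owns a key):

```go
package main

import (
	"crypto/sha1"
	"encoding/binary"
	"fmt"
)

type node struct {
	id    uint64
	store map[string]string
}

// hash maps names and keys into the same ID space the nodes live in.
func hash(s string) uint64 {
	h := sha1.Sum([]byte(s))
	return binary.BigEndian.Uint64(h[:8])
}

// closest picks the node whose ID has the smallest XOR distance to the key.
func closest(nodes []*node, key string) *node {
	kh, best := hash(key), nodes[0]
	for _, n := range nodes[1:] {
		if n.id^kh < best.id^kh {
			best = n
		}
	}
	return best
}

func main() {
	var nodes []*node
	for _, name := range []string{"alice", "bob", "carol"} {
		nodes = append(nodes, &node{id: hash(name), store: map[string]string{}})
	}
	// put and get route to the same node, so lookups work with no central server.
	closest(nodes, "ruaidhrigh").store["ruaidhrigh"] = "a-public-key"
	fmt.Println(closest(nodes, "ruaidhrigh").store["ruaidhrigh"])
}
```

The hard parts are everything this toy leaves out: routing tables, replication, and peers joining and leaving constantly.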
Actually, I have to qualify that: I don’t know what causes Jami’s delivery issues, but it’s probably not the name servers or proxies, because you can (and I have) hit them directly with a web browser or curl. From what I can tell, the Jami developers either don’t acknowledge the issues or are unable or unwilling to track them down, but the point is that it’s very likely the P2P part that is giving them trouble.
P2P is a hard problem to solve when peers come and go: peers may not be online at the same times, and there’s no central mailbox to query for missed messages; peers are mobile devices that change IPs frequently; or peers are behind a NAT.
You may be right about the design; I scanned the design summary, and easily could have misunderstood it. I don’t think it affects the difficulty of building robust, reliable P2P applications.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Plebbit Will Never Deliver, Apologies for the Hype, Lemmy is Where I’m Staying • English • 14 · 4 days ago
I don’t take issue with your points, but you’re conflating issues. I think it’s worth clarifying some terms up front.
Being utterly independent isn’t necessary for decentralization. Decentralization very specifically means there’s no single holder of the data; it does not have any implication for dependencies.
Lemmy is not decentralized; it’s federated. “Decentralized” and “federated” are not synonyms, and as long as you don’t run your own server, you’re effectively on a centralized platform. This is to your point about being “always dependent on something, somewhere in some way.” It’s true for Lemmy; it is not true for all systems, not unless you’re being pedantic, which wouldn’t be helpful: you being dependent on electricity from your electric company doesn’t mean an information network can’t be “truly” decentralized.
A distributed ledger can be truly decentralized. Blockchains aren’t always distributed ledgers, and not all distributed ledgers are blockchains, but whether or not a specific blockchain is resource intensive has no bearing on whether or not it’s centralized. This is the part I take issue with: it’s irrelevant to the decentralization discussion.
Bitcoin is decentralized: no single person or group of people controls it, and there is no central server that serves as an authoritative source of information. If there were, it wouldn’t be nearly so ecologically expensive. Its very nature as something that exists equally on every single full node is part of the cost. You can take out any node, or group of nodes, and as long as there’s one full node left in the world, bitcoin exists (you then have a consensus verification problem, but that’s a different issue).
But let’s look at a second, less controversial, example: git, or rather, git repositories. This is, again, fully decentralized, and depends on no single resource. Microsoft would like you to believe that github is the center of git, and indeed github is the main reason git is as popular as it is despite its many shortcomings, but many people don’t use github for their projects, and any full clone of any repository is an independent and fully decentralized copy, isolated and uncontrolled by anyone but the person on whose computer it exists. Everything else is just convention.
Nostr is yet another fully decentralized ecosystem. It is, unfortunately, colonized almost entirely by cryptobros, and that’s the majority of its content, but there’s nothing “blockchain” or crypto in the core design. Nodes are simple key/value stores, and when you publish something to Nostr you submit it to (usually) a half-dozen different nodes, and it propagates out from there to other nodes. If you run your own node, even if your node dies, you still have your account and can publish content to other nodes, because your identity - your private key - is stored on your computer. Or, if you’re smart, on your phone, and maybe your laptop too, with backups. Your identity need not even be centralized to one device. No single group can stop you from publishing - individual nodes can choose to reject your posts, and there are public block lists, but not every node uses those. It is truly decentralized.
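A toy illustration of that identity model. Note this is not the real wire format - Nostr uses secp256k1 Schnorr signatures per NIP-01; ed25519 here is a stand-in to keep the sketch dependency-free. The point is that identity is just a keypair on your own devices, and a post is a signed blob you can hand to any number of nodes:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// Your identity: a keypair that lives on your devices, not on a server.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// "Publishing" is signing a note and handing it to however many nodes you like.
	note := []byte("hello from a node-independent identity")
	sig := ed25519.Sign(priv, note)

	// Any node or reader can verify authorship with no central authority.
	fmt.Println("verified:", ed25519.Verify(pub, note, sig))
}
```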
I’m not familiar with Plebbit, but it seems to me they’re trying to establish a cryptographically verifiable distributed ledger - a distributed blockchain. There’s no proof-of-work in this, because the blocks are content, so the energy cost people associate with bitcoin is missing.
DHTs and distributed ledgers are notoriously difficult to design well, often suffering from syncing lags and block delivery failures. Jami is a good example of a project plagued by DHT sync issues. I’m not surprised they’re taking a long time to get stable, because this is a hard problem to solve - a deceptively simple problem to describe, but syncing hides issues like conflict resolution, updating published content, and all the administrative tools necessary in a world full of absolute shitheads who just want to cause chaos. It does look to me as if it would be fully decentralized, in a way Lemmy isn’t, if they can get it working reliably.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • 3-2-1 Backups: How do you do the 1 offsite backup? • English • 8 · 6 days ago
I used to say restic and b2; lately, the b2 part has become more iffy because of scuttlebutt, but for now it’s still my offsite, and it will remain so unless the situation resolves unfavorably.
Restic is the core. It supports multiple cloud providers, making configuration and use trivial. It encrypts before sending, so the destination never has access to unencrypted blobs. It does incremental backups, and supports FUSE vfs mounting of backups, making accessing historical versions of individual files extremely easy. It’s OSS, and a single binary executable; IMHO it’s at the top of its class, commercial or OSS.
B2 has been very good to me, and is a clear winner for this use case: writes and space are pennies a month, and it only gets more expensive if you’re doing a lot of reads. The UI is straightforward and easy to use, the API is good; if it weren’t for their recent legal and financial drama, I’d still unreservedly recommend them. As it is, you’d have to evaluate it yourself.
6 years now. 6. Years.
I’m in the US, and I had (unwisely) accepted promotions later in my career and was middle management when The Purge came. So, don’t be discouraged; our situations are very different. I’ve got white-middle-aged-middle-management-male working against me.
But I do have some advice: do not stay inactive. Volunteer. Take whatever short-term contract work you can get, even if it doesn’t pay great. Having something to fill in the time on your CV is invaluable. I didn’t do that; I am, at my age, reasonably well off and was being picky, looking and applying only for jobs I really wanted, and it was a mistake. When I finally gave that up and started being less selective, I found that even applying for “lower” roles wasn’t working.
Don’t be inactive. Even volunteer work gives you the opportunity to meet people and make connections, and it’s something on your resume - not just a long gap. It’s easier to explain and more palatable to employers when they ask, “so, what have you been doing for the past X years?”
Good luck.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Self Hosted OpenSource Projectmanagement Tool • English • 1 · 11 days ago
It’s over the top for everyone. I wish it were easier to use, but then it wouldn’t be as effective.
As I said, much of its value probably comes from the rigor it forces you to exercise. It costs a lot of effort, though, and you’re on the right path with kanban: use the leanest process that works.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Self Hosted OpenSource Projectmanagement Tool • English • 7 · 11 days ago
Did you look through the github project management list?
While it doesn’t meet any of your requirements, I firmly believe the best project management software is Taskjuggler. You have to be able to write software to use it, because it’s a language for defining tasks and projects, and it can get quite involved. But it is an excellent educational experience that exposes just how much people futz with Gantt charts to get what they want to see, vs the reality. It is also unparalleled in exposing resource use and needs.
At its most complete, here’s a taste of what it looks like to use it:
You declare all your resources and their capabilities (John is junior and is 60% as efficient as Mary). You define a project, broken down into tasks at various and increasing levels of detail, including priorities and estimated effort, and assign teams and resources. When it’s all defined, you compile a Gantt chart and it will show you exactly how long everything will take: when things will start and end, and that you can’t deliver X and Y at the same time because, while you have enough developers, the QA servers can’t be used for both at once.
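Roughly like this - I’m writing it from memory, so treat the exact syntax as approximate and check the Taskjuggler manual:

```
project acme "Accounting Software" 2025-06-01 +3m

resource mary "Mary" { efficiency 1.0 }
resource john "John" { efficiency 0.6 }  # John is 60% as efficient as Mary

task release "Release 1.0" {
  task impl "Implementation" {
    effort 20d
    allocate mary, john
  }
  task qa "QA" {
    depends !impl      # QA starts only when implementation ends
    effort 10d
    allocate john
  }
}

taskreport gantt "GanttChart" {
  formats html
  columns name, start, end, chart
}
```

Compile that, and Taskjuggler schedules everything, flagging over-allocations instead of letting you draw a chart that can’t happen.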
It’s incredibly tedious and frustrating to use, but once you get the resource definitions really dialed in, I know of no other tool that predicts reality with such accuracy. It’s definitely ideal for the waterfall-minded, although it can be used with agile if you keep it to the release scope; you can record both expectations and reality as time passes.
It’s not a lightweight process, and I haven’t met a project manager yet who could or would use it; it’s quite intensive. You do have to define a complete and comprehensive picture of everything impacting your project, and honestly I think that’s most of the value, as most teams just wing a bunch of stuff - which is why estimations are so frequently wrong. It does tend to eliminate surprises, like the fact that half your dev team just happens to be planning vacations at the same time in the middle of the release cycle, or Management is having a big two-day team building event. If you can see it in a calendar, you put it in the plan and assign the people it affects, and the software calculates the overall delivery impact.
It’s a glorious, powerful, terrifying and underused tool, and satisfies none of your declared requirements.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • What webapps do you selfhost that aren’t media/game servers? • English • 3 · 13 days ago
Are books not media?
I was thinking through my list, and almost mentioned Calibre Web, but decided it’s media related.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • I want to build a Mini ITX PC for my home server, where do I start? • English • 2 · 16 days ago
It actually is RAID5/6 I’m looking for. Striping for speed isn’t important to me, and simple redundancy at a cost of 1/2 your total capacity isn’t as nice as getting 3/5 of your total capacity while still having drive failure protection and redundancy.
I used to go the device mapper and LVM route, but it was an administrative nightmare for a self-hoster. I only used the commands when something went wrong, which was infrequent enough that I’d forget everything between events and need to re-learn it while worrying that something else would fail while I was fixing it. And changing distros was a nightmare. I use the btrfs command enough for other things to keep it reasonably fresh; if it could reliably do RAID5, I’d have RAID5 again instead of limping along with no RAID and relying on snapshots, backups, and long outages when drives fail.
Multi-device is only niche because nothing else supports it yet. I’ll bet once (or, given the project’s main author, if) bcachefs becomes more standard, you’ll see it a lot more. The ability to use your M.2 but have eventual-consistency replication to one or more slower USB drives without performance impact will be game changing. I’ve been wondering whether this will be usable with network-mounted shares as level-3 replication. It’s a fascinating concept.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Kener open source status page v3.2.14 released with the most requested change: Subscribe to monitors • English • 13 · 16 days ago
Thank you. It’s becoming a pet peeve of mine: people posting about stuff and just assuming everyone knows WTH the software is.
Even when I’m posting about updates to my own software, I include the elevator pitch. It’s not hard; if you can take the time to post, you can take the time to copy and paste the blurb.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • I want to build a Mini ITX PC for my home server, where do I start? • English • 1 · 17 days ago
Shit, that’s a lot of storage. K.
I’ve lived on btrfs for years. I love the filesystem. However, its RAID5/6 has been unreliable for a decade now, with no indication that it will ever be fixed; but most importantly, neither btrfs nor zfs has prioritized multi-device support, while bcachefs does.
You can build a filesystem from an SSD, a hard drive, and a USB drive, and configure it so that writes and reads go to the SSD first and are eventually replicated to the hard drive, and eventually to the USB drive. All behind the scenes, so you’re working at SSD speeds for R/W, even if the USB drive hasn’t yet gotten all of the changes. With btrfs and zfs, you’re working at the speed of the slowest device in your multi-device FS; with bcachefs, you work at the speed of the fastest.
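A toy model of that write path - my mental model of the design, not bcachefs code - where foreground writes complete at fast-tier speed and a background worker drains a replication queue to the slow tier:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	slow := make(chan int, 64) // blocks awaiting background replication
	done := make(chan struct{})

	// Background replicator: drains the queue at "hard drive" speed.
	go func() {
		for id := range slow {
			time.Sleep(50 * time.Millisecond) // pretend slow-device latency
			fmt.Println("block", id, "replicated to slow tier")
		}
		close(done)
	}()

	// Foreground writes return as soon as the fast tier has the data.
	for id := 0; id < 3; id++ {
		// (write the block to the SSD here)
		slow <- id // queue it for eventual replication
		fmt.Println("block", id, "acknowledged at SSD speed")
	}

	close(slow)
	<-done // wait so the demo prints everything before exiting
}
```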
There’s a lot in there I don’t know about yet, like: can it be configured s.t. the fastest device acts as an LRU cache? But from what I’ve read, it’s designed very much like the L1/L2 cache and main memory hierarchy.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • I want to build a Mini ITX PC for my home server, where do I start? • English • 4 · 17 days ago
How much is “limited?” I’ve got one of those AMD Ryzen mobile CPU jobs that I bought new, from Amazon, for $300. I added a 2TB M.2 drive for another $100. For a bit over $200 ($230?) you can get a 4TB M.2 NVMe.
And that’s for fast storage. There are USB3 A and C ports, so nearly unlimited external drives - slower, but still faster than your WiFi.
When bcachefs is reliable, it’ll have staged multi-device caching for the stuff you’re actually using, with background writing to your slower drives. I’m really looking forward to that, but TBH I have all of our media on a USB3 SSD, and it’s plenty fast enough to stream videos and music from.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social to Selfhosted@lemmy.world • Backblaze responds to claims of “sham accounting,” customer backups at risk - Ars Technica • English • 2 · 17 days ago
I’m only concerned insofar as I don’t know of a good alternative, and really don’t want to spend the time shifting everything to a new system. I have 3 VPSes and 4 (? 5?) home computers backing up to B2. The major ones I also have backing up to disk, so really the risk for me is in the gap period while I find and set up a new backup service.
This will be beyond annoying, but for me not catastrophic. Mainly, I’ve liked B2 - the price, and how easy it’s been to use. I understand the UI; it’s pretty straightforward, and it’s directly supported by a lot of software. It would be a real shame if it went under due to mismanagement.
Also: another example supporting my theory that one of the major flaws in Capitalism is public trading markets. This shit wasn’t an issue before they went public.
What is your question? I see you describing your approach, and think reusing an old laptop for this is perfect (built-in UPS, yay!), but it’s not clear what you’re seeking advice about.