

// Increment i
i++;
Very info. Much useful.
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
If you do, use the -k option; it restricts access to the rook service to the current user session. Rook works without it, but is more secure with it.
Have you ever used OwnCloud, before the fork?
I hated administering OwnCloud, and that’s kept me away from NextCloud. OwnCloud was a big, resource-hogging, hot mess; did NextCloud do a huge refactor and clean it up?
Shamelessly shilling my OSS project, rook. It provides a secret-server-ish headless tool backed by a KeePass DB.
You might be interested in rook if you’re a KeePassXC user. Why might you want this instead of:
Rook is read-only, and intended to be complementary to KeePassXC. The KeePassXC command line tools are just fine for editing, where providing a password for every action is acceptable, and of course the GUI is quite nice for CRUD.
There’s a false equivalency here. Array sizes have nothing to do with static typing. For evidence, look at your own words: if undisputed strongly typed languages don’t support X, then X probably doesn’t have anything to do with strong typing. You’re conflating constraints or contracts or specific type features with type theory.
On the topic of array sizes, are you suggesting that size isn’t part of the array type in Go? Or that the compiler can’t perform some size constraint checks at compile time? Are you suggesting that Rust can perform compile time array bounds checking for all code that uses arrays?
Maybe it depends on your definition of “part of”, but
a := make([]int, 5)
len(a) == 5
len("hello") == 5
Arrays in Go have associated sizes.
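A minimal sketch of both cases, for concreteness (the length of an array is part of its type; the length of a slice or string is runtime metadata reported by len):

package main

import "fmt"

func main() {
    var a [5]int        // array: the 5 is part of the type [5]int
    // var b [4]int = a // would not compile: [5]int and [4]int are distinct types

    s := make([]int, 5) // slice: its length is runtime metadata, not part of []int
    fmt.Println(len(a), len(s), len("hello")) // prints: 5 5 5
}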
But that’s beside the point; what does array size metadata have to do with strong typing?
Now, did you know that several good Lisp and Scheme implementations, like SBCL, Chez Scheme, or Racket, compile to native code, even though they are dynamically typed languages? This is done by type inference.
Compiled or not, inferred or not (Go has type inference; most modern, compiled languages do), the importance of strong typing is that it detects typing errors at compile time, not at run time. Pushing inference into the compile phase also has performance benefits. If a program does type checking in advance of execution, it is by definition strongly typed.
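As a trivial sketch of what that buys you in Go: the type is inferred once, at compile time, and a mismatch is rejected before the program ever runs rather than blowing up mid-execution:

package main

import "fmt"

func main() {
    x := 5             // type inferred as int when the program is compiled
    fmt.Println(x + 1) // fine: int arithmetic

    // x = "five"      // rejected at compile time with a type error,
    //                 // so it can never fail at run time
}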
Java is still interpreted. It compiles to bytecode for a virtual machine, which then executes it on a simulated CPU. The bytecode interpreter has gotten very good, and it optimizes the bytecode as it runs; nearly every Java benchmark excludes warm-up because it takes time for the huge VM to load and for the optimization code to analyze and settle.
Java is not the gold standard for statically typed compiled languages. It’s gotten good, but it barely competes with far younger, far less mature statically typed compiled languages.
You’re comparing a language that has existed since before C and has had decades of tuning and optimization, to a language created when Lisp was already venerable and which only started to get the same level of performance tuning decades after that. Neither of which can come close to Rust or D, which are practically infants. Zig is an infant; it’s still trying to be a complete language with a complete standard library, and it’s still faster than SBCL. Give it a decade and some focus on performance tuning, and it’ll leap ahead. SBCL is probably about as fast as it will ever get.
An indisputable fact is that static typing and compilation virtually eliminate an entire class of runtime bugs that plague dynamically typed languages, and it’s not an insignificant class.
If you add a type checker to a dynamically typed language, you’ve re-invented a strongly typed, compiled language without adding any of the low hanging fruit gains of compiling.
Studies are important and informative; however, it’s really hard to fight against the Monte Carlo evidence of 60-ish years of organic evolution: there’s a reason why statically typed languages are considered more reliable and fast - it’s because they are. There isn’t some conspiracy to suppress Lisp.
It’s an interesting system; it does clutter your workspace with a lot of files, especially if you have a lot of rules. I used it in one project and came away with mixed feelings.
Lots of good, some noise.
Did you look at Pelican?
I have not, but I will. I may also look at Zola, although it, too, appears at the surface level to be tightly coupled with markdown.
the template language is buggy and inscrutable
It’s just Go templates, which are pretty solid; I’d be surprised by any bugs, unless they’re in the Hugo shortcodes. The syntax is challenging, even if you’re a Go developer and use it all the time. It’s a bespoke DSL, and a pretty awful one: it’s verbose, obtuse, and makes some common things hard.
Go is my language of choice, but my faith gets shaky whenever I have to use templates.
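For the flavor of it, here’s a toy sketch using the standard library’s text/template, the same template engine Hugo builds on; even “print the title, or a default when it’s empty” turns into a pipeline of actions:

package main

import (
    "os"
    "text/template"
)

func main() {
    // "Use the title, falling back to a default" needs an explicit if/else action.
    t := template.Must(template.New("page").Parse(
        "{{ if .Title }}{{ .Title }}{{ else }}Untitled{{ end }}\n"))

    _ = t.Execute(os.Stdout, map[string]string{"Title": ""}) // prints "Untitled"
}

Now scale that up to a whole theme of nested templates and it gets old fast.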
I’m not a huge fan of Python; despite its popularity, it’s got a lot of problems, not least of which is the whole Python 2/3 fiasco, which, years later, is still plaguing us. However, if I can containerize it so it isn’t constantly breaking in the background when I do a system update, I’m not opposed to using a project written in it. At least it isn’t Node; I won’t let that crap onto any server I admin.
Edit: Zola has the same problem as Hugo.
Ah, Ok.
I do the same (or a similar workflow): I rsync the content directory and let Hugo on the server render. My sites are public, but perhaps they’re just much smaller or not as popular; Hugo renders even my largest site in about a second, but for a large, slow, heavy-use production situation I could see a push-and-swap process for a more atomic site update.
I don’t see the degradation you do, but there are so many possible variables.
My biggest gripe about Hugo is how limited it is in supporting source document formats. There’s no mechanism for hooking in different formats, and the team is reluctant to merge PRs for other formats. When I started with Hugo, I had a large repository of essays spanning a decade and written in a variety of markup, from asciidoc (which I used for years), to reST, to markdown; and markdown is by far the worst. I was faced with converting everything to markdown, which was usually a lossy process because markdown is so limited, or not publishing all of that history.

And now we have djot, which is almost the perfect plain text markup language, but I again have to first do a lossy conversion to markdown to get Hugo to consume it. It low-key sucks, and I’m actively looking for an alternative that has a more flexible AST-based model for which new formats can be added; something that consumes a format like pandoc’s AST.
Hugo has a watch mode, right? It should rebuild if it detects changes.
Ah. I was wondering where the “Hugo, but Rust” was.
I love these rewrites in other languages. They often learn from, and improve on, their predecessors in a way that having to maintain backwards compatibility doesn’t allow.
I lived this working for a company that, as part of a suite of services, tracked international business travelers for the companies they worked for. The air travel part was a nightmare.
You know what else was a nightmare? There’s an airport in Hong Kong which technically isn’t in any country on most country-boundary maps; it’s built out in the ocean. It occasionally gave us grief when we updated map data, because we’d have to go in and manually change the map boundaries so the software would correctly locate travelers at the airport as being in the country.
Which countries are Hong Kong, Macau, Tibet, and Taiwan airports in? Hong Kong has since become un-controversial, but no matter what you choose people get upset about it.
That system was so complex, it was fascinating. The flight data alone is a nightmare, but when you start factoring in itineraries, and the fact that there’s no commonly used standard for booking systems and booking agencies have terrible data quality control, our most common issue was data quality; even after 15 years, we’d still find edge cases in the system where the real world varied from theory.
Dude(tte). I don’t know if pilots ever giggle about this, but
controlled flight into terrain
I know it’s the official, correct term, but still. Every time I see this, it cracks me up. The euphemisms that organizations come up with to either be extremely specific, or avoid using emotionally charged words like “crash” is hilarious.
Excellent, informative post.
shyly raises hand
I wish there were an alternative to Amazon. If there were only 3 or 6 stores where I could get everything I get from Amazon, I’d go through the trouble of multiple orders. But the alternative to Amazon is usually a bunch of individual items ordered from a bunch of unknown sites, all of which give me angst about handing over my credit card, and which usually add up to significant shipping costs. Or driving into the city and spending an entire day going from shop to shop, being limited in my options and often never finding everything.
I so badly want an alternative to Amazon. Shopping was objectively worse before it. We’ve tried Walmart, but it’s worse and I’m not sure it’s an ethical improvement.
I still drive to the mall for clothes, but even the mall is limited for non-clothes - and often hella expensive.
Go, because
I was using Java in my job, and had been since 1995 at that point, and did not touch it except for work. When I finally got fed up with dynamic, interpreted languages (around 2008?) I evaluated Rust, Vala, and Go, and Go won. In retrospect, I’m really glad I didn’t pick Vala, but whenever I look into Rust these days I’m also glad I didn’t pick that.
For scripting, I now stick to bash, or zsh if I’m not sharing the code. Bash scripts never fail because bash changes; the biggest risk is the external commands: tools I’ve become accustomed to, like ripgrep, aren’t guaranteed to be installed everywhere, and POSIX tools like grep have variants that differ in argument support; GNU grep is substantially different from SysV grep. If I’m distributing a script and can’t write it using basic, standard bash and the standard SysV POSIX tools available in BusyBox, I write it in Go.
I did do a project in V recently and like it a lot, but it’s not mature enough to switch to, yet, and it’s so close to Go I can’t switch between the two because I start to conflate them.
I will very rarely program in C to fix a bug in some project I’m using. C is so basic, I’ll never forget it; the biggest hazard in C is other people’s idioms.
The Streisand Effect doesn’t apply here. They’re not making news about it, they’re silencing content posts on their platform. If Google went out and started using takedowns on other platforms, that’s when you’d start to get a compound media effect, because site owners tend to broadcast to their readership; in this case, the only person who notices both the takedown and the cause is the author. And us, because OP told us, but we’re tiny.
After so many people stayed on Twitter, and after companies like Apple reversed their policy and went back to advertising there, I’ve lost faith in any mass internet movement. Most users don’t care, as long as they’re getting free stuff, and most content providers insist on using it because of monetization. If that’s where the content is, that’s where the users will go.
Except that one is automatically versioned and would have saved you this pain, and the other relies on you remembering to commit reflexively, then doing extra work to clean up your history before sharing; and once you push, it’s harder to change history and make a clean version to share.
These days, there’s little excuse to not use COW with automated snapshots in addition to your normal, manual, VCS activities.