• 0 Posts • 299 Comments • Joined 2 years ago • Cake day: June 30th, 2023



  • towerful@programming.dev to Programmer Humor@programming.dev · The future is here

    Yeh, but you only need 10 vibe code cleaner-uppers per vibe coder.
    And a vibe coder is a 10x developer.
    You just have to mitigate the increased cost of AI API calls.
    It pretty much balances out, with the obvious 20% efficiency boost - which is where everyone makes their money: companies, developers and ~~shovel~~ AI platforms… All 20% efficiency boost. Which directly relates to profit boosts. 20% line goes up!
    Which also pays for the datacenters, the ~~shovels~~ GPUs, the power, the cooling and the water for the cooling. It’s all cheaper, cause AI is at least 20% more productive.

    Even if your vibe-coder-code-fixers turn into vibe-coder-code-vibe-fixers… That’s just another 20% efficiency boost. Basically printing money! Oh, but you need to buy more ~~shovels~~ GPUs. But that’s also a win, because ~~shovels~~ GPUs don’t have unions or require holidays. Think of the profits! They work 24/7.
    And all you need are vibe-coder-code-vibe-fixer-code-fixers.

    …As long as your vibe-coder-code-vibe-fixer-code-fixers don’t turn into vibe-coder-code-vibe-fixer-code-vibe-fixers (I’m so lost, I think that’s right).

    Edit: forgot some shovels



  • I’d still run k8s inside a Proxmox VM. Even if basically all of the host’s resources are dedicated to that VM, Proxmox gives you a huge amount of oversight and additional tooling.
    Proxmox doesn’t have to do much (or even anything) beyond providing a virtual machine.

    I’ve run Talos OS (a dedicated k8s distro) bare metal. It was fine, but I wished I had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. Having a hypervisor would have meant I could just roll back to a snapshot, and separate worker/master nodes without running additional servers.
    This was sorely missed while I was learning both how to deploy k8s and k8s itself.
    For the next similar project, I’ll run Talos inside Proxmox VMs.

    As far as “how does cloudflare work in k8s”… However you want?
    You could manually deploy the example manifests provided by cloudflare.
    Or perhaps there are some helm charts that can make it all a bit easier?

    Or you could install an operator, which will look for Custom Resource Definitions (or specific metadata on standard resources) and then deploy and configure the additional resources needed to make it all work.
    https://github.com/adyanth/cloudflare-operator seems popular?

    I’d look to reduce the amount of yaml you have to write/configure by hand, which is why I like operators.
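
    For the “manually deploy the manifests” route, here’s a rough sketch of what the cloudflared piece might look like, written as cdk8s/TypeScript (which just generates plain yaml you can read before applying). The image, args and secret name are assumptions based on Cloudflare’s documented `cloudflared tunnel run --token` flow, not anything specific to the operator linked above:

    ```typescript
    import { App, Chart, ApiObject } from 'cdk8s';

    const app = new App();
    const chart = new Chart(app, 'cloudflared');

    // A plain Deployment running cloudflared as the tunnel connector.
    // The tunnel token is assumed to live in a Secret named "cloudflared-token".
    new ApiObject(chart, 'cloudflared-deployment', {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: 'cloudflared' },
      spec: {
        replicas: 2,
        selector: { matchLabels: { app: 'cloudflared' } },
        template: {
          metadata: { labels: { app: 'cloudflared' } },
          spec: {
            containers: [{
              name: 'cloudflared',
              image: 'cloudflare/cloudflared:latest',
              args: ['tunnel', '--no-autoupdate', 'run'],
              env: [{
                name: 'TUNNEL_TOKEN',
                valueFrom: { secretKeyRef: { name: 'cloudflared-token', key: 'token' } },
              }],
            }],
          },
        },
      },
    });

    app.synth(); // writes dist/cloudflared.k8s.yaml to review, then `kubectl apply -f dist/`
    ```

    An operator (like the one linked above) would manage this kind of Deployment for you from its own custom resources instead, which is exactly the yaml-reduction win.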








  • Oh, operators are absolutely the way for “released” things.

    But on bigger projects with lots of different pods etc, it’s a lot of work to define all the CRDs, hook all the events, and write all the code that deploys the pods etc.
    Similar to helm charts, I don’t see the point for personal projects. I’m not sharing it with anyone, I don’t need helm/operator abstraction for it.
    And something like cdk8s will generate the yaml for you to inspect. So you can easily validate that you are “doing the right thing” before slinging it into k8s.
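
    A quick sketch of that inspection step with cdk8s and TypeScript: `Testing.synth()` returns the rendered manifests in memory, so you can eyeball or assert on them before anything touches the cluster (the ConfigMap here is just a stand-in resource):

    ```typescript
    import { Chart, ApiObject, Testing } from 'cdk8s';

    const chart = new Chart(Testing.app(), 'demo');

    // Stand-in resource; in a real project this would be your pods, services etc.
    new ApiObject(chart, 'app-config', {
      apiVersion: 'v1',
      kind: 'ConfigMap',
      metadata: { name: 'app-config' },
      data: { LOG_LEVEL: 'info' },
    });

    // The same content `cdk8s synth` writes out as yaml, as plain objects
    // you can inspect or snapshot-test before applying anything.
    const manifests = Testing.synth(chart);
    console.log(JSON.stringify(manifests, null, 2));
    ```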


  • Everyone talks about helm charts.
    I tried them and hate writing them.
    I found garden.io, and it makes a really nice way to consume repos (of helm charts, manifests etc) and apply them in a sensible way to a k8s cluster.
    Only thing is, it seems to be very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
    I massively appreciate that helm charts are used for most projects, and they make sense for something you are going to share.
    But if it’s a solo project, or you’re just consuming other people’s projects, I don’t think they really solve a problem.

    Which is why I used garden.io: it’s designed for deploying kubernetes manifests, and I found it had just enough tooling to make things easier.
    Though, if you are used to ansible, it might make more sense to use ansible.
    Pretty sure ansible will be able to do it all in a way you are familiar with.

    As for writing the manifests themselves, I find it rare I need to (unless it’s something I’ve made myself). Most software has a k8s helm chart. So I just reference that in a garden file, set any variables I need to, and all good.
    If there aren’t helm charts or kustomize files, then it’s adapting a docker compose file into manifests. Which is manual.
    Occasionally I have to write some custom resources, config maps or secrets (CMs and secrets are easily made in garden).

    I also prefer to install operators instead of the raw service. For example, I use CloudNativePG to set up postgres databases.
    I create a custom resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
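
    For reference, the resource CNPG consumes is a `Cluster` object. A minimal sketch, rendered here via cdk8s/TypeScript (the hand-written yaml equivalent is just as short); the field values are assumptions lifted from CNPG’s minimal examples, so double-check them against the CNPG docs:

    ```typescript
    import { App, Chart, ApiObject } from 'cdk8s';

    const app = new App();
    const chart = new Chart(app, 'databases');

    // A CloudNativePG Cluster: the operator watches for these and creates
    // the pods, PVCs, services, config maps and secrets itself.
    new ApiObject(chart, 'app-db', {
      apiVersion: 'postgresql.cnpg.io/v1',
      kind: 'Cluster',
      metadata: { name: 'app-db' },
      spec: {
        instances: 3,              // one primary plus two replicas
        storage: { size: '10Gi' },
      },
    });

    app.synth();
    ```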

    The way I use kubernetes for the projects I do is:
    Apply all the infrastructure stuff (gateways, metallb, storage provisioners etc) from helm files (or similar).
    Then apply all my pods, services, certificates etc from hand written manifests.
    Using garden, I can make sure things are deployed in the correct order: operators are installed before trying to apply a CRD, secrets/cms created before being referenced etc.
    If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.

    Any on-the-fly changes I make, I ensure I back port to the project configs so when I wipe, reset, reinstall I still get what I expect.

    However, I have recently found https://cdk8s.io/ and I’m meaning to investigate that for creating the manifests themselves.
    Write code using a typed language, and have cdk8s create the raw yaml manifests. Seems like a dream!
    I hate writing yaml. Autocomplete is useless (the editor has no idea what schema the yaml doc should follow), and auto-formatting is useless (mostly because yaml is whitespace-sensitive, so the editor has no idea whether something is a child or a new parent). It just feels ugly and clunky.
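
    As a taste of what that typed workflow looks like: running `cdk8s import k8s` generates TypeScript classes for the core Kubernetes API (names like `KubeDeployment` are the generator’s convention), and the editor can then autocomplete and type-check the spec instead of guessing at yaml indentation. A minimal sketch, assuming the generated imports live in `./imports/k8s`:

    ```typescript
    import { App, Chart } from 'cdk8s';
    import { KubeDeployment } from './imports/k8s'; // generated by `cdk8s import k8s`

    const app = new App();
    const chart = new Chart(app, 'web');
    const labels = { app: 'web' };

    // Typos like `replicsa`, or a string where a number belongs, fail at
    // compile time instead of surfacing as a rejected manifest at apply time.
    new KubeDeployment(chart, 'web-deployment', {
      spec: {
        replicas: 2,
        selector: { matchLabels: labels },
        template: {
          metadata: { labels },
          spec: {
            containers: [{ name: 'web', image: 'nginx:1.27', ports: [{ containerPort: 80 }] }],
          },
        },
      },
    });

    app.synth(); // emits dist/web.k8s.yaml
    ```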


  • So uplink is 500/500.
    LAN speed tests at 1000/1000.
    WAN is 100/400.
    VPN is 8/8.

    I’m guessing the VPN is part of your homelab? Or do you mean a generic commercial VPN (like PIA or Proton)?

    How does the domain resolve on the LAN? Is it split-horizon (so a local IP on the LAN, and the public IP on public DNS)?
    Is the homelab on a separate subnet/VLAN from the computer you ran the speed test from, or the same subnet?



  • Servers: one. No need to make the log a distributed system, CT itself is a distributed system.

    The uptime target is 99% over three months, which allows for nearly 22h of downtime. That’s more than three motherboard failures per month.

    CPU and memory: whatever, as long as it’s ECC memory. Four cores and 2 GB will do.

    Bandwidth: 2–3 Gbps outbound.
    Storage: either 3–5 TB of usable redundant filesystem space on SSD, or 3–5 TB of S3-compatible object storage plus 200 GB of cache on SSD.
    People: at least two. The Google policy requires two contacts, and generally, who wants to carry a pager alone?

    Seems beyond your typical homelab self-hoster, except in the countries that have 5 Gbps symmetric home broadband.
    If anyone can sneak 2–3 Gbps outbound past their employer, I imagine the rest is trivial.
    Altho… “at least two [people]” isn’t the typical self-hosting setup.

    Edit:
    Tried to fix the copy/paste.

    Also will add:

    https://crt.sh/ has a list of all certificates issued.
    If you are using LE for every subdomain of your homelab (including internal ones), maybe think about a wildcard cert instead.
    It’s one of those “obscurity isn’t security” things, but why advertise your endpoints? It also increases privacy (i.e. not advertising porn.example.com).


  • Smaller file size, lower data rate, less computational overhead, no conversion loss.

    A 64-bit float requires 64 bits to store.
    The ASCII representation of a 64-bit float (in the example above) is 21 characters, or 168 bits.
    Also, if every record has the same structure, there is a huge overhead from storing the name of each value in every record, plus the extra spaces, commas and braces.
    So you are at least doubling the file size and data throughput. And there is precision loss when converting float-string-float, plus the computational overhead of doing those conversions.
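
    A quick illustration of the size difference (the field name and value here are made up for the example):

    ```typescript
    const value = 123.45678901234567;

    // Binary: a 64-bit float is always 8 bytes, whatever its value.
    const binary = new Float64Array([value]);
    console.log(binary.byteLength); // 8

    // Text: the same number as a JSON record drags the field name, quotes,
    // braces and all the digits along with it.
    const text = JSON.stringify({ temperature: value });
    console.log(text, new TextEncoder().encode(text).length); // several times the 8-byte binary form
    ```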

    Something like sqlite is lightweight, fast and will store the native data types.
    It is widely supported, and allows for easy querying of the data.
    Also makes it easy for 3rd party programs to interact with the data.

    If you are ever thinking of implementing some sort of data storage in files, consider sqlite first.
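
    A minimal sketch of that, using Node with the better-sqlite3 package (the table and column names are invented for the example). SQLite’s REAL column type stores the value as an 8-byte IEEE float, so nothing goes through a string on the way in or out:

    ```typescript
    import Database from 'better-sqlite3';

    const db = new Database('samples.db');
    db.exec('CREATE TABLE IF NOT EXISTS samples (ts INTEGER NOT NULL, value REAL NOT NULL)');

    // Insert native numbers; no string formatting or parsing involved.
    const insert = db.prepare('INSERT INTO samples (ts, value) VALUES (?, ?)');
    insert.run(Date.now(), 123.45678901234567);

    // Querying is just SQL, and any third-party tool can open the same file.
    const recent = db.prepare('SELECT ts, value FROM samples ORDER BY ts DESC LIMIT 10').all();
    console.log(recent);
    ```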


  • > I don’t use it anymore though because I found the suggestions to be annoying and distracting most of the time and got tired of hitting escape

    Same. It took longer for me to parse and validate the suggestion than it did to just type what I wanted.

    I do like the helper for more complex refactors.
    Where you have a bunch of similar, but not exactly the same, changes to make.
    Where a search & replace refactor isn’t enough.
    It manages to figure out what you are doing, highlights the next instance of it and suggests the replacement.
    I don’t think I’ve seen it make a mistake doing that, and it is a useful speedup.
    I guess the LLM already has all the context: the needle, the haystack and the term.


  • Yeh, my example was pretty contrived and very surface level.
    It grouped things that seemed related at a surface level but weren’t actually related at all. Which makes it a bad example.
    And realistically, you would use a timer class that raises events and is passed an interval class that can be constructed from any appropriate units.

    It was more to highlight that types and classes are a fairly easy way to improve the context around a variable.
    A type checker can also catch incorrect conversions between minutes and seconds, polar and Cartesian coords, RGB and HSV, or miles and kilometers: any number of scenarios where a unit mix-up isn’t a syntax error.
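
    A small sketch of that idea in TypeScript, using “branded” number types (the unit names here are just for illustration): the values are still plain numbers at runtime, but mixing units becomes a compile-time error instead of a silent bug.

    ```typescript
    // Branded number types: still numbers at runtime, but not interchangeable.
    type Seconds = number & { readonly __unit: 'seconds' };
    type Minutes = number & { readonly __unit: 'minutes' };

    const seconds = (n: number) => n as Seconds;
    const minutes = (n: number) => n as Minutes;
    const toSeconds = (m: Minutes) => seconds(m * 60);

    function startTimer(interval: Seconds): void {
      console.log(`tick every ${interval}s`);
    }

    startTimer(toSeconds(minutes(5))); // ok: 300 seconds
    // startTimer(minutes(5));         // compile error: Minutes is not assignable to Seconds
    ```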