• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I’m sorry this happened, but it seems rather reckless of the author to be running “malicious PoCs” on their “daily driver” (i.e., the PC they use for everything).

    If I were in the habit of running malicious PoCs, you can be certain they would be isolated from the rest of my system. This could be a sandbox or a VM. Heck, even creating a dedicated (one-time-use) “new user” would have been better than “Hey, let me just download and run some random shell script. Oh, it needs root? No problem!”
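
    A rough sketch of that kind of isolation, assuming a typical Linux host with podman installed (paths and usernames are just placeholders):

      # Run the untrusted PoC inside a throwaway container with no network access
      # and only a read-only copy of the PoC mounted in:
      podman run --rm -it --network=none -v "$PWD/poc:/poc:ro" debian:stable bash

      # Or, at minimum, a dedicated one-time user with no access to your home directory:
      sudo useradd --create-home --shell /bin/bash poc-test
      sudo -iu poc-test                  # run the PoC as this user...
      sudo userdel --remove poc-test     # ...then throw the user (and its home dir) away afterwards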



  • I’ve got a similar setup and everything works, so I can confirm that your assumptions are sound.

    My solution is Kubernetes-based, so I use cert-manager to issue the Let’s Encrypt certificate (using a DNS challenge as the verification mechanism), which then gets fed into a Traefik reverse proxy. Traefik is running on a non-standard port, which I can access from the outside world.
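
    If you go down a similar route, a quick way to sanity-check that chain (this assumes cert-manager’s CRDs are installed; the resource name and namespace are placeholders):

      # Is the issuer ready, and did the certificate actually get issued?
      kubectl get clusterissuer
      kubectl get certificate --all-namespaces
      kubectl describe certificate <name> -n <namespace>   # the Events section explains failed issuance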

    I’d suggest tearing your current system down and verifying that everything is configured correctly.

    For example:

    • Take a look at the SSL cert. Is it generated properly?
    • Look at the reverse proxy. Is it using the proper SSL cert, and is the cert well formed? (I’ve found curl --verbose --insecure https://... to be helpful; see the commands sketched after this list.)
    • Maybe add a static file (e.g. robots.txt) to nginx. This would let you see whether the problem is between the outside world and nginx, or between nginx and your service.
    • You can also use the “snake oil” cert in a pinch. It’s a self-signed placeholder cert that browsers won’t trust, but it would let you confirm that nginx itself is configured properly and that the issue lies with the Let’s Encrypt cert (or the process that issues it).
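
    A couple of concrete checks along those lines (the hostname and port are placeholders for your own):

      # Inspect the certificate your reverse proxy actually serves:
      openssl s_client -connect example.com:8443 -servername example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -issuer -subject -dates

      # Talk to the proxy directly, ignoring trust errors, to see the headers and TLS handshake:
      curl --verbose --insecure https://example.com:8443/robots.txt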

    … and not to rob you of this experience, but you might want to look into Cloudflare Tunnels. They let you run services inside your network that are exposed/accessible directly through Cloudflare. No inbound ports need to be opened on your router (arguably more secure than your proposed setup), and you don’t need to mess around with SSL certificates yourself.
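
    If you ever try it, the basic flow with the cloudflared CLI looks roughly like this (the tunnel name and hostname are placeholders):

      cloudflared tunnel login                                  # authenticate against your Cloudflare account
      cloudflared tunnel create my-tunnel                       # create a named tunnel
      cloudflared tunnel route dns my-tunnel app.example.com    # point a hostname at it
      cloudflared tunnel run my-tunnel                          # outbound-only connection; nothing opened inbound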



  • Windows (like most other operating systems) has a “user land” and a “kernel space”.

    “User land” is where all your applications run. A user-land application can only see other applications and files owned by the same user. Eventually, a user-land app will want to do “something”. This can be something like reading a file from disk, making a network connection, or drawing a picture on the screen. To accomplish this, the user-space app needs to “talk” to the kernel.
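
    On Linux you can actually watch that conversation happen (assuming strace is installed; the traced command is just an example):

      # Every line printed is a system call: the only way the program can ask the kernel
      # to touch files, the network, the screen, etc.
      strace -e trace=openat,read,write,connect cat /etc/hostname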

    If user-space apps were instruments being played in an orchestra, the kernel would be the conductor. The kernel is responsible for making sure each user-land app can only see its own user’s files/apps/etc.

    The kernel “can see and do everything”; it reports to no one. It has complete access to every application and every file. Your device drivers for your printer, video card, etc. all run in “kernel space”.
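
    One way to see what is living in kernel space on a Linux box (module names will vary by machine; i915 is just an example driver):

      lsmod | head                        # loaded kernel modules: drivers, filesystems, and so on
      modinfo i915 2>/dev/null | head     # details for one of them (the i915 GPU driver, if present)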

    Basically, the OP’s link: they’ve ported Doom to run effectively as a device driver. This means that if Doom crashes, your PC will blue screen.

    This has no practical purpose, other than saying “yeah, we did it” :)