

Then use a desktop app instead. This is absolutely useful for me to host on my homelab; I can convert from any web-connected device.
But I want to trust randos with my file conversions! 😡
It will if it detects the requests and blocks them
CrowdSec is what stops the intrusion.
That’s a great point, yes. The majority of my podcast listening was on my longer drive before covid; then I was wfh for a while and also moved closer to my eventual office.
In general I heard the podcast ecosystem is in a dire state; post-covid numbers seemingly dropped a lot.
It’s funny because I used to listen to podcasts way more frequently before covid, but I’ve sorta transitioned to long form videos instead of straight podcasts.
Right, I guess I meant: if it necessarily requires internet access to notify, how does it send the notification when it can’t reach the internet?
I have Uptime Kuma and use ntfy to alert myself for various things, but if I can’t access my server for any reason, the likelihood I’d be alerted first is very low.
How do you get alerted from Uptime Kuma if you can’t access the site though?
Big same
Didn’t they already leave tho? I haven’t seen them around in a bit, don’t disagree with the neolib part lol.
The GPU is already running because it’s in the device; by this logic I shouldn’t have a GPU in my homelab until I want to use it for something. RIP Jellyfin and Immich I guess.
I get the impression you don’t really understand how local LLMs work. You likely wouldn’t need a very large model to run basic scraping; it would just depend on what OP has in mind, really, or what kind of schedule it runs on. You should consider the difference between a megacorp’s server farm and some rando using this locally on consumer hardware (which seems to be the intent from OP).
Does it? I haven’t seen that in my instance settings, will have to take another gander.
You realize the GPU sits idle when not actively being used, right?
It’d be cheaper if you host it locally, essentially just your normal electricity bill, which is the entire point of what OP is saying lol.
Seems nifty. Bake in stuff like selecting your AI provider (support local LLaMA, a local OpenAI-compatible API, and third parties if you have to I guess lol), and make sure it’s dockerized (or relatively easy to dockerize; bonus points for including a compose).
Oh, being able to hook into a self-hosted search engine like SearXNG would be nice too; you can currently do that with the Oobabooga web search plugin, as an example.
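To make the suggestion concrete, here’s a rough sketch of what a compose file for a tool like this could look like. The service name, image, and every env var here are hypothetical, just illustrating “pick your AI provider via config” against a local OpenAI-compatible endpoint (Ollama exposes one) plus a SearXNG hook:

```yaml
# Hypothetical compose sketch — image name and env vars are made up for illustration
services:
  ai-scraper:
    image: example/ai-scraper:latest       # hypothetical image
    environment:
      # Any OpenAI-compatible endpoint works here; Ollama serves one under /v1
      - OPENAI_API_BASE=http://ollama:11434/v1
      - OPENAI_API_KEY=not-needed-locally  # local backends usually ignore the key
      - SEARXNG_URL=http://searxng:8080    # hypothetical self-hosted search hook
    restart: unless-stopped
```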
wg-easy worked fine for me? Basically it just VPNs me right into my LAN.
Oh, I’m an idiot, I forgot I connect to my domain for the WireGuard connection lmao
Though I did mean just tunneling into the LAN; the VPN is then applied on outbound connections on the LAN using something like Gluetun or w/e.
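The “tunnel in, VPN on outbound” pattern with Gluetun is usually done by making Gluetun the network namespace for the torrent client. A minimal sketch, assuming a WireGuard-based provider, with placeholder credentials:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad      # whichever provider you actually use
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your-key>  # placeholder from your provider
      - WIREGUARD_ADDRESSES=10.64.0.2/32  # placeholder tunnel address
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"       # all outbound traffic exits via the VPN
```

If Gluetun’s tunnel drops, the client loses connectivity entirely, which is the kill-switch behavior you want here.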
Correct, trackers will work but DHT or whatever it’s called won’t; you end up with a lot of dead torrents trying to run it through Mullvad, but I paid a bit in advance so I can’t swap yet.
NZBs work most of the time anyway
Why not just skip that and use a WireGuard tunnel?
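For reference, a bare-bones WireGuard client config for tunneling into a LAN looks like this; keys, addresses, and the endpoint are all placeholders:

```ini
[Interface]
PrivateKey = <client-private-key>           ; placeholder
Address = 10.8.0.2/24                       ; tunnel IP assigned to this client

[Peer]
PublicKey = <server-public-key>             ; placeholder
Endpoint = vpn.example.com:51820            ; hypothetical domain and port
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24    ; route only LAN + tunnel subnets
```

Keeping `AllowedIPs` to just the LAN and tunnel subnets means only that traffic goes through the tunnel, instead of routing everything (`0.0.0.0/0`).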
I use FreeTube on Linux, SmartTube on my Android TV, and the native YT app on mobile (got tired of apps dying)
FreeTube and SmartTube have zero suggestion algo afaik
Well it seems like that’s not working for you lol
Lol it’s fine, their comment can be taken many ways, I just wanted to be sarcastic :p