Deployment of NC on Kubernetes/Docker (and maintenance thereof) is super scary. They copy config files around in the Dockerfile, for instance; it’s a hell of a mess. (And not just Docker: I have one instance running on an old-fashioned web host with only FTP access, and I have to manually edit the .ini and Apache configs after each update, since they get overwritten.) As the documentation of oCIS grows and it gains more features, I might actually migrate even the larger instances, but for now I have to consider it not feature-complete (people have expectations of Nextcloud that aren’t met by oCIS and its extensions). Moreover, I have more trust in the long-term openness of Nextcloud than of ownCloud, for historical reasons.
dont@lemmy.world to Selfhosted@lemmy.world • Migrating from Nextcloud AIO to Owncloud Infinite Scale: Good Idea? • 6 months ago

I am considering switching as well, for similar reasons. What has been holding me back (besides missing the time to plan and do the migration) is that I don’t quite trust ownCloud any more, and due to a lack of documentation, I would want to run it in parallel for some time to get the hang of it before migrating the other users (which adds to the time constraint).
I’ll most likely deploy using their helm chart – does anyone have any real-world experience with it?
Thanks, the bootstrapping idea hadn’t been mentioned in the comments yet. And your blog looks promising; I’ll have a more thorough look soon.
Nice, thanks, again! I overlooked the dependency instructions in the container service file, which is why I wondered how the heck podman figures out the dependencies. It makes a lot of sense to do it like this, now that I think of it.
Awesome, so, essentially, you create a name.pod file like so:
```ini
[Unit]
Description=Pod Description

[Pod]
# stuff like PublishPort or networking
```
and join every container into the pod through the following line in the .container files:
Pod=name.pod
and I presume this all gets started via
systemctl --user start name-pod.service
and systemd/podman somehow figures out which containers have to be created and joined into the pod — or do they all have to be started individually? (Either way, I find the documentation of this feature lacking. Once I’ve tested this stuff myself, I’ll look into improving it.)
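To make this concrete, here is a minimal sketch of how the pieces fit together. All file names, images, and ports are hypothetical, and this assumes the quadlet pod support from recent Podman (5.x) as described in podman-systemd.unit(5):

```ini
# ~/.config/containers/systemd/mypod.pod  (hypothetical example)
[Unit]
Description=Example application pod

[Pod]
# Ports are published on the pod, not on the individual containers
PublishPort=8080:80
```

```ini
# ~/.config/containers/systemd/web.container  (hypothetical example)
[Container]
Image=docker.io/library/nginx:latest
# Joining the pod: quadlet adds ordering/requirement dependencies on the
# generated pod unit, so the pod is created before this container starts
Pod=mypod.pod
```

As far as I can tell, a `name.pod` file generates a `name-pod.service` unit, and starting it (`systemctl --user start mypod-pod.service`) should pull in the member containers through the dependencies quadlet writes into the generated units, so they don’t have to be started individually.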
dont@lemmy.world to Selfhosted@lemmy.world • Installing Jellyfin as a Podman Quadlet • 7 months ago

I’ve wondered myself and asked here: https://lemmy.world/post/20435712 – got some very reasonable answers.
Thank you, I think the “less heavy than managing a local micro-k8s cluster” part was a great portion of what I was missing here.
Understood, thanks, but if I may ask, just to be sure: it seems to me that without interacting with the Kubernetes layer, I’m not getting pods, only standalone containers, correct? (Not that I’m afraid of writing kube configuration, as others have inferred incorrectly. At this point, I’m mostly curious what this configuration would look like, because I couldn’t find any examples.)
Thank you for those very convincing points. I think I’ll give it a try at some point. It seems to me that what you get in return for writing quadlet configuration in addition to the Kubernetes-style pod/container config is that you don’t need to maintain an independent Kubernetes distro, since Podman and systemd take care of it and allow for system-native management. This makes a lot of sense.
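For reference, the kube-flavored setup discussed above could look something like this (all paths and names are made up by me): a `.kube` quadlet unit points at an ordinary Kubernetes-style YAML, and Podman runs it via its kube-play machinery:

```ini
# ~/.config/containers/systemd/myapp.kube  (hypothetical example)
[Unit]
Description=My app, defined as a Kubernetes-style pod

[Kube]
# An ordinary "podman kube play"-compatible pod definition
Yaml=%h/containers/myapp.yaml
```

```yaml
# ~/containers/myapp.yaml  (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: web
      image: docker.io/library/nginx:latest
      ports:
        - containerPort: 80
          hostPort: 8080
```

If I understand the docs correctly, `systemctl --user start myapp.service` then creates and starts the whole pod, so unlike standalone `.container` units, this route really does give you pods without running a Kubernetes distro.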
I love the simplicity of this, I really do, but I don’t consider this SSO. It may be if you’re a single user, but even then, many things I’m hosting have their own authentication layer and only allow offloading to some OIDC/OAuth or LDAP provider.