Convert docker compose to Kubernetes Resources k3s #687
Comments
I have zero experience with Kubernetes, so hopefully another user who knows how to do it can help you. Or maybe this pull-request from another (but very similar) project of mine: https://github.com/dockur/windows/pull/304/files |
Could someone please help with this? |
I started experimenting with installing it on my Harvester Kube cluster. My manifests are here: Download them, customize them for your needs, and apply. Logs:
so far looking good. I want to test it for a few days and will send a PR... I noticed that the disk space units don't match: Kubernetes uses P.S. As I'm testing, I found that if I open the UI via IP:port, everything works. But if I use an Ingress and open it via a domain name, I'm getting an error:
|
@mmishael I have successfully run vDSM on Truenas scale 24.04.0 for a period of time and enabled vDSM to obtain IP through DHCP. I have provided some important configuration screenshots, hoping they can be helpful to you. |
Wow thanks!!! I will look at it. |
Thanks! |
There is now an example Kubernetes manifest thanks to @SlavikCA : https://github.com/vdsm/virtual-dsm/blob/master/kubernetes.yml |
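For orientation before opening that file, here is a heavily trimmed, illustrative sketch of what such a Deployment can look like. The image name, the web UI port 5000, and the `/storage` mount path come from the project; everything else (the resource names, the PVC called `dsm-storage`) is a placeholder — refer to the linked kubernetes.yml for the real manifest.

```yaml
# Illustrative sketch only -- see the linked kubernetes.yml for the real
# manifest. The names "dsm" and "dsm-storage" are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dsm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dsm
  template:
    metadata:
      labels:
        app: dsm
    spec:
      containers:
        - name: dsm
          image: vdsm/virtual-dsm
          securityContext:
            privileged: true        # the VM needs /dev/kvm and network setup
          ports:
            - containerPort: 5000   # DSM web UI
          volumeMounts:
            - name: storage
              mountPath: /storage
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: dsm-storage
```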
The config above is the basic one. I'm using a Harvester 1.3.0 cluster with 2 NICs (one for management and
With that config I have my Samba shares working on Windows computers in LAN. |
@mmishael , |
Hi Thanks, |
@mmishael hi, you can try setting these two options under "Kubernetes Settings". You should choose the corresponding value based on your network card: enp4s0 is my PCIe 10 Gigabit network card, while the network card that comes with your device is generally called "eno1". Please also fill in your own gateway address for the "Route v4 Gateway" option below. The description I provided in the previous screenshot was not clear enough: the value of "VM_NET_DEV" is the name of the network card inside the container, not the name of the network card in TrueNAS. I expect that in your environment it should be "eth0", and "Host Interface" should be set to "eno1". You can use @wy19xx to contact me; without the @, I won't receive GitHub's email notification. |
@wy19xx The container automatically detects the network interface name inside the container:
If this method does not work correctly in your case, it would be better to improve this auto-detection code than to override the name manually using an environment variable. |
@kroese
I think that's because the VM had a few interfaces, which were enabled to access the WAN, but only one was reachable from outside. Should the |
@SlavikCA Normally under Docker the interface inside the container is always called I never saw names like |
Yes, I used to assume it was detected automatically, so there shouldn't be a need to fill it in when there is only one network card; but I have two network cards. Previously, when I didn't fill in this item and started the container, an error message was displayed. I sent out all my configurations without any changes in order to improve the chance of a successful setup. |
No,
I'm not the network or Kubernetes expert, but here is my understanding of how it works in Kubernetes:
|
Yes, normally there will always be one interface that has an internet connection, so my command:
just selects the first one, I guess, and doesn't account for the situation where there can be multiple. So if one of you can get terminal access inside the running container (in Portainer it's called "attach to console"), it would be helpful if you could execute this command:
and post the output here. Because if the Kubernetes IP always has a certain range (always starts with 123.x.x.x, for example), we can use that info to ignore that interface. |
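To illustrate what that kind of check could look like (a hypothetical sketch, not the project's actual code): the interface carrying the default route can be picked out of `/proc/net/route` by matching a Destination of `00000000`. The sample table below is modeled on the output posted later in this thread.

```shell
# Hypothetical sketch: select the interface that carries the default route
# (Destination == 00000000) from /proc/net/route. The sample data below is
# modeled on the route table posted in this thread.
route_table='Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
eth0 00000000 010010AC 0003 0 0 0 00000000 0 0 0
eth0 000010AC 00000000 0001 0 0 0 0000FFFF 0 0 0
net1 0001A8C0 00000000 0001 0 0 0 00FFFFFF 0 0 0'

# On a real system this would read the file directly:
#   awk '$2 == "00000000" {print $1; exit}' /proc/net/route
default_if=$(printf '%s\n' "$route_table" | awk '$2 == "00000000" {print $1; exit}')
echo "$default_if"   # prints: eth0
```

This only returns the first default-route entry, which is exactly the limitation discussed above when multiple interfaces have one.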
If configured with 2 interfaces, here is what I see:
|
Mmhh... This So I don't have any idea how I could detect that it should not use |
@kroese This is my output:

```
# cat /proc/net/route
Iface   Destination  Gateway   Flags  RefCnt  Use  Metric  Mask      MTU  Window  IRTT
eth0    00000000     010010AC  0003   0       0    0       00000000  0    0       0
eth0    000010AC     00000000  0001   0       0    0       0000FFFF  0    0       0
net1    0001A8C0     00000000  0001   0       0    0       00FFFFFF  0    0       0

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo
       valid_lft forever preferred_lft forever
3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether fa:4f:52:dc:85:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.0.114/16 brd 172.16.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f84f:52ff:fedc:85c3/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
4: net1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether b2:ca:5f:0d:ac:60 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.185/24 brd 192.168.1.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::b0ca:5fff:fe0d:ac60/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
5: dsm@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    link/ether 02:11:32:28:79:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::11:32ff:fe28:7905/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
```
|
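As an aside for anyone reading that route table: the hex fields in `/proc/net/route` are little-endian IPv4 addresses, so they decode by reversing the byte order. A small POSIX shell sketch (the helper name `hex2ip` is mine, for illustration):

```shell
# Sketch: decode a little-endian /proc/net/route hex field into dotted-quad
# form. 010010AC is the bytes AC 10 00 01 read back to front, i.e. 172.16.0.1.
hex2ip() {
  h=$1
  printf '%d.%d.%d.%d\n' \
    "0x$(echo "$h" | cut -c7-8)" \
    "0x$(echo "$h" | cut -c5-6)" \
    "0x$(echo "$h" | cut -c3-4)" \
    "0x$(echo "$h" | cut -c1-2)"
}

hex2ip 010010AC   # eth0 gateway -> prints: 172.16.0.1
hex2ip 0001A8C0   # net1 subnet  -> prints: 192.168.1.0
```

So in this output the default route goes via 172.16.0.1 on eth0 (the pod network), while net1 sits on the 192.168.1.0/24 LAN.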
@wy19xx Thanks. The whole problem is that Kubernetes makes So the only fix I could do is to check if a device called Anyways, I released a new image (v7.13) that includes this fix now. |
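A minimal sketch of that kind of workaround (the function name `pick_interface` and the hard-coded device names are assumptions for illustration, not the image's actual code): skip loopback and any known extra device when choosing the host-facing interface. The device list is passed as arguments here so the logic is easy to exercise without a real `/sys/class/net`.

```shell
# Hypothetical sketch: choose the first interface that is not loopback and
# not one of the extra devices seen in the output above (net1, dsm).
# On a real system the list would come from: ls /sys/class/net
pick_interface() {
  for dev in "$@"; do
    case "$dev" in
      lo|net1|dsm) continue ;;   # skip loopback and the extra devices
      *) echo "$dev"; return 0 ;;
    esac
  done
  return 1
}

pick_interface lo eth0 net1 dsm   # prints: eth0
```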
Hi @wy19xx, I appreciate your efforts. As you mentioned, I can see my interfaces in the TrueNAS dashboard, and it's "eno1". I set my IPv4 gateway and still no change. Please see the uploaded log; not sure it can help... |
@mmishael Hi, |
I have TrueNAS SCALE; Docker was removed from the base OS. The apps system on SCALE is k3s, but k3s switched from docker to containerd as the runtime, so docker was removed.
I'm not familiar with k3s, and I tried to convert the compose file to k3s, but was unable to do so; I also tried to deploy it via Portainer, and it failed.
Could you post a k3s YAML to deploy via Portainer?
Thanks