How to get the Gluster disk usage metrics if I am running the gluster-prometheus container as a sidecar along with the glusterfs pod #177

Open
kannanvr opened this issue Jan 23, 2020 · 5 comments


@kannanvr

Hi,
We need help from the gluster-prometheus team to set up GlusterFS metrics for our cluster.
We are running GlusterFS as a Pod and plan to run gluster-prometheus as a sidecar alongside the GlusterFS pod.

I am getting an error when trying to collect the Gluster disk usage metrics.
I have mounted the /var/lib/heketi path from the host into the container.
It seems I am not able to access the complete path that the exporter expects.
For example, I can access everything up to "/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_6c4220c32d00b242c8f47e27f7108d06/", but the stats collection expects a brick folder under this path, which is not available in the sidecar. That folder is available inside the GlusterFS pod; it seems the brick is a mounted volume.
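For reference, the relevant part of our Pod spec looks roughly like this (just a sketch; the image names and the volume name are placeholders, not the exact manifest):

```yaml
# Rough sketch of the current setup (placeholder names/images):
# both containers share /var/lib/heketi from the host.
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
    - name: glusterfs
      image: gluster/gluster-centos          # placeholder image
      volumeMounts:
        - name: heketi-mounts
          mountPath: /var/lib/heketi
    - name: gluster-prometheus               # metrics exporter sidecar
      image: gluster/gluster-prometheus      # placeholder image
      volumeMounts:
        - name: heketi-mounts
          mountPath: /var/lib/heketi         # brick mounts made after start are not visible here
  volumes:
    - name: heketi-mounts
      hostPath:
        path: /var/lib/heketi
```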

So, how can I get the Gluster disk usage metrics when running the gluster-prometheus container as a sidecar along with the GlusterFS pod?

Thanks,
Kannan V

time="2020-01-23 13:14:58.270645" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_6c4220c32d00b242c8f47e27f7108d06/brick error="no such file or directory" volume=heketidbstorage
time="2020-01-23 13:14:58.270744" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_0537c55bd41ef9d79e14ef0a12fa5d9d/brick error="no such file or directory" volume=vol_0254adf66b9007754f7b4813a9bc7f82
time="2020-01-23 13:14:58.270782" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_977aefe3a0d2cd8eb2082e371ae8a3ad/brick error="no such file or directory" volume=vol_14bd20824965be5cc4daf8f3d09b9d09
time="2020-01-23 13:14:58.270816" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_c00577ccc2e9445392e9e6f30c7e32e6/brick error="no such file or directory" volume=vol_2bdd68443587361d64a93cb71aa2cddd
time="2020-01-23 13:14:58.270847" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_f5fea3a3e5344928507b9531b8905d99/brick error="no such file or directory" volume=vol_33a2fd67d388605be30d7447ffb894df
time="2020-01-23 13:14:58.270875" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_5faf3383629fb2afee06af22ed71a9bd/brick error="no such file or directory" volume=vol_33a2fd67d388605be30d7447ffb894df
time="2020-01-23 13:14:58.270902" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72ccb2f4cd714e4/brick_3f57d07ef2ae99925451b16f2ea547d2/brick error="no such file or directory" volume=vol_5804b0348e601b2998712c53e9e7d8a4
time="2020-01-23 13:14:58.270928" level=debug msg="Error getting disk usage" brick_path=/var/lib/heketi/mounts/vg_bb2793a5afec08e2b72cc```
@aravindavk
Member

gluster-prometheus gets the list of brick paths from the volume info; that is why it expects the full brick path, including the brick directory as the suffix.

/var/lib/heketi is mounted into both containers in the Pod at start; any mounts made after the containers start will not be visible to the sidecar container.

You can try setting the /var/lib/heketi path as a Bidirectional mount to make the brick directories accessible in the sidecar container. (Example use case of a Bidirectional mount: https://github.com/kadalu/kadalu/blob/master/templates/server.yaml.j2#L87)
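A minimal sketch of how that could look in the Pod spec (the volume and container names follow the sketch above and are assumptions; note that Bidirectional propagation requires privileged containers):

```yaml
# Sketch: propagate the brick mounts made inside the glusterfs container
# to the host and into the sidecar. Bidirectional mounts require privileged containers.
containers:
  - name: glusterfs
    securityContext:
      privileged: true
    volumeMounts:
      - name: heketi-mounts
        mountPath: /var/lib/heketi
        mountPropagation: Bidirectional      # mounts created here propagate back to the host
  - name: gluster-prometheus
    securityContext:
      privileged: true
    volumeMounts:
      - name: heketi-mounts
        mountPath: /var/lib/heketi
        mountPropagation: Bidirectional      # ...and become visible to the exporter
volumes:
  - name: heketi-mounts
    hostPath:
      path: /var/lib/heketi
```

(HostToContainer propagation on the sidecar mount alone would also let it see new host mounts; Bidirectional on the glusterfs container is what makes its brick mounts reach the host in the first place.)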

Off-topic: a few developers from Gluster have started a new solution for Kubernetes storage. It aims to provide persistent volumes to applications running in Kubernetes and does not use Heketi or Glusterd.

Home page: https://kadalu.io
Github page: https://github.com/kadalu/kadalu

The latest blog post explains the design and different configurations available.

@kannanvr
Author

@aravindavk, thanks! It is working nicely now.

I have gone through the Kadalu project and it looks really nice; I will start using it as well.
I am impressed by Kadalu and its CRDs. It is really simple.

kannanvr reopened this Jan 28, 2020
@kannanvr
Author

@aravindavk, thanks for your suggestion. We have now enabled the Bidirectional mount option.
I am able to see the bricks inside the sidecar container, and I am able to get the complete statistics.
But when I tear down the cluster, I get an error while cleaning up the device that was mounted.

I executed the following command to clean up the device:
"vgremove vg_3545029c3085324df80eb282e1486e44 --yes --force --verbose"
and I get the error below:

    Device dm-6 (253:6) appears to be mounted on /var/lib/heketi/mounts/vg_3545029c3085324df80eb282e1486e44/brick_138a3297c570353a0734376c5352bafc.
  Logical volume vg_3545029c3085324df80eb282e1486e44/brick_138a3297c570353a0734376c5352bafc contains a filesystem in use.

If we disable the option and create the GlusterFS cluster, I am able to wipe out the device when tearing down the cluster.
So how can we tear down the device after enabling the Bidirectional mount?
Please share your suggestion; it would help us a lot.

@aravindavk
Member

@kannanvr with a Bidirectional mount, the unmount will not happen automatically. Use a preStop hook to stop the brick process and unmount the volume.

Stop script: https://github.com/kadalu/kadalu/blob/master/server/stop-server.sh
Used here: https://github.com/kadalu/kadalu/blob/master/templates/server.yaml.j2#L45
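Roughly, the hook looks like this (a sketch only; the script path is illustrative, and the actual script must be present in the image):

```yaml
# Sketch: stop the brick processes and unmount the bricks before the container exits,
# so that vgremove can succeed during teardown.
containers:
  - name: glusterfs
    lifecycle:
      preStop:
        exec:
          command: ["/bin/bash", "/stop-server.sh"]   # illustrative path to a stop script
```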

@networkingana

I'm wondering about much the same thing. I have written a Gluster exporter which runs, for example, gluster volume status <volname>, but I cannot access this command from my sidecar container. Any help? Should I run the exporter in the same container instead of a sidecar?
