In a k8s environment, all probe targets are dynamic, so cloudprober supports k8s endpoints as targets for dynamic service discovery. However, this mode still needs improvement in two areas: configuration file management and probe source definition. Below is a brief description of these two requirements.
1. Defining the cloudprober probe's source location:
cloudprober recommends managing all probes through a Deployment and configuration files, with every cloudprober instance reading the same configuration file. In this deployment mode, if I want to initiate probes from different nodes or availability zones, I must control which node each cloudprober instance runs on through affinity scheduling at deployment time. When there are many probe tasks, deciding which node each cloudprober instance should be deployed on becomes an extremely complex management task.
2. Defining probe tasks through a CRD
Could we instead define probe tasks through a k8s CRD, such as `CloudProberTask`? When a user-defined `CloudProberTask` object is received, a controller could create a cloudprober workload that satisfies the scheduling requirements expressed by the probe source location defined in the object. When the user deletes the `CloudProberTask`, the controller removes the workload. All cloudprober instances in that workload use the configuration in the CRD to create probes and alerts.
Another benefit of doing this: in large k8s clusters, each namespace may represent an organization, so each organization could easily meet its own network-monitoring needs through `CloudProberTask` objects.
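To make the proposal concrete, a `CloudProberTask` manifest might look like the sketch below. The API group, version, and every field name here are hypothetical, invented for illustration only; no such API exists in cloudprober today:

```yaml
apiVersion: cloudprober.example.com/v1alpha1   # hypothetical group/version
kind: CloudProberTask
metadata:
  name: gateway-check
  namespace: team-a
spec:
  # Where the probes should run from (the "probe source location").
  # The operator would translate this into scheduling constraints on
  # the generated cloudprober workload.
  placement:
    nodeSelector:
      topology.kubernetes.io/zone: us-east-1a
  # Probe definition, which the operator would map onto cloudprober's
  # own probe configuration.
  probe:
    type: HTTP
    targets:
      hostNames: ["gateway.team-a.svc.cluster.local"]
    intervalMsec: 5000
    timeoutMsec: 1000
```

On deletion of this object, the operator would garbage-collect the workload it created, e.g. via an owner reference.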
I think this kind of feature would require doing something along the lines of what Prometheus does with prometheus-operator - an operator app would be able to:
Define a cloudprober deployment, including a config reloader (assuming rds is not used)
Define a cloudprober rds deployment (if rds is desired to be used)
Define the spec for how to convert a CRD schema to a Cloudprober probe schema, plus other configuration objects.