Note: this repository was archived by the owner on Jan 22, 2021, and is now read-only.

Requests per Second Custom Metric Scaling

This is an example of using a custom metric from Azure Application Insights to scale a deployment.

Walkthrough

Prerequisites:

Get this repository and cd to this folder (on your GOPATH):

go get -u github.com/Azure/azure-k8s-metrics-adapter
cd $GOPATH/src/github.com/Azure/azure-k8s-metrics-adapter/samples/servicebus-queue/

Configure Application Insights

Create Application Insights

First, create an Application Insights instance.

Get your instrumentation key

After the Application Insights instance is created, get your instrumentation key.

Get your appid and api key

Get your appid and API key, then deploy the adapter.
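The portal steps above can also be done with the Azure CLI. The following is a sketch only: the resource names are placeholders, and it assumes the application-insights CLI extension, whose syntax may have changed since this repository was archived.

```
# Requires the application-insights extension for the Azure CLI
az extension add --name application-insights

# Create the Application Insights instance (placeholder names)
az monitor app-insights component create \
  --app rps-sample-insights --resource-group my-rg --location eastus

# Instrumentation key
az monitor app-insights component show \
  --app rps-sample-insights --resource-group my-rg \
  --query instrumentationKey --output tsv

# App id
az monitor app-insights component show \
  --app rps-sample-insights --resource-group my-rg \
  --query appId --output tsv

# API key with read access to telemetry
az monitor app-insights api-key create \
  --app rps-sample-insights --resource-group my-rg \
  --api-key adapter-key --read-properties ReadTelemetry
```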

Using Azure Application Insights API Key

helm install --name sample-release ../../charts/azure-k8s-metrics-adapter --namespace custom-metrics \
    --set appInsights.appId=<your app id> \
    --set appInsights.key=<your api key> \
    --set azureAuthentication.createSecret=true

Note: if you plan to use the adapter with External Metrics, you may need additional configuration. See the Service Bus Queue example.

Using Azure AD Pod Identity

If you prefer to use Azure AD Pod Identity, then you don't need to specify an Application Insights API key:

kubectl create secret generic app-insights-api -n custom-metrics --from-literal=app-insights-app-id=<appid>

Deploy this modified adapter-aad-pod-identity.yaml file, which includes an AzureIdentity and an AzureIdentityBinding:

kubectl apply -f https://gist.githubusercontent.com/jcorioland/947af2c02acd3bc2b4d8438f1e36a6bd/raw/9ff013c18d3a76a9c41d9fce40ad445b166013fa/adapter-aad-pod-identity.yaml

Note: the managed user identity you are using should be authorized to read the Azure Application Insights resource through RBAC.
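For reference, the AzureIdentity and AzureIdentityBinding objects in that file look roughly like the sketch below. The resource ID, client ID, and selector value are placeholders, and the aad-pod-identity API version may differ from what the gist uses:

```
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: adapter-identity
  namespace: custom-metrics
spec:
  type: 0   # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
  clientID: <identity-client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: adapter-identity-binding
  namespace: custom-metrics
spec:
  azureIdentity: adapter-identity
  selector: azure-k8s-metrics-adapter   # must match the aadpodidbinding label on the adapter pod
```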

Deploy the app that will be scaled

Create a secret with the Application Insights instrumentation key that you retrieved in the earlier step:

kubectl create secret generic appinsightskey --from-literal=instrumentation-key=<your-key-here>
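For context, the sample app consumes this secret as an environment variable. The fragment below is a sketch of what the container spec might look like; the actual spec lives in deploy/rps-deployment.yaml, and the environment variable name here follows the usual Application Insights convention rather than being confirmed from this repo:

```
env:
  - name: APPINSIGHTS_INSTRUMENTATIONKEY
    valueFrom:
      secretKeyRef:
        name: appinsightskey
        key: instrumentation-key
```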

Then deploy the sample app:

kubectl apply -f deploy/rps-deployment.yaml

Optional: build and push your own copy of the example image with docker build -t metric-rps-example -f webapp/Dockerfile webapp

Double-check that you can hit the endpoint:

export RPS_ENDPOINT="$(kubectl get svc rps-sample -o json | jq -r '.status.loadBalancer.ingress[0].ip')"

curl http://$RPS_ENDPOINT
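If RPS_ENDPOINT comes back as null, the load balancer IP may not be provisioned yet. The jq path itself can be sanity-checked against a canned service object, with fake data standing in for the kubectl output:

```shell
# Simulate the JSON shape that `kubectl get svc -o json` returns for a
# service with a provisioned load balancer, then extract the ingress IP.
FAKE_SVC='{"status":{"loadBalancer":{"ingress":[{"ip":"10.0.0.42"}]}}}'
RPS_ENDPOINT="$(echo "$FAKE_SVC" | jq -r '.status.loadBalancer.ingress[0].ip')"
echo "$RPS_ENDPOINT"   # prints 10.0.0.42
```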

Scale on Requests per Second (RPS)

Deploy the Custom Metric Configuration

kubectl apply -f deploy/custom-metric.yaml

Note: the CustomMetric configuration is deployed per namespace.
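The general shape of deploy/custom-metric.yaml is roughly as follows. This is a sketch from memory: the API version, field names, and metric name may differ from the file in this repo, and performanceCounters/requestsPerSecond is a standard Application Insights metric used here as a plausible example:

```
apiVersion: azure.com/v1alpha2
kind: CustomMetric
metadata:
  name: rps   # this name is what the HPA references
spec:
  metric:
    metricName: performanceCounters/requestsPerSecond
```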

You can list the configured custom metrics via:

kubectl get acm # shortcut for custommetric

Deploy the HPA

Deploy the HPA:

kubectl apply -f deploy/hpa.yaml

Note: the metrics.pods.metricName defined on the HPA must match the metadata.name of the CustomMetric declaration; in this case, rps.
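A sketch of what deploy/hpa.yaml might contain, consistent with the min/max/target values seen in the scaling output later in this walkthrough (the actual file may use a different autoscaling API version):

```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: rps-sample
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rps-sample
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metricName: rps          # must match the CustomMetric's metadata.name
        targetAverageValue: 10   # scale up above 10 requests/sec per pod
```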

Put it under load and scale by RPS

hey is a simple tool for generating load on an API from the command line.

go get -u github.com/rakyll/hey

# 10,000 requests at ~100 RPS (10 queries/sec across 10 concurrent workers)
hey -n 10000 -q 10 -c 10 http://$RPS_ENDPOINT

Watch it scale

In a separate window you can watch the HPA to see the RPS go up and the pods scale:

kubectl get hpa rps-sample -w
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
rps-sample   Deployment/rps-sample   0/10      2         10        2          4d
rps-sample   Deployment/rps-sample   36/10     2         10        2          4d
rps-sample   Deployment/rps-sample   36/10     2         10        4          4d
rps-sample   Deployment/rps-sample   36/10     2         10        4          4d
rps-sample   Deployment/rps-sample   36/10     2         10        4          4d
rps-sample   Deployment/rps-sample   36/10     2         10        4          4d
rps-sample   Deployment/rps-sample   49/10     2         10        4          4d
rps-sample   Deployment/rps-sample   49/10     2         10        4          4d
rps-sample   Deployment/rps-sample   49/10     2         10        4          4d
rps-sample   Deployment/rps-sample   49/10     2         10        4          4d
rps-sample   Deployment/rps-sample   49/10     2         10        4          4d
rps-sample   Deployment/rps-sample   33/10     2         10        4          4d
rps-sample   Deployment/rps-sample   33/10     2         10        4          4d
rps-sample   Deployment/rps-sample   25/10     2         10        4          4d
rps-sample   Deployment/rps-sample   29/10     2         10        4          4d
rps-sample   Deployment/rps-sample   24/10     2         10        4          4d
rps-sample   Deployment/rps-sample   0/10      2         10        4          4d

Clean up

Once you are done with this experiment, you can delete your Application Insights instance via the Azure portal.

Also remove the resources created in the cluster:

kubectl delete -f deploy/hpa.yaml
kubectl delete -f deploy/custom-metric.yaml
kubectl delete -f deploy/rps-deployment.yaml
helm delete --purge sample-release