testFlask

testFlask is a simple Flask application that demonstrates parts of the OpenShift application deployment process.

Steps to Run

Steps 1 and 2 are only necessary if you are using a private Git repository.

1 Create a secret in OpenShift for the private repository. The example below uses a GitHub SSH key:
oc create secret generic $SECRET_NAME --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=$SSHKEY_PATH -n $NAMESPACE

2 Link the secret with your service account. The default service account for builds is usually builder, so we link it with builder:
oc secrets link builder $SECRET_NAME -n $NAMESPACE

3 Create a new secret to hold our database credentials:
oc create secret generic my-secret --from-literal=MYSQL_USER=$MYSQL_USER --from-literal=MYSQL_PASSWORD=$MYSQL_PASSWORD -n $NAMESPACE

4 Create a new MySQL instance (the application will fall back to SQLite if no MySQL details are provided):
oc new-app $MYSQL_NAME --env=MYSQL_DATABASE=$MYSQL_DB -l db=mysql -l app=testflask -n $NAMESPACE

5 The new app above will fail because we have not provided the MySQL user and password; we can supply them from the database secret created in step 3:
oc set env dc/$MYSQL_NAME --from=secret/my-secret -n $NAMESPACE

6 Create a new application on OpenShift using the oc new-app command. oc new-app gives you multiple options for specifying how to build a running container; please see Openshift Builds and Openshift S2i.
The examples below use the source secret created earlier. If you want to use SQLite in the same pod instead of the MySQL instance created above, skip all the database environment variables (a sketch of how the application might consume these variables follows the examples).
- Private repo with a source secret
oc new-app python:3.6~git@github.com:MoOyeg/testFlask.git --name=$APP_NAME --source-secret=$SECRET_NAME -l app=testflask --strategy=source --env=APP_CONFIG=gunicorn.conf.py --env=APP_MODULE=testapp:app --env=MYSQL_NAME=$MYSQL_NAME --env=MYSQL_DB=$MYSQL_DB -n $NAMESPACE
- Public repo without a source secret (S2I build)
oc new-app https://github.com/MoOyeg/testFlask.git --name=$APP_NAME -l app=testflask --strategy=source --env=APP_CONFIG=gunicorn.conf.py --env=APP_MODULE=testapp:app --env=MYSQL_NAME=$MYSQL_NAME --env=MYSQL_DB=$MYSQL_DB -n $NAMESPACE
- Public repo using the Dockerfile to build (Docker strategy)
oc new-app https://github.com/MoOyeg/testFlask.git --name=$APP_NAME -l app=testflask --env=MYSQL_NAME=$MYSQL_NAME --env=MYSQL_DB=$MYSQL_DB -n $NAMESPACE
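How the application consumes these variables is defined in the repository's code, but as a rough sketch of the SQLite fallback described above (hypothetical function and variable names; the app's actual configuration code may differ), the database URL could be built like this:

import os

# Hypothetical sketch of the SQLite fallback described in step 4;
# the repository's actual configuration code may differ.
def build_database_uri():
    user = os.environ.get("MYSQL_USER")
    password = os.environ.get("MYSQL_PASSWORD")
    host = os.environ.get("MYSQL_NAME")  # service name of the MySQL app
    database = os.environ.get("MYSQL_DB")
    if all([user, password, host, database]):
        # MySQL details were provided via the secret and --env flags above
        return f"mysql://{user}:{password}@{host}/{database}"
    # No MySQL details detected: fall back to a local SQLite file in the pod
    return "sqlite:///test.db"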

7 Expose the service to the outside world with an OpenShift route:
oc expose svc/$APP_NAME -n $NAMESPACE

8 Provide the database secret to the app deployment so the application can use those credentials:
oc set env dc/$APP_NAME --from=secret/my-secret -n $NAMESPACE

9 You should now be able to log into the OpenShift console to get a better look at the application. All the commands above can also be run from the console; for more information about the developer console, please visit Openshift Developer Console.

10 To make the separate deployments appear as one application in the developer console, you can label them. This step does not change application behavior or performance; it is purely a visual aid and would not be required if the app had been created from the developer console:
oc label dc/$APP_NAME app.kubernetes.io/part-of=$APP_NAME
oc label dc/$MYSQL_NAME app.kubernetes.io/part-of=$APP_NAME
oc annotate dc/$APP_NAME app.openshift.io/connects-to=$MYSQL_NAME

11 You can attach a webhook to your application so that when the application code changes, the application is rebuilt to pick up the change; you can also set this up via the developer console. OpenShift creates the webhook URL and secret for you, which you can then configure in GitHub, GitLab, or another VCS with generic webhook support. See more here: Openshift Triggers and github webhooks. A sketch of triggering the webhook manually follows the commands below.
- To get the webhook URL from the CLI
oc describe bc/$APP_NAME | grep -i -A1 "webhook generic"
- To get the webhook secret from the CLI
oc get bc/$APP_NAME -o jsonpath='{.spec.triggers[*].generic.secret}'
- Set the content type to application/json, and disable SSL verification if your ingress does not have a trusted certificate.
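As mentioned above, here is a rough sketch of triggering the generic webhook manually (assuming the requests library; the URL placeholders must be replaced with the values retrieved by the commands above, and the URL format should be confirmed against the link printed by oc describe):

import requests

# Placeholders: substitute the webhook URL and secret obtained above.
WEBHOOK_URL = "https://<api-server>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<app-name>/webhooks/<secret>/generic"

resp = requests.post(
    WEBHOOK_URL,
    headers={"Content-Type": "application/json"},
    json={},  # generic webhooks accept an empty JSON payload
    verify=False,  # only if the endpoint's certificate is untrusted
)
print(resp.status_code, resp.reason)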

12 It is important to report the status of your application to the platform so that it does not send requests to application instances that are not ready or able to receive them. This is done with liveness and readiness probes; please see Health Checks. This application provides sample /health and /ready URIs that report the application's status (a simplified sketch of these endpoints follows this list).

  • Create a readiness probe for our application
    oc set probe dc/$APP_NAME --readiness --get-url=http://:8080/ready --initial-delay-seconds=10 -n $NAMESPACE

  • Create a liveness probe for our application
    oc set probe dc/$APP_NAME --liveness --get-url=http://:8080/health --timeout-seconds=30 --failure-threshold=3 --period-seconds=10 -n $NAMESPACE

  • We can test OpenShift readiness handling by opening the application page and setting the application's ready status to down. After a short while, the pod's endpoint will be removed from the set of endpoints that receive traffic for the service. You can confirm this as follows:

    • oc get ep/$APP_NAME -n $NAMESPACE
    • Since the failing readiness probe removes the pod's endpoint from the service, we will no longer be able to access the app page
    • We will need to exec into the pod to turn readiness back on
      • POD_NAME=$(oc get pods -l deploymentconfig=$APP_NAME -n $NAMESPACE -o name)
      • Exec into the pod and curl the app's API to restore readiness
      • oc exec $POD_NAME -- curl "http://localhost:8080/ready_down?status=up"
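As referenced in step 12, the probe endpoints used above might look roughly like this in Flask (a simplified sketch, not the repository's exact code):

from flask import Flask, request

app = Flask(__name__)
ready = True  # toggled via /ready_down to simulate a readiness failure

@app.route("/health")
def health():
    # Liveness: the process is up and able to serve requests
    return "OK", 200

@app.route("/ready")
def readiness():
    # Readiness: a 503 makes OpenShift remove this pod's endpoint
    return ("Ready", 200) if ready else ("Not Ready", 503)

@app.route("/ready_down")
def ready_down():
    # ?status=down marks the pod not ready; ?status=up restores it
    global ready
    ready = request.args.get("status", "down") == "up"
    return f"ready={ready}", 200

The kubelet probes these paths on port 8080, matching the oc set probe commands above.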

13 OpenShift also provides a way to use the platform's monitoring stack to monitor your application metrics and alert on them. Note that this functionality is still in Tech Preview. It only works for applications that expose a /metrics endpoint that can be scraped, which this application does. Please visit Monitoring Your Applications, and you can see an example of how to do that here. Before running any of the steps below, enable monitoring using the info from the links above.
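For reference, exposing a metric such as Available_Keys from the app could look roughly like this (a sketch assuming the prometheus_client library; the repository's actual instrumentation may differ):

from flask import Flask
from prometheus_client import CONTENT_TYPE_LATEST, Gauge, generate_latest

app = Flask(__name__)

# Gauge tracking how many keys are stored; updated as keys are added
available_keys = Gauge("Available_Keys", "Number of keys stored in the database")

@app.route("/metrics")
def metrics():
    # Prometheus scrapes this endpoint via the ServiceMonitor created below
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}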

  • Create a ServiceMonitor using the code below (please enable cluster monitoring first, using the info above). The ServiceMonitor's selector must match the label (app=testflask) applied when creating the app above.
cat << EOF | oc create -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus-testflask-monitor
  name: prometheus-testflask-monitor
  namespace: $NAMESPACE
spec:
  endpoints:
  - interval: 30s
    targetPort: 8080
    scheme: http
  selector:
    matchLabels:
      app: testflask
EOF

  • After the ServiceMonitor is created, we can confirm it works by looking up the application metrics under Monitoring --> Metrics. One of the exposed metrics is Available_Keys (type Available_Keys in the query field and run it), so as more keys are added on the application web page, we should see this metric increase.

  • We can also create alerts based on application metrics using OpenShift's platform Alertmanager via Prometheus; see Openshift Alerting. We need to create an alerting rule to receive alerts:

cat << EOF | oc create -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: testflask-alert
  namespace: $NAMESPACE
spec:
  groups:
  - name: app-testflask
    rules:
    - alert: DB_Alert
      expr: Available_Keys{job="testflask"} > 4
EOF
  • The above alert should only fire when we have more than 4 keys in the application. Go to the application web page and add more than 4 keys to the DB; we should then see an alert in the Monitoring - Alerts - Alertmanager UI (top of page).