Cloud-Serverless-Fast-Start-Minimizing-Cold-Start-Time

About

This repository provides solutions for minimizing cold start time in Knative services using Kubernetes CronJob warm-ups, image caching, and SARIMA time series traffic forecasting. By implementing these solutions, you can mitigate cold starts and achieve better performance for your serverless applications on Knative.

Contents

Method 1: Container Startup Time Reduction Using a CronJob

  • hello.yaml: This Knative Service manifest deploys the function.
  • cronjob.yaml: This Kubernetes CronJob calls the Knative function every 3 minutes to keep its pods warm.
  • script.sh: This shell script stops the CronJob once 3 of its jobs have completed successfully, so the warm-up traffic does not consume resources indefinitely.

Use the provided files to implement the solution and reduce cold start time for your Knative services. Sketches of what such manifests might look like follow.
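For orientation, minimal manifests along these lines would implement the pattern. These are sketches, not necessarily the repository's actual files: the service name, sample image, and warm-up URL are assumptions.

    # hello.yaml (sketch): a minimal Knative Service
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go
              env:
                - name: TARGET
                  value: "World"
    ---
    # cronjob.yaml (sketch): curl the service every 3 minutes to keep pods warm
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: hello-warmup
    spec:
      schedule: "*/3 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: warmup
                  image: curlimages/curl
                  args: ["-s", "http://hello.default.svc.cluster.local"]
              restartPolicy: OnFailure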

Steps to Run the Solution

  1. Install Kubernetes and Knative on your system. You can refer to the official documentation for installation instructions.
  2. Clone this repository to your local system.
    git clone <repository_url>
  3. Navigate to the cloned repository directory.
    cd <repository_directory>
  4. Deploy the Knative Service using the hello.yaml file.
    kubectl apply -f hello.yaml
  5. Deploy the Kubernetes cronjob using the cronjob.yaml file.
    kubectl apply -f cronjob.yaml
  6. Verify that the cronjob is running.
    kubectl get cronjob
  7. Wait for a few minutes to allow the pods to warm up.
  8. Verify that the Knative Service is ready and note its URL.
    kubectl get ksvc
  9. Stop the CronJob by running script.sh after 3 successful jobs (a sketch of such a script follows this list).
    ./script.sh
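As an illustration, script.sh could look something like the sketch below. The CronJob name and polling interval are assumptions; this version suspends the CronJob (rather than deleting it) once 3 of its Jobs have succeeded.

    #!/usr/bin/env bash
    # Sketch of script.sh: stop the warm-up CronJob after 3 successful Jobs.
    CRONJOB="hello-warmup"
    while true; do
      # List succeeded Jobs, then count the ones spawned by our CronJob
      done_jobs=$(kubectl get jobs \
        -o jsonpath='{range .items[?(@.status.succeeded==1)]}{.metadata.name}{"\n"}{end}' \
        | grep -c "^${CRONJOB}-")
      if [ "$done_jobs" -ge 3 ]; then
        # Suspend rather than delete, so warming can be re-enabled later
        kubectl patch cronjob "$CRONJOB" -p '{"spec":{"suspend":true}}'
        break
      fi
      sleep 30
    done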

By following these steps, you can run the solution and mitigate cold start time for your Knative services.

Method 2: Optimizing Cold Start Time for Knative Services with Image Caching

Another approach to minimize cold start time in Knative Services is to use image caching. By caching the images used by your services, you can significantly reduce the time required to create new pods, resulting in faster cold start times.

Here's how you can implement image caching for your Knative Services:

  1. Build your image and push it to your container registry with a unique tag.
  2. Configure your Knative Service to use the image with that unique tag.
  3. Set the container's `imagePullPolicy` to `IfNotPresent`. This tells Kubernetes to use the locally cached image when one is available, instead of downloading the image from the container registry on every pod creation (see the snippet after this list).
  4. If you must always run the newest build of a tag, set `imagePullPolicy` to `Always` instead; note that this forces a registry check on every pod creation and gives up part of the caching benefit.
  5. cache.yaml: This file defines the ImageCache resource and specifies the most commonly used image to cache (a sketch appears below).
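For example, the container section of a Knative Service manifest could set the policy as follows; the image tag here is an illustrative assumption:

    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go:v1
              # Reuse the node's cached image when present instead of pulling
              imagePullPolicy: IfNotPresent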

By implementing image caching for your Knative Services, you can achieve faster cold start times and better performance for your serverless applications.
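A sketch of what cache.yaml might contain, assuming the Knative caching API (apiVersion caching.internal.knative.dev/v1alpha1, kind Image); the resource and image names are illustrative:

    apiVersion: caching.internal.knative.dev/v1alpha1
    kind: Image
    metadata:
      name: hello-cache
    spec:
      # The most commonly used image, pre-fetched so new pods can skip the pull
      image: gcr.io/knative-samples/helloworld-go:v1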

Steps to Implement Image Caching with cache.yaml

  1. Install Kubernetes and Knative on your system. You can refer to the official documentation for installation instructions.
  2. Clone the repository containing the cache.yaml file to your local system.
    git clone <repository_url>
  3. Navigate to the cloned repository directory.
    cd <repository_directory>
  4. Apply the cache.yaml file to create the ImageCache resource.
    kubectl apply -f cache.yaml
  5. Update your Knative Service's pod template to use the ImageCache resource. You can add the following annotation to the pod spec:
    annotations:
      caching.internal.knative.dev/image: <cache_name>
    Replace <cache_name> with the name of the ImageCache resource created in step 4.
  6. Deploy the Knative Service to the Kubernetes cluster.
    kubectl apply -f hello.yaml
  7. Compare the cold start and warm start times of your Knative Service (see the example after this list):
    time curl http://<external-link>
    Replace <external-link> with the URL of your Knative Service. A cold start happens when no pods are running and one must be created to serve the request; a warm start happens when the image is already cached and doesn't need to be pulled again. By using image caching, you can significantly reduce cold start time and accelerate pod creation.
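For example, assuming the service has scaled to zero, the first request below measures a cold start and the immediate second request measures a warm start:

    kubectl get pods                      # confirm no service pods are running
    time curl -s http://<external-link>   # cold start: a pod is created on demand
    time curl -s http://<external-link>   # warm start: the pod is already up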

By following these steps, you can implement image caching with cache.yaml and accelerate pod creation for your Knative Services. You can also measure the impact of image caching by comparing cold and warm start times with the `time curl` command.

Method 3: Using SARIMA Time Series Forecasting to Minimize Cold Start Time

For this approach, we have developed a predictive model using SARIMA time series forecasting to predict traffic volume for a Knative service. We have implemented the model in the following files:

  • arima_model.pkl: This file contains the trained SARIMA model, with its parameters stored in pickle format.
  • ARIMA.ipynb: This Jupyter notebook generates randomized data for the last 10 days and uses the SARIMA model to forecast future traffic.
  • pba.py: This Python script loads the trained arima_model.pkl, forecasts traffic for the current datetime, and deploys autoscaled pods ahead of the predicted load, mitigating the impact of cold starts in a graceful manner (a sketch follows the reference below).

By using a time series forecasting approach and implementing the SARIMA model, you can predict future traffic for your Knative service and take proactive steps to prepare your pods for incoming traffic. This approach can help ensure that your service is always available and performs optimally for your users.

Reference: A. P. Jegannathan, R. Saha and S. K. Addya, "A Time Series Forecasting Approach to Minimize Cold Start Time in Cloud-Serverless Platform," 2022 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), Sofia, Bulgaria, 2022, pp. 325-330, doi: 10.1109/BlackSeaCom54372.2022.9858271.
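A minimal sketch of the loop pba.py describes, assuming the pickle holds a statsmodels SARIMAX results object and that pre-warming is done by raising the Knative minScale annotation; the service name and per-pod capacity below are assumptions:

    import json
    import pickle
    import subprocess
    import time

    # Load the trained SARIMA model (assumed: a pickled statsmodels SARIMAX result)
    with open("arima_model.pkl", "rb") as f:
        model = pickle.load(f)

    SERVICE = "hello"      # assumed Knative Service name
    REQS_PER_POD = 100     # assumed requests per hour that one pod can absorb

    while True:
        # Forecast the next hour's traffic volume
        predicted = float(model.get_forecast(steps=1).predicted_mean.iloc[0])

        # Translate the forecast into a pod count and pre-warm via minScale,
        # so pods already exist when the predicted traffic arrives
        replicas = max(1, int(predicted // REQS_PER_POD) + 1)
        patch = json.dumps({"spec": {"template": {"metadata": {"annotations": {
            "autoscaling.knative.dev/minScale": str(replicas)}}}}})
        subprocess.run(
            ["kubectl", "patch", "ksvc", SERVICE, "--type=merge", "-p", patch],
            check=True,
        )

        time.sleep(55 * 60)  # repeat every 55 minutes, as in the script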

Steps to Implement SARIMA Time Series Forecasting for Minimizing Cold Start Time

Here are the steps to run the pba.py script for implementing SARIMA time series forecasting to minimize cold start time:

  1. Install Python and required libraries on your system. You can refer to the official documentation for installation instructions.
  2. Clone the repository containing the pba.py file to your local system.
    git clone <repository_url>
  3. Navigate to the cloned repository directory.
    cd <repository_directory>
  4. Run the pba.py script using the following command:
    python pba.py
  5. The script will get the current timestamp and use the trained SARIMA model to predict the incoming traffic for the Knative service.
  6. Autoscaling pods will be deployed to prepare for the incoming traffic, mitigating the impact of cold start in a graceful manner.
  7. The script will sleep for 55 minutes before repeating the process.
  8. You can modify the time interval between runs by changing the sleep duration in the script.

By following these steps, you can run the pba.py script and implement SARIMA time series forecasting to minimize cold start time for your Knative service. This approach can help ensure that your service is always available and performs optimally for your users.

Results

Using the methods described in this repository, we were able to significantly minimize the cold start time for our Knative services. Here are some of the results we achieved:

  • Method 1 - Container Startup Time Reduction: We were able to reduce the container startup time by up to 90% by optimizing the container image size and using the init container approach.
      • Simple-api image of size 3 MB: [screenshot]
      • Java image of size 35 MB: [screenshot]
  • Method 2 - Image Caching: We were able to reduce the image layer pull time by up to 50% using image caching.
      • [screenshot: Screenshot 2023-03-29 at 2 04 49 PM]

Team Members and Contributors

  • Anish More
  • Ajay Wayase

We are grateful to all the contributors who have helped make this project possible. Thank you for your support and contributions!

If you are interested in contributing to this project, please feel free to reach out to us or submit a pull request. We welcome your contributions and look forward to collaborating with you!