Versatile Data Kit

Build, run and manage your data pipelines with Python or SQL on any cloud

Overview

Versatile Data Kit (VDK) is a data framework that enables Data Engineers to

  • 🧑‍💻develop,
  • ▶️run,
  • 📊and manage data workloads, aka data jobs

Its Lego-like design consists of lightweight Python modules installed via the pip package manager, and all VDK plugins can be combined freely.
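
For example, functionality is layered on by installing plugins next to the core package (vdk-csv and vdk-trino are two existing plugins, named here purely for illustration):

    pip install vdk-csv vdk-trino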

The VDK CLI can generate a data job and run your Python code and SQL queries.

🎯VDK SDK makes your code shorter, more readable, and faster to create.
🚦Ready-to-use data ETL/ELT patterns make Data Engineering with VDK efficient.

Data Engineers use VDK to implement automatic pull ingestion (the E in ELT) and batch data transformation (the T in ELT) into a database or any other data store.
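
A data job step is a Python file that defines a run function; VDK executes a job's step files in order. Below is a minimal sketch of one step that does both (the payload and table names are illustrative; send_object_for_ingestion and execute_query come from the VDK job API):

    # 10_ingest_and_transform.py: a minimal data job step (a sketch;
    # payload and table names are illustrative).
    from vdk.api.job_input import IJobInput


    def run(job_input: IJobInput) -> None:
        # Ingest (the E in ELT): send one record to the configured target.
        job_input.send_object_for_ingestion(
            payload={"id": 1, "name": "example"},
            destination_table="example_table",
        )

        # Transform (the T in ELT): run SQL against the configured database.
        job_input.execute_query(
            "CREATE TABLE IF NOT EXISTS example_copy AS "
            "SELECT * FROM example_table"
        )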

Data Journey and Versatile Data Kit

VDK creates data processing workflows to:

  • Ingest data (extract)
  • Transform data (transform)
  • Export data (load)

(Diagram: the Data Journey)

Solve common data engineering problems

  • Ingest data from different sources, including CSV files, JSON objects, and data from REST API services (see the sketch after this list).
  • Use Python/SQL and VDK templates to transform data.
  • Ensure data applications are packaged, versioned, and deployed correctly while dealing with credentials, retries, and reconnects.
  • Provide built-in monitoring and smart notification capabilities.
  • Track both code and data modifications and the relationship between them, allowing quicker troubleshooting and version rollback.
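
As a sketch of the first point above, a step that pulls records from a REST API and hands them to VDK's ingestion pipeline might look like this (the endpoint URL and destination table are hypothetical):

    # 20_ingest_users.py: pull ingestion from a REST API (a sketch).
    import requests

    from vdk.api.job_input import IJobInput


    def run(job_input: IJobInput) -> None:
        response = requests.get("https://api.example.com/users", timeout=30)
        response.raise_for_status()

        # Send each JSON object to the configured ingestion target.
        for record in response.json():
            job_input.send_object_for_ingestion(
                payload=record,
                destination_table="users",
            )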

(Screenshots: a data workflow and its code, without vs. with Versatile Data Kit)


Getting Started

Create and run data jobs locally

pip install quickstart-vdk

This installs the core vdk packages and the vdk command-line interface, which you can use to create and run jobs in your local shell environment.
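
A first session might look like the following (job and team names are placeholders; run vdk --help for the commands and flags available in your version):

    # Scaffold a sample data job in the current directory.
    vdk create -n hello-world -t my-team --local

    # Execute the job's steps locally.
    vdk run hello-world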

See also the Getting Started section of the wiki.

Run the Control Service locally with Docker and Kubernetes

Using Kubernetes for your data job workflow provides additional benefits, such as continuous delivery, easier collaboration, streamlined data job orchestration, high availability, security, and job runtime isolation.

More info: https://kubernetes.io/docs/concepts/overview/

Prerequisites: a running Docker installation.

Install the Control Service locally with:

vdk server --install

You can then use the vdk CLI to create and deploy jobs, and the Operations UI to manage them.
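
Deploying a job might then look like the following sketch (names, path, and reason are placeholders, and the URL is the usual local-install default; check vdk deploy --help for the exact flags in your version):

    # Point the CLI at the local Control Service.
    export VDK_CONTROL_SERVICE_REST_API_URL=http://localhost:8092

    # Deploy a previously created job to the Control Service.
    vdk deploy -n hello-world -t my-team -p ./hello-world -r "initial deploy"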

Next Steps

▶️ Getting started with VDK Operations UI
📖 Use case examples that show how VDK fits into the data workflow
📖 VDK with Trino DB
🗣️ Get to know us and ask questions at our community meeting

Additional Resources

📖 Running in production
📖 Documentation for VDK
▶️ VDK Operations UI Overview

Contributing

Create an issue or pull request on GitHub to submit suggestions or changes. If you are interested in contributing as a developer, visit the contributing page.

Contacts

Code of Conduct

Everyone working on the project's source code or engaging in the issue trackers, Slack channels, and mailing lists is expected to be familiar with and follow the Code of Conduct.
