

Hyades


What is this? πŸ€”

Hyades, named after the star cluster closest to Earth, is an incubating project for decoupling responsibilities from Dependency-Track's monolithic API server into separate, scalable™ services. We're using Apache Kafka (or Kafka-compatible brokers like Redpanda) for communication between the API server and the Hyades services.

If you're interested in the technical background of this project, please refer to πŸ‘‰ WTF.md πŸ‘ˆ.

The main objectives of Hyades are:

  • Enable Dependency-Track to handle portfolios spanning hundreds of thousands of projects
  • Improve resilience of Dependency-Track, providing more confidence when relying on it in critical workflows
  • Improve the deployment and configuration management experience for containerized / cloud native tech stacks

In addition to separating responsibilities, the API server has been modified to allow for highly available (active-active) deployments. Various "hot paths", such as the processing of uploaded BOMs, have been optimized in the existing code. Further optimization is an ongoing effort.

Hyades is already a superset of Dependency-Track: changes up to Dependency-Track v4.9.1 have been ported, and features made possible by the new architecture have been implemented on top. Where possible, improvements made in Hyades are, or will be, backported to Dependency-Track v4.x.

Features

Generally, Hyades can do everything Dependency-Track can do.

On top of that, it is capable of:

  • Evaluating policies defined in the Common Expression Language (CEL)
  • Verifying the integrity of components, based on hashes consumed from BOMs and remote repositories

Architecture

Rough overview of the architecture:

[Diagram: Architecture Overview]

Except for the mirror service (which is not actively involved in event processing), all services can be scaled horizontally to multiple instances and back down. Despite being written in Java, all services except the API server can optionally be deployed as self-contained native binaries, offering a lower resource footprint.

To read more about the individual services, refer to their respective README.md files.

Great, can I try it? πŸ™Œ

Yes! And all you need to kick the tires is Docker Compose!

docker compose --profile demo up -d --pull always

This will launch all required services and expose the following endpoints:

Service              URL
API Server           http://localhost:8080
Frontend             http://localhost:8081
Redpanda Console     http://localhost:28080
PostgreSQL           localhost:5432
Redpanda Kafka API   localhost:9092

Simply navigate to the frontend to get started!
The initial admin credentials are admin / admin 🌚
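
To quickly check that the API server is up, you can query its version endpoint (this assumes Hyades exposes Dependency-Track's standard /api/version REST resource, which returns a small JSON document with version information):

# /api/version is Dependency-Track's standard version endpoint
curl -s http://localhost:8080/api/version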

Deployment 🚒

The recommended way to deploy Hyades is via Helm. Our chart is not officially published to any repository yet, so for now you'll have to clone this repository to access it.
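
For example, to fetch the chart together with the rest of the repository:

git clone https://github.com/DependencyTrack/hyades.git
cd hyades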

The chart does not include:

  • a database
  • a Kafka-compatible broker

Helm charts to deploy Kafka brokers to Kubernetes are provided by both Strimzi and Redpanda.
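
As a rough sketch, a Redpanda broker could be installed from its upstream chart like this (repository URL and chart name follow Redpanda's documentation and may change, so consult their docs for the authoritative procedure):

# Chart coordinates per Redpanda's docs; verify before use
helm repo add redpanda https://charts.redpanda.com
helm repo update
helm install redpanda redpanda/redpanda -n redpanda --create-namespace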

Minikube

Deploying to a local Minikube cluster is a great way to get started.

Note
For now, database and Kafka broker are deployed using Docker Compose.

  1. Start PostgreSQL and Redpanda via Docker Compose:

     docker compose up -d

  2. Start a local Minikube cluster, exposing NodePorts for API server (30080) and frontend (30081):

     minikube start --ports 30080:30080,30081:30081

  3. Deploy Hyades:

     helm install hyades ./helm-charts/hyades \
       -n hyades --create-namespace \
       -f ./helm-charts/hyades/values.yaml \
       -f ./helm-charts/hyades/values-minikube.yaml

  4. Wait a moment for all deployments to become ready:

     kubectl -n hyades rollout status deployment \
       --selector 'app.kubernetes.io/instance=hyades' \
       --watch --timeout 3m

  5. Visit http://localhost:30081 in your browser to access the frontend.
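
To tear the local setup down again, the usual cleanup commands apply (adjust release name and namespace if you changed them):

helm uninstall hyades -n hyades
minikube delete
docker compose down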

Monitoring πŸ“Š

Metrics

A basic metrics monitoring stack is provided, consisting of Prometheus and Grafana.
To start both services, run:

docker compose --profile monitoring up -d

The services will be available locally at the ports configured in docker-compose.yml.

Prometheus is configured to scrape metrics from the following services at 5 second intervals:

  • Redpanda Broker
  • API Server
  • Notification Publisher
  • Repository Meta Analyzer
  • Vulnerability Analyzer
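
Assuming Prometheus is published on its default port 9090 (verify the port mapping in docker-compose.yml), the configured scrape targets can be inspected via its HTTP API:

# Port 9090 is an assumption based on the Prometheus default
curl -s 'http://localhost:9090/api/v1/targets'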

The Grafana instance will be automatically provisioned to use Prometheus as data source. Additionally, dashboards for the following services are automatically set up:

  • Redpanda Broker
  • API Server
  • Vulnerability Analyzer

Redpanda Console 🐼

The provided docker-compose.yml includes an instance of Redpanda Console to help gain insight into what's happening in the message broker. Among many other things, it can be used to inspect messages inside any given topic.

The console is exposed at http://127.0.0.1:28080 and does not require authentication. It's intended for local use only.
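
If you prefer the command line, Redpanda's rpk CLI (shipped in the broker image) covers much of the same ground. The sketch below assumes the broker's Compose service is named redpanda:

# "redpanda" is the assumed Compose service name; adjust to match docker-compose.yml
docker compose exec redpanda rpk topic list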

Technical Documentation πŸ’»

Configuration πŸ“

Refer to the Configuration documentation.

Development

Prerequisites

  • JDK 21+
  • Maven
  • Docker

Building

mvn clean install -DskipTests

Running locally

Running the Hyades services locally requires both a Kafka broker and a database server to be present. Containers for Redpanda and PostgreSQL can be launched using Docker Compose:

docker compose up -d

To launch individual services, execute the quarkus:dev Maven goal for the respective module:

mvn -pl vulnerability-analyzer quarkus:dev

Make sure you've built the project at least once; otherwise, the above command will fail.

Note
If you're unfamiliar with Quarkus' Dev Mode, you can read more about it in the Quarkus documentation.

Testing 🀞

Unit Testing πŸ•΅οΈβ€β™‚οΈ

To execute the unit tests for all Hyades modules:

mvn clean verify
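
To limit the run to a single module, Maven's usual module selection applies; for example, for the vulnerability analyzer (add -am if dependent modules haven't been built yet):

mvn -pl vulnerability-analyzer clean verify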

End-To-End Testing 🧟

Note
End-to-end tests are based on container images. The tags of those images are currently hardcoded. For the Hyades services, the tags are set to latest. If you want to test local changes, you'll first have to:

  • Build container images locally (see the sketch below)
  • Update the tags in AbstractE2ET
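
Since the Hyades services are Quarkus applications, local images can typically be produced via Quarkus' container-image support. This is a sketch that assumes the container-image extension is configured for the modules; check the project's CI workflows for the authoritative build commands:

# Assumes the Quarkus container-image extension is configured in the build
mvn clean install -DskipTests -Dquarkus.container-image.build=true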

To execute end-to-end tests as part of the build:

mvn clean verify -Pe2e-all

To execute only the end-to-end tests:

mvn -pl e2e clean verify -Pe2e-all