
Setting Up Dependency-Track on Azure Container Apps

In my previous post, I introduced Dependency-Track and explained why we chose it to manage our software supply chain security. Now it's time to get practical. In this post, I'll walk you through how we deployed Dependency-Track on Azure Container Apps, including our architecture decisions, configuration choices, and lessons learned along the way.

Why Azure Container Apps?

Before diving into the setup, let me explain why we chose Azure Container Apps for hosting Dependency-Track. We evaluated several deployment options including Azure Kubernetes Service (AKS), Azure Container Instances (ACI), and App Service, but Container Apps emerged as the best fit for our needs:

Simplified Management: Container Apps abstracts away much of the complexity of Kubernetes while still providing container orchestration capabilities. We don't need to manage nodes, clusters, or complex networking configurations.

Cost-Effective: With built-in autoscaling and the ability to scale to zero for non-production environments, Container Apps helps us optimize costs without sacrificing performance.

Azure Integration: Integration with Azure services like Azure Database for PostgreSQL, Key Vault, and Azure Monitor made our deployment more secure and observable.

HTTPS Out of the Box: Automatic HTTPS certificate provisioning and management eliminated the need for additional ingress configuration.

Architecture Overview

Our Dependency-Track deployment on Azure Container Apps consists of three main components:

  1. API Server Container App: Runs the Dependency-Track API server, which handles all business logic, vulnerability processing, and API requests
  2. Frontend Container App: Serves the web UI (static assets only) from a separate container
  3. Azure Database for PostgreSQL: Managed database service that stores all Dependency-Track data

We also use:

  • Azure Key Vault: For storing sensitive configuration like database credentials and API keys
  • Docker Hub: For pulling the official Dependency-Track container images
  • Azure Monitor: For logging and monitoring
  • Azure Storage Account: For persistent storage of vulnerability data mirrors

Prerequisites

Before starting, ensure you have:

  • An Azure subscription with appropriate permissions
  • Azure CLI installed and configured
  • A resource group created for your deployment

Remark: I’ll share a simplified version of our real setup, as we don’t expose our Dependency-Track instance externally; instead, we use a VNet to expose the tool internally.

Step 1: Create the PostgreSQL Database

Dependency-Track requires PostgreSQL 11 or later. We'll use Azure Database for PostgreSQL Flexible Server:
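
Below is a simplified sketch of the commands we used. The resource names (rg-dtrack, psql-dtrack, and so on), the location, and the SKU are placeholders; substitute your own values:

```bash
# Create a PostgreSQL Flexible Server (names and SKU are examples)
az postgres flexible-server create \
  --resource-group rg-dtrack \
  --name psql-dtrack \
  --location westeurope \
  --version 16 \
  --tier Burstable \
  --sku-name Standard_B2s \
  --storage-size 32 \
  --admin-user dtrackadmin \
  --admin-password "<your-secure-password>"

# Create the database Dependency-Track will use
az postgres flexible-server db create \
  --resource-group rg-dtrack \
  --server-name psql-dtrack \
  --database-name dtrack
```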

Important: Store the database password securely. We'll add it to Key Vault in the next step.

Step 2: Set Up Azure Key Vault

Store sensitive configuration in Key Vault:
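
A minimal sketch, reusing the placeholder names from step 1 (Key Vault names must be globally unique):

```bash
az keyvault create \
  --resource-group rg-dtrack \
  --name kv-dtrack \
  --location westeurope

# Store the database password from step 1 as a secret
az keyvault secret set \
  --vault-name kv-dtrack \
  --name dtrack-db-password \
  --value "<your-secure-password>"
```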

Step 3: Create Container Apps Environment

The Container Apps Environment is a secure boundary around our container apps:
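
Creating the environment is a single command. The containerapp CLI extension is required, and a Log Analytics workspace is created automatically if you don't pass one explicitly:

```bash
# Make sure the Container Apps extension is available
az extension add --name containerapp --upgrade

az containerapp env create \
  --resource-group rg-dtrack \
  --name cae-dtrack \
  --location westeurope
```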

Step 4: Create Storage for Vulnerability Data

Dependency-Track mirrors vulnerability databases. We'll use Azure Files for persistent storage:
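
A sketch of the storage setup, again with placeholder names (storage account names must be globally unique and lowercase):

```bash
az storage account create \
  --resource-group rg-dtrack \
  --name stdtrackdata \
  --location westeurope \
  --sku Standard_LRS

az storage share-rm create \
  --resource-group rg-dtrack \
  --storage-account stdtrackdata \
  --name dtrack-data \
  --quota 100

# Register the file share with the Container Apps environment
STORAGE_KEY=$(az storage account keys list \
  --resource-group rg-dtrack \
  --account-name stdtrackdata \
  --query "[0].value" --output tsv)

az containerapp env storage set \
  --resource-group rg-dtrack \
  --name cae-dtrack \
  --storage-name dtrack-data \
  --azure-file-account-name stdtrackdata \
  --azure-file-account-key "$STORAGE_KEY" \
  --azure-file-share-name dtrack-data \
  --access-mode ReadWrite
```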

Step 5: Deploy the API Server

Now we'll deploy the Dependency-Track API server container app:
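
A simplified sketch of the deployment, wiring the database settings through Dependency-Track's ALPINE_* environment variables. The CPU/memory values are examples; Dependency-Track is memory-hungry, so for production you may want a dedicated workload profile with more memory:

```bash
# Read the database password back from Key Vault
DB_PASSWORD=$(az keyvault secret show \
  --vault-name kv-dtrack \
  --name dtrack-db-password \
  --query value --output tsv)

az containerapp create \
  --resource-group rg-dtrack \
  --name ca-dtrack-api \
  --environment cae-dtrack \
  --image dependencytrack/apiserver:latest \
  --target-port 8080 \
  --ingress external \
  --cpu 2 --memory 4Gi \
  --min-replicas 1 --max-replicas 1 \
  --secrets "db-password=$DB_PASSWORD" \
  --env-vars \
    ALPINE_DATABASE_MODE=external \
    ALPINE_DATABASE_URL="jdbc:postgresql://psql-dtrack.postgres.database.azure.com:5432/dtrack" \
    ALPINE_DATABASE_DRIVER=org.postgresql.Driver \
    ALPINE_DATABASE_USERNAME=dtrackadmin \
    ALPINE_DATABASE_PASSWORD=secretref:db-password
```

Note that, at the time of writing, mounting the Azure Files share from step 4 onto the container's /data directory can't be done with CLI flags alone; it requires a YAML app definition (az containerapp update --yaml), which I've left out here for brevity.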

Step 6: Deploy the Frontend

The frontend serves the web UI and communicates with the API server:
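
A matching sketch for the frontend. The API_BASE_URL variable must point at the API server's public URL, because the browser calls the API directly:

```bash
API_FQDN=$(az containerapp show \
  --resource-group rg-dtrack \
  --name ca-dtrack-api \
  --query properties.configuration.ingress.fqdn --output tsv)

az containerapp create \
  --resource-group rg-dtrack \
  --name ca-dtrack-frontend \
  --environment cae-dtrack \
  --image dependencytrack/frontend:latest \
  --target-port 8080 \
  --ingress external \
  --cpu 0.5 --memory 1Gi \
  --min-replicas 0 --max-replicas 2 \
  --env-vars API_BASE_URL="https://$API_FQDN"
```

Setting --min-replicas 0 lets the frontend scale to zero when idle, which fits the cost argument for non-production environments mentioned earlier.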

Step 7: Initial Configuration

After deployment, access the Dependency-Track UI:

  • Get the frontend URL (see the command after this list):
  • Log in with default credentials:
    • Username: admin
    • Password: admin
  • Immediately change the admin password via the UI (Administration → Access Management → Users):
  • Configure vulnerability data sources (Administration → Vulnerability Sources):
    • Enable NVD mirroring:

    • Optionally also enable GitHub Advisories (requires a GitHub PAT):

  • Configure analyzers:
    • Enable the Sonatype OSS Index analyzer (requires an API token from ossindex.sonatype.org):
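
For the first item in the list above, a minimal sketch to retrieve the frontend URL, assuming the app names from the earlier steps:

```bash
az containerapp show \
  --resource-group rg-dtrack \
  --name ca-dtrack-frontend \
  --query properties.configuration.ingress.fqdn --output tsv
```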

Configuration Best Practices

Based on our experience, here are some configuration recommendations:

Performance Tuning

For the API server, adjust these environment variables based on your load:

  • ALPINE_WORKER_THREADS: Set to 2-4 for typical workloads
  • ALPINE_WORKER_THREAD_MULTIPLIER: Set to 4-8 depending on CPU cores
  • ALPINE_DATABASE_POOL_MAX_SIZE: Start with 20 and adjust based on connection metrics
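
As a sketch, these can be applied to the running API server with az containerapp update (reusing the placeholder names from the deployment steps; the values shown are examples, not a recommendation):

```bash
az containerapp update \
  --resource-group rg-dtrack \
  --name ca-dtrack-api \
  --set-env-vars \
    ALPINE_WORKER_THREADS=4 \
    ALPINE_WORKER_THREAD_MULTIPLIER=4 \
    ALPINE_DATABASE_POOL_MAX_SIZE=20
```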

Security Hardening

  1. Enable Azure AD Authentication: Configure OIDC integration with Azure AD for centralized identity management (I’ll leave that for the next post)
  2. Use Private Endpoints: As mentioned earlier, for production use VNet integration and private endpoints to keep traffic internal

What's next?

Now that you have Dependency-Track up and running on Azure Container Apps, the next step is integrating it into your CI/CD pipelines. In the final post of this series, I'll show you how to:

  • Generate SBOMs automatically in your build pipelines
  • Upload SBOMs to Dependency-Track via the API
  • Implement policy checks that can fail builds
  • Set up automated notifications for new vulnerabilities

But before I do that, I'll share an extra post discussing user management in general and OIDC integration through Microsoft Entra specifically.
