Distributing software reliably and securely

Let's Start Simple

You've built an amazing application and now you need to distribute it to users. What's the simplest approach?

Solution 1: Just Put Files on a Server

The most straightforward solution is to upload your application files to a web server:

# Upload your app to an Apache server or S3 bucket
aws s3 cp my-app-v1.0.0.tar.gz s3://my-software-bucket/

Users download it:

# Client downloads the file
wget https://my-software-bucket.s3.amazonaws.com/my-app-v1.0.0.tar.gz
tar -xzf my-app-v1.0.0.tar.gz
./install.sh

Simple, right? But what could go wrong?

The Problems with Naive Distribution

Let's think about the attack vectors:

Problem 1: Man-in-the-Middle Attack
An attacker intercepts the download and serves malicious code:

User → [Attacker] → Server
       ↓
   Malicious file!

Problem 2: Compromised Server
If your S3 bucket credentials leak or your Apache server is hacked, attackers can replace your legitimate files with malware. Users have no way to know.

Problem 3: No Integrity Verification
Even without malicious intent, network errors could corrupt the download. Users install broken software without knowing.
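Even a plain checksum would catch silent corruption. A minimal sketch (the filename and file contents here are stand-ins for a real release):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo: write a small stand-in file and verify it against a known-good digest.
with open("my-app-v1.0.0.tar.gz", "wb") as f:
    f.write(b"pretend this is the release tarball\n")

published = hashlib.sha256(b"pretend this is the release tarball\n").hexdigest()
ok = sha256_of_file("my-app-v1.0.0.tar.gz") == published
print("integrity check passed" if ok else "corrupted download!")
```

Note that a checksum alone only catches accidental corruption: an attacker who controls the server can replace the checksum along with the file.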

Real-world impact:

  • SolarWinds (2020): Build system compromised, malicious updates sent to 18,000+ organizations
  • CCleaner (2017): Legitimate installer backdoored, affecting 2.3 million users
  • NotPetya (2017): Spread via compromised software update mechanism

This is clearly not good enough. Let's fix it!

Solution 2: Add Signatures

The next logical step is to sign your files so users can verify authenticity:

# Developer creates a detached signature (produces my-app-v1.0.0.tar.gz.sig)
gpg --detach-sign my-app-v1.0.0.tar.gz

# Upload both file and signature
aws s3 cp my-app-v1.0.0.tar.gz s3://my-software-bucket/
aws s3 cp my-app-v1.0.0.tar.gz.sig s3://my-software-bucket/

Users verify before installing:

# Client downloads and verifies
wget https://my-software-bucket.s3.amazonaws.com/my-app-v1.0.0.tar.gz
wget https://my-software-bucket.s3.amazonaws.com/my-app-v1.0.0.tar.gz.sig

gpg --verify my-app-v1.0.0.tar.gz.sig my-app-v1.0.0.tar.gz
# gpg: Good signature from "Developer <dev@company.com>"

Much better! But still not enough...

The Problems with Just Signatures

Problem 4: Signature Rollback Attack
An attacker can serve an old, vulnerable version with its valid signature:

User requests: my-app-v2.0.0.tar.gz (secure)
Attacker serves: my-app-v1.0.0.tar.gz (vulnerable, but validly signed!)

The signature is valid, so the user installs vulnerable software.

Problem 5: Freeze Attack
An attacker prevents you from getting updates by serving old (but validly signed) versions:

Current version: v3.0.0 (fixes critical security bug)
Attacker serves: v2.5.0 (old but valid signature)

You think you're up to date, but you're not.

Problem 6: Key Compromise = Total Failure
If your signing key is stolen, the attacker can:

  • Sign malicious packages
  • Users trust them (valid signature!)
  • Game over - complete compromise

Problem 7: Mix-and-Match Attack
An attacker combines files from different versions:

my-app-v2.0-binary (from version 2.0, has bug)
my-app-v1.5-config (from version 1.5, incompatible)
Result: Exploitable state with valid signatures!

We need something more sophisticated. Let's think step by step...

Solution 3: Add Metadata with Version Information

To prevent rollback attacks, we need to track versions. Let's create a metadata file:

{
  "files": {
    "my-app-v1.0.0.tar.gz": {
      "version": "1.0.0",
      "sha256": "abc123...",
      "length": 1048576
    },
    "my-app-v2.0.0.tar.gz": {
      "version": "2.0.0",
      "sha256": "def456...",
      "length": 1234567
    }
  },
  "metadata_version": 5,
  "expires": "2025-12-31T23:59:59Z"
}

Sign this metadata:

gpg --sign metadata.json

Now clients can:

  1. Download and verify metadata
  2. Check they're getting the latest version
  3. Verify file hashes match

Better! But new problems emerge...

The Problems with Simple Metadata

Problem 8: Metadata Freeze Attack
An attacker can serve old metadata (with valid signature) that lists old versions. The client thinks old software is the latest.

Problem 9: Partial Compromise Still Catastrophic
If the metadata signing key is compromised, attackers control everything. We need defense in depth.

Problem 10: How Often to Update?

  • Update metadata every release? (Could be months)
  • Update frequently to prove freshness? (High operational burden)

Solution 4: Separate Metadata Concerns

Let's think about different types of information that need different update frequencies:

Type 1: List of valid files (changes per release)
Type 2: Freshness proof (needs frequent updates)
Type 3: Snapshot of repository state (changes when files change)

Let's separate these into different metadata files!

Introducing Three Metadata Files

targets.json - Lists actual software files:

{
  "version": 10,
  "expires": "2026-01-01T00:00:00Z",
  "targets": {
    "my-app-v2.0.0.tar.gz": {
      "length": 1234567,
      "hashes": {
        "sha256": "abc123..."
      }
    }
  }
}

Updates: When releasing new software (weekly/monthly)

snapshot.json - Records current state of all metadata:

{
  "version": 42,
  "expires": "2025-11-23T00:00:00Z",
  "meta": {
    "targets.json": {
      "version": 10,
      "hashes": {
        "sha256": "def456..."
      }
    }
  }
}

Updates: When targets.json changes (weekly/monthly)

timestamp.json - Proves repository freshness:

{
  "version": 1337,
  "expires": "2025-11-17T00:00:00Z",
  "meta": {
    "snapshot.json": {
      "version": 42,
      "hashes": {
        "sha256": "ghi789..."
      }
    }
  }
}

Updates: Frequently (hourly/daily) to prove liveness

Why This Structure?

Prevents Mix-and-Match:
The snapshot.json cryptographically binds all metadata together. You can't mix targets from version 10 with snapshot from version 41.

Prevents Freeze Attacks:
The timestamp.json expires quickly (e.g., 24 hours). If an attacker freezes updates, clients detect the expired timestamp.

Operational Flexibility:
You can update timestamp.json hourly (automated) without touching targets.json (manual, per release).

Client Verification Flow:

1. Download timestamp.json → Verify signature → Check not expired
2. Download snapshot.json (version from timestamp) → Verify signature
3. Download targets.json (version from snapshot) → Verify signature
4. Download actual file → Verify hash matches targets.json
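The hash-and-version binding in steps 2-5 can be sketched as follows. This is a simplified model, not the real client: signature and expiry checks are elided, and the toy chain is built bottom-up so the hashes are consistent:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def follow_chain(timestamp: dict, snapshot_bytes: bytes, targets_bytes: bytes,
                 target_name: str, target_bytes: bytes) -> None:
    """Walk timestamp -> snapshot -> targets -> file, checking the
    hash/version pins between each layer (signatures omitted)."""
    # Step 2 -> 3: timestamp pins the exact snapshot.json we must see.
    pin = timestamp["meta"]["snapshot.json"]
    if sha256(snapshot_bytes) != pin["hashes"]["sha256"]:
        raise ValueError("snapshot.json does not match timestamp.json")
    snapshot = json.loads(snapshot_bytes)
    if snapshot["version"] != pin["version"]:
        raise ValueError("snapshot version mismatch")
    # Step 3 -> 4: snapshot pins targets.json (prevents mix-and-match).
    pin = snapshot["meta"]["targets.json"]
    if sha256(targets_bytes) != pin["hashes"]["sha256"]:
        raise ValueError("targets.json does not match snapshot.json")
    targets = json.loads(targets_bytes)
    if targets["version"] != pin["version"]:
        raise ValueError("targets version mismatch")
    # Step 5: targets pins the actual file by length and hash.
    entry = targets["targets"][target_name]
    if len(target_bytes) != entry["length"] or \
            sha256(target_bytes) != entry["hashes"]["sha256"]:
        raise ValueError("target file does not match targets.json")

# Build a consistent toy chain bottom-up, then verify it top-down.
payload = b"release tarball bytes"
targets = {"version": 10, "targets": {"my-app-v2.0.0.tar.gz": {
    "length": len(payload), "hashes": {"sha256": sha256(payload)}}}}
targets_bytes = json.dumps(targets).encode()
snapshot = {"version": 42, "meta": {"targets.json": {
    "version": 10, "hashes": {"sha256": sha256(targets_bytes)}}}}
snapshot_bytes = json.dumps(snapshot).encode()
timestamp = {"version": 1337, "meta": {"snapshot.json": {
    "version": 42, "hashes": {"sha256": sha256(snapshot_bytes)}}}}
follow_chain(timestamp, snapshot_bytes, targets_bytes,
             "my-app-v2.0.0.tar.gz", payload)  # no exception: chain holds
```

Tamper with any single layer (the file, targets.json, or snapshot.json) and the pin above it breaks, which is exactly why mix-and-match fails.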

Better! But we still have one critical vulnerability...

Solution 5: Separate Signing Keys

We've been using one signing key for everything. If it's compromised, game over.

Key insight: Different metadata has different security requirements:

  • targets.json: High security (authorizes what users install)
  • snapshot.json: Medium security (can be semi-automated)
  • timestamp.json: Lower security (fully automated, updated hourly)

Solution: Use different keys for different roles!

Targets Key (offline, manual)
    ↓ signs
targets.json

Snapshot Key (online, secure server)
    ↓ signs
snapshot.json

Timestamp Key (online, automated)
    ↓ signs
timestamp.json

Now if the timestamp key is compromised:

  • Attacker can only create new timestamps
  • Cannot modify targets.json (needs targets key)
  • Cannot inject malicious software
  • Limited blast radius!

Still one problem remains...

Solution 6: The Root of Trust

How do clients know which keys to trust?

Problem 11: Key Distribution

  • How does the client get the public keys?
  • What if an attacker substitutes malicious public keys?
  • How do we rotate keys if they're compromised?

Solution: A root metadata file that establishes trust!

root.json - The root of trust:

{
  "version": 1,
  "expires": "2026-11-16T00:00:00Z",
  "keys": {
    "key-id-1": {
      "keytype": "ed25519",
      "scheme": "ed25519",
      "keyval": {"public": "abc123..."}
    },
    "key-id-2": {...},
    "key-id-3": {...},
    "key-id-4": {...}
  },
  "roles": {
    "root": {
      "keyids": ["key-id-1"],
      "threshold": 1
    },
    "targets": {
      "keyids": ["key-id-2"],
      "threshold": 1
    },
    "snapshot": {
      "keyids": ["key-id-3"],
      "threshold": 1
    },
    "timestamp": {
      "keyids": ["key-id-4"],
      "threshold": 1
    }
  }
}

The root.json file:

  • Lists all trusted public keys
  • Specifies which keys can sign which roles
  • Is signed by the root key itself
  • Expires very slowly (1 year+)

Root Key Management:

  • Kept offline (air-gapped computer, HSM, bank vault)
  • Used only to sign root.json
  • Can rotate other keys if compromised
  • Often uses threshold signatures (3 of 5 people must sign)
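Threshold verification itself is conceptually simple: count valid signatures from the role's authorized keys and compare against the threshold. A sketch, with the key IDs and the `verify` callback as toy stand-ins for a real signature check:

```python
def meets_threshold(role: dict, signatures: dict, verify) -> bool:
    """Check whether enough authorized keys produced valid signatures.

    `role` is a role entry from root.json ({"keyids": [...], "threshold": n});
    `signatures` maps keyid -> signature; `verify(keyid, sig)` is whatever
    signature-verification primitive you use (a placeholder here).
    """
    valid = 0
    for keyid in role["keyids"]:
        sig = signatures.get(keyid)
        if sig is not None and verify(keyid, sig):
            valid += 1
    return valid >= role["threshold"]

# Demo: a 3-of-5 root role where exactly three authorized keys signed.
role = {"keyids": ["k1", "k2", "k3", "k4", "k5"], "threshold": 3}
sigs = {"k1": "ok", "k3": "ok", "k5": "ok"}
print(meets_threshold(role, sigs, lambda kid, s: s == "ok"))  # True
```

Signatures from keys not listed in `keyids` are simply ignored, so a stolen non-root key cannot contribute toward the root threshold.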

Bootstrapping Trust:

# Client gets root.json through trusted channel:
# - Bundled with application
# - Downloaded over verified HTTPS
# - Verified with out-of-band hash

# After that, root.json can update itself!
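One common out-of-band bootstrap is to pin the hash of the initial root.json, for example in release notes or baked into the installer. A sketch (the file contents here are a placeholder):

```python
import hashlib

def load_trusted_root(path: str, pinned_sha256: str) -> bytes:
    """Load root.json only if it matches a hash obtained out-of-band."""
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() != pinned_sha256:
        raise ValueError("root.json does not match the pinned hash")
    return data

# Demo: write a stand-in root.json and pin its digest.
root_bytes = b'{"version": 1}'
with open("root.json", "wb") as f:
    f.write(root_bytes)
trusted = load_trusted_root("root.json",
                            hashlib.sha256(root_bytes).hexdigest())
```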

The Complete TUF Architecture: All Pieces Together

Now we've built The Update Framework step by step! Let's see how all the pieces work together:

The Four Key Roles

1. Root Key 🔐

  • Purpose: Root of trust, signs root.json
  • Security: Highest (offline, HSM, multi-signature)
  • Update frequency: Rarely (yearly or on key rotation)
  • If compromised: Entire system at risk (but can be threshold-signed)

2. Targets Key 🎯

  • Purpose: Authorizes which files users can download
  • Security: High (offline when not signing)
  • Update frequency: Per software release (weekly/monthly)
  • If compromised: Attacker can sign malicious packages, but still needs the snapshot and timestamp keys to deliver them; recovery requires the root key to rotate the targets key

3. Snapshot Key 📸

  • Purpose: Prevents mix-and-match attacks, binds metadata together
  • Security: Medium (online but secured)
  • Update frequency: When targets.json changes (weekly/monthly)
  • If compromised: Can't inject malware, but can create confusion (limited damage)

4. Timestamp Key ⏰

  • Purpose: Proves freshness, prevents freeze attacks
  • Security: Lower (online, automated)
  • Update frequency: Very frequent (hourly/daily)
  • If compromised: Minimal damage, easy to rotate

The Complete Client Update Process

Here's how a client securely downloads software using TUF:

1. [Bootstrap Trust]
   Client has trusted root.json (bundled or verified out-of-band)

2. [Check Freshness]
   Download timestamp.json
   → Verify signature using timestamp key from root.json
   → Check expiration (e.g., < 24 hours old)
   → Extract snapshot.json version and hash

3. [Get Consistent Snapshot]
   Download snapshot.json
   → Verify version matches timestamp.json
   → Verify hash matches timestamp.json
   → Verify signature using snapshot key from root.json
   → Extract targets.json version and hash

4. [Get File List]
   Download targets.json
   → Verify version matches snapshot.json
   → Verify hash matches snapshot.json  
   → Verify signature using targets key from root.json
   → Extract target file hash and length

5. [Download Actual File]
   Download my-app-v2.0.0.tar.gz
   → Verify hash matches targets.json
   → Verify length matches targets.json

6. [Success!]
   File is verified authentic and current

Attack Prevention Matrix

Let's see how TUF prevents all the attacks we identified:

  • Man-in-the-Middle: all metadata is signed and file hashes are verified
  • Compromised Server: an attacker needs signing keys, not just server access
  • Corrupted Download: hash verification catches corruption
  • Rollback Attack: version numbers in metadata prevent downgrades
  • Freeze Attack: timestamp expiration detects a frozen repository
  • Mix-and-Match: snapshot binds all metadata versions together
  • Arbitrary Package: only files listed in the signed targets.json are accepted
  • Key Compromise: the multi-key system limits the blast radius
  • Metadata Substitution: the root of trust is established via root.json

Why This Architecture is Brilliant

Separation of Concerns:
Each metadata file has a single, well-defined purpose. This makes the system easier to understand, implement, and secure.

Defense in Depth:
Multiple layers of protection. Compromising one component doesn't break the entire system.

Operational Flexibility:

  • Can automate timestamp updates (low risk)
  • Manually control targets (high risk)
  • Balance security with convenience

Compromise Resilience:
Even if some keys are stolen, the damage is contained. The root key can revoke compromised keys and establish new trust.

Proven in Production:
This isn't theoretical - Docker (via Notary), PyPI (PEP 458), automotive over-the-air updates (via Uptane), and many others build on TUF.

Metadata Organization: The File Structure

Now that we understand why TUF needs four metadata files, let's look at how they're organized:

repository/
├── metadata/
│   ├── root.json          # Root of trust (rarely changes)
│   ├── targets.json       # Lists valid software files
│   ├── snapshot.json      # Binds metadata together
│   └── timestamp.json     # Proves freshness
└── targets/
    ├── my-app-v1.0.0.tar.gz
    ├── my-app-v2.0.0.tar.gz
    └── my-app-v3.0.0.tar.gz

Why Four Files Instead of One?

Different update frequencies:

  • root.json: Yearly or when rotating keys
  • targets.json: Per software release (weekly/monthly)
  • snapshot.json: When targets.json changes
  • timestamp.json: Hourly to prove freshness

Different security requirements:

  • root.json: Maximum security (offline, multi-sig)
  • targets.json: High security (offline when not signing)
  • snapshot.json: Medium security (can be online)
  • timestamp.json: Lower security (automated)

Defense in depth:
If one key is compromised, the damage is limited by the other keys.

Understanding Metadata Expiration

Each metadata file has an expiration date. This is crucial for security:

{
  "expires": "2025-11-17T23:59:59Z",
  ...
}

Why expiration matters:

For timestamp.json (short expiration: 1 day):

Scenario: Attacker performs freeze attack
- Stops serving new updates
- Client downloads old timestamp.json
- Client checks: "Expires 2025-11-15" but today is 2025-11-17
- Client rejects: "This repository is frozen or compromised!"

For targets.json (longer expiration: 3 months):

Scenario: Normal operation
- Developer is on vacation
- No new releases for 2 months
- targets.json doesn't expire (valid for 3 months)
- timestamp.json still updates hourly to prove liveness

The key insight: Frequent metadata updates (timestamp) prove liveness, while infrequent metadata (targets) has longer expiration to reduce operational burden.
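Checking expiration is a one-liner once the timestamp is parsed. A sketch using the ISO 8601 format shown above (the dates are the ones from the freeze-attack scenario):

```python
from datetime import datetime, timezone

def is_expired(metadata, now=None):
    """True if the metadata's 'expires' timestamp (UTC) is in the past."""
    expires = datetime.strptime(
        metadata["expires"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return now > expires

# A frozen repository keeps serving the same old timestamp.json; once its
# short expiry passes, every client notices.
frozen = {"expires": "2025-11-15T00:00:00Z"}
today = datetime(2025, 11, 17, tzinfo=timezone.utc)
print(is_expired(frozen, now=today))  # True: repository frozen or compromised
```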

From Theory to Practice: Let's Build It!

We've built up the TUF architecture step by step, discovering why each component is necessary. Now let's implement it using Python and S3 storage.

What we'll build:

  1. A complete TUF repository with all four key types
  2. Scripts to add software packages to the repository
  3. S3 infrastructure with proper IAM security
  4. A client that safely downloads and verifies software
  5. Automated timestamp updates for production use

By the end of this implementation, you'll have a working TUF repository that provides all the security guarantees we've discussed.

Setting Up TUF with Python and S3: A Complete Example

Now let's see how to implement TUF in practice. We'll build a complete example that:

  1. Creates a TUF repository
  2. Generates all four key types
  3. Stores repository metadata in Amazon S3
  4. Adds software packages
  5. Implements a client that securely downloads updates

Prerequisites

# Note: the repository-side scripts below use the legacy repository_tool API,
# which was removed in python-tuf 1.0. Pin an older release that ships both
# repository_tool and the ngclient used by the client script:
pip install "tuf<1.0" securesystemslib boto3

Part 1: AWS S3 Setup

First, let's set up the S3 infrastructure with proper IAM policies.

Step 1: Create an S3 Bucket

aws s3 mb s3://my-tuf-repository --region us-east-1

Step 2: Create IAM Policy for TUF Repository Management

Save this as tuf-repo-manager-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TUFRepositoryWrite",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-tuf-repository",
        "arn:aws:s3:::my-tuf-repository/*"
      ]
    }
  ]
}

Create the policy:

aws iam create-policy \
  --policy-name TUFRepositoryManager \
  --policy-document file://tuf-repo-manager-policy.json

Step 3: Create IAM Policy for TUF Clients (Read-Only)

Save this as tuf-client-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TUFRepositoryRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-tuf-repository",
        "arn:aws:s3:::my-tuf-repository/*"
      ]
    }
  ]
}

Create the policy:

aws iam create-policy \
  --policy-name TUFRepositoryClient \
  --policy-document file://tuf-client-policy.json

Step 4: Create IAM Users and Attach Policies

# Create repository manager user
aws iam create-user --user-name tuf-repo-manager
aws iam attach-user-policy \
  --user-name tuf-repo-manager \
  --policy-arn arn:aws:iam::YOUR_ACCOUNT_ID:policy/TUFRepositoryManager

# Create access keys for the manager
aws iam create-access-key --user-name tuf-repo-manager

# Create client user
aws iam create-user --user-name tuf-client
aws iam attach-user-policy \
  --user-name tuf-client \
  --policy-arn arn:aws:iam::YOUR_ACCOUNT_ID:policy/TUFRepositoryClient

# Create access keys for the client
aws iam create-access-key --user-name tuf-client

Step 5: Configure S3 Bucket Policy for Public Read Access (Optional)

If you want clients to access the repository without AWS credentials, add this bucket policy (note that S3 Block Public Access, enabled by default on new buckets, must be relaxed first):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-tuf-repository/*"
    }
  ]
}

Apply it:

aws s3api put-bucket-policy \
  --bucket my-tuf-repository \
  --policy file://bucket-policy.json

Part 2: Repository Creation and Key Generation

Now let's create the TUF repository structure and generate all four key types.

create_repository.py

#!/usr/bin/env python3
"""
Create a TUF repository with all metadata and keys.
This script should be run in a secure environment.
"""

import shutil
from pathlib import Path
from datetime import datetime, timedelta
from tuf.repository_tool import (
    create_new_repository,
    import_ed25519_privatekey_from_file,
    import_ed25519_publickey_from_file
)
from securesystemslib.interface import (
    generate_and_write_ed25519_keypair,
)

# Configuration
REPO_DIR = Path("./tuf-repository")
KEYSTORE_DIR = Path("./keystore")
METADATA_DIR = REPO_DIR / "metadata"
TARGETS_DIR = REPO_DIR / "targets"

# Key passwords - IN PRODUCTION, USE SECURE KEY MANAGEMENT!
# Consider using environment variables, HSM, or key management services
ROOT_KEY_PASSWORD = "root-password-change-me"
TARGETS_KEY_PASSWORD = "targets-password-change-me"
SNAPSHOT_KEY_PASSWORD = "snapshot-password-change-me"
TIMESTAMP_KEY_PASSWORD = "timestamp-password-change-me"


def setup_directories():
    """Create necessary directory structure."""
    print("[*] Setting up directory structure...")
    
    # Clean up if exists
    if REPO_DIR.exists():
        shutil.rmtree(REPO_DIR)
    if KEYSTORE_DIR.exists():
        shutil.rmtree(KEYSTORE_DIR)
    
    # Create directories
    REPO_DIR.mkdir(parents=True)
    KEYSTORE_DIR.mkdir(parents=True)
    METADATA_DIR.mkdir(parents=True)
    TARGETS_DIR.mkdir(parents=True)
    
    print(f"[+] Created directories at {REPO_DIR}")


def generate_keys():
    """Generate all four types of TUF keys."""
    print("\n[*] Generating TUF keys...")
    
    keys = {}
    
    # Generate Root key (most secure, kept offline)
    print("  [*] Generating ROOT key (keep this VERY secure, offline)...")
    # Pass arguments by keyword: the positional order of this securesystemslib
    # helper changed across versions.
    generate_and_write_ed25519_keypair(
        filepath=str(KEYSTORE_DIR / "root_key"),
        password=ROOT_KEY_PASSWORD
    )
    keys['root'] = str(KEYSTORE_DIR / "root_key")
    print("  [+] Root key generated")
    
    # Generate Targets key (high security, offline when not signing)
    print("  [*] Generating TARGETS key (high security, use for signing releases)...")
    generate_and_write_ed25519_keypair(
        filepath=str(KEYSTORE_DIR / "targets_key"),
        password=TARGETS_KEY_PASSWORD
    )
    keys['targets'] = str(KEYSTORE_DIR / "targets_key")
    print("  [+] Targets key generated")
    
    # Generate Snapshot key (medium security, can be online)
    print("  [*] Generating SNAPSHOT key (can be online in secure environment)...")
    generate_and_write_ed25519_keypair(
        filepath=str(KEYSTORE_DIR / "snapshot_key"),
        password=SNAPSHOT_KEY_PASSWORD
    )
    keys['snapshot'] = str(KEYSTORE_DIR / "snapshot_key")
    print("  [+] Snapshot key generated")
    
    # Generate Timestamp key (lower security, automated)
    print("  [*] Generating TIMESTAMP key (can be automated)...")
    generate_and_write_ed25519_keypair(
        filepath=str(KEYSTORE_DIR / "timestamp_key"),
        password=TIMESTAMP_KEY_PASSWORD
    )
    keys['timestamp'] = str(KEYSTORE_DIR / "timestamp_key")
    print("  [+] Timestamp key generated")
    
    print("\n[!] IMPORTANT: In production, store these keys securely:")
    print("    - Root key: HSM or offline vault, multi-signature recommended")
    print("    - Targets key: Encrypted offline storage")
    print("    - Snapshot key: Online secure server")
    print("    - Timestamp key: Automated system with rotation")
    
    return keys


def create_metadata(keys):
    """Create and sign TUF metadata files."""
    print("\n[*] Creating TUF repository and metadata...")
    
    # Create a new repository
    repository = create_new_repository(str(REPO_DIR))
    
    # Import private keys
    print("  [*] Loading private keys...")
    root_private = import_ed25519_privatekey_from_file(
        keys['root'],
        password=ROOT_KEY_PASSWORD
    )
    targets_private = import_ed25519_privatekey_from_file(
        keys['targets'],
        password=TARGETS_KEY_PASSWORD
    )
    snapshot_private = import_ed25519_privatekey_from_file(
        keys['snapshot'],
        password=SNAPSHOT_KEY_PASSWORD
    )
    timestamp_private = import_ed25519_privatekey_from_file(
        keys['timestamp'],
        password=TIMESTAMP_KEY_PASSWORD
    )
    
    # Import public keys
    root_public = import_ed25519_publickey_from_file(keys['root'] + '.pub')
    targets_public = import_ed25519_publickey_from_file(keys['targets'] + '.pub')
    snapshot_public = import_ed25519_publickey_from_file(keys['snapshot'] + '.pub')
    timestamp_public = import_ed25519_publickey_from_file(keys['timestamp'] + '.pub')
    
    # Set expiration dates (TUF interprets expirations as UTC, so use utcnow)
    # Root: 1 year (changed infrequently)
    # Targets: 3 months (changed per release cycle)
    # Snapshot: 1 week (changed frequently)
    # Timestamp: 1 day (changed very frequently)
    one_year = datetime.utcnow() + timedelta(days=365)
    three_months = datetime.utcnow() + timedelta(days=90)
    one_week = datetime.utcnow() + timedelta(days=7)
    one_day = datetime.utcnow() + timedelta(days=1)
    
    # Configure Root role
    print("  [*] Configuring ROOT role...")
    repository.root.add_verification_key(root_public)
    repository.root.load_signing_key(root_private)
    repository.root.expiration = one_year
    repository.root.threshold = 1  # In production, consider threshold signatures
    
    # Configure Targets role
    print("  [*] Configuring TARGETS role...")
    repository.targets.add_verification_key(targets_public)
    repository.targets.load_signing_key(targets_private)
    repository.targets.expiration = three_months
    
    # Configure Snapshot role
    print("  [*] Configuring SNAPSHOT role...")
    repository.snapshot.add_verification_key(snapshot_public)
    repository.snapshot.load_signing_key(snapshot_private)
    repository.snapshot.expiration = one_week
    
    # Configure Timestamp role
    print("  [*] Configuring TIMESTAMP role...")
    repository.timestamp.add_verification_key(timestamp_public)
    repository.timestamp.load_signing_key(timestamp_private)
    repository.timestamp.expiration = one_day
    
    # Write all metadata
    print("  [*] Writing metadata files...")
    repository.writeall()
    
    print("[+] TUF repository created successfully!")
    print(f"[+] Metadata location: {METADATA_DIR}")
    print(f"[+] Targets location: {TARGETS_DIR}")
    
    return repository


def main():
    """Main function to create TUF repository."""
    print("=" * 70)
    print("TUF Repository Creation Tool")
    print("=" * 70)
    
    # Step 1: Setup directories
    setup_directories()
    
    # Step 2: Generate keys
    keys = generate_keys()
    
    # Step 3: Create metadata
    repository = create_metadata(keys)
    
    print("\n" + "=" * 70)
    print("Repository creation complete!")
    print("=" * 70)
    print("\nNext steps:")
    print("1. Securely backup your keys in ./keystore/")
    print("2. Add target files to ./tuf-repository/targets/")
    print("3. Run add_targets.py to sign and publish targets")
    print("4. Upload to S3 using upload_to_s3.py")
    print("\n[!] Remember: Keep root and targets keys OFFLINE and SECURE!")


if __name__ == "__main__":
    main()

Part 3: Adding Targets to the Repository

add_targets.py

#!/usr/bin/env python3
"""
Add target files to the TUF repository and update metadata.
Run this whenever you want to publish new software versions.
"""

import shutil
from pathlib import Path
from datetime import datetime, timedelta
from tuf.repository_tool import load_repository
from securesystemslib.interface import import_ed25519_privatekey_from_file

# Configuration
REPO_DIR = Path("./tuf-repository")
KEYSTORE_DIR = Path("./keystore")
TARGETS_DIR = REPO_DIR / "targets"

# Key passwords
TARGETS_KEY_PASSWORD = "targets-password-change-me"
SNAPSHOT_KEY_PASSWORD = "snapshot-password-change-me"
TIMESTAMP_KEY_PASSWORD = "timestamp-password-change-me"


def add_target_files(repository, target_files):
    """Add target files to the repository and update metadata."""
    print("\n[*] Adding target files to repository...")
    
    # Copy files to targets directory and add to metadata
    for source_file in target_files:
        source_path = Path(source_file)
        if not source_path.exists():
            print(f"[!] Warning: {source_file} not found, skipping...")
            continue
        
        # Copy to targets directory
        dest_path = TARGETS_DIR / source_path.name
        shutil.copy2(source_path, dest_path)
        print(f"  [+] Copied {source_path.name} to targets/")
        
        # Add to targets metadata (recent repository_tool versions expect
        # paths relative to the targets/ directory)
        repository.targets.add_target(source_path.name)
        print(f"  [+] Added {source_path.name} to targets metadata")
    
    print("[+] All target files added")


def update_metadata(repository):
    """Sign and update all metadata files."""
    print("\n[*] Updating and signing metadata...")
    
    # Load signing keys
    print("  [*] Loading signing keys...")
    targets_private = import_ed25519_privatekey_from_file(
        str(KEYSTORE_DIR / "targets_key"),
        password=TARGETS_KEY_PASSWORD
    )
    snapshot_private = import_ed25519_privatekey_from_file(
        str(KEYSTORE_DIR / "snapshot_key"),
        password=SNAPSHOT_KEY_PASSWORD
    )
    timestamp_private = import_ed25519_privatekey_from_file(
        str(KEYSTORE_DIR / "timestamp_key"),
        password=TIMESTAMP_KEY_PASSWORD
    )
    
    # Load keys into repository
    repository.targets.load_signing_key(targets_private)
    repository.snapshot.load_signing_key(snapshot_private)
    repository.timestamp.load_signing_key(timestamp_private)
    
    # Update expirations (TUF interprets expirations as UTC, so use utcnow)
    three_months = datetime.utcnow() + timedelta(days=90)
    one_week = datetime.utcnow() + timedelta(days=7)
    one_day = datetime.utcnow() + timedelta(days=1)
    
    repository.targets.expiration = three_months
    repository.snapshot.expiration = one_week
    repository.timestamp.expiration = one_day
    
    # Write updated metadata (this signs with the loaded keys)
    print("  [*] Writing and signing metadata files...")
    repository.writeall()
    
    print("[+] Metadata updated and signed successfully!")


def main():
    """Main function to add targets."""
    print("=" * 70)
    print("TUF Repository - Add Targets")
    print("=" * 70)
    
    # Example: Create a sample target file
    print("\n[*] Creating sample target files...")
    sample_file = Path("./sample-app-v1.0.0.tar.gz")
    with open(sample_file, 'w') as f:
        f.write("This is a sample application package v1.0.0\n")
        f.write("In production, this would be your actual software release.\n")
    print(f"[+] Created sample file: {sample_file}")
    
    # Load existing repository
    print("\n[*] Loading TUF repository...")
    repository = load_repository(str(REPO_DIR))
    print("[+] Repository loaded")
    
    # Add target files
    target_files = [
        str(sample_file),
        # Add more files here as needed
        # "./my-app-v2.0.0.tar.gz",
        # "./my-app-v2.0.0.tar.gz.sig",
    ]
    
    add_target_files(repository, target_files)
    
    # Update and sign metadata
    update_metadata(repository)
    
    print("\n" + "=" * 70)
    print("Target addition complete!")
    print("=" * 70)
    print("\nNext step: Upload to S3 using upload_to_s3.py")


if __name__ == "__main__":
    main()

Part 4: Uploading to S3

upload_to_s3.py

#!/usr/bin/env python3
"""
Upload TUF repository to Amazon S3.
Run this after creating or updating the repository.
"""

import boto3
from pathlib import Path
from botocore.exceptions import ClientError

# Configuration
REPO_DIR = Path("./tuf-repository")
BUCKET_NAME = "my-tuf-repository"
AWS_REGION = "us-east-1"

# Set these from environment variables or AWS credentials file
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be set


def upload_directory_to_s3(local_directory, bucket_name, s3_prefix=''):
    """Upload a directory to S3 bucket."""
    s3_client = boto3.client('s3', region_name=AWS_REGION)
    
    local_path = Path(local_directory)
    
    for file_path in local_path.rglob('*'):
        if file_path.is_file():
            # Calculate relative path
            relative_path = file_path.relative_to(local_path)
            s3_key = str(Path(s3_prefix) / relative_path)
            
            # Determine content type
            content_type = 'application/octet-stream'
            if file_path.suffix == '.json':
                content_type = 'application/json'
            
            try:
                print(f"  [*] Uploading {relative_path} to s3://{bucket_name}/{s3_key}")
                s3_client.upload_file(
                    str(file_path),
                    bucket_name,
                    s3_key,
                    ExtraArgs={'ContentType': content_type}
                )
                print(f"  [+] Uploaded {relative_path}")
            except ClientError as e:
                print(f"  [!] Error uploading {relative_path}: {e}")
                raise


def verify_bucket_exists(bucket_name):
    """Verify that the S3 bucket exists and is accessible."""
    s3_client = boto3.client('s3', region_name=AWS_REGION)
    
    try:
        s3_client.head_bucket(Bucket=bucket_name)
        print(f"[+] Bucket '{bucket_name}' is accessible")
        return True
    except ClientError as e:
        error_code = e.response['Error']['Code']
        if error_code == '404':
            print(f"[!] Bucket '{bucket_name}' does not exist")
        elif error_code == '403':
            print(f"[!] Access denied to bucket '{bucket_name}'")
        else:
            print(f"[!] Error accessing bucket: {e}")
        return False


def main():
    """Main function to upload repository to S3."""
    print("=" * 70)
    print("TUF Repository - Upload to S3")
    print("=" * 70)
    
    # Verify repository exists
    if not REPO_DIR.exists():
        print(f"[!] Error: Repository directory not found at {REPO_DIR}")
        print("    Run create_repository.py first!")
        return
    
    # Verify bucket exists
    print(f"\n[*] Verifying S3 bucket '{BUCKET_NAME}'...")
    if not verify_bucket_exists(BUCKET_NAME):
        print("\n[!] Please create the bucket first:")
        print(f"    aws s3 mb s3://{BUCKET_NAME} --region {AWS_REGION}")
        return
    
    # Upload metadata
    print("\n[*] Uploading metadata...")
    metadata_dir = REPO_DIR / "metadata"
    upload_directory_to_s3(metadata_dir, BUCKET_NAME, 'metadata')
    print("[+] Metadata uploaded")
    
    # Upload targets
    print("\n[*] Uploading targets...")
    targets_dir = REPO_DIR / "targets"
    upload_directory_to_s3(targets_dir, BUCKET_NAME, 'targets')
    print("[+] Targets uploaded")
    
    print("\n" + "=" * 70)
    print("Upload complete!")
    print("=" * 70)
    print(f"\nRepository URL: https://{BUCKET_NAME}.s3.{AWS_REGION}.amazonaws.com/")
    print("\nTUF clients can now download and verify files from this repository!")


if __name__ == "__main__":
    main()

Part 5: Client Implementation

tuf_client.py

#!/usr/bin/env python3
"""
TUF client to securely download and verify targets from S3 repository.
This demonstrates how clients use TUF to ensure update security.
"""

import shutil
from pathlib import Path
from tuf.ngclient import Updater
from tuf.ngclient.config import UpdaterConfig

# Configuration
REPO_URL = "https://my-tuf-repository.s3.us-east-1.amazonaws.com/"
CLIENT_DIR = Path("./tuf-client")
METADATA_DIR = CLIENT_DIR / "metadata"
DOWNLOAD_DIR = CLIENT_DIR / "downloads"


def setup_client():
    """Set up client directory structure."""
    print("[*] Setting up client environment...")
    
    # Clean up if exists
    if CLIENT_DIR.exists():
        shutil.rmtree(CLIENT_DIR)
    
    # Create directories
    CLIENT_DIR.mkdir(parents=True)
    METADATA_DIR.mkdir(parents=True)
    (METADATA_DIR / "current").mkdir()
    (METADATA_DIR / "previous").mkdir()
    DOWNLOAD_DIR.mkdir(parents=True)
    
    print(f"[+] Client directory created at {CLIENT_DIR}")


def bootstrap_trust(root_metadata_path):
    """
    Bootstrap trust with initial root metadata.
    In production, this would come from a trusted source (bundled with app,
    downloaded over HTTPS and verified, etc.)
    """
    print("\n[*] Bootstrapping trust with root metadata...")
    
    # Copy trusted root metadata to client
    dest_path = METADATA_DIR / "current" / "root.json"
    shutil.copy2(root_metadata_path, dest_path)
    
    print("[+] Trust bootstrapped with root.json")
    print("[!] In production, verify root.json authenticity through:")
    print("    - Bundle with application")
    print("    - Download over verified HTTPS")
    print("    - Verify with out-of-band hash/signature")


def download_target(target_name):
    """
    Download and verify a target file using TUF.
    This is where TUF's security guarantees are enforced.
    """
    print(f"\n[*] Downloading target: {target_name}")
    print("[*] TUF will verify:")
    print("    ✓ Metadata signatures")
    print("    ✓ Metadata expiration")
    print("    ✓ Metadata version (no rollback)")
    print("    ✓ Target file hash")
    print("    ✓ Consistent repository snapshot")
    
    try:
        # Create updater configuration
        config = UpdaterConfig(
            max_root_rotations=32,
            max_delegations=8,
            root_max_length=512000,  # 500 KB
            timestamp_max_length=16384,  # 16 KB
            snapshot_max_length=2000000,  # 2 MB
            targets_max_length=5000000,  # 5 MB
        )
        
        # Initialize updater
        updater = Updater(
            metadata_dir=str(METADATA_DIR / "current"),
            metadata_base_url=REPO_URL + "metadata/",
            target_base_url=REPO_URL + "targets/",
            target_dir=str(DOWNLOAD_DIR),
            config=config
        )
        
        # Refresh metadata (downloads and verifies all metadata)
        print("\n  [*] Refreshing metadata...")
        updater.refresh()
        print("  [+] Metadata refreshed and verified")
        
        # Get target info
        print(f"\n  [*] Looking up target info for '{target_name}'...")
        target_info = updater.get_targetinfo(target_name)
        
        if target_info is None:
            print(f"  [!] Target '{target_name}' not found in repository")
            return None
        
        print(f"  [+] Target found:")
        print(f"      Length: {target_info.length} bytes")
        print(f"      Hashes: {target_info.hashes}")
        
        # Download target file
        print(f"\n  [*] Downloading and verifying '{target_name}'...")
        target_path = updater.download_target(target_info)
        print(f"  [+] Download complete and verified!")
        print(f"  [+] Saved to: {target_path}")
        
        return target_path
        
    except Exception as e:
        print(f"  [!] Error during download: {e}")
        print("  [!] This could indicate:")
        print("      - Metadata signature verification failed")
        print("      - Metadata has expired")
        print("      - Target hash doesn't match")
        print("      - Rollback attack detected")
        print("      - Repository is inconsistent")
        return None


def main():
    """Main function for TUF client."""
    print("=" * 70)
    print("TUF Client - Secure Download Example")
    print("=" * 70)
    
    # Setup client environment
    setup_client()
    
    # Bootstrap trust with root metadata
    # In this example, we copy from our local repository
    # In production, this would come from a trusted source
    root_metadata = "./tuf-repository/metadata/root.json"
    
    if not Path(root_metadata).exists():
        print(f"\n[!] Error: Root metadata not found at {root_metadata}")
        print("    Run create_repository.py and upload_to_s3.py first!")
        return
    
    bootstrap_trust(root_metadata)
    
    # Download a target
    target_name = "sample-app-v1.0.0.tar.gz"
    result = download_target(target_name)
    
    if result:
        print("\n" + "=" * 70)
        print("Success! Target downloaded and verified securely.")
        print("=" * 70)
        print("\nTUF protected you against:")
        print("  ✓ Arbitrary package attacks")
        print("  ✓ Rollback attacks")
        print("  ✓ Freeze attacks")
        print("  ✓ Mix-and-match attacks")
        print("  ✓ Compromised mirrors")
        print("\nYou can safely use this file!")
    else:
        print("\n" + "=" * 70)
        print("Download failed or was rejected by TUF.")
        print("=" * 70)
        print("\nDO NOT use files that fail TUF verification!")


if __name__ == "__main__":
    main()
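
The bootstrap step above simply copies root.json and trusts it. One of the out-of-band options it mentions, verifying a pinned hash, can be sketched as follows. The pinned digest below is a placeholder (it is the SHA-256 of the literal bytes b"test"); in practice you would embed the digest of your real root.json in the application at build time:

```python
import hashlib
from pathlib import Path

# Placeholder digest (SHA-256 of b"test"); replace with your real root.json digest
PINNED_ROOT_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"


def verify_pinned_root(path, expected_sha256):
    """Raise if the on-disk root.json does not match the pinned digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"root.json digest mismatch: got {digest}")
```

A check like this would run just before the `shutil.copy2` call in `bootstrap_trust`, so a tampered root.json is rejected before it ever seeds the client's trust store.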

Part 6: Automated Timestamp Updates

For production use, you'll want to automate timestamp updates. Here's a script that can run on a schedule (e.g., via cron or AWS Lambda):

update_timestamp.py

#!/usr/bin/env python3
"""
Automated timestamp update script.
Run this periodically (e.g., every hour) to keep the repository fresh.
"""

import boto3
from pathlib import Path
from datetime import datetime, timedelta, timezone
from tuf.api.metadata import Metadata, Timestamp
from securesystemslib.interface import import_ed25519_privatekey_from_file
from securesystemslib.signer import SSlibSigner

# Configuration
REPO_DIR = Path("./tuf-repository")
KEYSTORE_DIR = Path("./keystore")
BUCKET_NAME = "my-tuf-repository"
AWS_REGION = "us-east-1"
TIMESTAMP_KEY_PASSWORD = "timestamp-password-change-me"


def update_timestamp():
    """Bump the version, extend the expiration, and re-sign timestamp metadata."""
    print("[*] Updating timestamp metadata...")
    
    # Load the current timestamp metadata from disk
    timestamp_path = REPO_DIR / "metadata" / "timestamp.json"
    timestamp = Metadata[Timestamp].from_file(str(timestamp_path))
    
    # Bump the version and update expiration (1 day from now)
    timestamp.signed.version += 1
    timestamp.signed.expires = datetime.now(timezone.utc) + timedelta(days=1)
    
    # Load the timestamp key and re-sign (sign() replaces the old signature)
    timestamp_private = import_ed25519_privatekey_from_file(
        str(KEYSTORE_DIR / "timestamp_key"),
        password=TIMESTAMP_KEY_PASSWORD
    )
    timestamp.sign(SSlibSigner(timestamp_private))
    
    # Write only the timestamp metadata back to disk
    timestamp.to_file(str(timestamp_path))
    
    print("[+] Timestamp metadata updated")


def upload_timestamp_to_s3():
    """Upload updated timestamp metadata to S3."""
    print("[*] Uploading timestamp to S3...")
    
    s3_client = boto3.client('s3', region_name=AWS_REGION)
    
    timestamp_file = REPO_DIR / "metadata" / "timestamp.json"
    s3_key = "metadata/timestamp.json"
    
    s3_client.upload_file(
        str(timestamp_file),
        BUCKET_NAME,
        s3_key,
        ExtraArgs={'ContentType': 'application/json'}
    )
    
    print(f"[+] Uploaded to s3://{BUCKET_NAME}/{s3_key}")


def main():
    """Main function for timestamp update."""
    print("=" * 70)
    print(f"TUF Timestamp Update - {datetime.now().isoformat()}")
    print("=" * 70)
    
    # Update and re-sign timestamp metadata
    update_timestamp()
    
    # Upload to S3
    upload_timestamp_to_s3()
    
    print("\n[+] Timestamp update complete!")
    print("    This proves the repository is fresh and not frozen.")


if __name__ == "__main__":
    main()
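
When scheduling this script, leave a safety margin: with a one-day expiration and an hourly run, roughly 23 consecutive failed runs can pass before clients start seeing expired metadata. A small helper like the following (hypothetical, not part of the script above) makes that margin explicit, so a run can decide whether a re-sign is actually due:

```python
from datetime import datetime, timedelta, timezone

def needs_resign(expires, margin=timedelta(hours=6), now=None):
    """Return True when `expires` is within `margin` of the current time."""
    now = now or datetime.now(timezone.utc)
    return expires - now <= margin
```

A monitoring job could call this with the parsed `expires` value from timestamp.json and page an operator if the margin is breached while the cron job is failing silently.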

Running the Complete Example

Here's how to use all these scripts together:

# 1. Install dependencies
pip install tuf securesystemslib boto3

# 2. Set up AWS credentials
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# 3. Create S3 bucket and IAM policies (run AWS CLI commands from Part 1)

# 4. Create TUF repository and generate keys
python create_repository.py

# 5. Add target files to the repository
python add_targets.py

# 6. Upload repository to S3
python upload_to_s3.py

# 7. Test client download
python tuf_client.py

# 8. Set up automated timestamp updates (cron example)
# Add to crontab: 0 * * * * /path/to/update_timestamp.py

Expected Output

When you run the client, you should see output like:

======================================================================
TUF Client - Secure Download Example
======================================================================
[*] Setting up client environment...
[+] Client directory created at ./tuf-client

[*] Bootstrapping trust with root metadata...
[+] Trust bootstrapped with root.json
[!] In production, verify root.json authenticity through:
    - Bundle with application
    - Download over verified HTTPS
    - Verify with out-of-band hash/signature

[*] Downloading target: sample-app-v1.0.0.tar.gz
[*] TUF will verify:
    ✓ Metadata signatures
    ✓ Metadata expiration
    ✓ Metadata version (no rollback)
    ✓ Target file hash
    ✓ Consistent repository snapshot

  [*] Refreshing metadata...
  [+] Metadata refreshed and verified

  [*] Looking up target info for 'sample-app-v1.0.0.tar.gz'...
  [+] Target found:
      Length: 123 bytes
      Hashes: {'sha256': 'abc123...'}

  [*] Downloading and verifying 'sample-app-v1.0.0.tar.gz'...
  [+] Download complete and verified!
  [+] Saved to: ./tuf-client/downloads/sample-app-v1.0.0.tar.gz

======================================================================
Success! Target downloaded and verified securely.
======================================================================

TUF protected you against:
  ✓ Arbitrary package attacks
  ✓ Rollback attacks
  ✓ Freeze attacks
  ✓ Mix-and-match attacks
  ✓ Compromised mirrors

You can safely use this file!

Real-World Deployment Considerations

Key Management Best Practices

  1. Root Key:

    • Store on a hardware token (e.g., YubiKey) or in a hardware security module (HSM)
    • Use threshold signatures (e.g., 3 of 5 keyholders must sign)
    • Keep in a physically secure location
    • Document emergency rotation procedures
  2. Targets Key:

    • Store encrypted on an offline machine
    • Only load when signing new releases
    • Require two-person rule for access
    • Rotate annually or after suspected compromise
  3. Snapshot Key:

    • Can be online but in a hardened, monitored environment
    • Use AWS KMS, HashiCorp Vault, or similar
    • Rotate every 3-6 months
    • Enable audit logging
  4. Timestamp Key:

    • Fully automated in CI/CD
    • Stored in secrets management (AWS Secrets Manager, etc.)
    • Rotate monthly
    • Monitor for unusual signing patterns
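
The threshold idea in point 1 is simple to state: a role's metadata is valid only if at least `threshold` distinct trusted keys have signed it. A minimal sketch of that counting rule, independent of the TUF library (key IDs here are made up):

```python
def meets_threshold(signature_keyids, trusted_keyids, threshold):
    """Count distinct trusted keys among the signatures on a piece of metadata."""
    valid = set(signature_keyids) & set(trusted_keyids)
    return len(valid) >= threshold

# 3-of-5 example: only two distinct trusted keys signed, so verification fails
trusted = {"k1", "k2", "k3", "k4", "k5"}
print(meets_threshold(["k1", "k2", "k2"], trusted, threshold=3))  # False
```

Note the deduplication: signing twice with the same key counts once, which is exactly why a single compromised keyholder cannot satisfy a 3-of-5 policy.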

S3 Security Hardening

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceSSLOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-tuf-repository",
        "arn:aws:s3:::my-tuf-repository/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    },
    {
      "Sid": "EnableVersioning",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::YOUR_ACCOUNT:user/tuf-repo-manager"
      },
      "Action": [
        "s3:PutBucketVersioning"
      ],
      "Resource": "arn:aws:s3:::my-tuf-repository"
    }
  ]
}

Enable S3 versioning for recovery:

aws s3api put-bucket-versioning \
  --bucket my-tuf-repository \
  --versioning-configuration Status=Enabled

Monitoring and Alerting

Set up CloudWatch alerts for:

  • Unusual metadata update patterns
  • Failed signature verifications
  • Expired metadata not being updated
  • Unexpected key usage
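
The "expired metadata" alert in particular is cheap to implement: every TUF metadata file carries a signed `expires` field, so a monitor only needs to fetch timestamp.json and compare against the clock. A sketch of the check (the metadata shape follows the TUF specification; the fetching and alerting wiring are left out):

```python
import json
from datetime import datetime, timezone

def timestamp_is_fresh(metadata_bytes, now=None):
    """Return True if the 'expires' field in timestamp.json is still in the future."""
    signed = json.loads(metadata_bytes)["signed"]
    # TUF serializes expiry as e.g. "2030-01-01T00:00:00Z"
    expires = datetime.fromisoformat(signed["expires"].replace("Z", "+00:00"))
    return expires > (now or datetime.now(timezone.utc))
```

Running this against the live S3 object on a schedule catches a stalled timestamp cron job before clients start rejecting the repository.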

Disaster Recovery

  1. Key Compromise:

    # Use the offline root key to revoke the compromised key and add a
    # replacement (modern python-tuf Metadata API; "timestamp" shown as
    # the affected role)
    root = Metadata[Root].from_file("metadata/root.json")
    root.signed.revoke_key(compromised_keyid, "timestamp")
    root.signed.add_key(new_key, "timestamp")
    root.signed.version += 1
    root.sign(SSlibSigner(root_private_key))
    root.to_file("metadata/root.json")
    
  2. Metadata Corruption:

    • Use S3 versioning to rollback
    • Regenerate from offline backups
    • Use root key to re-establish trust
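
For the S3 versioning rollback, the restore logic amounts to picking the newest object version that is not the currently-live (corrupted) one. The dictionaries below mirror the shape of entries returned by S3's `list_object_versions` (with `LastModified` shown as sortable date strings for illustration; the real API returns datetimes, and the actual `copy_object` restore call is omitted):

```python
def previous_version_id(versions):
    """Pick the newest version that is not the currently-live one."""
    older = [v for v in versions if not v["IsLatest"]]
    older.sort(key=lambda v: v["LastModified"], reverse=True)
    return older[0]["VersionId"] if older else None

# Example: v2 is live (corrupted); v1 is the restore candidate
versions = [
    {"VersionId": "v2", "IsLatest": True, "LastModified": "2024-06-02"},
    {"VersionId": "v1", "IsLatest": False, "LastModified": "2024-06-01"},
]
print(previous_version_id(versions))  # v1
```

After restoring the object, remember that TUF clients will still reject a repository whose metadata versions go backwards, so a restore should be followed by a fresh timestamp/snapshot signing cycle.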

Conclusion

The Update Framework provides comprehensive protection against software supply chain attacks through its key management system and metadata architecture. The four top-level roles (root, targets, snapshot, and timestamp) create layers of defense that balance security with operational flexibility.

By implementing TUF with Python and S3, you can:

  • Protect against arbitrary package, rollback, freeze, and mix-and-match attacks
  • Scale your software distribution securely to millions of users
  • Recover from key compromises without catastrophic failures
  • Automate updates while maintaining strong security guarantees

The complete code examples provided in this guide give you a working foundation to build a production-ready TUF repository. Remember: security is a journey, not a destination. Start with these basics, then enhance with HSMs, threshold signatures, and comprehensive monitoring as your needs grow.

Key Takeaways:

  1. Software updates are a prime attack vector that needs protection
  2. TUF's four-role key system provides defense in depth
  3. Each key type has specific security and operational characteristics
  4. Implementation with Python and S3 is straightforward and practical
  5. Key management and rotation are critical for long-term security

Stay secure, and happy updating! 🔐


Additional Resources