
How to Verify S3 Migrations with Godwit Sync

2026-03-26

Verify S3 migrations, list failed objects, inspect runs, and validate checksums with Godwit Sync. Build an audit trail for every object.


Hands-on lab available: Walk through a complete validation workflow against real S3-compatible stores. Go to the lab

Why You Need to Verify S3 Migration Results

After a large S3 migration, you need answers: how many objects were transferred? Which ones failed? Is the destination data intact? Most transfer tools dump output to stdout and forget it. Once the process exits, there is no record of what happened, and if something went wrong, your only option is to parse logs.

Godwit Sync takes a different approach. Every transfer writes each object's status to a local SQLite state database, creating a queryable audit trail that survives process restarts. You can track S3 transfer progress in real time or verify S3 migration results after the fact. Four commands give you full S3 migration validation:

Command                    Purpose
godwit plan list           View all runs with status, object count, and duration
godwit plan inspect        Detailed breakdown of a single run
godwit plan list objects   Per-object listing filtered by status
godwit plan verify         Re-read destination objects and compare checksums

All four commands read from the state database (--state-path). No network calls are needed except for plan verify, which re-reads the destination to check data integrity.

Prerequisites

  • Godwit Sync binary on PATH
  • A godwit.state.db file with completed runs (produced by any godwit sync command)

Verify Godwit Sync is available:

godwit version

The examples below reference run IDs and endpoints from the S3 Migration Guide. Replace them with your own values.

Verify S3 Migration History: godwit plan list

godwit plan list shows every run in the state database. Each row includes the run ID, status, start time, object count, bytes transferred, duration, and failure count. This is the starting point for any S3 migration report.

# Bash
godwit plan list \
  --state-path ./godwit.state.db

# PowerShell
godwit plan list `
  --state-path ./godwit.state.db

Sample output:

RUN       STATUS      STARTED              OBJECTS   BYTES       DURATION   FAILURES
------    ------      -------              -------   -----       --------   --------
upload    completed   2026-03-21 10:01:12  180       4.1 GB      1m 22s     0
transfer  completed   2026-03-21 10:02:45  180       4.1 GB      1m 35s     0
download  completed   2026-03-21 10:04:18  180       4.1 GB      1m 28s     0

Filter by status

godwit plan list \
  --state-path ./godwit.state.db \
  --status failed

This is useful as a first check after a batch of migrations: if every run shows completed with zero failures, you can move on. If any run shows failed, drill in with plan inspect.

JSON output

godwit plan list \
  --state-path ./godwit.state.db \
  --json

The JSON output can be piped to jq or ingested by an S3 migration monitoring system. In CI pipelines, parse the JSON to gate deployment on zero failures.
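As a sketch of such a gate, the snippet below (Python, not part of Godwit Sync) parses the run list and signals failure when any run is incomplete or has failures; the field names follow the sample JSON output shown in this article:

```python
import json
import sys

def all_runs_clean(runs_json: str) -> bool:
    """True only if every run completed with zero failures."""
    runs = json.loads(runs_json)
    return all(r["status"] == "completed" and r["failures"] == 0 for r in runs)

# In CI, runs_json would be the output of `godwit plan list --json`.
runs_json = '[{"run_id": "transfer", "status": "completed", "failures": 0}]'
if not all_runs_clean(runs_json):
    sys.exit(1)  # non-zero exit blocks the deployment step
```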

For example, filter runs that completed in under 60 seconds:

godwit plan list \
  --state-path ./godwit.state.db \
  --json | jq '[.[] | select(.duration_seconds < 60)]'

Sample output:

[
  {
    "run_id": "download",
    "status": "completed",
    "date": "2026-03-26",
    "started_at": "2026-03-26T17:18:13Z",
    "objects": 360,
    "bytes": 4620293760,
    "duration_seconds": 27.698119586,
    "failures": 0
  },
  {
    "run_id": "transfer",
    "status": "completed",
    "date": "2026-03-26",
    "started_at": "2026-03-26T17:17:17Z",
    "objects": 360,
    "bytes": 4620293760,
    "duration_seconds": 55.51705473,
    "failures": 0
  }
]
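The bytes and duration_seconds fields also make per-run throughput easy to derive. A small Python sketch over the sample above:

```python
import json

# Sample shape from `godwit plan list --json`, as shown above.
runs_json = '''[
  {"run_id": "download", "bytes": 4620293760, "duration_seconds": 27.698119586},
  {"run_id": "transfer", "bytes": 4620293760, "duration_seconds": 55.51705473}
]'''

for run in json.loads(runs_json):
    # bytes per second, converted to MiB/s
    mib_per_s = run["bytes"] / run["duration_seconds"] / (1024 * 1024)
    print(f'{run["run_id"]}: {mib_per_s:.0f} MiB/s')
```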

Inspect a Migration Run in Detail: godwit plan inspect

godwit plan inspect shows a detailed breakdown for a single run: status, timing, object counts by state, data volumes, storage class distribution, and potential issues.

# Bash
godwit plan inspect \
  --run-id transfer \
  --state-path ./godwit.state.db

# PowerShell
godwit plan inspect `
  --run-id transfer `
  --state-path ./godwit.state.db

Sample output:

Plan Summary for Run: transfer
─────────────────────────────────────
Status:            completed
Started At:        2026-03-26 17:17:17
Finished At:       2026-03-26 17:18:13

Objects:
  Total:           360
  Pending:         0
  Running:         0
  Finished:        180
  Skipped:         0
  Failed:          0
  Excluded:        180

Data:
  Transferred:     4.3 GB
  Left:            0 B
  Total:           4.3 GB
  Excluded:        5.6 KB

Storage classes detected:
  STANDARD:            100.0%   360 objects   4.3 GB

Version History:
  Complete History:      180 keys
  Partial History:       0 keys
  Fully Skipped:         0 keys

Reading the output

  • Pending > 0 after a run finishes means objects were planned but never processed. This usually indicates the process was interrupted. Re-run the original godwit sync command with --resume to pick them up.
  • Skipped means objects already existed at the destination with matching checksums. This is normal when re-running a completed transfer.
  • Excluded means objects were filtered out by --skip or --match during planning.
  • Failed > 0 means objects encountered errors during transfer. Use plan list objects failed to see per-object error messages.

List Failed or Pending Objects: godwit plan list objects

godwit plan list objects lists individual objects from a run, filtered by status. This is the core debugging command for any S3 migration -- it shows each object's key, size, ETag, storage class, timestamps, and error message (for failed objects).

# Bash
godwit plan list objects failed \
  --run-id transfer \
  --state-path ./godwit.state.db

# PowerShell
godwit plan list objects failed `
  --run-id transfer `
  --state-path ./godwit.state.db

The first argument after objects is the status filter. Valid values: all, pending, running, finished, skipped, failed, excluded, glacier, unsupported_key.

Combining statuses

Use + to combine statuses in a single query:

godwit plan list objects pending+failed \
  --run-id transfer \
  --state-path ./godwit.state.db

This shows everything that still needs work -- useful before deciding whether to retry failed objects with --resume.

The --case-conflict, --unsupported, and --partial-history flags are mutually exclusive.

Filter by storage class

godwit plan list objects all \
  --run-id transfer \
  --state-path ./godwit.state.db \
  --storage-class GLACIER

Objects in GLACIER or DEEP_ARCHIVE require a restore before they can be transferred. This filter helps you identify them upfront.

Detect filesystem compatibility issues

When migrating S3 objects to a local filesystem, key names may contain characters that are invalid on Windows or macOS:

godwit plan list objects all \
  --run-id download \
  --state-path ./godwit.state.db \
  --unsupported

Similarly, S3 keys that differ only by case (e.g., Report.csv and report.csv) will collide on case-insensitive filesystems:

godwit plan list objects all \
  --run-id download \
  --state-path ./godwit.state.db \
  --case-conflict
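The underlying check is easy to reproduce outside the tool if you already have a key listing. A Python sketch (the keys here are illustrative) that groups keys case-insensitively and reports collisions:

```python
from collections import defaultdict

def case_conflicts(keys):
    """Group keys by lowercased form; return groups with more than one member."""
    groups = defaultdict(list)
    for key in keys:
        groups[key.lower()].append(key)
    return {k: v for k, v in groups.items() if len(v) > 1}

keys = ["Report.csv", "report.csv", "data/archive.tar", "Data/archive.tar"]
print(case_conflicts(keys))
# {'report.csv': ['Report.csv', 'report.csv'],
#  'data/archive.tar': ['data/archive.tar', 'Data/archive.tar']}
```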

During the sync itself, the --unsupported-key-action flag controls what happens when an object key contains characters the destination cannot store. The default is skip, which silently skips the object. Set it to warn to skip but log each occurrence, or fail to abort the transfer immediately:

godwit sync \
  --source s3://source-bucket \
  --destination /mnt/data \
  --unsupported-key-action warn \
  --state-path ./godwit.state.db

Verify Data Integrity with Checksums: godwit plan verify

godwit plan verify re-reads each object at the destination and compares its checksum against the value stored in the plan database. This S3 checksum verification catches silent corruption, partial writes, and destination-side modifications -- all from a single CLI command.

# Bash
godwit plan verify \
  --run-id transfer \
  --destination s3://demo-bucket \
  --destination-endpoint localhost:3900 \
  --destination-access-key garageadmin \
  --destination-secret-key garageadmin00000000000000000000000 \
  --destination-secure=false \
  --state-path ./godwit.state.db \
  --brief

# PowerShell
godwit plan verify `
  --run-id transfer `
  --destination s3://demo-bucket `
  --destination-endpoint localhost:3900 `
  --destination-access-key garageadmin `
  --destination-secret-key garageadmin00000000000000000000000 `
  --destination-secure=false `
  --state-path ./godwit.state.db `
  --brief

A clean run prints:

Verified: 180 objects, 0 mismatches, 0 errors

Any mismatch is reported per-object with the expected and actual checksum:

MISMATCH: images/hero.png (expected: a3f2b8c1, got: 7e91d4f0, worker 2)
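Conceptually, each verification is a re-read plus a digest comparison against the value recorded at transfer time. A simplified Python sketch (MD5 is used here for illustration; the checksum algorithm godwit actually applies may differ):

```python
import hashlib

def checksum_matches(data: bytes, expected_hex: str) -> bool:
    """Re-compute a digest over the re-read object bytes and compare."""
    return hashlib.md5(data).hexdigest() == expected_hex

# Illustrative: the stored checksum would come from the plan database.
stored = hashlib.md5(b"object contents").hexdigest()
print(checksum_matches(b"object contents", stored))   # intact object
print(checksum_matches(b"corrupted bytes", stored))   # silent corruption detected
```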

Resuming an interrupted verification

If verification is interrupted, re-run with --resume to skip objects that have already been verified:

godwit plan verify \
  --run-id transfer \
  --destination s3://demo-bucket \
  --destination-endpoint localhost:3900 \
  --destination-access-key garageadmin \
  --destination-secret-key garageadmin00000000000000000000000 \
  --destination-secure=false \
  --state-path ./godwit.state.db \
  --resume \
  --brief

When to run verification

  • After a migration completes, before reporting success to stakeholders
  • Before decommissioning the source system
  • For compliance audits that require per-object proof of data integrity

Post-Migration Validation Checklist

A typical data migration validation checklist uses all four commands in sequence:

# 1. Are all runs complete?
godwit plan list --state-path ./godwit.state.db

# 2. Any anomalies in the run?
godwit plan inspect --run-id transfer --state-path ./godwit.state.db

# 3. What failed?
godwit plan list objects failed --run-id transfer --state-path ./godwit.state.db

# 4. Are checksums intact?
godwit plan verify \
  --run-id transfer \
  --destination s3://demo-bucket \
  --destination-endpoint localhost:3900 \
  --destination-access-key garageadmin \
  --destination-secret-key garageadmin00000000000000000000000 \
  --destination-secure=false \
  --state-path ./godwit.state.db \
  --brief

Debugging Failed Transfers

When plan inspect shows failures, use plan list objects to get per-object error messages, then retry with --resume:

# See how many failed
godwit plan inspect --run-id transfer --state-path ./godwit.state.db

# Get per-object error messages
godwit plan list objects failed --run-id transfer --state-path ./godwit.state.db

# Retry failed objects by re-running the original sync command with --resume
godwit sync \
  --source s3://source-bucket \
  --destination s3://dest-bucket \
  --state-path ./godwit.state.db \
  --run-id transfer \
  --resume \
  --brief

The --resume flag skips already-finished objects and retries only those in pending or failed state. Multipart uploads continue from the last completed part rather than restarting from the beginning. Objects that were already copied are only re-read to generate the .md5 sidecar checksum if it was not written before the interruption. This is the standard way to handle resumable S3 transfers and retry failed objects without re-planning from scratch.
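The selection behind --resume can be pictured as a status filter over the per-object records. A simplified Python sketch (statuses taken from the plan list objects filter values; the records are illustrative):

```python
RETRY_STATUSES = {"pending", "failed"}

def objects_to_resume(objects):
    """Select objects a resumed run would process; finished ones are skipped."""
    return [o["key"] for o in objects if o["status"] in RETRY_STATUSES]

objects = [
    {"key": "a.bin", "status": "finished"},
    {"key": "b.bin", "status": "failed"},
    {"key": "c.bin", "status": "pending"},
]
print(objects_to_resume(objects))  # ['b.bin', 'c.bin']
```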

Generating a Compliance Audit Report

For compliance audits that require a full S3 migration audit report, export each layer as JSON. These files provide per-object proof of what was transferred, when, and whether checksums match:

# Run history
godwit plan list --state-path ./godwit.state.db --json > runs.json

# Per-run summary with storage class breakdown
godwit plan inspect --run-id transfer --state-path ./godwit.state.db --json > summary.json

# Per-object manifest with status, timestamps, and checksums
godwit plan list objects all --run-id transfer --state-path ./godwit.state.db --json > objects.json

Together, runs.json, summary.json, and objects.json form a complete migration report that auditors can review without access to the source or destination systems.
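If auditors prefer a single artifact, the three exports are straightforward to bundle. A Python sketch (the merged report layout is our own convention, not a godwit format):

```python
import json
import tempfile
from pathlib import Path

def build_report(runs_path, summary_path, objects_path):
    """Bundle the three godwit JSON exports into one audit document."""
    return {
        "runs": json.loads(Path(runs_path).read_text()),
        "summary": json.loads(Path(summary_path).read_text()),
        "objects": json.loads(Path(objects_path).read_text()),
    }

# Demo with stand-in files; in practice point at the runs.json,
# summary.json, and objects.json produced by the commands above.
with tempfile.TemporaryDirectory() as tmp:
    for name, body in [("runs.json", "[]"), ("summary.json", "{}"), ("objects.json", "[]")]:
        (Path(tmp) / name).write_text(body)
    report = build_report(Path(tmp) / "runs.json", Path(tmp) / "summary.json",
                          Path(tmp) / "objects.json")
    Path(tmp, "report.json").write_text(json.dumps(report, indent=2))
    print(sorted(report))  # ['objects', 'runs', 'summary']
```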

Try It Hands-On

The companion lab provides a complete Docker environment with MinIO and Garage. It runs a transfer automatically, then walks through the full validation workflow against real data.

Next Steps

This article showed how to verify S3 migration results, list failed objects, and produce a compliance audit report using the four godwit plan commands. For the transfer commands themselves, see the S3 Migration Guide.
