Godwit vs rclone: A Technical Comparison for S3 Sync and Migration
Godwit vs rclone compared across execution model, verification, observability, resume, and safety. See which S3 migration tool fits repeatable, auditable sync workflows.
Godwit Sync produces a queryable plan before any data moves, tracks every object in SQLite, verifies checksums per run, and exposes per-run Prometheus metrics. rclone covers 70+ storage backends and excels at breadth. The Godwit vs rclone choice comes down to that tradeoff: breadth across providers, or depth on S3 with a full audit trail and resumable transfers.
This S3 migration tool comparison covers seven dimensions: execution model, verification, observability, resume behavior, production safety, version history, and object lock. For a broader introduction to S3 migration planning, see the S3 Migration Guide.
Godwit Sync Plans Before Any Data Moves
Godwit Sync splits every sync into two explicit phases: plan, then execute.
```shell
godwit sync \
  --source s3://source-bucket \
  --source-endpoint minio:9000 \
  --source-access-key $SRC_KEY --source-secret-key $SRC_SECRET \
  --source-secure=false \
  --destination s3://dest-bucket \
  --destination-endpoint garage:3900 \
  --destination-access-key $DST_KEY --destination-secret-key $DST_SECRET \
  --destination-secure=false \
  --run-id migration-01 \
  --plan-only \
  --brief
```
--plan-only lists every source object, checks the destination in parallel, and writes the result to a local SQLite database. Each object gets a status: pending, skipped, or excluded. You can query object counts, total bytes, and key anomalies before committing a single byte to the network:
```shell
godwit plan inspect --run-id migration-01
```
When the plan looks right, replace --plan-only with --resume on the same command to start transferring.
rclone sync starts transferring immediately. --dry-run previews the result as text output (to the terminal or a log file), but it is not structured or queryable. Godwit Sync's plan is a SQLite database with built-in commands (plan inspect, plan list, plan list objects) and JSON export for scripting and CI integration.
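For contrast, rclone's preview of the same transfer is an unstructured log (the remote names here are placeholders for configured rclone remotes):

```shell
# Prints the would-be actions as text; nothing is persisted for later querying.
rclone sync src-s3:source-bucket dst-s3:dest-bucket --dry-run
```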
Verification Goes Beyond Size and Timestamp
Comparison at Transfer Time
Godwit Sync uses a configurable --compare-policy (default: size,etag). For multipart-uploaded objects with composite ETags, it checks for a .md5 sidecar file and falls back to size + mtime if needed. A single code path with clear precedence means you always know which comparison ran for each object.
rclone compares size and modification time by default. --checksum switches to size + hash. rclone mitigates composite ETags from multipart uploads by storing its own X-Amz-Meta-Md5chksum metadata, but objects uploaded by other tools lack this header, so --checksum falls back to size + mtime and may re-copy objects that are already identical.
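The hash-based comparison described above is opt-in (remote names are placeholders):

```shell
# Compare by size + hash instead of size + modtime; for multipart objects
# lacking rclone's Md5chksum metadata this degrades to size + mtime.
rclone sync src-s3:source-bucket dst-s3:dest-bucket --checksum
```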
Post-Transfer Verification
godwit plan verify scopes verification to a specific run:
```shell
godwit plan verify \
  --run-id migration-01 \
  --destination s3://dest-bucket \
  --destination-endpoint garage:3900 \
  --destination-access-key $DST_KEY --destination-secret-key $DST_SECRET \
  --destination-secure=false \
  --brief
```
This command reads every completed object from the run's state database, streams it from the destination, computes a fresh MD5, and compares it against the .md5 sidecar written during upload. Verification is scoped to one run and only reads the destination; the source does not need to be available. If 3 out of 50,000 objects have mismatches, the report shows exactly those 3.
rclone check re-reads both source and destination for every object. It is stateless: it does not know which objects the last run transferred, so it compares the entire bucket pair every time. See the Quickstart Guide for a full plan-verify walkthrough.
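The rclone equivalent compares the whole bucket pair on every invocation (remote names are placeholders):

```shell
# Stateless: re-reads and compares every object on both sides, every run.
rclone check src-s3:source-bucket dst-s3:dest-bucket
```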
Built-In S3 Migration Monitoring
Godwit Sync provides three observability layers that run simultaneously. A dedicated set of per-run metrics (keyed by run_id) lets you track, compare, and alert on individual runs independently.
Terminal output uses three mutually exclusive modes: --ui for a live dashboard with per-object progress and worker status, --brief for one line per phase transition, and --silent for no output.
HTTP metrics and status activate with --status-addr:
```shell
curl http://localhost:8080/metrics
```
The /metrics endpoint exposes 50+ Prometheus metrics, including per-run counters and gauges:
- Per-state gauges: pending, running, done, failed, skipped, excluded at any moment
- Distribution histograms: object sizes, transfer durations, retry counts
- S3 API breakdown: requests by operation and HTTP status code (useful for spotting 429 throttling or 5xx errors)
- Verification outcomes: matched, mismatched, and error counts
- Run lifecycle: stage transitions, start/complete timestamps, plan creation time
The /status endpoint returns a JSON snapshot. CI pipelines can poll it: curl :8080/status | jq '.objects.pending' returns 0 when the transfer completes.
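A minimal CI gate along those lines might look like this (the URL and the 30-second poll interval are assumptions; the jq path is the one shown above):

```shell
# Poll the /status endpoint until no objects remain pending.
STATUS_URL="http://localhost:8080/status"   # assumed --status-addr value

pending_count() {
  curl -s "$STATUS_URL" | jq '.objects.pending'
}

wait_for_sync() {
  # Loop until the pending gauge reaches zero; 30s is an arbitrary interval.
  while [ "$(pending_count)" -gt 0 ]; do
    sleep 30
  done
}
```

Calling wait_for_sync as a pipeline step blocks until .objects.pending reaches 0.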
Structured JSON logs via --logs-dir produce a full audit trail of every lifecycle event: task resumed, part uploaded, verify matched/mismatched, and errors.
After the transfer, godwit plan inspect shows object totals, byte totals, storage class breakdown, and key conflict analysis for any past run. See the full Prometheus metrics reference for the complete list.
rclone exposes a Prometheus-compatible endpoint via --rc --rc-enable-metrics. The metrics cover process-level totals rather than per-run breakdowns.
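To see what rclone does expose, start it with the remote-control server and scrape its metrics endpoint (remote names are placeholders; 5572 is rclone's default rc port):

```shell
# Run the sync with the rc server and Prometheus metrics enabled.
rclone sync src-s3:source-bucket dst-s3:dest-bucket --rc --rc-enable-metrics
# In another terminal: process-level totals, not per-run breakdowns.
curl http://localhost:5572/metrics
```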
Resume Replays Only Pending and Failed Objects
Godwit Sync reads its state database and queues only objects in pending or failed state. Objects that already succeeded are never touched: no re-queuing, no re-comparing, no re-scanning.
For a run with 200,000 objects where 195,000 succeeded before the process was killed, --resume queues exactly the 5,000 remaining. Godwit Sync tracks the attempt count per object (--retry controls the maximum), so you can identify which objects needed multiple tries.
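Per the workflow above, resuming is the original command with --resume in place of --plan-only. As a sketch (endpoint and credential flags from the earlier example are elided for brevity):

```shell
# Re-run with --resume: only pending and failed objects are queued.
# Source/destination endpoint and credential flags omitted for brevity.
godwit sync \
  --source s3://source-bucket \
  --destination s3://dest-bucket \
  --run-id migration-01 \
  --resume \
  --brief
```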
rclone has no per-object state. A restart re-scans the entire source and destination. Successfully copied objects get skipped if size and mtime still match, but for large buckets with millions of objects that re-scan alone can take hours.
Godwit Sync Blocks Overwrites by Default
Godwit Sync defaults to the cautious choice:
- --override is off by default. Uploads include a conditional header that makes S3 reject the PUT if the object already exists.
- The plan-first workflow ensures you review the full scope before any data moves.
- Glacier-archived objects are excluded at plan time with a warning, not left to cause S3 errors during transfer.
- Case conflicts and unsupported destination characters surface during planning via --unsupported-key-action.
rclone sync overwrites destination objects that differ from the source. --immutable treats any mismatch as an error instead, but it is not the default.
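Getting fail-instead-of-overwrite behavior from rclone requires opting in (remote names are placeholders):

```shell
# Treat any modified destination object as an error instead of overwriting it.
rclone sync src-s3:source-bucket dst-s3:dest-bucket --immutable
```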
Godwit Sync Migrates S3 Version History in Order
Godwit Sync supports version-aware synchronization with --version-mode, preserving delete markers and noncurrent versions:
- latest (default): only the current version of each key
- all: every version replayed in chronological order, with delete markers preserved
- since:<RFC3339>: versions created after a specific timestamp
The planner serializes version operations per key while parallelizing across keys, so ordering is correct without sacrificing throughput. godwit plan inspect reports version history completeness per key: complete, partial, or fully skipped. Keys with partial history (some versions in Glacier) are flagged so you know exactly where gaps exist.
rclone can list versions with --s3-versions, but it treats each version as a separate file by appending a timestamp suffix to the key. There is no mechanism for replaying versions in chronological order per key. For a deep dive into version-aware workflows, see Migrating S3 Version History.
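Listing with the versions flag illustrates this (remote name is a placeholder):

```shell
# Each noncurrent version appears as its own timestamp-suffixed key,
# not as ordered per-key history.
rclone ls src-s3:source-bucket --s3-versions
```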
S3 Object Lock Migration with Plan-Time Visibility
S3 Object Lock prevents objects from being deleted or overwritten for a retention period. Migrating locked objects in compliance or governance mode means copying both the data and the lock metadata: retention mode, retain-until date, and legal hold status.
Godwit Sync handles this with a single --object-lock flag. It reads the retention mode, retain-until date, and legal hold status from each source object and applies them during the destination upload. Combined with --version-mode all, lock metadata is preserved per version. The state database tracks lock settings, and godwit plan inspect reports object lock statistics (governance, compliance, legal hold counts) before any data moves.
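Combining the two flags described above might look like this (a sketch built only from flags named in this article; endpoint and credential flags are elided):

```shell
# Plan a version-aware, lock-preserving migration, then inspect
# lock statistics before executing.
godwit sync \
  --source s3://source-bucket \
  --destination s3://dest-bucket \
  --run-id lock-migration-01 \
  --version-mode all \
  --object-lock \
  --plan-only
godwit plan inspect --run-id lock-migration-01
```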
rclone does not replicate Object Lock settings. Its --metadata flag preserves standard S3 system metadata (content-type, cache-control, etc.) but not retention mode, retain-until date, or legal hold status. These must be applied separately through the AWS CLI or provider console after the copy completes.
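After an rclone copy, lock settings must be re-applied per object, for example with the AWS CLI (bucket, key, and date below are placeholders):

```shell
# Re-apply retention and legal hold on the copied destination object.
aws s3api put-object-retention \
  --bucket dest-bucket \
  --key path/to/object \
  --retention 'Mode=COMPLIANCE,RetainUntilDate=2030-01-01T00:00:00Z'
aws s3api put-object-legal-hold \
  --bucket dest-bucket \
  --key path/to/object \
  --legal-hold Status=ON
```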
Feature Comparison
| Feature | Godwit Sync | rclone |
|---|---|---|
| Execution model | Plan to SQLite, then resume | Direct execution; --dry-run is ephemeral |
| Resume | Per-object state; only pending/failed re-queued | Full re-scan on restart |
| Post-transfer verify | Run-scoped MD5 verification | Stateless; re-reads both sides |
| Prometheus metrics | Per-run metrics keyed by run_id | Process-level totals only |
| Default overwrite | Blocked (requires --override) | Overwrites in sync mode |
| Pre-transfer review | godwit plan inspect | Not available |
| Version migration | --version-mode all replays versions in order with delete markers | Versions as separate files; no per-key ordering |
| Object lock | Single --object-lock flag; per-version; plan-time stats | Not supported; requires separate tooling |
| Storage backends | S3-compatible + local filesystem | 70+ backends |
| Transfer directions | fs→s3, s3→s3, s3→fs | Any supported backend pair |
Godwit Sync for Verifiable S3 Workflows, rclone for Backend Breadth
Choose Godwit Sync when you need every sync run to be auditable and repeatable: production migrations, scheduled synchronization jobs, CI-driven transfers, or any workflow where you need to prove what was transferred and resume cleanly after failures. Its plan-first model, per-object state, run-scoped verification, and per-run Prometheus metrics make each run self-documenting.
rclone is the better tool when you need backends beyond S3 (Google Drive, Dropbox, Azure Blob, SFTP) or bidirectional sync. Its strength is breadth across storage providers. Godwit Sync's strength is depth on S3-compatible storage: durable plans, per-object resume, run-scoped verification, and real-time observability for every sync run.
Get Started with Godwit Sync
Download Godwit Sync and run your first --plan-only sync in minutes. The Quickstart Guide walks you through plan, execute, and verify in a single session. Once transfers are running, connect your Grafana stack to the Prometheus metrics endpoint for real-time dashboards. See Pricing for free-tier limits and licensed plans.