How to bulk-delete S3 files
Use the DeleteObjects API (plural) — up to 1,000 keys per request. From the AWS CLI: `aws s3 rm --recursive` against a prefix. From a tool: multi-select in S3 Viewer and bulk-delete in one round trip. If versioning is on, each delete only adds a delete marker — you'll need to delete every version explicitly to actually purge.
Step-by-step.
1. In S3 Viewer: shift-click a range, then Delete

Multi-select keys in the file browser (shift-click for ranges, cmd-click for individual keys). Right-click → Delete or press the toolbar button. S3 Viewer batches the selection into a single `DeleteObjects` call: up to 1,000 keys per request, the way the API was designed.
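That batching step can be sketched as a plain function that splits a key list into `DeleteObjects`-sized chunks. The 1,000-key cap is the API's; the function name is ours for illustration:

```javascript
// Split a flat list of keys into batches no larger than the
// DeleteObjects limit of 1,000 keys per request.
function chunkKeys(keys, batchSize = 1000) {
  const batches = [];
  for (let i = 0; i < keys.length; i += batchSize) {
    batches.push(keys.slice(i, i + batchSize));
  }
  return batches;
}
```

Each batch then becomes one request, so deleting 2,500 keys is three round trips, not 2,500.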
2. AWS CLI: delete a whole prefix

The CLI lists every key under the prefix and deletes them in batches of 1,000 under the hood.

```shell
aws s3 rm s3://my-bucket/old-logs/ --recursive
```
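Conceptually, the recursive delete is a filter plus the batching above. This is a simplified sketch (the real CLI paginates the listing as it goes rather than holding every key in memory, and the function name is ours):

```javascript
// What `aws s3 rm --recursive` amounts to: keep only the keys under the
// prefix, then group them into DeleteObjects-sized batches.
function keysToDelete(allKeys, prefix, batchSize = 1000) {
  const matching = allKeys.filter((key) => key.startsWith(prefix));
  const batches = [];
  for (let i = 0; i < matching.length; i += batchSize) {
    batches.push(matching.slice(i, i + batchSize));
  }
  return batches;
}
```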
3. AWS CLI: delete from a key list

For non-contiguous keys, build a JSON delete request and pass it to `aws s3api delete-objects`.

```shell
aws s3api delete-objects \
  --bucket my-bucket \
  --delete '{"Objects":[{"Key":"a.log"},{"Key":"b.log"}]}'
```
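If the key list comes from a script, the `--delete` document is easy to generate rather than hand-write. A minimal sketch (the helper name is ours; the shape matches the `DeleteObjects` request body):

```javascript
// Build the JSON document that `aws s3api delete-objects --delete` expects
// from a plain array of keys.
function buildDeletePayload(keys) {
  return JSON.stringify({ Objects: keys.map((Key) => ({ Key })) });
}
```

Pipe the result into the CLI, or write it to a file and pass `--delete file://delete.json`.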
4. AWS SDK: DeleteObjectsCommand

For programmatic bulk delete, use `DeleteObjectsCommand` in batches of up to 1,000. The response includes any per-key errors, so you can retry failures without re-running the whole batch.

```javascript
const result = await s3.send(
  new DeleteObjectsCommand({
    Bucket: 'my-bucket',
    Delete: {
      Objects: keys.map((Key) => ({ Key })),
      Quiet: false,
    },
  }),
);
```
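The retry logic only needs the response shape, not a live call: failures come back in the `Errors` array with a `Key` and `Code` per entry. A small helper (the function name is ours) pulls out the keys worth retrying:

```javascript
// Collect the keys that failed from a DeleteObjects response so they can be
// reissued in a follow-up batch. The response object here is just the shape
// the API returns, not a live call.
function failedKeys(response) {
  return (response.Errors ?? []).map((e) => e.Key);
}
```

Loop until `failedKeys(result)` comes back empty (with a retry cap, so a permanently failing key does not loop forever).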
5. If versioning is on: also delete the versions

On a versioned bucket, a normal delete just adds a delete marker. To actually purge, list every version and delete each one explicitly with its `VersionId`. S3 Viewer prompts you when versioning is on so you know you're doing a soft delete vs. a hard one.

```shell
aws s3api list-object-versions \
  --bucket my-bucket --prefix old-logs/ \
  --query 'Versions[].{Key:Key,VersionId:VersionId}' \
  > versions.json
```
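The Key/VersionId pairs from that listing can be turned straight into delete batches. A sketch, assuming the pairs have already been parsed out of `versions.json` (the function name is ours):

```javascript
// Turn list-object-versions output (Key/VersionId pairs, as produced by the
// --query above) into DeleteObjects-sized batches that hard-delete each
// specific version instead of writing a delete marker.
function versionBatches(versions, batchSize = 1000) {
  const objects = versions.map(({ Key, VersionId }) => ({ Key, VersionId }));
  const batches = [];
  for (let i = 0; i < objects.length; i += batchSize) {
    batches.push({ Objects: objects.slice(i, i + batchSize) });
  }
  return batches;
}
```

Note that `list-object-versions` reports delete markers separately under `DeleteMarkers`, not `Versions`, so a full purge has to delete those too.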
6. Lifecycle rules for ongoing cleanup

For predictable cleanup (e.g., deleting logs older than 30 days), use S3 Lifecycle rules instead of running deletes yourself. They're cheap, automatic, and keep working even when you forget.
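A lifecycle rule is just a small configuration object. This sketch builds the shape S3's lifecycle configuration expects for a days-based expiration; the rule ID and prefix are example values:

```javascript
// Build one S3 lifecycle rule that expires objects under a prefix after
// `days` days. This object goes into the Rules array of a bucket's
// lifecycle configuration.
function expireAfterDays(prefix, days) {
  return {
    ID: `expire-${prefix}`,      // example ID, any unique string works
    Status: 'Enabled',
    Filter: { Prefix: prefix },
    Expiration: { Days: days },
  };
}
```

Apply it once with `aws s3api put-bucket-lifecycle-configuration` (or in the console) and S3 handles the deletes from then on.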
What's actually happening.
S3's DeleteObjects API takes up to 1,000 keys per request and returns a per-key result list — successes and failures together. For larger deletes you batch by 1,000 and reissue any failed keys. The AWS CLI's rm --recursive handles the listing and batching for you under the hood. S3 Viewer wraps the same API in a multi-select UI: shift-click a range, hit Delete, one round trip. On versioned buckets, every “delete” is really a soft delete (it writes a delete marker); to actually purge, you have to enumerate versions and delete each one with its VersionId. For predictable cleanup, lifecycle rules are the right long-term tool.
Common questions.
How do I bulk-delete files from S3?
How many objects can I delete in one S3 API call?
How do I delete all files in an S3 bucket prefix?
Why didn't my bulk delete remove my files?
Can I undo a bulk delete in S3?
Does Cloudflare R2 support bulk delete?
Skip the CLI. Try it in the browser.
S3 Viewer turns the steps above into a single click. Open source, self-hostable, free for personal use.
More how-tos
Delete a file
Versioning, delete markers, MFA Delete — and how to actually purge instead of soft-deleting.
Rename an S3 file
Amazon S3 has no rename API — keys are immutable. Here's the standard copy + delete pattern, with metadata and tags preserved.
Search across buckets
How to actually search S3 with Tab autocomplete, and when S3 Inventory + Athena is the right pattern.