How to upload large files to S3 (multipart upload)
Amazon S3's single PutObject caps at 5 GB and is non-resumable — a network blip on a 4 GB upload restarts from byte zero. The S3 multipart upload API (5 MB minimum part, 10,000 max parts, 5 TB ceiling) splits the file into pieces, uploads them in parallel, and lets you retry only the failed parts. S3 Viewer wraps that automatically above 50 MB with presigned per-part URLs, so the browser uploads directly to the bucket without your access key ever leaving the server.
Step-by-step.
01. In S3 Viewer: drop the file in
Multipart kicks in automatically above 50 MB — S3 Viewer splits the file into 25 MB parts and uploads them in parallel via per-part URLs presigned server-side. Your browser uploads directly to S3 (or R2, or MinIO); nothing streams through our server.
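Server-side, per-part presigning looks roughly like this. A minimal sketch with @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner, assuming the multipart upload was already created; the helper name presignPart and the one-hour expiry are illustrative, not S3 Viewer's actual internals.

```ts
import { S3Client, UploadPartCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({}); // credentials stay here, on the server

// Presign one UploadPart URL; the browser PUTs raw bytes to it directly.
async function presignPart(
  bucket: string,
  key: string,
  uploadId: string,
  partNumber: number
): Promise<string> {
  return getSignedUrl(
    s3,
    new UploadPartCommand({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
    }),
    { expiresIn: 3600 } // URL valid for one hour
  );
}
```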
02. Failed parts retry without restarting
If your connection drops on part 47 of 200, only that part is re-uploaded — not the first 46.
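In browser terms, retrying a single part against its presigned URL might look like this sketch; the helper name and backoff schedule are illustrative, and the bucket's CORS config must expose the ETag header for the browser to read it.

```ts
// PUT one part to its presigned URL, retrying with exponential backoff.
// The returned ETag is what CompleteMultipartUpload needs for this part.
async function uploadPartWithRetry(url: string, body: Blob, attempts = 3): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, { method: 'PUT', body });
      if (res.ok) return res.headers.get('ETag') ?? '';
      // non-2xx response: fall through and retry
    } catch {
      // network error: retry
    }
    await new Promise((r) => setTimeout(r, 2 ** i * 1000)); // 1s, 2s, 4s
  }
  throw new Error(`part failed after ${attempts} attempts`);
}
```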
03. AWS CLI: aws s3 cp
The CLI uses multipart automatically above 8 MB. No flags needed for normal files; tune multipart_chunksize in your AWS config for very large ones.

```bash
aws s3 cp big.zip s3://bucket/big.zip
```
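The relevant knobs live under the s3 section of ~/.aws/config; the values below are illustrative, not recommendations.

```ini
[default]
s3 =
  multipart_threshold = 8MB
  multipart_chunksize = 64MB
  max_concurrent_requests = 10
```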
04. AWS SDK: Upload from @aws-sdk/lib-storage
Handles part splitting, parallelism, and retries for you. The right pattern for streaming uploads from a backend.

```ts
import { Upload } from '@aws-sdk/lib-storage';

const u = new Upload({
  client: s3,
  params: { Bucket: 'b', Key: 'big.zip', Body: stream },
  queueSize: 4, // parts in flight
  partSize: 8 * 1024 * 1024,
});
await u.done();
```
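One optional addition worth knowing: Upload emits progress events you can surface in a UI. Register the listener before awaiting done().

```ts
u.on('httpUploadProgress', (p) => {
  console.log(`uploaded ${p.loaded ?? 0} of ${p.total ?? '?'} bytes`);
});
```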
05. Tune part size for very large files
Default partSize in the SDK is 5 MB; raise it to 64–100 MB for files over 50 GB so you don't hit the 10,000-part cap. Math: max object size = partSize × 10,000.
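That math inverted gives a starting point for choosing a part size; a sketch, with illustrative constants and helper name:

```ts
const MIN_PART = 5 * 1024 * 1024; // S3's floor for every part except the last
const MAX_PARTS = 10_000;

// Smallest part size that keeps the whole file under the 10,000-part cap.
function partSizeFor(fileSize: number): number {
  return Math.max(MIN_PART, Math.ceil(fileSize / MAX_PARTS));
}

// e.g. a 100 GB file needs parts of at least ~10.7 MB;
// rounding up to 64 MB leaves headroom and fewer round trips.
```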
06. Verify with HeadObject
After the upload completes, confirm the size and ETag match what you intended to upload. (A multipart ETag isn't an MD5 of the whole file; it ends in -N, where N is the part count.)

```bash
aws s3api head-object --bucket b --key big.zip
```
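The same check from the SDK, as a sketch; s3 and expectedBytes are assumed to exist in your code:

```ts
import { HeadObjectCommand } from '@aws-sdk/client-s3';

const head = await s3.send(new HeadObjectCommand({ Bucket: 'b', Key: 'big.zip' }));
if (head.ContentLength !== expectedBytes) {
  throw new Error(`size mismatch: got ${head.ContentLength}`);
}
// A multipart ETag like "9b2cf...-200" is a hash of part hashes plus the
// part count, not an MD5 of the whole object.
console.log(head.ETag);
```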
07. Set a lifecycle rule to abort failed uploads
Incomplete multipart uploads sit in the bucket and you pay for them until aborted. Add an S3 lifecycle rule to auto-abort uploads older than a few days — AWS recommends this in the Well-Architected Framework.
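A sketch of that rule via the CLI; the seven-day window and rule ID are illustrative:

```bash
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-stale-multipart",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket b --lifecycle-configuration file://lifecycle.json
```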
What's actually happening.
S3's PutObject caps at 5 GB and is non-resumable — a network blip on a 4 GB upload restarts from byte zero. Multipart upload (CreateMultipartUpload → many UploadPart → CompleteMultipartUpload) splits the file into parts (5 MB minimum, 10,000 parts max), uploads them in parallel, and lets you retry only the failed parts. It supports objects up to 5 TB. S3 Viewer uses multipart above 50 MB by default with 25 MB part sizes and automatic retries; per-part URLs are presigned server-side so your browser uploads directly to the bucket — never through our infrastructure. The same pattern works against Cloudflare R2, MinIO, B2, and any other S3-compatible provider.
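Stripped of product specifics, the raw three-call sequence looks like this sketch with @aws-sdk/client-s3, assuming the whole file is in memory as a Uint8Array named data (real code would stream and upload parts in parallel):

```ts
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({});
const Bucket = 'b';
const Key = 'big.zip';
const partSize = 25 * 1024 * 1024; // mirrors S3 Viewer's 25 MB parts

// 1. Open the upload and get its UploadId.
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({ Bucket, Key }));

// 2. Upload each part; S3 returns an ETag per part.
const parts: { ETag?: string; PartNumber: number }[] = [];
for (let i = 0; i * partSize < data.length; i++) {
  const Body = data.subarray(i * partSize, (i + 1) * partSize);
  const { ETag } = await s3.send(
    new UploadPartCommand({ Bucket, Key, UploadId, PartNumber: i + 1, Body })
  );
  parts.push({ ETag, PartNumber: i + 1 });
}

// 3. Stitch the parts into one object.
await s3.send(
  new CompleteMultipartUploadCommand({
    Bucket,
    Key,
    UploadId,
    MultipartUpload: { Parts: parts },
  })
);
```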
Common questions.
What's the max file size I can upload to S3?
5 TB per object. A single PutObject call caps at 5 GB; anything bigger needs multipart upload.
What is multipart upload in S3?
An API (CreateMultipartUpload → UploadPart → CompleteMultipartUpload) that splits a file into parts of at least 5 MB, uploads them independently, and assembles them into a single object.
What happens if my upload fails halfway?
Completed parts stay put; you retry only the failed parts, then call CompleteMultipartUpload as usual.
Do I get charged for failed multipart uploads?
Yes. Parts of incomplete uploads are billed as storage until the upload is aborted, so set a lifecycle rule to auto-abort stale ones.
Does Cloudflare R2 support multipart upload?
Yes. R2, MinIO, Backblaze B2, and other S3-compatible providers implement the same multipart API.
How do I upload large files to S3 from a browser safely?
Presign each UploadPart URL server-side and let the browser PUT directly to the bucket, so access keys never reach the client. That's the pattern S3 Viewer uses.
Skip the CLI. Try it in the browser.
S3 Viewer turns the steps above into a single click. Open source, self-hostable, free for personal use.
More how-tos
Download a file
Browser, AWS CLI, or presigned URL — three ways, with auto-inferred filenames and zero key exposure.
Rename an S3 file
Amazon S3 has no rename API — keys are immutable. Here's the standard copy + delete pattern, with metadata and tags preserved.
Delete a file
Versioning, delete markers, MFA Delete — and how to actually purge instead of soft-deleting.