How to upload large files to S3 (multipart upload)

TL;DR

Amazon S3's single PutObject caps at 5 GB and is non-resumable — a network blip on a 4 GB upload restarts from byte zero. The S3 multipart upload API (5 MB minimum part, 10,000 max parts, 5 TB ceiling) splits the file into pieces, uploads them in parallel, and lets you retry only the failed parts. S3 Viewer wraps that automatically above 50 MB with presigned per-part URLs, so the browser uploads directly to the bucket without your access key ever leaving the server.

Steps

Step-by-step.

  1. In S3 Viewer: drop the file in

    Multipart kicks in automatically above 50 MB — S3 Viewer splits the file into 25 MB parts and uploads them in parallel via presigned per-part URLs signed server-side. Your browser uploads directly to S3 (or R2, or MinIO); nothing streams through our server.
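
    A rough sketch of the server-side presigning this relies on (not S3 Viewer's actual code; the bucket, key, and 15-minute expiry are placeholder choices):

    import { S3Client, UploadPartCommand } from '@aws-sdk/client-s3';
    import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

    const s3 = new S3Client({});

    // Presign one short-lived PUT URL for a single part of an open multipart upload.
    async function presignPart(uploadId: string, partNumber: number): Promise<string> {
      return getSignedUrl(
        s3,
        new UploadPartCommand({ Bucket: 'b', Key: 'big.zip', UploadId: uploadId, PartNumber: partNumber }),
        { expiresIn: 900 }, // seconds; keep these URLs short-lived
      );
    }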
  2. Failed parts retry without restarting

    If your connection drops on part 47 of 200, only that part is retried — not the first 46.
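
    A hypothetical browser-side retry loop for one part (assumes the bucket's CORS config exposes the ETag header, which the final complete call needs):

    // PUT one part to its presigned URL, retrying only this part on failure.
    async function uploadPart(url: string, part: Blob, attempts = 3): Promise<string> {
      for (let i = 0; i < attempts; i++) {
        try {
          const res = await fetch(url, { method: 'PUT', body: part });
          if (res.ok) return res.headers.get('ETag') ?? ''; // saved for CompleteMultipartUpload
        } catch {
          // network blip: loop around and retry this part only
        }
      }
      throw new Error('part failed after retries');
    }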
  3. AWS CLI: aws s3 cp

    The CLI uses multipart automatically above 8 MB. No flags needed for normal files; for very large ones, raise the part size with aws configure set default.s3.multipart_chunksize 64MB (this sets multipart_chunksize in your AWS config).
    aws s3 cp big.zip s3://bucket/big.zip
  4. AWS SDK: Upload from @aws-sdk/lib-storage

    Handles part splitting, parallelism, and retries for you. The right pattern for streaming uploads from a backend.
    import { S3Client } from '@aws-sdk/client-s3';
    import { Upload } from '@aws-sdk/lib-storage';
    import { createReadStream } from 'node:fs';

    const s3 = new S3Client({});

    const u = new Upload({
      client: s3,
      params: { Bucket: 'b', Key: 'big.zip', Body: createReadStream('big.zip') },
      queueSize: 4,               // parts uploaded in parallel
      partSize: 8 * 1024 * 1024,  // 8 MB per part
    });
    await u.done();
  5. Tune part size for very large files

    Default partSize in the SDK is 5 MB; raise it to 64–100 MB for files over 50 GB so you don't hit the 10,000-parts cap. Math: max object size = partSize × 10,000, so 5 MB parts top out just under 49 GB, while 64 MB parts reach 640 GB.
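
    A small illustrative helper (not from any SDK) for the arithmetic:

    // Smallest part size in MiB that keeps a file within 10,000 parts,
    // while respecting S3's 5 MB minimum part size.
    function minPartSizeMiB(fileBytes: number): number {
      const MAX_PARTS = 10_000;
      const MIN_PART_BYTES = 5 * 1024 * 1024;
      const needed = Math.max(Math.ceil(fileBytes / MAX_PARTS), MIN_PART_BYTES);
      return Math.ceil(needed / (1024 * 1024));
    }
    // minPartSizeMiB(500 * 1024 ** 3) === 52, so 64 MiB parts leave comfortable headroom.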
  6. Verify with HeadObject

    After the upload completes, confirm the size matches what you intended to upload. Note that a multipart ETag is not an MD5 of the whole file; it ends in -<part count>.
    aws s3api head-object --bucket b --key big.zip
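
    The same check through the SDK, reusing a client like the one in step 4 (bucket and key are placeholders):

    import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({});
    const head = await s3.send(new HeadObjectCommand({ Bucket: 'b', Key: 'big.zip' }));
    console.log(head.ContentLength, head.ETag); // multipart ETags end in -<part count>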
  7. Set a lifecycle rule to abort failed uploads

    Incomplete multipart uploads sit in the bucket and you pay for them until aborted. Add an S3 lifecycle rule to auto-abort uploads older than a few days — AWS recommends this as a cost-optimization best practice.
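
    A minimal sketch of that rule via the SDK (the bucket name and 7-day window are placeholder choices):

    import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({});
    await s3.send(new PutBucketLifecycleConfigurationCommand({
      Bucket: 'b',
      LifecycleConfiguration: {
        Rules: [{
          ID: 'abort-incomplete-multipart',
          Status: 'Enabled',
          Filter: { Prefix: '' }, // apply to the whole bucket
          AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
        }],
      },
    }));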
Under the hood

What's actually happening.

S3's PutObject caps at 5 GB and is non-resumable — a network blip on a 4 GB upload restarts from byte zero. Multipart upload (CreateMultipartUpload → many UploadPart calls → CompleteMultipartUpload) splits the file into parts (5 MB minimum except the last, 10,000 parts max), uploads them in parallel, and lets you retry only the failed parts. It supports objects up to 5 TB. S3 Viewer uses multipart above 50 MB by default with 25 MB part sizes and automatic retries; per-part URLs are presigned server-side so your browser uploads directly to the bucket — never through our infrastructure. The same pattern works against Cloudflare R2, MinIO, B2, and any other S3-compatible provider.
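
A bare-bones sketch of those three calls in sequence (real uploaders run the UploadPart calls in parallel and retry failures; the bucket, key, and parts array are placeholders):

import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({});
const Bucket = 'b', Key = 'big.zip';
const parts: Buffer[] = []; // chunks of the file, each at least 5 MB except the last

// 1. Open the multipart upload and get its UploadId.
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({ Bucket, Key }));

// 2. Upload each part; S3 returns an ETag per part that we must keep.
const completed: { ETag?: string; PartNumber: number }[] = [];
for (let PartNumber = 1; PartNumber <= parts.length; PartNumber++) {
  const { ETag } = await s3.send(new UploadPartCommand({
    Bucket, Key, UploadId, PartNumber, Body: parts[PartNumber - 1],
  }));
  completed.push({ ETag, PartNumber });
}

// 3. Stitch the parts into the final object.
await s3.send(new CompleteMultipartUploadCommand({
  Bucket, Key, UploadId,
  MultipartUpload: { Parts: completed },
}));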

FAQ

Common questions.

What's the max file size I can upload to S3?

5 GB with a single PutObject; 5 TB per object with multipart upload (up to 10,000 parts, each between 5 MB and 5 GB). Browser-based uploads in any dashboard run into network and timeout limits long before that, which is why multipart with presigned per-part URLs is the right pattern for large files — each part is its own request, retried independently.

What is multipart upload in S3?

A three-step S3 API flow — CreateMultipartUpload, UploadPart (in parallel, 5 MB minimum each except the last, up to 10,000 parts), and CompleteMultipartUpload. It splits a large file into pieces, uploads them in parallel, and lets you retry only the failed parts. Required for anything over 5 GB; recommended for anything over 100 MB.

What happens if my upload fails halfway?

With single PutObject, you start over from byte zero. With multipart upload, only the failed parts retry — completed parts stay completed. S3 Viewer retries failed parts automatically, so a flaky connection doesn't cost you the whole upload.

Do I get charged for failed multipart uploads?

Yes — incomplete multipart uploads are stored and billed at standard storage rates until you abort them. Set an S3 lifecycle rule to auto-abort incomplete uploads after a few days.
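
To see what's lingering before you set the rule, a quick sketch using the SDK's ListMultipartUploads (bucket name is a placeholder):

import { S3Client, ListMultipartUploadsCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});
const { Uploads } = await s3.send(new ListMultipartUploadsCommand({ Bucket: 'b' }));
for (const u of Uploads ?? []) {
  console.log(u.Key, u.UploadId, u.Initiated); // each entry is accruing storage charges
}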

Does Cloudflare R2 support multipart upload?

Yes. R2 implements the same S3 multipart upload API. The same multipart pattern (and the same SDKs) works against R2 buckets without changes — Cloudflare's S3 compatibility is what makes this possible.
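
Pointing the SDK at R2 is a client-level change only (the account ID and credentials below are placeholders):

import { S3Client } from '@aws-sdk/client-s3';

// Same commands and the same Upload helper work; only endpoint and region differ.
const r2 = new S3Client({
  region: 'auto',
  endpoint: 'https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
  credentials: { accessKeyId: '<KEY_ID>', secretAccessKey: '<SECRET>' },
});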

How do I upload large files to S3 from a browser safely?

Use presigned URLs for each part. Your backend calls CreateMultipartUpload once, issues a presigned UploadPart URL for each part, the browser uploads the parts directly to S3 in parallel, and the backend calls CompleteMultipartUpload to finish. S3 Viewer's web upload uses exactly this pattern — your file goes browser → S3 directly, the access key never reaches the client, and each presigned URL is short-lived.
Use S3 Viewer for this

Skip the CLI. Try it in the browser.

S3 Viewer turns the steps above into a single click. Open source, self-hostable, free for personal use.