How to upload to S3 securely with presigned URLs

TL;DR

Never ship AWS access keys to the browser. The standard pattern is presigned upload URLs: your backend signs a short-lived URL using the access key, the browser uploads directly to S3 with that URL, and the credential never leaves the server. For files over 5 GB (or anything you can't afford to restart), use multipart upload — one presigned URL per part, uploaded in parallel.

Steps

Step-by-step.

  1. Why not put the AWS key in the browser?

    Anything in your browser bundle is public — your access key ends up readable to anyone who opens DevTools. Even with scoped IAM, leaking the key lets attackers do anything allowed by that IAM policy until you rotate. Presigned URLs avoid the problem entirely by signing on the server and handing the browser only a short-lived URL.
  2. Single-file upload: one presigned PUT URL

    Your backend calls getSignedUrl with PutObjectCommand. The browser receives the URL and PUTs the file body directly to S3. The URL expires quickly (15 minutes is a good default) and only allows writing to the specific key.
    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
    import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
    
    const s3 = new S3Client({}); // region + credentials come from the server environment
    
    const url = await getSignedUrl(
      s3,
      new PutObjectCommand({
        Bucket: 'my-bucket',
        Key: 'uploads/' + filename,  // validate and namespace the key server-side
        ContentType: contentType,
      }),
      { expiresIn: 900 }, // 15 minutes
    );
  3. Browser PUT with the URL

    From the client: fetch(url, { method: 'PUT', body: file }). No access key in the page, no proxy through your server — the file bytes go browser → S3 directly. Send the same Content-Type header the server signed; it's covered by the signature, and it makes the object land with the right type for downloads later.
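A minimal client-side sketch of that PUT. It assumes `url` came back from your own endpoint and `file` is a Blob or File from an input or drop event (the function name is illustrative, not part of any SDK):

```javascript
// Upload the file bytes straight to S3 using the presigned URL.
// The Content-Type must match what the server signed, or S3
// rejects the request with a signature mismatch.
async function putToPresignedUrl(url, file, contentType) {
  const res = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': contentType },
    body: file,
  });
  if (!res.ok) {
    throw new Error(`Upload failed: ${res.status}`);
  }
  return res;
}
```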
  4. Large files: multipart with presigned per-part URLs

    Above ~100 MB, switch to multipart upload. Your backend calls CreateMultipartUpload, signs a presigned URL per part, and returns them to the client. The browser PUTs each part in parallel, collects ETags, and posts them back; your backend completes the upload. Failed parts retry independently.
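The client-side bookkeeping is plain arithmetic: slice the file, PUT each slice to its presigned URL, and keep the ETag S3 returns for each part. A sketch under assumptions — 25 MB parts, and a `urls` array of presigned per-part URLs already fetched from your backend:

```javascript
const PART_SIZE = 25 * 1024 * 1024; // assumed 25 MB parts (S3 minimum is 5 MB, except the last)

// Split a file size into part descriptors. S3 part numbers are 1-indexed.
function planParts(fileSize, partSize = PART_SIZE) {
  const parts = [];
  for (let start = 0, n = 1; start < fileSize; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, fileSize) });
  }
  return parts;
}

// PUT each slice to its presigned URL in parallel and collect the ETag
// response header — CompleteMultipartUpload needs { PartNumber, ETag }.
// Note: in browsers, your bucket's CORS config must expose the ETag header.
async function uploadParts(file, urls) {
  return Promise.all(
    planParts(file.size).map(async ({ partNumber, start, end }) => {
      const res = await fetch(urls[partNumber - 1], {
        method: 'PUT',
        body: file.slice(start, end),
      });
      return { PartNumber: partNumber, ETag: res.headers.get('ETag') };
    }),
  );
}
```

Because each part is signed and uploaded independently, a failed part retries on its own without restarting the whole file.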
  5. How S3 Viewer's upload uses this pattern

    When you drop a file into S3 Viewer above 50 MB, the API calls CreateMultipartUpload and signs a presigned URL for each 25 MB part. The browser uploads parts directly to S3 in parallel via those URLs — your access key, encrypted at rest server-side with RSA-4096 (PKCS1_OAEP, SHA-256), never leaves the server.
  6. Restrict the presigned URL further

    You can lock down the presigned URL with conditions: specific content-type, max size, exact key, expiry, source IP. Adding x-amz-meta-* conditions enforces metadata at upload time. The narrower the URL, the less damage if it leaks before expiry.
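A presigned PUT enforces whatever headers were signed; for richer conditions like a max size or a key prefix, S3's presigned POST policy is the usual tool. A sketch of a conditions array as it would be passed to `createPresignedPost` from `@aws-sdk/s3-presigned-post` — the limits, prefix, and metadata key are illustrative:

```javascript
const ownerId = 'user-123'; // hypothetical app user id, resolved server-side

// POST policy conditions: S3 rejects any upload that violates one of these.
const conditions = [
  ['content-length-range', 0, 25 * 1024 * 1024], // max 25 MB body
  ['eq', '$Content-Type', 'image/png'],          // exact content type
  ['starts-with', '$key', 'uploads/'],           // key must stay in this prefix
  ['eq', '$x-amz-meta-owner', ownerId],          // enforce metadata at upload time
];
```

The narrower the policy, the smaller the blast radius if the URL leaks before it expires.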
Under the hood

What's actually happening.

A presigned URL is a signed query-string version of an S3 request. The signature is computed on the server using your AWS access key, then the URL is handed to the browser — which can use it until it expires, for the exact operation it was signed for (presigned URLs are reusable within their validity window, which is one more reason to keep expiries short). For uploads, you sign a PutObjectCommand; for multipart, you sign one URL per part. The browser PUTs the bytes directly to S3, so the data path is browser → S3 with no proxy and no credential exposure. S3 Viewer's own upload UI uses this exact pattern: presigned per-part URLs above 50 MB, your encrypted credential signing them server-side, your browser never seeing the key.

FAQ

Common questions.

How do I upload to S3 without exposing my AWS key?

Use presigned upload URLs. Your backend signs a short-lived URL with your AWS access key (server-side, behind your auth), the browser uploads directly to S3 with that URL, and your credential never enters the page. The URL expires after the duration you set (typically 15 minutes); even if it leaks, the blast radius is one specific key for a few minutes.

Are presigned URLs secure?

They're a bearer token to perform one specific S3 operation, scoped by URL. As long as you keep the expiry short and lock down the conditions (specific key, content-type, max size), they're a strong primitive. The risk is leakage during the validity window — short expiries (15 minutes for upload, 1–24 hours for download) are the standard.

Can I use presigned URLs for large file uploads?

Yes, with multipart upload. Your backend issues one presigned URL per part (5 MB minimum, 10,000 parts max, 5 TB ceiling), the browser uploads parts in parallel directly to S3, and your backend calls CompleteMultipartUpload at the end. S3 Viewer uses this pattern automatically above 50 MB.

What's the maximum size of a presigned upload?

5 GB for a single PutObject presigned URL. For larger files, use multipart upload — one presigned URL per part, up to 10,000 parts of up to 5 GB each, subject to S3's 5 TB per-object ceiling.
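A practical consequence of the 10,000-part cap: for very large files the part size must grow so the count stays under the limit. A sketch of that calculation, assuming the 5 MB S3 minimum part size (the function name is illustrative):

```javascript
const MIN_PART = 5 * 1024 * 1024; // S3 minimum part size (except the last part)
const MAX_PARTS = 10000;          // S3 maximum parts per multipart upload

// Smallest part size that fits the file into at most 10,000 parts.
function choosePartSize(fileSize) {
  return Math.max(MIN_PART, Math.ceil(fileSize / MAX_PARTS));
}
```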

Do presigned URLs work with Cloudflare R2?

Yes. R2 implements the same S3 presigned URL pattern. Use the same SDK with the R2 endpoint configured (`endpoint: 'https://<account>.r2.cloudflarestorage.com'`); everything else stays the same.

How short should the expiry be?

As short as your upload reliably finishes. 15 minutes is a good default for browser uploads. For multipart, give each part-URL the same short expiry — the browser uploads parts quickly enough that 15 minutes covers it. Long expiries are mostly for download links, where the recipient opens it later; note that SigV4 caps any presigned URL at 7 days.
Use S3 Viewer for this

Skip the CLI. Try it in the browser.

S3 Viewer turns the steps above into a single click. Open source, self-hostable, free for personal use.