How to copy files between AWS S3 and Cloudflare R2

TL;DR

Cloudflare R2 implements the S3 API, so any S3 client can read from one and write to the other. The naive way is to download the object to your machine and re-upload it — slow, fragile, and bandwidth-expensive. The right way is server-side: stream the bytes directly from the source bucket to the destination. S3 Viewer wraps that in a single right-click; if you're scripting it, use the AWS CLI with `--endpoint-url` against R2 and stream through your own server.

Steps

Step-by-step.

  1. In S3 Viewer: connect both providers

    Add your AWS S3 credentials and your Cloudflare R2 token. S3 Viewer detects each provider from the endpoint and applies the right region settings. Both buckets show up in the same sidebar.
  2. Right-click the object → Copy or Move

    Pick the destination bucket on the other provider and confirm the new key. S3 Viewer streams the bytes server-side from the source to the destination using each provider's S3 API. Metadata and tags are preserved by default.
  3. Why CopyObject doesn't work across providers

    S3's CopyObject is server-side, but it requires source and destination to share an endpoint — by design. Cross-provider copies need separate read and write operations. S3 Viewer does both on the server, so the bytes never touch your local machine.
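
    To make the failure mode concrete, here's a minimal sketch, assuming an S3 client `r2` pointed at your R2 endpoint as in step 05. CopySource is resolved by the endpoint you call, so R2 looks for the source bucket inside your own R2 account, not on AWS:
    // Sketch only: `r2` is an S3 client configured for the R2 endpoint.
    await r2.copyObject({
      CopySource: 'aws-bucket/key.zip', // resolved against R2, not AWS
      Bucket: 'r2-bucket',
      Key: 'key.zip',
    }); // typically rejected with NoSuchBucket: R2 has no 'aws-bucket'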
  4. AWS CLI: use --endpoint-url for R2

    For scripts, use two AWS profiles — one for AWS, one for R2 with --endpoint-url. Stream the object through a pipe so the bytes don't persist locally.
    aws s3 cp s3://aws-bucket/key.zip - --profile aws |
      aws s3 cp - s3://r2-bucket/key.zip \
        --profile r2 \
        --endpoint-url https://<account>.r2.cloudflarestorage.com
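
    If the two profiles aren't set up yet, here's a minimal sketch of the config files; every value is a placeholder to substitute with your own:
    # ~/.aws/config
    [profile aws]
    region = us-east-1

    [profile r2]
    region = auto

    # ~/.aws/credentials
    [aws]
    aws_access_key_id = <AWS access key id>
    aws_secret_access_key = <AWS secret key>

    [r2]
    aws_access_key_id = <R2 token access key id>
    aws_secret_access_key = <R2 token secret>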
  5. AWS SDK: stream-to-stream

    Two clients, one stream. GetObject from the source returns a readable body; pipe it into Upload from @aws-sdk/lib-storage against the destination client. Multipart kicks in automatically for large files.
    import { S3 } from '@aws-sdk/client-s3';
    import { Upload } from '@aws-sdk/lib-storage';

    // Source client: plain AWS. Credentials come from the environment.
    const aws = new S3({ region: 'us-east-1' });
    // Destination client: same S3 API, different endpoint (R2 token creds).
    const r2 = new S3({
      region: 'auto',
      endpoint: 'https://<account>.r2.cloudflarestorage.com',
    });

    // GetObject returns Body as a stream, not a buffer.
    const src = await aws.getObject({
      Bucket: 'aws-bucket', Key: 'key.zip',
    });
    // Pipe it into the destination; multipart kicks in automatically.
    await new Upload({
      client: r2,
      params: {
        Bucket: 'r2-bucket', Key: 'key.zip', Body: src.Body,
      },
    }).done();
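
    One caveat worth a variant: a plain Get + Put drops metadata unless you re-send it, because there is no COPY directive on a cross-provider write (see the FAQ below). GetObject's response exposes ContentType, CacheControl, and Metadata fields you can forward on the upload:
    // Variant: forward the source object's metadata explicitly.
    await new Upload({
      client: r2,
      params: {
        Bucket: 'r2-bucket', Key: 'key.zip', Body: src.Body,
        ContentType: src.ContentType,
        CacheControl: src.CacheControl,
        Metadata: src.Metadata, // user-defined x-amz-meta-* keys
      },
    }).done();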
  6. For huge or recurring transfers: rclone

    For one-time bulk migrations or scheduled syncs across many keys, rclone sync is the workhorse. It handles multipart, retries, parallelism, and resumes from where it left off. The aws: and r2: remotes are defined once in rclone's config; see the sketch after the command.
    rclone sync aws:my-bucket r2:my-bucket --progress
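
    A minimal rclone.conf defining those two remotes might look like this; every value is a placeholder:
    [aws]
    type = s3
    provider = AWS
    region = us-east-1
    access_key_id = <AWS access key id>
    secret_access_key = <AWS secret key>

    [r2]
    type = s3
    provider = Cloudflare
    endpoint = https://<account>.r2.cloudflarestorage.com
    access_key_id = <R2 token access key id>
    secret_access_key = <R2 token secret>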
Under the hood

What's actually happening.

S3's native CopyObject is server-side but requires source and destination to share an endpoint — it doesn't work across providers. Cross-cloud copies become a Get-then-Put pair: read from source, write to destination. The question is where the bytes live in transit. Local download + re-upload is fine for one file, slow for many. S3 Viewer streams source → destination through its own server with metadata and tags preserved by default, so the bytes never touch your machine. For bulk migrations, rclone sync is the standard tool — it parallelizes, retries, and resumes.

FAQ

Common questions.

How do I copy a file from AWS S3 to Cloudflare R2?

Easiest in a UI: connect both providers in S3 Viewer, right-click the object, and choose Copy. S3 Viewer streams the bytes server-side from S3 to R2 — no local download, metadata and tags preserved by default. From the AWS CLI: pipe `aws s3 cp` from one profile through to another with `--endpoint-url` set to the R2 endpoint. For bulk migrations: use rclone sync.

Can I do a server-side copy across providers?

Not with a single CopyObject API call — that requires source and destination to share an endpoint. Cross-provider copies have to read from source and write to destination as two operations. S3 Viewer does this on its server so the bytes don't pass through your local machine; rclone does the equivalent on whichever box you run it on.

Does cross-cloud copy in S3 Viewer cost extra?

You pay AWS egress on the read from S3; on the R2 side there's no per-GB ingress fee, just R2's per-request (Class A) pricing on the writes. And since Cloudflare R2 charges zero egress, reading the data back out later is free, which is part of why this pattern is popular. S3 Viewer doesn't charge anything per byte; the streaming happens on our server, and no object data is retained.

Will my object's metadata and tags survive the copy?

Yes, if your tool explicitly carries them over. Within a single provider, CopyObject's MetadataDirective and TaggingDirective default to COPY; a cross-provider Get + Put has no directives at all, so unless the tool re-sends metadata and tags on the destination write (see the variant in step 05), they are silently dropped. The CLI pipe in step 04, for example, loses them. S3 Viewer carries both over by default, so cache-control headers, content-type, and object tags survive the copy.

What about copying R2 to S3, or S3 to MinIO?

Same pattern, any direction — R2 to S3, B2 to MinIO, MinIO to Wasabi. Every S3-compatible provider supports the same Get + Put primitives, and S3 Viewer keeps a separate signer per provider so credentials never cross over.

How do I migrate millions of objects from S3 to R2?

rclone is the right tool. `rclone sync source:bucket dest:bucket` parallelizes the copy, handles multipart, retries failed parts, and resumes from where it left off. For incremental syncs, run it on a schedule. AWS DataSync is another option for staying inside the AWS ecosystem.
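
At that scale, raising rclone's parallelism pays off, and --fast-list trades memory for far fewer List calls. These flag values are starting points to tune, not recommendations:

    rclone sync aws:my-bucket r2:my-bucket \
      --transfers 64 --checkers 128 \
      --fast-list --progress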
Use S3 Viewer for this

Skip the CLI. Try it in the browser.

S3 Viewer turns the steps above into a single click. Open source, self-hostable, free for personal use.