How to copy files between AWS S3 and Cloudflare R2
Cloudflare R2 implements the S3 API, so any S3 client can read from one and write to the other. The naive way is to download the object to your machine and re-upload it — slow, fragile, and bandwidth-expensive. The right way is server-side: stream the bytes directly from the source bucket to the destination. S3 Viewer wraps that in a single right-click; if you're scripting it, use the AWS CLI with `--endpoint-url` against R2 and stream through your own server.
Step-by-step.
1. In S3 Viewer: connect both providers

Add your AWS S3 credentials and your Cloudflare R2 token. S3 Viewer detects each provider from the endpoint and applies the right region settings. Both buckets show up in the same sidebar.
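If you'd rather script the same setup, the two connections map onto two SDK clients. A minimal sketch with the AWS SDK for JavaScript v3: the client names, AWS region, and env-var names are illustrative, while the endpoint shape and R2's `auto` region follow Cloudflare's docs.

```ts
import { S3 } from '@aws-sdk/client-s3';

// AWS side: credentials and region come from your profile or environment.
const aws = new S3({ region: 'us-east-1' });

// R2 side: same API, different endpoint. R2 expects the literal region 'auto'.
const r2 = new S3({
  region: 'auto',
  endpoint: 'https://<account>.r2.cloudflarestorage.com',
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!, // illustrative env var names
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});
```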
2. Right-click the object → Copy or Move

Pick the destination bucket on the other provider and confirm the new key. S3 Viewer streams the bytes server-side from the source to the destination using each provider's S3 API. Metadata and tags are preserved by default.
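For a sense of what "preserved by default" means at the API level: the byte streaming itself is the Get-then-Put pair shown in step 5, and the preservation part looks roughly like the hedged sketch below, with illustrative bucket and key names, assuming the destination implements the S3 tagging endpoints.

```ts
// `aws` and `r2` are the S3 clients from the step-1 sketch above.
// GetObject's response carries ContentType and user Metadata alongside Body;
// pass them through as params on the destination write (see step 5):
//   ContentType: src.ContentType,
//   Metadata: src.Metadata,

// Tags are not part of GetObject; they need their own round-trip.
const tags = await aws.getObjectTagging({ Bucket: 'aws-bucket', Key: 'key.zip' });
await r2.putObjectTagging({
  Bucket: 'r2-bucket',
  Key: 'key.zip',
  Tagging: { TagSet: tags.TagSet ?? [] },
});
```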
3. Why CopyObject doesn't work across providers

S3's `CopyObject` is server-side, but it requires source and destination to share an endpoint — by design. Cross-provider copies need separate read and write operations. S3 Viewer does both on the server, so the bytes never touch your local machine.
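For contrast, here is the single-endpoint case as a sketch with illustrative bucket names: `CopySource` names a bucket that must be reachable by the same endpoint serving the request, which is exactly what a second provider breaks.

```ts
// Server-side copy within one provider: both buckets sit behind the
// endpoint that `aws` points at, so no bytes leave AWS.
await aws.copyObject({
  CopySource: 'aws-bucket/key.zip', // "sourceBucket/sourceKey", URL-encoded
  Bucket: 'other-aws-bucket',
  Key: 'key.zip',
});

// There is no CopySource value that names an R2 bucket to AWS (or vice
// versa), so cross-provider copies fall back to a read plus a write.
```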
4. AWS CLI: use --endpoint-url for R2

For scripts, use two AWS profiles — one for AWS, one for R2 with `--endpoint-url`. Stream the object through a pipe so the bytes don't persist locally.

```sh
aws s3 cp s3://aws-bucket/key.zip - --profile aws \
  | aws s3 cp - s3://r2-bucket/key.zip \
      --profile r2 \
      --endpoint-url https://<account>.r2.cloudflarestorage.com
```
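One caveat with the pipe: when `aws s3 cp` uploads from stdin it cannot stat the stream, so for very large objects pass `--expected-size` (a byte count) on the upload side; the CLI uses it to pick multipart part sizes that stay within S3's part-count limit.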
5. AWS SDK: stream-to-stream

Two clients, one stream. `GetObject` from the source returns a readable body; pipe it into `Upload` from `@aws-sdk/lib-storage` against the destination client. Multipart kicks in automatically for large files.

```ts
import { Upload } from '@aws-sdk/lib-storage';

// `aws` and `r2` are the S3 clients from the step-1 sketch above.
const src = await aws.getObject({
  Bucket: 'aws-bucket',
  Key: 'key.zip',
});

await new Upload({
  client: r2,
  params: {
    Bucket: 'r2-bucket',
    Key: 'key.zip',
    Body: src.Body,
  },
}).done();
```
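`Upload` also takes `partSize` and `queueSize` options to tune chunk size and upload parallelism, and by default it aborts the multipart upload on failure rather than leaving orphaned parts behind (`leavePartsOnError: false`).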
6. For huge or recurring transfers: rclone

For one-time bulk migrations or scheduled syncs across many keys, `rclone sync` is the workhorse. It handles multipart, retries, parallelism, and resumes from where it left off.

```sh
rclone sync aws:my-bucket r2:my-bucket --progress
```
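The `aws:` and `r2:` prefixes are remotes you define once with `rclone config`: both use the `s3` backend, and the R2 remote points its endpoint at the same `https://<account>.r2.cloudflarestorage.com` URL from step 4 (rclone also has a dedicated `Cloudflare` provider option). Tune `--transfers` and `--checkers` to trade throughput against API call volume.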
What's actually happening.
S3's native `CopyObject` is server-side but requires source and destination to share an endpoint — it doesn't work across providers. Cross-cloud copies become a Get-then-Put pair: read from source, write to destination. The question is where the bytes live in transit. Local download + re-upload is fine for one file, slow for many. S3 Viewer streams source → destination through its own server with metadata and tags preserved by default, so the bytes never touch your machine. For bulk migrations, `rclone sync` is the standard tool — it parallelizes, retries, and resumes.
Common questions.
How do I copy a file from AWS S3 to Cloudflare R2?
Can I do a server-side copy across providers?
Does cross-cloud copy in S3 Viewer cost extra?
Will my object's metadata and tags survive the copy?
What about copying R2 to S3, or S3 to MinIO?
How do I migrate millions of objects from S3 to R2?
Skip the CLI. Try it in the browser.
S3 Viewer turns the steps above into a single click. Open source, self-hostable, free for personal use.
More how-tos
View S3 + R2 together
One credential per provider, one sidebar, Cmd-K to jump, and right-click to copy across clouds.
Upload large files
Multipart upload — part sizes, parallelism, retries, and the 5 GB single-PUT cap that pushes you to multipart.
Rename an S3 file
Amazon S3 has no rename API — keys are immutable. Here's the standard copy + delete pattern, with metadata and tags preserved.