AuraImage

Migration

Move existing images from Cloudinary, S3/R2, local static files, or any CDN to AuraImage.

Migrating to AuraImage means uploading your existing images through the standard upload flow and updating the URLs in your code. Your image metadata (alt text, captions) lives in your app's database — it doesn't move.

Option 1 — Agent-assisted migration (MCP)

For local images in /public or similar directories, the fastest path is the migrate_assets MCP tool.

Tell your agent:

"Migrate all images in /public/assets to my AuraImage project my-app."

The agent uploads each image and rewrites the surrounding JSX/TSX to use <AuraImage /> with the correct src, width, and height in one step. For remote sources (Cloudinary, S3, etc.), tell the agent the source and it will fetch-and-upload each one.
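As an illustration, the rewrite of a plain <img> tag might look like this (the URL and dimensions below are made-up placeholders; the agent fills in the real values from the upload response):

```tsx
// Before
<img src="/public/assets/hero.jpg" alt="Team hero shot" />

// After — URL and dimensions shown here are illustrative
<AuraImage
  src="https://cdn.auraimage.ai/my-app/abc123xyz0-hero.jpg"
  width={2400}
  height={1600}
  alt="Team hero shot"
/>
```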

See AI Integration for setup instructions and the full tool reference.

Option 2 — Manual migration script

Install the SDK and p-limit for concurrency control:

npm install @auraimage/sdk p-limit

From Cloudinary

scripts/migrate-from-cloudinary.ts
import { v2 as cloudinary } from 'cloudinary';
import { AuraImage } from '@auraimage/sdk';
import pLimit from 'p-limit';

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME!,
  api_key: process.env.CLOUDINARY_API_KEY!,
  api_secret: process.env.CLOUDINARY_API_SECRET!,
});

const aura = new AuraImage({
  secretKey: process.env.AURA_SECRET_KEY!,
  projectName: process.env.NEXT_PUBLIC_AURA_PROJECT_NAME!,
});

const limit = pLimit(10);

async function migrateOne(publicId: string, sourceUrl: string) {
  const imageRes = await fetch(sourceUrl);
  if (!imageRes.ok) throw new Error(`Fetch failed for ${sourceUrl}: ${imageRes.status}`);
  const blob = await imageRes.blob();
  // Derive the filename (with its real extension) from the delivery URL
  // instead of assuming every asset is a .jpg.
  const filename = sourceUrl.split('/').pop()!.split('?')[0];

  const signature = await aura.signUpload({ maxSize: '50mb', allowedTypes: ['image/*'], expiresIn: 300 });
  const formData = new FormData();
  formData.append('file', blob, filename);
  formData.append('filename', filename);

  const res = await fetch('https://cdn.auraimage.ai/v1/upload', {
    method: 'POST',
    headers: { 'X-Aura-Signature': signature },
    body: formData,
  });

  const { url, key } = await res.json();
  console.log(`${publicId} → ${url}`);
  return { publicId, url, key };
}

async function main() {
  let nextCursor: string | undefined;
  const results: { publicId: string; url: string; key: string }[] = [];

  do {
    const { resources, next_cursor } = await cloudinary.api.resources({
      type: 'upload',
      max_results: 500,
      next_cursor: nextCursor,
    });
    nextCursor = next_cursor;

    const batch = resources.map((r: { public_id: string; secure_url: string }) =>
      limit(() => migrateOne(r.public_id, r.secure_url))
    );
    results.push(...(await Promise.all(batch)));
  } while (nextCursor);

  console.log(`Migrated ${results.length} images.`);
}

main();

From S3 or R2

scripts/migrate-from-s3.ts
import { S3Client, ListObjectsV2Command, GetObjectCommand } from '@aws-sdk/client-s3';
import { AuraImage } from '@auraimage/sdk';
import pLimit from 'p-limit';

const s3 = new S3Client({ region: process.env.AWS_REGION! });
const BUCKET = process.env.S3_BUCKET!;

const aura = new AuraImage({
  secretKey: process.env.AURA_SECRET_KEY!,
  projectName: process.env.NEXT_PUBLIC_AURA_PROJECT_NAME!,
});

const limit = pLimit(10);
const IMAGE_EXTS = new Set(['.jpg', '.jpeg', '.png', '.webp', '.avif', '.gif']);

async function migrateOne(key: string) {
  const { Body, ContentType } = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: key }));
  const filename = key.split('/').pop()!;
  const bytes = await Body!.transformToByteArray();
  const blob = new Blob([bytes], { type: ContentType });

  const signature = await aura.signUpload({ maxSize: '50mb', allowedTypes: ['image/*'], expiresIn: 300 });
  const formData = new FormData();
  formData.append('file', blob, filename);
  formData.append('filename', filename);

  const res = await fetch('https://cdn.auraimage.ai/v1/upload', {
    method: 'POST',
    headers: { 'X-Aura-Signature': signature },
    body: formData,
  });

  const { url } = await res.json();
  console.log(`${key} → ${url}`);
}

async function main() {
  let token: string | undefined;

  do {
    const { Contents = [], NextContinuationToken } = await s3.send(
      new ListObjectsV2Command({ Bucket: BUCKET, ContinuationToken: token })
    );
    token = NextContinuationToken;

    const images = Contents.filter((obj) => {
      const ext = '.' + obj.Key!.split('.').pop()!.toLowerCase();
      return IMAGE_EXTS.has(ext);
    });

    await Promise.all(images.map((obj) => limit(() => migrateOne(obj.Key!))));
  } while (token);
}

main();

For Cloudflare R2, swap the S3 client constructor:

const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

From local static files

Use the aura upload CLI — it handles signing, uploads, and structured output in one command:

aura upload ./public/images --project-name my-app --json > migration-map.ndjson

The output is newline-delimited JSON, one object per file:

{"filename": "hero.jpg", "url": "https://cdn.auraimage.ai/my-app/abc123xyz0-hero.jpg", "key": "abc123xyz0-hero.jpg", "width": 2400, "height": 1600}
{"filename": "avatar.png", "url": "https://cdn.auraimage.ai/my-app/def456abc1-avatar.png", "key": "def456abc1-avatar.png", "width": 256, "height": 256}

Use migration-map.ndjson in the next step to rewrite your URL references.
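A small helper can turn that NDJSON output into an old-path → new-URL map for the rewrite step. This is a sketch: the /images/ prefix is an assumption — use whatever path prefix your app serves local files from.

```typescript
// Build an old-path → new-URL map from the CLI's NDJSON output.
// The "/images/" default prefix is an assumption — adjust it to how
// your app references local files.
function buildMigrationMap(ndjson: string, oldPrefix = '/images/'): Map<string, string> {
  const map = new Map<string, string>();
  for (const line of ndjson.split('\n')) {
    if (!line.trim()) continue; // skip blank lines (e.g. trailing newline)
    const { filename, url } = JSON.parse(line) as { filename: string; url: string };
    map.set(oldPrefix + filename, url);
  }
  return map;
}
```

Read migration-map.ndjson with fs.readFileSync and pass its contents to buildMigrationMap, then feed the map to your find-and-replace.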

From any CDN (generic)

Fetch by URL and upload:

scripts/migrate-from-urls.ts
import { AuraImage } from '@auraimage/sdk';
import pLimit from 'p-limit';

const aura = new AuraImage({
  secretKey: process.env.AURA_SECRET_KEY!,
  projectName: process.env.NEXT_PUBLIC_AURA_PROJECT_NAME!,
});

const limit = pLimit(10);

// Replace with your list of image URLs
const urls = [
  'https://images.imgix.net/my-account/hero.jpg',
  'https://images.imgix.net/my-account/avatar.png',
];

async function migrateOne(sourceUrl: string) {
  const imageRes = await fetch(sourceUrl);
  const blob = await imageRes.blob();
  const filename = sourceUrl.split('/').pop()!.split('?')[0];

  const signature = await aura.signUpload({ maxSize: '50mb', allowedTypes: ['image/*'], expiresIn: 300 });
  const formData = new FormData();
  formData.append('file', blob, filename);
  formData.append('filename', filename);

  const res = await fetch('https://cdn.auraimage.ai/v1/upload', {
    method: 'POST',
    headers: { 'X-Aura-Signature': signature },
    body: formData,
  });

  const { url } = await res.json();
  console.log(`${sourceUrl} → ${url}`);
  return { old: sourceUrl, new: url };
}

async function main() {
  const results = await Promise.all(urls.map((url) => limit(() => migrateOne(url))));
  console.log(`Migrated ${results.length} images.`);
}

main();

Update your URL references

After uploading, swap the old CDN prefix for the new AuraImage URL in your codebase.

Before → After

https://res.cloudinary.com/my-account/image/upload/ → https://cdn.auraimage.ai/my-project/
https://my-bucket.s3.amazonaws.com/images/ → https://cdn.auraimage.ai/my-project/
https://my-account.imgix.net/ → https://cdn.auraimage.ai/my-project/
/public/images/ → https://cdn.auraimage.ai/my-project/

The filename becomes <10-char-hash>-<original-name> — use the key field from the upload response (or migration-map.ndjson from the CLI) to build the exact mapping. Then run a codebase-wide find-and-replace, or hand the map to your agent:

"Use migration-map.ndjson to rewrite all image src paths in my JSX from /public/images/ to the new AuraImage URLs."
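If you'd rather script it, the substitution itself is a literal string replace. A minimal sketch of the core step — walking and globbing your source files is left to your own tooling:

```typescript
// Replace every old image URL in a source string using an old → new map.
function rewriteSource(source: string, map: Record<string, string>): string {
  let out = source;
  for (const [oldUrl, newUrl] of Object.entries(map)) {
    out = out.split(oldUrl).join(newUrl); // literal (non-regex) replace-all
  }
  return out;
}
```

Run it over each JSX/TSX/CSS file and write the file back only when the content actually changed, so unrelated files keep their timestamps.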

Zero-downtime incremental migration

For large libraries where you can't migrate everything at once:

  1. Keep your old CDN live — don't change any URLs yet.
  2. Run the migration script in batches. Collect the { old, new } URL mapping.
  3. Persist the mapping in your database or a JSON file alongside your app.
  4. In your image-rendering code, look up each stored URL — serve the AuraImage URL if present, fall back to the old URL if not yet migrated.
  5. Once all images have new URLs, remove the fallback and decommission the old CDN.
The lookup in step 4 can be as simple as:

function resolveImageUrl(storedUrl: string): string {
  // Prefer the migrated AuraImage URL; fall back to the old URL
  // until this image has been migrated.
  return migrationMap[storedUrl] ?? storedUrl;
}

This keeps the site fully functional throughout the migration with no downtime and no deployment coordination required.


On metadata: Alt text, captions, and other image data live in your application's database or CMS — not in AuraImage. Nothing to migrate there; your existing references continue to work against whichever URL you assign.