Softening the stack
We used to treat servers like pets. Then we learned to treat them like cattle. Now, my sense is that we are moving toward a model where we do not think about the servers at all. We are simply consuming infrastructure as a fluid medium.
I want to walk through how a few specific shifts (Bun’s native S3 client and the Vercel AI SDK’s standardization) have collapsed the complexity required to build production AI applications. The result is a “no-database” architecture that relies entirely on an object store for both assets and metadata.
What we used to deal with
Building an image generation app two years ago meant fighting infrastructure instead of writing features. The typical stack looked like this:
- Heavy SDKs: You had to pull in the official AWS S3 SDK. It was a slog. The API required you to instantiate clients, manually construct command objects, and handle credentials in a way that felt surprisingly brittle.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
const client = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
  }
})
const command = new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'photo.jpg',
  Body: buffer,
  ContentType: 'image/jpeg'
})
await client.send(command)
- Database tax: You needed a Postgres instance just to track a prompt and a file URL. That meant setting up an ORM (like Prisma), managing migrations, and paying for a database that was mostly empty (see the sketch after this list).
- Dashboard fatigue: You were jumping between Vercel, Supabase, the AWS Console, and Replicate. It felt less like engineering and more like systems integration.
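For comparison, here is roughly what the database side of that old stack looked like. This is an illustrative sketch, not code from a real project: the Generation model and the recordGeneration helper are hypothetical, but they capture the overhead of persisting two strings.
// Hypothetical Prisma setup just to persist a prompt and a file URL.
// schema.prisma (illustrative):
//   model Generation {
//     id        String   @id @default(uuid())
//     prompt    String
//     url       String
//     createdAt DateTime @default(now())
//   }
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// One row per generation, plus a Postgres instance to host it,
// migrations to run, and a connection pool to babysit.
export async function recordGeneration(prompt: string, url: string) {
  return prisma.generation.create({ data: { prompt, url } })
}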
The new, simpler stack
A few key updates have landed that allow us to delete most of that boilerplate.
Bun S3 client (native support)
Bun added a native S3 client. It is not an npm package. It is baked into the runtime, exactly like fetch. This seemingly small change is philosophically huge because it treats storage as a standard library capability rather than a third-party concern.
The verbose AWS code collapses into this:
import { S3Client } from 'bun'
const s3 = new S3Client({ bucket: 'my-bucket' })
await s3.write('photo.jpg', buffer, { type: 'image/jpeg' })
Railway object storage
Railway’s move to support native Object Storage (Volumes) solved the configuration drift. Instead of managing IAM policies in the AWS console (a notorious productivity killer), you simply attach a volume in the Railway dashboard. The environment variables inject themselves. It just works.
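Under the hood, Bun's client reads the standard S3 environment variables (S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY, S3_ENDPOINT, S3_BUCKET, with AWS_-prefixed fallbacks), so however your platform injects credentials, wiring them up is at most a couple of lines. A minimal sketch; the exact variable names your provider exposes may differ:
import { s3, S3Client } from 'bun'

// Bun's default client picks up S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY,
// S3_ENDPOINT and S3_BUCKET (or their AWS_* equivalents) from the
// environment, so with injected credentials this needs no configuration:
await s3.write('healthcheck.txt', 'ok')

// If the injected names differ, map them explicitly:
const client = new S3Client({
  accessKeyId: process.env.S3_ACCESS_KEY_ID,
  secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
  endpoint: process.env.S3_ENDPOINT, // non-AWS object stores need an endpoint
  bucket: process.env.S3_BUCKET,
})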
AI SDK image generation
The Vercel AI SDK (specifically the Core API) introduced experimental_generateImage in its 4.1 release. Before this, we had to manually poll provider APIs to check whether a GPU had finished rendering. Now, that complexity is abstracted away into a single awaitable call.
import { experimental_generateImage as generateImage } from 'ai'
import { createReplicate } from '@ai-sdk/replicate'
// We use a factory function to pass the user's key dynamically
const replicate = createReplicate({
apiToken: userProvidedKey
})
const { image } = await generateImage({
model: replicate.image('black-forest-labs/flux-schnell'),
prompt: 'purple cow eating ice cream',
aspectRatio: '16:9'
})
https://sdk.vercel.ai/docs/reference/ai-sdk-core/generate-image
The no-database approach
The most radical part of this stack is the absence of a database.
When we generate an image, we store two files in the S3 bucket (the .png image and a .json metadata file). The object store effectively becomes our index.
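Concretely, the metadata sidecar is tiny. A type for the shape we write (the fields match the handler below):
// Shape of the `${id}.json` object. The image lives at `${id}.png`
// under the same id, so the key itself is the only "foreign key".
interface GenerationMetadata {
  id: string        // crypto.randomUUID(), shared with the .png key
  prompt: string    // kept so the history view can show it
  createdAt: string // ISO 8601 timestamp, used for sorting
}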
Writing data
The /api/generate endpoint accepts a prompt and a temporary API key. We use the key once for inference and then discard it.
import { S3Client } from 'bun'
import { experimental_generateImage as generateImage } from 'ai'
import { createReplicate } from '@ai-sdk/replicate'
const s3 = new S3Client({ bucket: process.env.S3_BUCKET })
export async function POST(req) {
const { prompt, apiKey } = await req.json()
// 1. Generate the image
const replicate = createReplicate({ apiToken: apiKey })
const { image } = await generateImage({
model: replicate.image('black-forest-labs/flux-schnell'),
prompt,
})
// 2. Create a unique ID
const id = crypto.randomUUID()
const imageBuffer = Buffer.from(image.base64, 'base64')
// 3. Write image and metadata to S3 in parallel
await Promise.all([
s3.write(`${id}.png`, imageBuffer, { type: 'image/png' }),
s3.write(`${id}.json`, JSON.stringify({
id,
prompt,
createdAt: new Date().toISOString(),
}), { type: 'application/json' })
])
return Response.json({ id, status: 'success' })
}
https://bun.sh/docs/api/s3#writing-files
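Calling this from the browser is a single fetch. The route path and response shape mirror the handler above; the helper name and the way the UI collects the user's Replicate key are illustrative.
// Client-side helper for the endpoint above. The key is sent once
// per request and never persisted on the server.
async function generate(prompt: string, apiKey: string) {
  const res = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, apiKey }),
  })
  if (!res.ok) throw new Error(`Generation failed: ${res.status}`)
  return res.json() // { id, status }
}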
Reading data
To fetch history, we just list the bucket. It feels like traversing a file system because, in Bun, it essentially is.
import { S3Client } from 'bun'
const s3 = new S3Client({ bucket: process.env.S3_BUCKET })
export async function GET() {
// List all files
const result = await s3.list()
// Filter for metadata files
const metadataFiles = (result.contents ?? []).filter(f => f.key.endsWith('.json'))
// Read them in parallel
const history = await Promise.all(
metadataFiles.map(async (fileInfo) => {
const file = s3.file(fileInfo.key)
return await file.json()
})
)
// Sort by date and return
const sorted = history.sort((a, b) =>
  new Date(b.createdAt).getTime() - new Date(a.createdAt).getTime()
)
return Response.json(sorted.slice(0, 20))
}
https://bun.sh/docs/api/s3#reading-files
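The history endpoint only returns metadata. To actually render the images, one option is a small route that hands out short-lived links using Bun's presign support; the route path and the id query parameter below are illustrative, not part of the app described above.
import { S3Client } from 'bun'

const s3 = new S3Client({ bucket: process.env.S3_BUCKET })

// GET /api/image-url?id=<uuid> -> a temporary URL for the stored PNG.
export async function GET(req) {
  const id = new URL(req.url).searchParams.get('id')
  if (!id) return new Response('Missing id', { status: 400 })

  // presign() signs locally and returns a URL string; no network call.
  const url = s3.file(`${id}.png`).presign({ expiresIn: 3600 }) // seconds
  return Response.json({ url })
}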
Sanity check
One might wonder if this is reliable. Since we are using Bun.S3Client, we can verify our persistence layer with a simple script running locally.
// check-storage.ts
import { S3Client } from 'bun'
const s3 = new S3Client({ bucket: process.env.S3_BUCKET })
console.log('Checking bucket connection...')
const result = await s3.list()
const jsonCount = (result.contents ?? []).filter(f => f.key.endsWith('.json')).length
const pngCount = (result.contents ?? []).filter(f => f.key.endsWith('.png')).length
console.log(`Found ${jsonCount} metadata files`)
console.log(`Found ${pngCount} images`)
if (jsonCount !== pngCount) {
console.error('WARNING: Data mismatch detected.')
} else {
console.log('Sanity check passed: Data is consistent.')
}
https://bun.sh/docs/api/s3#listing-files
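Run it with bun check-storage.ts; Bun executes TypeScript directly and loads a local .env, so the same S3 credentials the app uses work unchanged here.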
Why this matters
We are seeing capabilities diffuse downwards. What used to be user-space libraries (like fetch or S3 clients) are sinking into the platform itself.
This shift reduces supply chain risk and, perhaps more importantly, cognitive load. The stack feels solid rather than assembled. It allows us to operate at a higher level of abstraction (what I call “vibe coding” infrastructure) where we focus purely on the inputs and outputs.
The “no database” pattern is valid for more use cases than we admit. We are conditioned to reach for Postgres immediately. But for append-only workloads (like AI generation logs), S3 is durable, cheap, and arguably more robust than a managed SQL instance you have to maintain.
My take on the future
So that brings us to where we are today.
Production readiness is often about removing moving parts. If you can remove the database and the external SDKs, you have fewer things that can break in the middle of the night.
- Viability: Is S3 a database? For high-read, append-only content, absolutely. It eliminates the “database tax” of maintenance.
- Lock-in: While import { S3Client } from 'bun' ties you to the runtime, the logic is standard. Refactoring back to Node.js would be trivial.
- Search: This is the trade-off. If you need complex filtering, you need a search index (like Meilisearch). But for simple lists, this is enough.
We are softening the stack until it disappears. And I think that is exactly where we need to be.