
Why We Ditched Our CMS and Made Content Part of the Codebase
When we started rebuilding the 56k.Cloud website in late 2025, this time as ACP Engineering AG, we faced a choice every engineering team eventually confronts: keep the familiar stack, or take the opportunity to build something better. We chose to rebuild.
The old setup
The previous site ran on Strapi v4 as the headless CMS paired with Next.js on the frontend. Strapi lived on its own server at cms.56k.cloud, backed by a database, and the Next.js app fetched content from it at build time via the REST API. On paper, a perfectly reasonable architecture. In practice, it came with a steady tax.
Strapi needs its own infrastructure: a server, a database, regular updates, occasional migrations. Fine for a large team with dedicated platform engineers, but overkill for ours. Content lived in a database rather than in the repository, which meant you couldn't review changes in a pull request, roll back a bad description with git revert, or search across everything with a simple grep. The split between "the code" and "the content" felt increasingly artificial. Every production build also called out to cms.56k.cloud, so if Strapi was slow the build was slow, and if it was down the build failed. A fragile dependency for no real benefit on a marketing site. Add the cost of running a separate backend, and it became hard to justify.
The trigger: a rebrand
The catalyst wasn't a crisis. It was the rebrand from 56k.Cloud to ACP Engineering AG. We needed a new website anyway, and the old site's code was tightly coupled to the 56k.Cloud identity. The rebrand created a clean moment to reconsider every architectural decision. The question we asked ourselves: if we were starting from scratch today, would we use a CMS? For a marketing site maintained by engineers already living in their editors and in Git, the answer was no.
The redesign: AI meets design
With the architectural direction settled, we still had to deal with the visual identity. The old site was 56k.Cloud through and through (colours, typography, layout, tone) and none of it carried over.
We used AI tooling to systematically transform the site's visual layer: swapping brand colours, updating logos, adjusting copy to reflect ACP's corporate identity. For bulk, rule-based transformations that touch dozens of files in predictable ways, this worked well. The result was consistent and technically correct, but technically correct isn't enough. The AI-transformed site looked like what it was: a find-and-replace rebrand. Passable, but no personality.
That required a designer. Specifically, someone who could interpret ACP's broad corporate guidelines and translate them for a cloud-native engineering company. ACP is a generalist IT group; 56k.Cloud, now ACP Engineering AG, is a focused, opinionated team working in cloud-native infrastructure. The visual language needed to reflect that difference. A designer brought the judgment AI couldn't: which rules to follow, which to bend, and how to make the site feel like ours rather than a corporate template. The lesson was straightforward: AI is excellent at executing transformations at scale, but defining what the transformation should mean is still a human problem. We leaned on both, automation for speed and design for intent, and the combination worked.
The design: content as code
The core idea behind the new architecture is simple: content is code. No CMS, no external API, no database. Every page, every blog post, every team member bio lives as a TypeScript file in the repository.
```
dictionaries/
└── en/
    ├── blog/
    │   └── my-post/
    │       ├── index.tsx    ← article metadata + page composition
    │       └── content.mdx  ← article body
    ├── team-members/
    │   └── sandro-pereira/
    │       └── index.tsx    ← team member data
    └── services/
        └── cloud-migration/
            └── index.tsx    ← service page
```
A single catch-all route, app/[lang]/[[...path]]/page.tsx, handles all URL rendering. At build time, collectPaths() walks the dictionary tree and discovers every page. There's no routing config to maintain; add a file, get a page.
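The idea behind collectPaths() can be sketched in a few lines. This is a hypothetical, in-memory version for illustration: the real implementation walks the dictionaries/ directory on disk, and the type and variable names here are ours, not the site's.

```typescript
// Hypothetical sketch of collectPaths(): walk a nested content tree and
// return the URL path segments for every page. A `null` leaf marks a
// directory that contains an index.tsx, i.e. a renderable page.
type ContentTree = { [segment: string]: ContentTree | null };

const tree: ContentTree = {
  blog: { "my-post": null },
  "team-members": { "sandro-pereira": null },
  services: { "cloud-migration": null },
};

function collectPaths(node: ContentTree, prefix: string[] = []): string[][] {
  const paths: string[][] = [];
  for (const [segment, child] of Object.entries(node)) {
    const current = [...prefix, segment];
    if (child === null) {
      paths.push(current); // leaf: this directory is a page
    } else {
      paths.push(...collectPaths(child, current)); // recurse into subtree
    }
  }
  return paths;
}

// In Next.js, a list like this would feed generateStaticParams() for the
// app/[lang]/[[...path]]/page.tsx catch-all route.
const paths = collectPaths(tree);
```

Adding a new directory with an index.tsx makes it show up in the walk automatically, which is why there is no routing config to maintain.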
Every blog post follows a consistent structure. The index.tsx holds metadata and composes the page:
```tsx
export const article = articleSchema.parse({
  id: 42,
  title: "My Post Title",
  slug,
  description: "...",
  tags: ["Engineering"],
  publishedOn: "2026-01-15",
  readTime: 5,
  image: { src: articleImage, alt: "..." },
  author,
});

export const body = [
  <ArticleContent key="content" article={article} Content={Content} />,
];
```
And content.mdx holds the prose. Images are local files imported at the top of the MDX, so Next.js can optimise them at build time:
```mdx
import heroImage from "@/public/blog/my-post/hero.png";

<Image src={heroImage} alt="Architecture diagram" />

The rest of the post is standard markdown...
```
Everything is statically typed. Zod schemas validate content structure at build time. A malformed date or a missing required field fails the build immediately, before it ever reaches production.
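To make the failure mode concrete, here is a minimal stand-in for the kind of check articleSchema performs. The real site uses Zod's parse; this hand-rolled validator only illustrates the build-breaking behaviour, and its field selection and function name are ours.

```typescript
// Minimal stand-in for the build-time validation the Zod articleSchema
// performs (the real code calls articleSchema.parse). Trimmed to two
// fields from the index.tsx example to show the interesting case.
interface ArticleInput {
  title: string;
  publishedOn: string; // must be an ISO date: YYYY-MM-DD
}

function validateArticle(raw: ArticleInput): ArticleInput {
  if (raw.title.trim() === "") {
    throw new Error("title must not be empty");
  }
  if (!/^\d{4}-\d{2}-\d{2}$/.test(raw.publishedOn)) {
    // In the real setup this throw aborts the build before deploy.
    throw new Error(`invalid publishedOn: ${raw.publishedOn}`);
  }
  return raw;
}
```

Because validation runs when the content module is evaluated at build time, a typo like "15-01-2026" never makes it past CI.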
The migration: automating the tedious part
We had 68 blog posts and 25 team member profiles sitting in Strapi. Migrating them by hand was not an option, so we wrote a one-time migration script that handled the full pipeline automatically:
- Fetch all articles from the Strapi REST API (/api/articles?locale=en&populate=*)
- Download all images, both cover images and embedded content images, to public/blog/{slug}/
- Convert markdown to MDX: replace markdown image references with <Image src={import} /> using local static imports, and replace bare embed URLs with the <Embed> component
- Generate index.tsx for each article from a template
- Generate content.mdx with the processed content and the image imports at the top
The team member script followed the same pattern. Three minutes and one npx tsx invocation later: 68 blog posts with all images downloaded and locally referenced, all metadata validated against Zod schemas, and a git diff ready to review. The migration scripts were removed after the import and exist only in git history. Nothing of Strapi remains in the codebase.
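The image-rewrite step of such a pipeline can be sketched like this. The function and variable names are ours, not the migration script's, and the real script also downloaded each referenced file before rewriting.

```typescript
// Sketch of the markdown → MDX image rewrite: turn
// ![alt](https://cms.example/uploads/hero.png) into a static import plus
// an <Image> tag pointing at public/blog/{slug}/, with all imports
// hoisted to the top of the generated MDX.
function rewriteImages(markdown: string, slug: string): string {
  const imports: string[] = [];
  let i = 0;
  const body = markdown.replace(
    /!\[([^\]]*)\]\(([^)]+)\)/g, // markdown image syntax
    (_match, alt: string, url: string) => {
      const file = url.split("/").pop()!; // keep the original filename
      const name = `img${i++}`;
      imports.push(`import ${name} from "@/public/blog/${slug}/${file}";`);
      return `<Image src={${name}} alt="${alt}" />`;
    },
  );
  return imports.join("\n") + "\n\n" + body;
}
```

Pure string-in, string-out transformations like this are easy to eyeball in the resulting git diff, which is what made the one-shot migration reviewable.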
What changed
The difference in day-to-day experience is significant.
Publishing a post is a pull request. Write the MDX, fill in the metadata, open a PR. CI runs pnpm build and pnpm check, catching broken links and type errors before merge. The diff is reviewable, the history is in git, and rollback is git revert.
The build has no external dependencies. No API calls, no network timeouts, no CMS availability to worry about. pnpm build reads from the filesystem, and that's it.
Content is searchable. A grep across dictionaries/ finds every mention of a phrase across every blog post, service page, and team bio. Refactoring a product name is a sed away.
Type safety covers content too. Zod schemas enforce that every article has a valid publishedOn date, every team member has a role, and every page has a metadata.description. These were soft constraints in Strapi, easy to forget and impossible to enforce at deploy time.
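As a concrete illustration of the search-and-refactor workflow (sample content is created inline here so the commands are self-contained; the real tree lives under dictionaries/, and the product names are made up):

```shell
# Create a sample content file standing in for the real dictionaries/ tree.
mkdir -p dictionaries/en/blog/my-post
echo "We migrated off OldName last year." > dictionaries/en/blog/my-post/content.mdx

# Find every mention of a phrase across all content:
grep -rn "OldName" dictionaries/

# Refactor the product name everywhere in one pass (GNU sed syntax):
grep -rl "OldName" dictionaries/ | xargs sed -i 's/OldName/NewName/g'
```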
i18n is a directory structure. German and French translations live in dictionaries/de/ and dictionaries/fr/, deep-merged with the English base at build time. Adding a translated post is adding a file, and missing translations fall back to English automatically.
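The merge step can be sketched as follows. This is an illustrative implementation under our own names (the site's actual merge utility may differ): a translated dictionary overrides the English base, and any key the translation omits falls back to English.

```typescript
// Build-time deep merge of a translated dictionary over the English base.
// Keys missing from the override fall back to the base automatically.
type Dict = { [key: string]: string | Dict };

function deepMerge(base: Dict, override: Dict): Dict {
  const out: Dict = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (typeof value === "object" && typeof existing === "object") {
      out[key] = deepMerge(existing, value); // recurse into nested sections
    } else {
      out[key] = value; // translated leaf wins
    }
  }
  return out;
}

const en: Dict = { nav: { home: "Home", blog: "Blog" }, cta: "Contact us" };
const de: Dict = { nav: { home: "Startseite" } }; // partial translation
const merged = deepMerge(en, de);
// merged.nav.home is "Startseite"; nav.blog and cta fall back to English.
```

Because the fallback happens at build time, a half-translated locale still ships a complete site rather than pages with holes in them.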
AI-ready by design. Clear conventions, Zod-validated schemas, and a predictable file structure make the codebase immediately productive for an AI coding assistant. Creating a new page is a conversation: describe what you need, and Claude scaffolds the files, wires up the imports, and registers the page. The schema validates the output, the build confirms it works, and the PR is ready to review. Try that with a database-backed CMS.
The result
acp-website is a fully static Next.js application. It builds in seconds, deploys to Vercel's edge network, and has no runtime dependencies beyond the CDN. There is no CMS to maintain, no database to back up, no API quotas to monitor.
More importantly, content is a first-class part of the engineering workflow. It lives in the repo, it's reviewed in PRs, and it's covered by the same CI pipeline as the rest of the code. For a team that already thinks in code, and increasingly works alongside AI, this is the natural model. We should have built it this way from the start.