Jean-Pierre Gehrig, Kevin Campbell

Why We Ditched Our CMS and Made Content Part of the Codebase


When we started rebuilding the 56k.Cloud website in late 2025 β€” this time as ACP Engineering AG β€” we faced a choice that every engineering team eventually confronts: do we keep the familiar stack, or take the opportunity to build something that fits us better? We chose to rebuild.

The old setup

The previous site, 56kcloud-website, was built on a stack that made a lot of sense at the time: Strapi v4 as the headless CMS, paired with Next.js on the frontend. Strapi sat on its own server at cms.56k.cloud, backed by a database, and the Next.js app fetched content from it at build time via the Strapi REST API.

On paper, this is a perfectly reasonable architecture. In practice, running it for an engineering-led team comes with a steady tax.

Operational overhead. Strapi needs its own infrastructure: a server, a database, regular updates, occasional migrations. That's fine for a large team with dedicated platform engineers; for a small engineering-led team, it's attention spent on plumbing rather than on the site itself.

The CMS UI vs. the code. Strapi's admin panel is feature-rich, but it created a split: content lived in a database, not in the repository. You couldn't review a content change in a pull request. You couldn't roll back a bad description with git revert. You couldn't search across all content with a simple grep. The separation between "the code" and "the content" felt increasingly artificial.

Build-time API dependency. Every production build called out to cms.56k.cloud/api/{...}?locale=en&populate=*. If Strapi was slow, the build was slow. If it was down, the build failed. This is a single point of failure that adds fragility for no real benefit on a marketing site.

Cost. Running a separate backend, even a modest one, adds cost. More importantly, it adds operational complexity that needs to be justified β€” and on a content-light marketing site, it increasingly wasn't.

The trigger: a rebrand

The catalyst wasn't a crisis. It was the rebrand from 56k.Cloud to ACP Engineering AG. We needed a new website anyway. The old site's code was tightly coupled to the 56k.Cloud identity, and the rebrand created a clean moment to reconsider every architectural decision.

The question we asked ourselves: if we were starting from scratch today, would we use a CMS?

For a marketing site maintained by engineers who are already living in their editors and in Git, the answer was no.

The redesign: AI meets design

With the architectural direction settled, we still had to deal with the visual identity. The old site was 56k.Cloud through and through β€” colours, typography, layout, tone. None of it carried over to ACP Engineering AG.

AI handled the mechanical work. We used AI tooling to systematically transform the site's visual layer β€” swapping brand colours, updating logos, adjusting copy to reflect ACP's corporate identity. For the kind of bulk, rule-based transformation that touches dozens of files in predictable ways, this worked well. The result was consistent and technically correct.

But technically correct isn't enough. The AI-transformed site looked like what it was: a find-and-replace rebrand. It was passable, but it had no personality. It didn't communicate anything about who we are or what kind of engineering we do.

That required a designer. Specifically, a designer who could interpret ACP's broad corporate guidelines and translate them for a cloud-native engineering company. ACP is a generalist IT group; 56k.Cloud β€” now ACP Engineering AG β€” is a focused, opinionated team working in cloud-native infrastructure. The visual language needed to reflect that difference. A designer brought the judgment that AI couldn't: which rules to follow, which to bend, and how to make the site feel like it belongs to us rather than to a corporate template.

The lesson was straightforward: AI is excellent at executing a transformation at scale, but defining what the transformation should mean is still a human problem. We leaned on both β€” automation for speed, design for intent β€” and the combination worked.

The design: content as code

The core idea behind the new acp-website architecture is simple: content is code. There is no CMS. No external API. No database. Every page, every blog post, every team member bio lives as a TypeScript file in the repository.

dictionaries/
└── en/
    β”œβ”€β”€ blog/
    β”‚   └── my-post/
    β”‚       β”œβ”€β”€ index.tsx   ← article metadata + page composition
    β”‚       └── content.mdx ← article body
    β”œβ”€β”€ team-members/
    β”‚   └── sandro-pereira/
    β”‚       └── index.tsx   ← team member data
    └── services/
        └── cloud-migration/
            └── index.tsx   ← service page

A single catch-all route β€” app/[lang]/[[...path]]/page.tsx β€” handles all URL rendering. At build time, collectPaths() walks the dictionary tree and discovers every page. There is no routing config to maintain. Add a file, get a page.
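The post doesn't show collectPaths itself, but under the directory convention above, a minimal version could be a recursive walk. This is a sketch, not the real implementation; the function name comes from the text, everything else (signature, return shape) is an assumption:

```typescript
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Hypothetical sketch of a collectPaths-style walker: every directory
// containing an index.tsx becomes a routable page, identified by its
// path segments under the locale root (e.g. ["blog", "my-post"]).
function collectPaths(root: string, segments: string[] = []): string[][] {
  const paths: string[][] = [];
  for (const entry of readdirSync(root)) {
    const full = join(root, entry);
    if (statSync(full).isDirectory()) {
      paths.push(...collectPaths(full, [...segments, entry]));
    } else if (entry === "index.tsx") {
      paths.push(segments);
    }
  }
  return paths;
}
```

Each segment array maps straight onto the `[[...path]]` param, so generateStaticParams can return the walker's output almost verbatim.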

Every blog post follows a consistent structure. The index.tsx holds metadata and composes the page:

export const article = articleSchema.parse({
  id: 42,
  title: "My Post Title",
  slug,
  description: "...",
  tags: ["Engineering"],
  publishedOn: "2026-01-15",
  readTime: 5,
  image: { src: articleImage, alt: "..." },
  author,
});

export const body = [
  <ArticleContent key="content" article={article} Content={Content} />,
];

And content.mdx holds the prose. Images are local files imported at the top of the MDX, which means Next.js can optimise them at build time:

import heroImage from "@/public/blog/my-post/hero.png";

<Image src={heroImage} alt="Architecture diagram" />

The rest of the post is standard markdown...

Everything is statically typed. Zod schemas validate content structure at build time. A malformed date or a missing required field fails the build immediately, before it ever reaches production.
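The real site does this with Zod; as a rough, hand-rolled stand-in (not the actual articleSchema, and with only a few of the fields from the example above), the kind of build-time check involved looks like:

```typescript
// Hypothetical stand-in for a Zod article schema: validate shape and
// formats eagerly, and fail loudly so a bad field breaks the build.
interface Article {
  id: number;
  title: string;
  slug: string;
  publishedOn: string; // ISO date, e.g. "2026-01-15"
}

function parseArticle(input: Record<string, unknown>): Article {
  const { id, title, slug, publishedOn } = input;
  if (typeof id !== "number") throw new Error("article.id must be a number");
  if (typeof title !== "string" || title.length === 0)
    throw new Error("article.title is required");
  if (typeof slug !== "string" || slug.length === 0)
    throw new Error("article.slug is required");
  if (typeof publishedOn !== "string" || Number.isNaN(Date.parse(publishedOn)))
    throw new Error(`article.publishedOn is not a valid date: ${publishedOn}`);
  return { id, title, slug, publishedOn };
}
```

Because articleSchema.parse runs at import time, the error surfaces the moment the build touches the file, not when a reader loads the page.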

The migration: automating the tedious part

We had 68 blog posts and 25 team member profiles sitting in Strapi. Migrating them by hand was not an option.

We wrote a one-time migration script, scripts/import-blog-articles.ts, that handled the full pipeline automatically:

  1. Fetch all articles from the Strapi REST API (/api/articles?locale=en&populate=*)
  2. Download all images β€” both article cover images and embedded content images β€” to public/blog/{slug}/
  3. Convert markdown to MDX β€” replace ![alt](url) with <Image src={import} /> using local static imports, and replace bare embed URLs with the <Embed> component
  4. Generate index.tsx for each article from a template
  5. Generate content.mdx with processed content and image imports at the top

The equivalent script for team members followed the same pattern, hitting /api/team-members?locale=en&populate=* and generating each member's index.tsx with their avatar downloaded locally.
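Step 3 is the fiddliest part of the pipeline. A hedged sketch of what that rewrite might look like, with every name here (rewriteImages, the img0-style identifiers) hypothetical rather than taken from the real script:

```typescript
// Hypothetical sketch of the markdown-to-MDX image rewrite: collect a
// static import for each referenced image and swap the markdown syntax
// for an <Image> component pointing at the locally downloaded file.
function rewriteImages(markdown: string, slug: string): string {
  const imports: string[] = [];
  const body = markdown.replace(
    /!\[([^\]]*)\]\(([^)]+)\)/g,
    (_match, alt: string, url: string) => {
      const file = url.split("/").pop()!; // downloaded to public/blog/{slug}/
      const name = `img${imports.length}`;
      imports.push(`import ${name} from "@/public/blog/${slug}/${file}";`);
      return `<Image src={${name}} alt="${alt}" />`;
    },
  );
  return imports.join("\n") + "\n\n" + body;
}
```

The same pass can substitute bare embed URLs with the `<Embed>` component; only the regex changes.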

Running the migration looked like this:

npx tsx scripts/import-blog-articles.ts

Three minutes later: 68 blog posts, all images downloaded and locally referenced, all metadata validated against the Zod schema, and a git diff ready to review.

The migration scripts were removed after the import β€” they exist only in git history. There is nothing left of Strapi in the codebase.

What changed

The difference in day-to-day experience is significant.

Publishing a post is a pull request. Write the MDX, fill in the metadata, open a PR. The CI pipeline runs pnpm build and pnpm check, catching broken links and type errors before merge. The diff is reviewable, the history is in git, and rollback is git revert.

The build has no external dependencies. No API calls, no network timeouts, no CMS availability to worry about. pnpm build reads from the filesystem. That's it.

Content is searchable. A grep across dictionaries/ finds every mention of a phrase across every blog post, service page, and team bio. Refactoring a product name is a sed away.

Type safety covers content. Zod schemas enforce that every article has a valid publishedOn date, every team member has a role, every page has a metadata.description. These were soft constraints in Strapi β€” easy to forget, impossible to enforce at deploy time.

i18n is a directory structure. German and French translations live in dictionaries/de/ and dictionaries/fr/, deep-merged with the English base at build time. Adding a translated post is adding a file. Missing translations fall back to English automatically.
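The merge itself is ordinary recursive object merging. A minimal sketch of the idea, assuming plain nested objects (the real implementation's name and edge-case handling aren't shown in the post):

```typescript
type Dict = { [key: string]: unknown };

// Hypothetical sketch of the locale deep-merge: start from the English
// base and overlay translated keys; any key the translation omits keeps
// its English value, which is the automatic fallback described above.
function deepMerge(base: Dict, overlay: Dict): Dict {
  const out: Dict = { ...base };
  for (const [key, value] of Object.entries(overlay)) {
    const existing = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      out[key] = deepMerge(existing as Dict, value as Dict);
    } else {
      out[key] = value;
    }
  }
  return out;
}
```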

AI-ready by design. Clear conventions, Zod-validated schemas, and a predictable file structure make the codebase immediately productive for an AI coding assistant. Creating a new page is a conversation: describe what you need, and Claude scaffolds the files, wires up the imports, and registers the page. The schema validates the output, the build confirms it works, and the PR is ready to review. Try that with a database-backed CMS.

The result

acp-website is a fully static Next.js application. It builds in seconds, deploys to Vercel's edge network, and has no runtime dependencies beyond the CDN. There is no CMS to maintain, no database to back up, no API quotas to monitor.

More importantly, content is a first-class part of the engineering workflow. It lives in the repo, it's reviewed in PRs, and it's covered by the same CI pipeline as the rest of the code.

For a team that already thinks in code β€” and increasingly works alongside AI β€” this is the natural model. We should have built it this way from the start.
