Customers hop between storefronts, apps, marketplaces, and in-store screens, and they expect the story about your products to match everywhere. The only way to meet that bar is to store content as clean, structured data and serve it to whatever front end needs it. That’s the job of content modeling.
This guide walks through a practical “101” for modeling content in dotCMS. You’ll learn how to define types, fields, and relationships that reflect your business, keep presentation concerns out of the CMS, enable localization and multi-site, and enforce quality with workflow.
Why Omnichannel Delivery Isn’t Best Suited for Traditional CMS
Omnichannel content delivery means your messaging remains consistent across every channel while also being optimized for the context of each one. In an omnichannel retail example, a customer might discover a product on a marketplace, check details on your website, receive a promotion via email or SMS, and finally buy in-store.
Traditional web-oriented CMS platforms often struggle here. They were built to push content into tightly-coupled webpages, not to serve as a hub for many diverse outputs. Many legacy CMSs lack the flexibility to easily repurpose content for mobile apps, IoT devices, or third-party platforms, and their rigid page-centric models make omnichannel consistency hard to maintain.
In contrast, modern headless and hybrid CMS solutions promise to meet omnichannel needs. Headless CMS decouples the content repository from any specific delivery channel, exposing content via APIs. This decoupling gives organizations the agility to present content on any channel from a unified backend.
However, pure headless systems sometimes sacrifice the user-friendly authoring experience that content teams desire. This is where hybrid CMS platforms like dotCMS shine. They combine the flexibility of headless architecture with tools for easy authoring and preview. dotCMS enables a true “create once, publish everywhere” paradigm, which drives omnichannel content distribution, reduces duplicate work, and speeds up time-to-market.
What is Content Modeling? (And Why It Matters for Omnichannel)
Content modeling is the process of defining a structured blueprint for your content: identifying what types of content you have, what fields or elements make up each type, and how those pieces relate to each other. Instead of treating a piece of content as a big blob of text (like a lengthy web page), you break it down into meaningful chunks: titles, descriptions, images, metadata, categories, and so on.
For example, a Product content type might include fields for product name, description, price, images, specifications, and related products or categories. A Blog Article content type might include title, author, publish date, body text, and tags. When content is neatly structured and stored, you can present it on a website in one format, in a mobile app with a different layout, or even feed it into a chatbot or voice assistant, all without duplicating content or manual copy-pasting.
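To make that concrete, here is a rough sketch of those two types as TypeScript interfaces. The field names simply mirror the examples above; they are illustrative, not dotCMS’s actual generated schema.

```typescript
// Illustrative shapes only -- these mirror the examples above,
// not dotCMS's actual generated schema.
interface Product {
  name: string;
  description: string;
  price: number;
  images: string[];                 // URLs or asset identifiers
  specifications: Record<string, string>;
  relatedCategories: string[];      // references to Category items
}

interface BlogArticle {
  title: string;
  author: string;                   // or a reference to an Author item
  publishDate: string;              // ISO 8601 date
  body: string;
  tags: string[];
}
```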
Content Modeling in dotCMS
dotCMS is an enterprise-grade content management system built with omnichannel needs in mind. It advertises itself as a “Hybrid Headless” or “Universal CMS”, meaning it blends the API-first flexibility of a headless CMS with the ease-of-use of a traditional CMS for editors. From a content modeling perspective, dotCMS provides a rich toolset to define and manage your content structure without heavy development effort.
Here are some of the dotCMS features to understand:
Custom Content Types and Fields
In dotCMS, you can define unlimited content types to represent each kind of content in your business (products, articles, events, locations, customer testimonials, etc.). Each content type has fields (text, rich text, images, dates, geolocation, etc.) that you configure. Importantly, all content types are completely customizable through a no-code interface; you don’t need a developer to add a new field or change a content model. Content in dotCMS is stored in a central repository and structured by these content types, rather than as unstructured blobs.
Structured Content and Reuse
dotCMS treats content as data. A single piece of content (say a Product or an Article) lives in one place in the CMS but can be referenced or pulled into any number of front-end presentations. Authors can create content once and then use dotCMS’s features to share it across pages, sites, and channels effortlessly. For example, you might have one canonical product entry that is displayed on your public website, inside your mobile app, and on an in-store screen, all fed from the same content item.
Taxonomy, Tags, and Relationships
dotCMS provides no-code tools for tagging and categorizing content, as well as defining relationships between content types. For instance, you can relate an “Author” content item to a “Blog Post” item to model a one-to-many relationship, or relate “Products” to “Categories”. These relationships make it possible to build rich, dynamic experiences (e.g., listing all articles by a certain author, or showing related products in the same category). They also improve content discovery and personalization. The ability to create and manage these relations through a friendly UI means your content model can truly reflect the reality of your business (how things connect) without custom development.
Multi-Channel Content Management
dotCMS was built with multi-channel delivery in mind. It allows you to create content once and deliver it anywhere via APIs. Under the hood, dotCMS offers both REST and GraphQL APIs to retrieve content, so your front-end applications (website, mobile app, IoT device, etc.) can query the content they need. The content model you define is enforced across all channels.
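As a minimal sketch of what channel-agnostic retrieval can look like, here is a TypeScript fetch against a dotCMS GraphQL endpoint. The /api/v1/graphql path and the <Type>Collection pattern follow dotCMS conventions, but verify the exact field names against your own instance and content model.

```typescript
const DOTCMS_URL = "https://demo.dotcms.com"; // replace with your host

// Any front end -- website, app, kiosk -- can run the same query.
async function fetchProducts() {
  const res = await fetch(`${DOTCMS_URL}/api/v1/graphql`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{
        ProductCollection(limit: 10) {
          title
          description
        }
      }`,
    }),
  });
  const { data } = await res.json();
  return data.ProductCollection;
}
```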
Hybrid Editing Experience
One standout feature of dotCMS is its Universal Editing capabilities for content creators. Even though content may be delivered headlessly, dotCMS provides content teams with a visual editing and preview experience that works across different channel outputs. For example, the dotCMS Universal View Editor allows authors to assemble and preview content as it might appear on various devices or channels, all within the CMS interface. This means marketers can, say, adjust a landing page and see how it will look on a desktop site, a mobile app, or other contexts without needing separate systems.
Multi-Site and Multi-Tenancy
Large enterprises often serve multiple websites, brands, or regions, each a "channel" of its own. dotCMS supports multi-tenant content management, meaning you can run multiple sites or digital experiences from one dotCMS instance, reusing content where appropriate and varying it where needed. For example, you might have a global site and several country-specific sites; with dotCMS, you can share the core content model and even specific content items across them, while still allowing localization and differences where necessary. This feature amplifies content reusability for omnichannel because not only are you delivering to different device channels, but also to different sites/audiences from the same content hub.
API-First and Integration Ready
A true omnichannel content strategy rarely lives in a vacuum; it often needs to integrate with other systems (e.g., e-commerce platforms, CRMs, personalization engines, mobile apps). dotCMS is API-first and integrates with any tech stack via REST, GraphQL, webhooks, and plugins. This openness means your content model in dotCMS can be the central content service not just for your own channels, but it can feed into other applications as well. For instance, if you have a separate mobile app or a voice assistant platform, they can pull content from dotCMS. If you use a third-party search engine or commerce engine, dotCMS can connect to it. The ability to plug into a composable architecture is important; dotCMS is often deployed alongside best-of-breed solutions (for example, integration with e-commerce engines like Commercetools or Fynd is supported out of the box).
How Would Composable Omnichannel Work?
(dotCMS for content · MedusaJS for commerce · Fynd for channel sync)
TL;DR: Use dotCMS as your single source of truth for content, MedusaJS as the headless commerce engine, and Fynd to unify online/offline channels. Each does what it’s best at; together they deliver one seamless customer experience.
Why this matters (for CTOs)
Swap parts without a replatform
No more copy-paste drift across channels
Web, app, kiosk, marketplace, POS: consistent by design
Omnichannel isn’t just “show the same thing everywhere.” It’s a promise that product info, branding, and availability stay consistent across web, app, kiosk, marketplace, and POS without your teams duplicating work. The cleanest way to keep that promise is a composable stack where each system does one job extremely well and everything talks over APIs.
Start with dotCMS as the content brain. Treat product copy, specs, imagery, promos, and SEO as structured content modeled once, governed once, and delivered everywhere through REST or GraphQL. Editors get a friendly, hybrid authoring experience; engineers get predictable schemas and stable APIs. Because content is cleanly separated from presentation, you can render it in any UX—from a React website to a kiosk UI—without rewriting the source.
Pair that with MedusaJS on the commerce side. MedusaJS is a headless Node.js engine for catalogs, variants, pricing, carts, checkout, orders, and payments. It doesn’t prescribe a front end and plays nicely with webhooks and plugins. Think of it as the transactional core that your UIs (and channel tools) can query for the real-time bits: price, stock, and order state.
Now widen the lens with Fynd to unify online and offline channels. Fynd syncs inventory and orders across marketplaces and in-store systems, so the pair of shoes shown on your site matches what’s available at the mall and what’s listed on third-party marketplaces. When Fynd needs rich product content (names, descriptions, images, feature bullets), it can pull it from dotCMS.
Here’s how a typical flow feels. Your content team creates or updates a Product entry in dotCMS (title, short/long descriptions, hero image, spec table, locale variants). Front ends request that content via GraphQL and render it alongside live commerce data from MedusaJS (price, stock by variant). Fynd consumes the same product content from dotCMS and the same inventory signals from MedusaJS to populate marketplaces and POS.
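A hedged sketch of that flow: editorial content comes from dotCMS, live price and stock from MedusaJS, and a front end stitches them together. Hosts, IDs, field names, and the query syntax are illustrative; the /store/products/:id route follows Medusa’s classic store API, so check it against your version.

```typescript
async function loadProductPage(contentId: string, medusaProductId: string) {
  const [content, commerce] = await Promise.all([
    // Editorial content from dotCMS (titles, copy, imagery).
    fetch("https://cms.example.com/api/v1/graphql", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        // Query syntax is illustrative -- adapt to your content model.
        query: `{ ProductCollection(query: "+identifier:${contentId}") { title description } }`,
      }),
    }).then((r) => r.json()),
    // Live transactional data from MedusaJS (price, stock by variant).
    fetch(`https://commerce.example.com/store/products/${medusaProductId}`)
      .then((r) => r.json()),
  ]);

  return {
    copy: content.data.ProductCollection[0], // what the shopper reads
    variants: commerce.product?.variants,    // what they can actually buy
  };
}
```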
Content modeling is the linchpin. Define types like Product, Category, Brand, Promo, and Store with clear relationships (Product↔Category, Product↔Promo). Add channel-aware fields: short descriptions for mobile cards, alt text for accessibility, locale fields for multi-region delivery. Wrap it all with dotCMS workflows so high-impact edits are reviewed before they propagate to every channel. The result is “create once, deliver everywhere” with actual guardrails.
Today, a CTO’s goal should be to avoid monolithic systems that try to do everything and instead orchestrate best-of-breed platforms. By modeling your content well in dotCMS and integrating it with your commerce and channel platforms, you achieve the coveted omnichannel experience: the customer gets a unified journey, and your internal teams get maintainable, specialized systems.
Best Practices for Content Modeling in dotCMS (for Omnichannel Success)
Let’s distill some practical best practices for content modeling in dotCMS, geared toward omnichannel readiness:
Identify All Content Domains
Begin by listing the types of content your business uses (or will use) across channels. Typical domains include products, articles, landing pages, promotions, user profiles, store locations, FAQs, etc. Don’t forget content that might be unique to certain channels (for example, push notification messages or chatbot prompts). Having a comprehensive view prevents surprises later.
Design Content Types and Fields Thoughtfully
For each content type, define the fields it needs. Ensure each field represents a single piece of information (e.g., separate fields for title, subtitle, and body text, instead of one blob). Determine which fields are required, which are multi-valued (like multiple images or tags), and use appropriate field types (dates, numbers, boolean, etc., not just text for everything). In dotCMS, setting this up is straightforward via the Content Type builder. Keep future channels in mind: for instance, if you might need voice assistants to read out product info, having a short summary field could be useful. Example: A News Article content type might include fields for Headline, Summary, Body, Author (related content), Publish Date, Thumbnail Image, and Tags. This way, a mobile news feed can use the Headline and Summary, whereas the full website uses all fields.
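For instance, the mobile feed and the full website can run different queries against the same type. These GraphQL snippets assume the News Article fields described above (field variable names are illustrative):

```typescript
// Mobile news feed: lightweight fields only.
const mobileFeedQuery = `{
  NewsArticleCollection(limit: 20) {
    headline
    summary
    thumbnailImage
  }
}`;

// Full website article page: everything.
const fullArticleQuery = `{
  NewsArticleCollection(limit: 1) {
    headline
    summary
    body
    publishDate
    tags
  }
}`;
```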
Leverage Taxonomy and Tags
Organize your content using categories, tags, or folders offered by dotCMS. Taxonomy is hugely helpful for dynamic content delivery, e.g., pulling “all articles tagged with X for the mobile app home screen” or “all products in category Y for this landing page.” Define a tagging scheme or category hierarchy that makes sense for your domain. dotCMS allows tagging content items and using those tags to assemble content lists without coding. Consistent taxonomy also aids personalization (showing content by user interest) and SEO.
Keep Presentation Out of the Content
A key headless content modeling principle is to separate content from design. In dotCMS, avoid embedding HTML/CSS styling or device-specific details in your content fields. For example, use plain text fields and let your front-end apply the styling. This keeps the content truly channel-agnostic. If you need variations of content for different channels (like a shorter title for mobile), model that explicitly (e.g., a field “Mobile Title” separate from “Desktop Title”) rather than overloading one field with both.
Plan for Localization and Multi-site
If you operate in multiple locales or brands, design your content model to accommodate that. dotCMS has multilingual support and multi-site features. Decide which content types will need translation or variation by locale. dotCMS can manage translations of content items side-by-side. Structuring your content well (and not hard-coding any language-specific text in fields) will pay off when you need to roll out in another language or region. Similarly, if running multiple sites, plan which content types are shared globally and which are specific to a site. dotCMS’s multi-tenant capabilities will allow content to be shared or isolated as needed.
Use Workflow to Enforce Quality
As you roll out an omnichannel content hub, establish workflows and approval processes for content changes. This ensures that a change in content (which could affect many channels at once) is reviewed properly. dotCMS allows you to configure custom workflows with steps like review, approve, publish. Especially for large teams, this is a safety net so that your carefully modeled content isn't inadvertently altered and pushed everywhere without checks. It also helps assign responsibility (e.g., legal can approve terms & conditions content, whereas marketing can freely publish blog content).
Test with Multiple Channels Early
When designing your content model in dotCMS, test it by retrieving content via the API and rendering it in different channel contexts. Build a simple prototype of a website page and a mobile screen (or whatever channels you plan) to see if the content model fits well. You might discover you need an extra field or a different structure. dotCMS’s content API and even its built-in presentation layer (if you choose to use it in hybrid mode) can be used to do these dry runs.
Omnichannel Success with dotCMS and the Right Partner
Technology is only part of the equation. Implementing content modeling for omnichannel success also requires strategy and expertise. Linearloop works on building modern digital platforms and has experience with headless CMS implementation, content strategy, and integrating systems like dotCMS with commerce and other services. Drawing on lessons from past projects can help you avoid common pitfalls and accelerate the adoption of an omnichannel content hub.
Modern e-commerce sites almost universally employ faceted search and filtering to help users slice through a vast catalog. Faceted search (also called guided navigation) uses product attributes as dynamic filters: for example, filtering a clothing catalog by size, color, price range, and brand, all at once.
This capability is a powerful antidote to the infinite shelf’s chaos. By narrowing the visible options step by step, facets give users a sense of control and progress toward their goal. Each filter applied makes the result set smaller and more relevant.
From an implementation standpoint, faceted search relies on indexing product metadata and often involves clever algorithms to decide which filters to show. With a large catalog, there may be tens of thousands of attribute values across products, so showing every possible filter is neither feasible nor user-friendly.
Instead, e-commerce search engines dynamically present the most relevant filters based on the current query or category context. For example, if a user searches for “running shoes,” the site might immediately offer facets for men’s vs women’s, size, shoe type, etc., instead of unrelated filters like “color of laces” that add little value.
By analyzing the results set, the system can suggest the filters that are likely to matter, essentially reading the shopper’s mind about how they might want to refine the search. This dynamic filtering logic is often backed by data structures like inverted indexes for search and bitsets or specialized databases for fast faceted counts.
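To illustrate the bitset idea in miniature: give each facet value a bitmask of the products that carry it, and counting matches for the current result set becomes a bitwise AND plus a popcount. This is a toy sketch, not a production index:

```typescript
// Toy bitset-backed facet counting. Each facet value keeps a bitmask
// of the products that carry it (bit i set = product i has the facet).
type ProductId = number; // index into the catalog, 0..n-1

function buildFacetBitsets(
  productFacets: Map<ProductId, string[]> // e.g. 3 -> ["color:red", "size:42"]
): Map<string, bigint> {
  const bitsets = new Map<string, bigint>();
  for (const [productId, facets] of productFacets) {
    for (const facet of facets) {
      const current = bitsets.get(facet) ?? 0n;
      bitsets.set(facet, current | (1n << BigInt(productId)));
    }
  }
  return bitsets;
}

// Count how many products in `resultSet` carry each facet value.
function facetCounts(resultSet: bigint, bitsets: Map<string, bigint>) {
  const counts = new Map<string, number>();
  for (const [facet, bits] of bitsets) {
    let overlap = resultSet & bits; // products in both sets
    let count = 0;
    while (overlap > 0n) {          // popcount via Kernighan's trick
      overlap &= overlap - 1n;
      count++;
    }
    if (count > 0) counts.set(facet, count);
  }
  return counts;
}
```

The counts are what drive the “show only the filters that matter” behavior: facets with zero or near-zero overlap with the current results can simply be hidden.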
Even with a great taxonomy and strong filters, two different shoppers landing on the same mega-catalog will have very different needs. This is where personalization and recommendation algorithms become indispensable.
Advanced e-commerce platforms now use machine learning to dynamically curate and rank products for each user. By analyzing user data (past purchases, browsing behavior, search queries, demographic or contextual signals), algorithms can determine which subset of products out of thousands will be most relevant to that individual.
Recommendation engines are at the heart of this personalized merchandising. These systems use techniques like collaborative filtering (finding patterns from similar users’ behavior), content-based filtering (matching product attributes to user preferences), and hybrid models to surface products a shopper is likely to click or buy.
For example, a personalization engine might note that a visitor has been viewing hiking gear and thus highlight outdoor jackets and boots on the homepage for them, while another visitor sees a completely different set of featured products.
User behavior analytics feed these models: every click, add-to-cart, and dwell time becomes input to refine what the algorithm shows next. Over time, the site “learns” each shopper’s tastes. The benefit is two-fold: customers are less overwhelmed (since they’re shown a tailored slice of the catalog rather than a random assortment) and more delighted by discovery (since the selection feels relevant).
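Here is a deliberately tiny sketch of the content-based flavor: products and the user share an attribute vocabulary, and ranking is cosine similarity against the user’s preference vector. The attribute weights are invented for illustration:

```typescript
// Minimal content-based filtering sketch: rank products by cosine
// similarity between their attribute vectors and a user profile
// vector built from that user's behavior.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return normA && normB ? dot / (Math.sqrt(normA) * Math.sqrt(normB)) : 0;
}

// Shared vocabulary: [hiking, outdoor, formal, running]
const userProfile = [0.9, 0.7, 0.0, 0.2]; // built from clicks/purchases
const catalog = [
  { name: "Trail boots",   vector: [1, 1, 0, 0] },
  { name: "Dress shoes",   vector: [0, 0, 1, 0] },
  { name: "Running shoes", vector: [0.2, 0.3, 0, 1] },
];

const ranked = catalog
  .map((p) => ({ ...p, score: cosineSimilarity(p.vector, userProfile) }))
  .sort((a, b) => b.score - a.score);
// "Trail boots" ranks first for this hiking-oriented profile.
```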
A smart strategy is to vary the merchandising approach for different contexts and customers. For first-time or anonymous visitors (where no prior data is known), showing the entire endless catalog would be counterproductive.
It’s often better to present curated selections like bestsellers or trending products. This “warm start” gives new shoppers a manageable starting point instead of a blank page or an intimidating browse-all experience. On the other hand, returning customers or logged-in users can immediately see personalized recommendations based on their history. The key is using data wisely to guide different customer segments toward discovery without ever letting them feel lost.
Modern recommendation systems also use contextual data and advanced algorithms. For instance, some platforms adjust recommendations in real-time based on the shopper’s current session behavior or even the device they use. (Showing simpler, more general suggestions on a mobile device where screen space is limited can outperform overly detailed personalization, whereas desktop can offer more nuanced recommendations.)
Cutting-edge e-commerce architectures are exploring vector embeddings and deep learning models to capture subtle relationships between products and users, enabling features like visual search and chatbot-based product discovery.
Guiding Customers, Not Confusing Them
UX design choices play a huge role in whether the shopping experience feels inspiring or exhausting. Just because you can display thousands of products doesn’t mean you should dump them all in front of the user at once.
Above-the-Fold Impact
The content at the top of category pages, search results, and homepages is disproportionately influential. Critical items (whether they are popular products, lucrative promotions, or highly relevant personalized picks) should be merchandised in those prime slots. As a case in point, product recommendations or banners shown in the top viewport are roughly 1.7× more effective than those displayed below the fold.
Infinite Scroll vs. Structured Browsing
There is an ongoing UX debate in e-commerce about using infinite scrolling versus traditional pagination or curated grouping. Infinite scroll automatically loads more products as the user scrolls down. This can increase engagement time, as users don’t have to click through pages and are continuously presented with new items.
However, infinite scroll can also backfire if not implemented carefully. If shoppers feel they are wading through a bottomless list, they may give up. And once they scroll far, finding their way back or remembering where something was can be difficult. User testing has found that people have a limited tolerance for scrolling: after a certain point, they either find something that catches their eye or they tune out.
A balanced approach is often best. Many sites employ a hybrid: load a substantial chunk of products with an option to “Load More” (giving the user control), or use infinite scroll but with clear segmentation and filtering options always visible.
Aside from search and filters, consider adding guided discovery tools in the UX. This might include features like dynamic product groupings, recommendation carousels, or wizards and quizzes. For example, you can programmatically create curated “shelves” on the fly, e.g., a “Best Gifts for Dog Lovers” collection that appears if the user’s behavior suggests interest in pet products.
These can be powered by the same algorithms we discussed earlier, which can identify meaningful product groupings from trends in data. Such groupings address a common UX gap: a customer may be looking for a concept (“cream colored men’s sweater” or “outdoor kitchen ideas”) that doesn’t neatly map to a single pre-defined category.
Relying solely on static navigation might give them poor results or force them to manually hunt. By dynamically detecting intent clusters and generating pages or sections for them, you improve the chance that every user finds a relevant path. It’s impractical for human merchandisers to pre-create pages for every niche query (there could be effectively infinite intents), so this is an area where algorithmic assistance shines.
Conclusion
Merchandising is no longer a downstream activity that happens after inventory is set; it’s upstream, shaping how catalogs are structured, how data is modeled, and how algorithms are trained. Teams that treat merchandising as a technical capability—not just a marketing function—will be positioned to turn complexity into competitive advantage.
Medusa exposes a RESTful API by default for both storefront and admin interactions. This straightforward approach often means easier onboarding for developers (REST is ubiquitous and simple to test). Saleor is strictly GraphQL: all queries and mutations go through GraphQL endpoints. Vendure by design also uses GraphQL for both its shop and admin endpoints.
(Vendure does allow adding REST endpoints via custom extensions if needed, but GraphQL is the primary interface).
There are pros and cons here:
GraphQL allows more flexible data retrieval (clients can ask for exactly what they need), which is great for complex UI needs and can reduce network requests. However, GraphQL adds complexity: you need to construct queries and manage a GraphQL client.
REST, on the other hand, is simple and cache-friendly but can sometimes require multiple requests for complex pages.
Importantly, for those who care about GraphQL vs REST, Medusa historically did not have a built-in GraphQL API (though you could generate one via OpenAPI specs or community projects), whereas Saleor and Vendure natively support GraphQL out of the box.
If GraphQL is a must-have for you or your dev team, Saleor and Vendure tick that box easily; Medusa might require some extra work or using its REST endpoints.
On the flip side, if GraphQL seems overkill for your needs, Medusa’s simpler REST approach can be a relief. (Note: GraphQL being “language agnostic” means even if Saleor’s core is Python, you can consume its API from any stack; an argument some make that the core language matters a bit less if you treat the platform as a standalone service.)
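The contrast is easiest to see side by side. The Saleor query shape (products/edges/node plus a channel slug) follows its public GraphQL API, and the Medusa route follows its classic REST store API; verify both against the versions you actually run:

```typescript
// GraphQL style (Saleor): one POST, ask for exactly the fields needed.
async function saleorProducts() {
  const res = await fetch("https://my-saleor.example.com/graphql/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{
        products(first: 5, channel: "default-channel") {
          edges { node { name } }
        }
      }`,
    }),
  });
  return (await res.json()).data.products.edges;
}

// REST style (Medusa): one simple, cacheable GET -- but a complex
// page may need several such requests.
async function medusaProducts() {
  const res = await fetch("https://my-medusa.example.com/store/products?limit=5");
  return (await res.json()).products;
}
```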
Architecture and Modular Design
All three are headless and API-first, meaning the back-end business logic is decoupled from any front-end. They each allow (or encourage) running additional services for certain tasks:
Medusa
The architecture is relatively monolithic but modular internally. You run a Medusa server which handles all commerce logic and exposes APIs. Medusa’s philosophy is to keep the core simple and let functionality be added via plugins (which run in the same process).
This design avoids a microservices explosion for small projects; everything is one Node process (plus a database and perhaps a search engine). This is great for smaller teams. Medusa uses a single database (by default Postgres) for storing data, and you can deploy it as a single service (with optional separate services for things like a storefront or an admin dashboard UI).
Saleor
Saleor’s architecture revolves around Django conventions. It’s also monolithic in the sense that the Saleor server handles everything (GraphQL endpoints, business logic, etc.) in one service, backed by a PostgreSQL database. However, Saleor encourages a slightly different extensibility model: you can extend by writing “plugins” within the core or by building “apps” (microservices) that integrate via webhooks and the GraphQL API.
This dual approach means if you want to alter core behavior deeply, you might write a Python plugin that has access to the database and internals. Or, if you prefer to keep your extension separate (or write it in another language), you can create an app that talks to Saleor’s API from the outside and is authorized via API tokens.
The latter is useful for decoupling (and is language-agnostic), but it means that extension can only interact with Saleor through GraphQL calls and webhooks, not direct DB access. Saleor’s design also supports containerization and scaling; it’s easy to run Saleor in Docker and scale out the services (plus it has support for background tasks and uses things like Celery for asynchronous jobs in newer versions).
Vendure
Vendure is structured as a Node application with a built-in modular system. It runs as a central server (plus an optional separate worker process for heavy tasks). Vendure’s internal architecture is plugin-based: features like payment processing, search, etc., are implemented as plugins that can be included or replaced.
Developers can write their own plugins to extend functionality without forking the core. Vendure uses an underlying NestJS framework, which imposes a certain organized structure (modules, providers, controllers, etc.) that leads to a clean separation of concerns.
It also means Vendure can benefit from NestJS features like dependency injection and middleware. Vendure includes a separate Worker process capability: for heavy tasks such as sending emails or updating search indexes, a background worker can be run to offload the work asynchronously. This is great for scalability, as heavy operations don’t block the main API event loop.
Vendure’s use of GraphQL and a strongly typed schema also means frontends can auto-generate typed SDKs (for example, generating TypeScript query hooks from the GraphQL schema).
Admin & Frontend Architecture
It’s worth noting how each handles the Admin dashboard and starter Storefronts, since these are part of architecture in a broad sense:
Medusa Admin
Medusa provides an admin panel (open source) built with React and GatsbyJS (TypeScript). It’s a separate app that communicates with the Medusa server over REST. You can deploy it separately or together with the server.
The admin is quite feature-rich (products, orders, returns, etc.), and since it’s React-based, it’s relatively straightforward for JS developers to customize or extend with new components. Medusa’s admin UI being a decoupled frontend means it’s optional; if you wanted, you could even build your own admin or integrate Medusa purely via API, but most users will use the provided one for convenience.
Saleor Admin
Saleor’s admin panel is also decoupled and is built with React (they have a design system called Macaw-UI). It interacts with the Saleor core via GraphQL. You can use the official admin or fork/customize it if needed. Saleor allows creating API tokens for private apps via the admin, so you can integrate external back-office systems easily. Saleor’s admin is quite polished and supports common tasks (managing products, orders, discounts, etc.). As with Medusa, the admin is essentially a client of the backend API.
Vendure Admin
Vendure’s admin UI comes as a default part of the package, implemented in Angular and delivered as a plugin (AdminUiPlugin) that serves the admin app. By default, a standard Vendure installation includes this admin. Administrators access it to manage catalog, orders, settings, etc.
Even if you’re not an Angular developer, you can still use the admin as provided. Vendure documentation notes that you “do not need to know Angular to use Vendure” and that the admin can even be extended with custom UI extensions written in other frameworks (they provide some bridging for that).
However, major custom changes to the admin likely require Angular skills. Some teams choose to build a custom admin interface (e.g., in React) by consuming Vendure’s Admin GraphQL API, but that’s a bigger effort. So out of the box, Vendure gives you a functioning admin UI that is sufficient for many cases, though perhaps not as slick as Medusa’s or Saleor’s React-based UIs in terms of look and feel.
Storefronts
All three being headless means you’re expected to build or integrate a storefront. To jump-start development, each provides starter storefront projects:
Medusa offers a Gatsby starter that’s impressively full-featured, including typical e-commerce pages (product listings, cart, checkout) and advanced features like customer login and order returns, all wired up to Medusa’s backend. It basically feels like a ready-made theme you can customize, which is great for fast prototyping. Medusa also has starters or example integrations with Next.js, Nuxt (Vue), Svelte, and others.
Saleor provides a React/Next.js Storefront starter (sometimes referred to as “Saleor React Storefront”). It’s a Next.js app that you can use as a foundation for your shop, already configured to query the Saleor GraphQL API. This covers basics like product pages, cart, etc., but might not be as feature-complete out of the box as Medusa’s Gatsby starter (for example, handling of returns or customer accounts might require additional work).
Vendure, as mentioned, has official starters in Remix, Qwik, and Angular. These starter storefronts include all fundamental e-commerce flows (product listing with facets, product detail, search, cart, checkout, user accounts, etc.) using Vendure’s GraphQL API. The Remix and Qwik starters are particularly interesting as they focus on performance (Remix for fast server-rendered React, Qwik for ultra-fast hydration). Vendure thus gives a few choices depending on your front-end preference, though notably, there isn’t an official Next.js starter from Vendure’s team as of 2025. However, the community or third parties might provide one, and in any case, you can build one easily with their GraphQL API.
Core Features Comparison
All modern e-commerce platforms cover the basics: product listings, shopping cart, checkout, order management, etc. However, differences emerge in how features are implemented and what is provided natively vs. via extensions. Let’s compare some key feature areas and note where each platform stands out:
Product Catalog Management
Product Models
Products in Medusa can have multiple variants (for example, a T-shirt with different sizes/colors) and are grouped into Collections (a collection is essentially a group of products, often used like categories). Medusa also supports tagging products with arbitrary tags for additional grouping or filtering logic.
Medusa’s philosophy is to keep the core product model fairly straightforward, and encourage integration with external Product Information Management (PIM) or CMS if you need extremely detailed product content (e.g., rich descriptions, multiple locale content, etc.). It does provide all the basics like images, description, prices, SKUs, etc., and inventory tracking out of the box.
Saleor’s product catalog is a bit more structured. It supports organizing products by Categories and Collections. A Category in Saleor is a tree structure (like traditional e-commerce categories) and a Collection is more like a curated grouping (similar to Medusa’s collections).
Saleor also has a notion of Product Types and attributes; you can define custom product attributes and assign them to types (for example, a “Shoes” product type might have size and color attributes). These attributes can then be used as filters on the storefront.
This system provides flexibility to extend product data without modifying code, which can be powerful for store owners. Saleor supports multiple product variants per product as well (with the attributes distinguishing them).
As for tagging, Saleor doesn’t have simple free-form tags in the admin, but because it has custom attributes and categories, that gap is usually filled by those features.
Saleor’s admin also allows adding metadata to products if needed, and its GraphQL API is quite adept at querying any of these structures.
Vendure combines aspects of both. It has Product entities that can have variants, and it supports a Category-like system through a feature called Collections (Vendure’s Collections are hierarchical and can have relations, effectively serving the role of categories).
Vendure also allows defining Custom Fields on products (and other entities) via configuration, meaning you can extend the data model without hacking the core. For example, if you want to add a “brand” field to products, Vendure lets you do that through config and it will generate the GraphQL schema for it. This is part of Vendure’s extensibility.
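The “brand” example from above would look roughly like this in the server config. This sketches Vendure’s documented customFields mechanism; check the current docs for the exact options your version supports:

```typescript
import type { VendureConfig } from '@vendure/core';

// Only the relevant slice is shown; required options such as
// apiOptions and dbConnectionOptions are omitted for brevity.
export const configSlice: Partial<VendureConfig> = {
  customFields: {
    Product: [
      { name: 'brand', type: 'string' },
    ],
  },
};
```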
Vendure supports facets/facet values which can be used as product attributes for filtering (similar to Saleor’s attributes).
Vendure provides a highly customizable catalog structure with a bit of coding, whereas Saleor provides a lot through the admin UI, and Medusa keeps it simpler (with the option to integrate something like a CMS or PIM for additional product enrichment).
Multi-Language (Product Content)
Saleor has built-in multi-language support for product data. Product names, descriptions, etc., can be localized in multiple languages through the admin, and the GraphQL API allows querying in a specified language. This is one of Saleor’s selling points (multi-language, multi-currency).
Vendure supports multi-language by marking certain fields as translatable. Internally, it can store translations for product name, slug, description, etc., in different languages. This is configured at startup (you define which languages you support), and the admin UI allows inputting translations. It’s quite robust in that area for an open-source platform.
MedusaJS does not natively have multi-language fields for products in the core. Typically, merchants using Medusa would handle multi-language by using an external CMS to store translated content (for example, using Contentful or Strapi with Medusa, as suggested by Medusa’s docs).
The Medusa backend itself doesn’t store a French and an English version of a product title; you’d either store one in the default language or use metadata fields or region-specific products. However, Medusa’s focus on regions is more about currency and pricing differences, not translations.
Recognizing this gap, the community has created plugins to assist with multilingual catalogs (for instance, there’s a plugin that works with MeiliSearch to index products with internationalized fields). Moreover, Medusa’s Admin recently introduced multi-language support for the admin interface (so the admin UI labels can be in different languages), but that’s separate from actual product content translation.
For a primarily single-language store or one with minimal translation needs, Medusa’s approach is fine, but if you have a complex multi-lingual requirement, Saleor or Vendure may require less custom work.
Multi-Currency and Regional Settings
A highlight of Medusa is its multi-currency and multi-region support. In Medusa, you can define Regions which correspond to markets (e.g., North America, Europe, Asia) and each region has a currency, tax rate, and other settings.
For example, you can have USD pricing for a US region and EUR pricing for an EU region, for the same product. Medusa’s admin and API let you manage different prices for different regions easily. This is extremely useful for DTC brands selling internationally. Medusa also supports setting different fulfillment providers or payment providers per region.
Saleor supports multi-currency through its Channels system. You can set up multiple channels (which could be different countries, or different storefronts) each with their own currency and pricing. Saleor even allows differentiating product availability or pricing by channel.
This covers the multi-currency need effectively (Saleor’s demo often shows, for instance, USD and PLN as two currencies for two channels). Tax calculation in Saleor can integrate with services or be configured per channel as well. So, Saleor is on par with Medusa in multi-currency capabilities, and it additionally handles multi-language as mentioned. It’s truly built for multi-market operation.
Vendure has the concept of Channels too. Channels can represent different storefronts or regions (for example, an EU channel and a US channel). Each channel can have its own currency, default language, and even its own payment/shipping settings.
Vendure allows products to be in multiple channels with different prices if needed. This is basically how Vendure supports multi-currency and multi-store scenarios. It’s quite flexible, although configuring and managing multiple channels requires deliberate setup (like creating a channel, assigning products, etc.).
Vendure’s approach is powerful for multi-tenant or multi-brand setups as well (one Vendure instance could serve multiple shops if configured via channels and perhaps some custom logic).
Search and Navigation
Medusa does not have a full-text search engine built into the core; instead, it provides easy integrations for external search services. You can query products by certain fields via the REST API, but for advanced search (fuzzy search, relevancy ranking, etc.), Medusa leans on plugins.
The Medusa team has provided integration guides or plugins for MeiliSearch and Algolia, two popular search-as-a-service solutions. For example, you can plug in MeiliSearch and have typo-tolerant, fast search on your catalog.
This approach means a bit of setup but results in a better search experience than basic SQL filtering. The trade-off is that search is only as good as the external system you use; if you don’t configure one, you only have simple queries.
Saleor’s approach (at least up to recently) for search was relatively basic; you could perform text queries on product name or description via GraphQL to implement a simple search bar. It did not include a built-in advanced search engine or ready connectors to one at that time.
Essentially, to get a robust search in Saleor, you might need to use a third-party service or write a plugin/app. Given that Saleor is GraphQL, one could use something like ElasticSearch by syncing data to it, but that requires development work (some community projects likely exist). In an enterprise context, it’s expected you’ll integrate a dedicated search system.
Vendure includes a built-in search mechanism which is pluggable. By default, it uses a simple SQL-based search (with full-text indexing on certain fields) to allow basic product searches and filtering by facets. For better performance or features, Vendure provides an ElasticsearchPlugin, a drop-in module that, when enabled, syncs product data to Elasticsearch and uses that for search queries.
There’s also mention of a Typesense-based advanced search plugin in development. This shows Vendure’s emphasis on modularity: you can start with the default search and later move to Elastic or another search engine by adding a plugin, without changing your storefront GraphQL queries. Vendure’s search supports faceted filtering (e.g., by attributes, price ranges, etc.), especially when using Elasticsearch. This is great for storefronts with category pages that need filtering by various criteria.
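Swapping the default search for Elasticsearch is essentially a config change. The init options below follow the plugin’s documented shape, but confirm them against the @vendure/elasticsearch-plugin version you install:

```typescript
import type { VendureConfig } from '@vendure/core';
import { ElasticsearchPlugin } from '@vendure/elasticsearch-plugin';

// Replaces the default SQL-based search with an Elasticsearch-backed
// one; storefront GraphQL queries stay the same.
export const searchSlice: Partial<VendureConfig> = {
  plugins: [
    ElasticsearchPlugin.init({
      host: 'http://localhost',
      port: 9200,
    }),
  ],
};
```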
Checkout, Orders, and Payments
All three platforms handle the full checkout flow including cart, payment processing (via integrations), and order management, but with some nuances:
Checkout Process & Shopping Cart
Each platform provides APIs to manage a shopping cart (often called an “order draft” or similar) and then convert it to a completed order at checkout.
MedusaJS has built-in support for typical cart operations (add/remove items, apply discounts, etc.) and a checkout flow that can be customized. Medusa’s APIs handle everything from capturing customer info to selecting shipping and payment method, placing the order, and then updating order status as fulfillment happens.
Saleor similarly has a checkout object in its GraphQL API, where you add items, set shipping, payment, etc., and then complete an order. Saleor’s logic is quite robust, covering digital goods, multiple shipments, etc., because of its focus on enterprise scenarios.
Vendure’s API includes a “shop” GraphQL endpoint where unauthenticated or authenticated users can manage an active order (cart) and proceed to checkout. Vendure even has features like order promotions and custom order states (through its workflow API) if needed.
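As a feel for the API shapes involved, here is a rough Medusa cart-to-order flow over REST. The routes follow Medusa’s classic (v1) store API and the payloads are trimmed to essentials, so verify both against your version:

```typescript
const BASE = "https://commerce.example.com";

async function post(path: string, body: unknown) {
  const res = await fetch(BASE + path, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}

async function checkoutFlow(variantId: string, regionId: string) {
  // 1. Create a cart for a region (the region drives currency/tax).
  const { cart } = await post(`/store/carts`, { region_id: regionId });

  // 2. Add a line item.
  await post(`/store/carts/${cart.id}/line-items`, {
    variant_id: variantId,
    quantity: 1,
  });

  // 3. Initialize payment sessions, then complete the cart into an order.
  await post(`/store/carts/${cart.id}/payment-sessions`, {});
  const { data } = await post(`/store/carts/${cart.id}/complete`, {});
  return data; // the placed order
}
```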
Payment Gateway Integrations
Medusa ships with several payment providers integrated: Stripe, PayPal, Klarna, Adyen are supported. Medusa abstracts payment logic through a provider interface, so adding a new gateway (say Authorize.net or Razorpay) is a matter of either installing a community plugin or writing a small plugin yourself to implement that interface.
Thanks to this abstraction, developers have successfully extended Medusa with many region-specific providers too. Medusa does not charge any transaction fees on top; you use your gateway directly (and with the new Medusa Cloud, the team behind Medusa emphasizes they don’t take a cut either).
Saleor supports Stripe, Authorize.net, Adyen out of the box, and through its plugin system, it also has integration for others like Braintree or Razorpay. Being Python, if an API exists for a gateway, you can integrate it via a Saleor plugin in Python.
Saleor’s approach to payments is also abstracted (it had a payment plugins interface). So both Medusa and Saleor cover the common global gateways, with Saleor perhaps having a slight edge in some additional regional ones via community (e.g., Razorpay as mentioned).
Vendure has a robust plugin library that includes payments such as Stripe (there’s an official Stripe plugin), Braintree, PayPal, Authorize.net, Mollie, etc. Vendure’s documentation guides on implementing custom payment processes as well. So Vendure’s coverage is quite broad given the community contributions.
Order Management & Fulfillment
Medusa shines with some advanced features here. It supports full Return Merchandise Authorization (RMA) workflows. This means customers can request returns/exchanges, and Medusa’s admin allows processing returns, offering exchanges or refunds, tracking inventory back, etc. Medusa also uniquely has the concept of Swaps: allowing exchanges where a returned item can trigger a new order for a replacement.
These are sophisticated capabilities usually found in more expensive platforms, and having them in Medusa is a big plus for fashion and apparel DTC brands that deal with returns often. Medusa’s admin and API let you handle order status transitions (payment authorized, fulfilled, shipped, returned, etc.), and it can integrate with fulfillment providers or you can handle it manually via admin.
Saleor covers standard order management. You can see orders, update statuses, process payments (capture or refund), etc. However, a noted difference is that Saleor’s approach to returns/refunds was a bit more manual or basic at least in earlier versions.
There isn’t a built-in automated RMA flow; a store operator might have to mark an order as returned and manually create a refund in the payment gateway or such. They may improve this over time or provide some apps, but it isn’t as streamlined as Medusa’s RMA feature.
For many businesses, this might be acceptable if returns volume is low or they handle it via customer service processes. But it’s a point where Medusa clearly invested effort to differentiate (likely because Shopify’s base offering lacks easy returns handling too, and Medusa wanted to cover that gap).
Vendure’s core includes order states and a workflow that can be customized. It doesn’t natively have a “magic” RMA module built-in to the same degree, but you can implement returns by leveraging its order modifications.
Vendure does allow refunds (it has an API for initiating refunds through the payment plugins if supported), and partial fulfillments of orders, etc. If a robust returns system is needed, it might require some custom development or use of a community plugin in Vendure. Since Vendure is very modular, one could create a returns plugin that automates some of that.
Discounts and Promotions
Medusa supports discount codes and gift cards from within its own functionality. You can create percentage or fixed-amount discounts, limit them to certain products or customer groups, set expiration, etc. Medusa allows product-level discounts (specific products on sale) easily. It also has a gift card system which many platforms don’t include by default.
Saleor also supports discounts (vouchers) and gift cards. Saleor’s discount system can apply at different levels; one interesting note is that Saleor can do category-level discounts (apply to all products in a category), which might be a built-in concept. Saleor, being oriented to marketing needs, has quite an extensive promotions logic including “sales” and “vouchers” with conditions and requirements.
Vendure includes a Promotions system where you can configure promotions with conditions (e.g., order total above X, or buying a certain product) and actions (e.g., discount percentage or free shipping). It’s quite flexible and is done through config or the admin UI. Vendure doesn’t call them vouchers but you can set up coupon codes associated with promotions. Gift cards might not be in the core, but could be implemented or might exist as a plugin.
Extensibility and Customization
One of the biggest reasons to choose a headless open-source solution over a SaaS platform is the ability to customize and extend it to fit your business, rather than fitting your business into it. Let’s compare how our three contenders enable extension:
MedusaJS is designed with a plugin architecture from the ground up. Medusa encourages developers to add features via plugins rather than forking the code. A plugin in Medusa is essentially an NPM package that can hook into Medusa’s backend; it can add API endpoints, extend models, override services, etc.
For instance, if you wanted to integrate a third-party ERP, you could write a plugin that listens to order creation events and sends data to the ERP. Medusa also prides itself on allowing replacement of almost any component; you could even swap out how certain calculations work by providing a custom implementation via dependency injection (advanced use-case).
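The ERP scenario might look roughly like this as a classic Medusa (v1) subscriber: a class whose constructor hooks into the event bus. The injection pattern and the "order.placed" event follow v1 conventions (newer Medusa versions use a different subscriber format), and the ERP endpoint is invented:

```typescript
// Sketch only -- Medusa v1-style subscriber; the ERP URL is hypothetical.
class ErpSyncSubscriber {
  constructor({ eventBusService, orderService }: any) {
    eventBusService.subscribe("order.placed", async ({ id }: { id: string }) => {
      // Load the full order, then forward it to the external ERP.
      const order = await orderService.retrieve(id, { relations: ["items"] });
      await fetch("https://erp.example.com/api/orders", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ externalId: order.id, items: order.items }),
      });
    });
  }
}

export default ErpSyncSubscriber;
```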
Saleor’s extensibility comes in two flavors as noted: Plugins (in-process, written in Python) and Apps (out-of-process, language-agnostic). Saleor’s plugins are used for things like payment gateways, shipping calculations, etc., and run as part of the Saleor server. If you have a specific business logic (say, a custom promotion rule), you might implement it as a plugin so that it can interact with the core logic and database.
On the other hand, Saleor introduced a concept of Saleor Apps which are somewhat analogous to Shopify apps; they are separate services that communicate via the GraphQL API and webhooks. An app can be hosted anywhere, subscribe to events (like “order created”) via webhook, and then call back to the API to do something (like add a loyalty reward, etc.).
This decouples the extension and also means you could use any programming language for the app. The admin panel allows store staff to install and manage these apps (grant permissions, etc.). The advantage of the app approach is safer upgrades (your app doesn’t hack the core) and more flexibility in tech stack; the downside is a slight overhead of maintaining a separate service and the limitations of only using the public API.
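A minimal sketch of the app pattern: a separate TypeScript service that receives an order-created webhook and reacts. The endpoint path and payload fields are illustrative, and a real app must also verify Saleor’s webhook signature before trusting the body:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Saleor delivers the subscribed event payload here; the exact shape
// depends on your webhook subscription query.
app.post("/webhooks/order-created", async (req, res) => {
  const order = req.body;
  // e.g. grant loyalty points in an external system, then acknowledge
  // with a 2xx so Saleor doesn't retry the delivery.
  console.log("Order created:", order?.id);
  res.sendStatus(200);
});

app.listen(3001, () => console.log("Saleor app listening on :3001"));
```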
Vendure takes an extreme plugin-oriented approach. Almost all features in Vendure (payments, search, reviews, etc.) are implemented as plugins internally, and you can include or exclude them in your server setup. Writing a Vendure plugin means writing a TypeScript class that can tap into the lifecycle of the app, add new GraphQL schema fields, override resolvers or services, etc.
The core of Vendure provides the commerce primitives, and you compose the rest. This is why some view Vendure as ideal if you have very custom requirements. The community has contributed plugins for many needs (reviews system, wishlist, loyalty points, etc.). Vendure’s official plugin list includes not only integrations (like payments, search) but also features (like a plugin that adds support for multi-vendor marketplace functionality, which is something a company might need to add to create a marketplace).
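A plugin skeleton, following Vendure’s documented @VendurePlugin decorator pattern; the wishlist service here is just a placeholder for your own logic:

```typescript
import { PluginCommonModule, VendurePlugin } from '@vendure/core';
import { Injectable } from '@nestjs/common';

@Injectable()
class WishlistService {
  // ...custom business logic (entities, resolvers, etc.) would live here...
}

// Registering the plugin in the VendureConfig `plugins` array wires the
// service into the server's dependency-injection container.
@VendurePlugin({
  imports: [PluginCommonModule],
  providers: [WishlistService],
})
export class WishlistPlugin {}
```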
Enterprise Support and Hosting
As of 2025, Medusa has introduced Medusa Cloud, a managed hosting platform for Medusa projects. This caters to teams that want the benefits of Medusa without dealing with server ops. The Medusa Cloud focuses on easy deployments (with Git integration and preview environments) and transparent infrastructure-based pricing (no per-transaction fees).
This shows that Medusa is evolving to serve more established businesses that might require uptime guarantees and easier scaling. Apart from that, Medusa’s core being open-source means you can self-host on AWS, GCP, DigitalOcean, etc., using Docker or Heroku or any Node hosting. Many early-stage companies go that route to save cost.
Saleor Commerce (the company) offers Saleor Cloud, which is a fully managed SaaS version of Saleor. It’s targeted at mid-to-large businesses with a pricing model that starts in the hundreds of dollars per month. This service gives you automatic scaling, backups, etc., and might be attractive if you don’t want to run your own servers.
However, it’s a significant cost that perhaps only later-stage businesses or those with no devops inclination would consider. Saleor’s open-source version can also be self-hosted in containers; some agencies specialize in hosting Saleor. Because Saleor is more complex to set up (with services like Redis possibly needed), the cloud option is convenient but pricey.
Vendure’s company does not currently offer a public cloud SaaS. They focus on the open-source product and consulting. That said, because Vendure is Node, you can host it similarly easily on any Node-friendly platform. Some third-party hosting or PaaS might even have one-click deployments for Vendure.
From a total cost of ownership perspective: all three being open-source means you avoid licensing fees of traditional enterprise software. If self-hosted, your costs are infrastructure (cloud servers, etc.) and developer time.
Saleor might incur higher dev costs if you need both Python and front-end expertise, and possibly higher infrastructure if GraphQL/Python stack needs more scaling.
Medusa and Vendure could be more resource-efficient for moderate scale (Node can handle a lot on modest hardware, and you can optimize with cluster mode, etc.).
Performance and Scalability Considerations
For any growing business, the platform needs to handle increased load: more products, more traffic, flash sales, etc. Let’s consider how each platform fares and what it means for your project’s scalability:
MedusaJS (Node/Express, REST):
Medusa’s lightweight nature can be an advantage for performance. With a lean Express.js core and no GraphQL parsing overhead, each request can be handled relatively fast and with low memory usage.
Node.js can handle a high number of concurrent requests efficiently (non-blocking I/O), so Medusa can serve quite a lot of traffic on a single server. If more power is needed, you can run multiple instances behind a load balancer.
Also, because Medusa can be containerized easily (they provide a Docker deployment guide), scaling horizontally in the cloud is straightforward. For database scaling, you rely on whatever your SQL DB (Postgres, etc.) can do; typically vertical scaling or read replicas if needed.
Medusa being stateless fits cloud scaling well. For small-to-medium businesses, Medusa’s performance is more than enough, and even larger businesses can scale it out.
Saleor (Python/Django, GraphQL):
Saleor is built on Django, which is a robust framework used in many high-scale sites. Performance-wise, GraphQL adds some overhead per request (parsing queries, resolving fields). However, GraphQL also can reduce the number of requests the client needs to make (one query vs multiple REST calls).
Saleor’s architecture can be scaled vertically (powerful servers) or horizontally by running multiple app instances behind a gateway. Because it uses Django, it typically will use more memory per process than a Node process, and handling extremely high concurrency might require more instances.
That said, Saleor has been shown to handle enterprise loads when properly configured (using caching for queries, etc.). Saleor’s advantage is that if you use their cloud or a similar setup, they already incorporate scalability best practices (like auto-scaling on high traffic).
For a new store, Saleor will likely run just fine on modest infrastructure (it’s easy to start with say a $20/mo Heroku dyno or similar), but as you grow, the resource usage might grow faster compared to a Node solution.
Vendure (Node/NestJS, GraphQL):
Vendure, using NestJS and GraphQL, has a performance profile somewhere between Medusa and Saleor. Node.js is generally very performant with I/O, and NestJS adds a bit of overhead due to its structure but also helps by providing tools like a built-in GraphQL engine (Apollo Server).
Like Medusa, Vendure benefits from Node’s strength at handling many concurrent connections. The use of GraphQL means each request might do more work on the server to assemble the response, but Vendure’s team has likely optimized the common queries.
Vendure also has the concept of a Worker process for heavy tasks, which means if you have computationally intensive jobs (e.g., rebuilding a search index, sending bulk emails), those can be offloaded, keeping the main API responsive.
Vendure being TypeScript means you can catch a whole class of errors at compile time and ensure you’re using proper types for big data operations; that doesn’t make the runtime faster by itself, but it reduces surprises under load.
Handling Growth:
If you anticipate massive scale (millions of users, hundreds of thousands of orders, etc.), Saleor’s approach might be appealing due to its enterprise orientation and cloud offering. However, that doesn’t mean Medusa or Vendure can’t handle it; they absolutely can if engineered well. In fact, the lack of heavy abstractions in Medusa can be a benefit when fine-tuning for performance.
For fast-growing DTC brands (think going from 100 orders/day to 1000+ orders/day after a few influencer hits), Medusa and Vendure provide a lot of agility. Medusa’s focus on being “lightweight, flexible architecture, ideal for speed and adaptability” makes it a strong choice for those who need to iterate quickly. You can optimize or add capabilities as needed without waiting on vendor roadmaps.
Saleor is more like a high-performance sports car; it’s equipped for high speed, but you need a skilled driver (developers who know GraphQL/Python well) to push it to its limits and maintain it.
All three can be customized heavily. If you foresee the need to implement highly unique business logic or integrate unusual systems, consider how you’d do it on each:
With Medusa, you would likely write a plugin in Node or directly modify the server code (since it’s straightforward JS/Express). This is great for quickly adding something like “apply a custom discount rule for VIP customers”: just drop some JS in the right place.
With Saleor, consider whether it can be done with an App (external service using the API) or needs an internal plugin. If internal, you need Python dev skills and understanding of Saleor’s internals. If external, you need to be comfortable with GraphQL and possibly running an additional service.
With Vendure, you write a plugin in TypeScript. If you like structured code and strongly typed schemas, this is very satisfying; if not, it might feel like extra ceremony. A minimal plugin skeleton is sketched below.
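To make that concrete, here is a skeleton of a Vendure plugin. The class name is illustrative, but the decorator and imports follow Vendure’s plugin API:

// vip-discount.plugin.ts - skeleton only; the plugin name is hypothetical
import { PluginCommonModule, VendurePlugin } from "@vendure/core";

@VendurePlugin({
  imports: [PluginCommonModule], // gives the plugin access to Vendure's core services
  providers: [],                 // register custom services here
  entities: [],                  // register custom TypeORM entities here
})
export class VipDiscountPlugin {}

// Then enable it in vendure-config.ts:
//   plugins: [VipDiscountPlugin]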
A Few Final Words
MedusaJS, Saleor, and Vendure all tick the “headless, open-source, flexible” boxes but each wins in different places.
MedusaJS shines for lean, fast-moving teams that want to hack, extend, and own their stack.
Saleor is best when you need enterprise-grade stability, global readiness, and a GraphQL-first mindset.
Vendure appeals to TypeScript-heavy teams that want strong typing, modular plugins, and deep architectural control.
Your right choice depends less on which is “objectively best” and more on which aligns with your team’s skills, your growth plans, and the trade-offs you’re willing to make. In the end, the winner is the one that fits your context.
If the path you choose is migrating an existing Magento store to MedusaJS, below are the key planning steps and best practices:
Assess Your Magento Implementation (Data and Customizations)
Start by auditing your current Magento setup in detail. This involves:
Catalog and Data Model Compatibility
Review how your product catalog, categories, variants, pricing, and customers are structured in Magento, and map these to Medusa’s data models. Medusa has its own schemas for products, variants, orders, customers, etc., which are more straightforward than Magento’s (Magento uses an EAV model for products with attribute sets, which doesn’t directly exist in Medusa).
Identify any custom product attributes or complex product types (e.g. Magento bundle or configurable products) that will need special handling. For example, Magento “configurable products” with multiple options will likely map to a product with multiple variants in Medusa.
Make sure Medusa’s model can accommodate all necessary data (it usually can, via built-in fields or using metadata for custom attributes). Early on, define how each entity (products, SKUs, categories, customers, orders, discount codes, etc.) will translate into the Medusa schema.
Extension and Module Inventory
Magento installations often have numerous third-party modules and custom extensions providing extra features (from SEO tools to loyalty programs). List out all installed Magento modules and custom code. You can generate a module list via CLI: for example, running
php bin/magento module:status > modules_list.txt
will output all modules in your Magento instance. Using this list, evaluate each module’s functionality:
Determine which features are native in Medusa (so you won’t need an equivalent extension). Medusa covers many commerce basics like product management, multi-currency, pricing rules, discounts, etc., out-of-the-box.
For features not built into Medusa, check if an existing Medusa plugin or integration can provide that capability. Medusa has an ecosystem of official and community plugins (for payments, CMS, search, analytics, etc.).
For truly custom or business-specific features that neither Medusa core nor a plugin covers, plan how to reimplement them in Medusa. This might involve writing a custom Medusa plugin or using Medusa’s APIs to integrate an external service. The good news is Medusa’s plugin system allows you to extend any part of the backend or admin with custom logic relatively easily. For instance, if you have a complex promotion rule module in Magento, you might recreate it as a Medusa plugin hooking into the order calculation flow. Prioritize which custom functions are critical to carry over and design solutions for them.
Data Volume and Quality
Consider the volume of data to migrate (number of SKUs, customers, orders, etc.) and its cleanliness. It’s also a chance to eliminate outdated or low-value data (for example, old customer records, or products that are no longer sold) so you start “clean” on Medusa.
Note: It’s often helpful to create a mapping document that enumerates Magento entities and how each will be handled in Medusa (e.g., Magento customer entity -> Medusa customer, including addresses; Magento reward points -> integrate XYZ loyalty service via API). This becomes your blueprint.
Define a Migration Strategy and Timeline
With requirements understood, the next step is to choose a migration approach. For most enterprises, a phased migration strategy is highly recommended over a “big bang” cutover.
In a phased approach, you gradually transition pieces of functionality from Magento to Medusa in stages, rather than switching everything in one night. This greatly reduces risk and complexity. Key benefits of a phased replatforming include the ability to test and fix issues in isolation, minimal downtime, and continuous business operation during the transition. By migrating one component at a time, you can validate that piece (e.g. product catalog) in Medusa while the rest of the system still runs on Magento. If something goes wrong, it’s easier to roll back a single component than a whole system.
Plan out the phases that make sense for your business. A typical plan (detailed in the next section) might be:
Phase 1: Build a new Medusa-based storefront (while Magento remains the backend)
Phase 2: Migrate product/catalog data to Medusa
Phase 3: Migrate cart & checkout (orders) to Medusa
Each phase should be treated as a mini-project with its own design, implementation, and QA. Determine clear exit criteria for each phase (e.g. “new product catalog on Medusa shows all items correctly and inventory syncs with ERP”) before moving on.
Also decide on timing: choose low-traffic periods for cutovers of critical pieces, and ensure business stakeholders are aligned on any necessary content freeze or downtime. For example, when you migrate the product catalog, you may enforce a freeze on adding new products in Magento to avoid divergence while data is copied. Similarly, a final order migration might require a short checkout downtime to ensure no orders are lost. All such events should be scheduled and communicated.
During planning, also outline a data synchronization strategy. In a phased migration, you’ll have a period where Magento and Medusa run in parallel for different functions. You must plan how data will stay consistent between them:
For example, in Phase 1, Magento is still the source of truth for products and orders, but a new Medusa/Next.js frontend might be reading some data. You can use Magento’s REST APIs or GraphQL to fetch live data from Magento into the new frontend. If you are also sending some data to Medusa (in later phases), you might temporarily feed updates both ways (Magento to Medusa and vice versa) to keep systems in sync.
For ongoing synchronization, you might implement scripts or use a Medusa migration plugin that periodically pulls data from Magento and pushes it to Medusa. During Phase 2, for instance, you could run a one-time import of all products, then set up a job to sync any new or updated products from Magento to Medusa until Magento is fully retired.
Plan for the final cutover data sync: when you switch completely to Medusa, you’ll need to migrate any delta data that changed on Magento since the last bulk migration. For instance, just before Phase 3 (moving checkout), you might import all orders placed on Magento up to that minute into Medusa, so that order history is preserved. Similarly, migrate any new customers or reviews that were added in Magento during the transition.
Set up Medusa development and staging environments early in the project. Stand up a Medusa instance (or a few) in a sandbox environment and start populating it with sample data; this will be used to develop and test migration scripts. Make sure you have a staging database (e.g., PostgreSQL or MySQL, whichever you choose for Medusa) and that the team is familiar with deploying Medusa. Medusa provides a CLI to bootstrap a new project quickly, for example:
npx create-medusa-app@latest
This will create a new Medusa server project (and optionally a Next.js storefront, if you choose) on your machine. You can also initialize a Medusa project via the Medusa CLI (the medusa new command) to include a seeded store for testing.
As part of setup, you’ll create an admin user for the Medusa backend and explore the Medusa Admin dashboard to ensure you know how to manage products, orders, etc., in the new system. Familiarize your ops/administrative staff with the Medusa admin UI early, so they can provide feedback on any critical gaps (for instance, Magento has some specific admin grids or reports you might need to replicate).
Finally, communicate and coordinate the migration plan with all stakeholders. The engineering team, product managers, operations, customer support, and leadership should all understand the phased plan, the timeline, and any expected impacts (like minor UI changes in Phase 1 or slight differences in workflows in the new system). Migration at this scale is as much about change management as it is about technology. With a solid plan in place, you can now proceed to execution.
With planning done, it’s time to implement the migration. We will outline a phased step-by-step execution that gradually moves your e-commerce backend, admin, and storefront from Magento to MedusaJS.
Each phase below corresponds to a portion of functionality being migrated, aligned with best practices to minimize risk. Throughout each phase, maintain rigorous testing and quality assurance before proceeding to the next stage.
Phase 1: Launch a New Headless Storefront (Decoupling the Frontend)
The first phase is all about decoupling your storefront (UI) from Magento’s integrated frontend. In Magento, the frontend (themes/templates) is tightly coupled with the backend. We’ll replace this with a new headless storefront (for example, a Next.js or Gatsby application) that initially still uses Magento’s backend via APIs.
Phase 1 at a glance: introduce a new headless storefront and CMS while Magento remains the backend; the new frontend (e.g., a Next.js app) fetches data from Magento’s APIs.
Steps in Phase 1:
Develop the new Frontend
Choose a modern frontend framework such as Next.js, Gatsby, or Nuxt to build your storefront. Medusa provides starter templates for Next.js that you can use as a foundation (or you can build from scratch). Design the frontend to consume data from an API rather than directly from a database.
In this phase, the API will be Magento’s. Magento 2 supports a REST API and a GraphQL API out-of-the-box. For example, your new product listing page in Next.js could call Magento’s REST endpoints (or GraphQL queries) to fetch products and categories.
This essentially treats Magento as a headless service. You might build a small middleware layer or utilize Next.js API routes to securely proxy calls to Magento’s API if needed, or call Magento APIs directly from the frontend (taking care of CORS and authentication).
Many enterprise teams opt to implement a BFF (Backend-For-Frontend)—a lightweight Node.js server that sits between the frontend and Magento—to aggregate and format data. This is optional but can help in mapping Magento’s API responses to a simpler format for the UI.
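As a minimal sketch of that idea, assuming MAGENTO_URL and MAGENTO_TOKEN environment variables (hypothetical names), a catch-all Next.js API route could proxy read requests to Magento’s REST API:

// pages/api/magento/[...path].ts - a sketch of a thin proxy, not a hardened implementation
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Rebuild the Magento REST path from the catch-all segments
  const segments = req.query.path as string[];
  const upstream = await fetch(
    `${process.env.MAGENTO_URL}/rest/V1/${segments.join("/")}`,
    { headers: { Authorization: `Bearer ${process.env.MAGENTO_TOKEN}` } }
  );
  res.status(upstream.status).json(await upstream.json());
}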
Replicate design and UX
Reimplement your storefront’s design on the new tech stack. Try to keep the user experience consistent with the old site initially, to avoid confusing customers during the transition.
You can, of course, take the opportunity to improve UX, but major changes might be better introduced gradually. Importantly, ensure global elements like header, footer, navigation, and product URL structure remain familiar or have proper redirects, so SEO and usability aren’t hurt.
Connect to Magento’s Data
Use Magento’s API to feed the necessary data. For instance, the product listing page will call an endpoint like /rest/V1/products (Magento’s REST) or a GraphQL query to retrieve products and categories. You will likely need an API authentication token to access Magento’s APIs.
Magento’s REST API can be accessed by generating an integration token, or as in the Medusa migration plugin, by programmatically obtaining an admin token. For example, the Medusa migration module uses a POST to Magento’s V1/integration/admin/token endpoint with admin credentials to get a token:
const response = await fetch(`${magentoBaseUrl}/rest/default/V1/integration/admin/token`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username: MAGENTO_ADMIN_USER, password: MAGENTO_ADMIN_PASS })
});
// Magento returns the token as a JSON-encoded string, so parse it
// (response.text() would keep the surrounding quote characters)
const token = await response.json();
// Use this token as a Bearer token in the Authorization header for subsequent Magento API calls
Proxy live operations to Magento
In this phase, Magento still handles all commerce operations (cart, checkout, customer accounts). Your new frontend will simply redirect or proxy those actions. For example, when a user clicks “Add to Cart” or goes to checkout, you might hand off to Magento’s existing pages or send a request to Magento’s cart API.
It’s acceptable if the checkout flow temporarily takes users to Magento’s domain or uses Magento’s UI, as this will be addressed in later phases. The goal of Phase 1 is not to eliminate Magento, but to introduce the new frontend and CMS while Magento underpins it behind the scenes.
(For teams that cannot rebuild the entire frontend in one go, an alternative approach is to do a partial storefront migration. Using tools like Next.js Rewrite rules, you can incrementally replace certain Magento pages with new ones. For example, you could serve product detail pages via the new Next.js app, but keep checkout on Magento until later. This way, you flip portions of the UI to the new stack gradually. While this complicates routing, it offers a very controlled rollout. Many teams, however, would prefer to launch the whole new frontend at once as described above, for a cleaner architecture.)
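As a sketch of that partial approach (the Magento domain below is a placeholder), Next.js rewrite rules can route not-yet-migrated paths back to Magento while the new app serves everything else:

// next.config.js - incrementally replacing Magento pages
module.exports = {
  async rewrites() {
    return [
      // Keep checkout on Magento until Phase 3; everything else is served by Next.js
      { source: "/checkout/:path*", destination: "https://magento.example.com/checkout/:path*" },
    ];
  },
};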
Phase 2: Migrate Product Catalog, Inventory, and Pricing to Medusa
In Phase 2, the focus shifts to the backend. Here we migrate the core catalog data (products, categories, inventory, and prices) from Magento’s database into Medusa. By the end of this phase, Medusa will become the source of truth for all product information, while Magento may still handle the shopping cart and orders until Phase 3.
Steps in Phase 2:
Set up Medusa Server and Modules
If you haven’t already, install and configure your Medusa backend service. This involves spinning up a Medusa server (Node.js application) connected to a database (Medusa supports PostgreSQL, MySQL, etc., with an ORM).
Medusa comes with a default product module, order module, etc. Make sure your Medusa instance is running and you can access the Medusa Admin panel. In the Admin, you might manually create a couple of sample products to see how data is structured, then delete them, just to get familiar.
Also configure any essential settings in Medusa (currencies, regions, etc.) to align with your business; for example, if your Magento store had multiple currencies or websites, configure Medusa’s regions and currency settings accordingly.
Data Export from Magento
Extract the product catalog data from Magento. There are a few approaches for this:
Use Magento’s REST API to fetch all products, categories, and related data (images, attributes, inventory levels, etc.). Magento’s API allows filtering and pagination to get data in batches.
Alternatively, do a direct database export from Magento’s MySQL. For example, run SQL queries or use Magento’s built-in export tool to get products to CSV. However, Magento’s data schema is quite complex (spread across many tables due to EAV), so using the API (which presents consolidated data) can simplify the process.
In either case, you will likely need to transform the data format to match Medusa. For instance, Magento’s product may have a separate price table, whereas in Medusa the price might be a field or managed via price lists. Plan to capture product names, SKUs, descriptions, category relationships, images, variants/options (size, color, etc.), stock levels, and any custom attributes you identified during planning. A paging sketch for the API-based export follows.
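If you take the API route, a small script can page through Magento’s product endpoint using the admin token from Phase 1; the page size here is arbitrary:

// export-products.ts - sketch: stream all Magento products in batches of 100
async function* magentoProducts(baseUrl: string, token: string) {
  for (let page = 1; ; page++) {
    const res = await fetch(
      `${baseUrl}/rest/default/V1/products?searchCriteria[pageSize]=100&searchCriteria[currentPage]=${page}`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const { items, total_count } = await res.json();
    yield* items; // hand each raw Magento product to the transform/import step
    if (page * 100 >= total_count) break;
  }
}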
Data Import into Medusa
Insert the exported data into Medusa’s database using the Medusa APIs or programmatically. You have a few options:
Use Medusa’s Admin REST API: Medusa exposes endpoints to create products, variants, etc. You could write scripts that read the Magento data and call Medusa’s /admin/products endpoints to create products one by one. This is straightforward but can be slow for very large catalogs unless you batch requests (a minimal sketch follows this list).
Use a Medusa Script or Plugin: Because Medusa is a Node.js system, you can write a custom script (within the Medusa project or a separate Node script) that uses Medusa’s internal services or repository layer to insert data directly. For example, within a Medusa plugin context, you could use Medusa’s ProductService to create products in bulk. You essentially create a reusable migration tool: the plugin fetches products from Magento and then calls Medusa’s createProductsWorkflow to import them. The benefit of doing this as a plugin is that you can rerun it or even schedule it (e.g., to periodically sync data during the transition).
CSV/JSON import via code: Another approach is to export data to a structured file (CSV/JSON) from Magento and then write a Node.js script using Medusa’s SDK or direct DB calls to import. This is custom but might be simpler for one-time use.
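Sticking with the first option for illustration, a per-product call against the Admin API might look like the sketch below. The field mapping, the metadata key, and the bearer-token auth are assumptions to adapt to your setup, not a fixed schema:

// import-product.ts - sketch: create one product in Medusa from a raw Magento record
async function importProduct(medusaUrl: string, adminToken: string, mp: any) {
  await fetch(`${medusaUrl}/admin/products`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${adminToken}`,
    },
    body: JSON.stringify({
      title: mp.name,
      handle: mp.sku.toLowerCase(),      // keeping SKUs aligned eases cross-referencing
      description: mp.custom_attributes?.find(
        (a: any) => a.attribute_code === "description"
      )?.value,
      metadata: { magento_id: mp.id },   // preserve the Magento ID for the interim mapping
    }),
  });
}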
Verify and Adjust Data in Medusa
Once imported, use the Medusa Admin dashboard to spot-check the catalog. Do all products appear with correct titles, prices, variants, and images? Are categories properly assigned? This is where you may need to adjust some mappings.
For example, Magento product “attributes” that were used for filtering (color, brand, etc.) might be represented in Medusa as product tags or metadata. If so, you might convert Magento attributes to Medusa tags for filtering purposes. Likewise, customer groups or tier pricing in Magento could map to Medusa’s customer groups and price lists (Medusa has a Price List feature for special pricing).
Point Frontend Product Calls to Medusa
After product data is in Medusa, switch your new frontend to use Medusa’s Store APIs for product and inventory data. Up to now, in Phase 1, the Next.js app was likely calling Magento’s API to list products. Now you will update those API calls to query the Medusa backend instead. Medusa provides a Store API (unauthenticated endpoints) for products, collections, etc.
For example, your product listing page might hit GET /store/products on Medusa (which returns a list of products in JSON). This cutover should be invisible to users: the data is the same conceptually, just coming from a different backend. Because we still haven’t moved the cart/checkout, you may need to ensure product IDs or SKUs align so that when a user adds to cart (going to Magento), it still recognizes the product. If you maintain the exact same SKUs and identifiers in Medusa as in Magento, you can cross-reference easily. You might keep a mapping of Magento product ID to Medusa product ID if needed just for the interim.
Phase 3: Migrate Cart, Checkout, and Order Processing to Medusa
Phase 3 tackles the transactional components: shopping cart, checkout, payments, and order management. This is usually the most complex part of the migration because it affects the core of your e-commerce operations and customer experience.
Steps in Phase 3:
Rebuild Cart & Checkout Functionality
Since your frontend is already decoupled, you will now integrate it with Medusa’s cart and order APIs instead of Magento’s. Medusa provides a Cart API and Order API to support typical e-commerce flows (a condensed sketch follows these steps). For example:
When a user clicks “Add to Cart”, you will call POST /store/carts (to create a cart) and then POST /store/carts/{cart_id}/line-items to add a product line item. Medusa’s store API will handle the cart persistence (likely in Medusa’s DB).
The cart state (items, totals, etc.) can be retrieved via GET /store/carts/{cart_id} and displayed to the user.
For checkout, Medusa supports typical checkout flows: you’ll collect the shipping address, select a shipping method, select a payment method, etc., and update the cart via API calls (e.g., POST /store/carts/{id}/shipping-methods, POST /store/carts/{id}/payment-sessions).
Finally, placing an order is usually done by completing the payment session which in turn creates an Order in Medusa (this is often POST /store/carts/{id}/complete).
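Condensed into code, that flow might look like the sketch below; MEDUSA_URL is an assumed environment variable, and the address, shipping, and payment steps are elided:

// checkout-flow.ts - sketch of the cart-to-order sequence against Medusa's Store API
const base = process.env.MEDUSA_URL;

const post = (path: string, body?: object) =>
  fetch(`${base}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: body ? JSON.stringify(body) : undefined,
  }).then((r) => r.json());

async function placeOrder(variantId: string) {
  const { cart } = await post("/store/carts");        // create a cart
  await post(`/store/carts/${cart.id}/line-items`, {  // add the product
    variant_id: variantId,
    quantity: 1,
  });
  // ...collect address, shipping method, and payment session here...
  return post(`/store/carts/${cart.id}/complete`);    // turn the cart into an order
}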
Essentially, you need to replicate in the new frontend all the steps that Magento’s checkout used to provide. Whether you used Magento’s one-page checkout or multiple steps, design a corresponding flow in the new frontend calling Medusa. The heavy lifting of order creation, tax calculation, etc., can be done by Medusa if configured properly.
Configure Payment Providers in Medusa
Set up payment processing within Medusa to replace Magento’s payment integrations. Medusa has a plugin-based system for payments, with support for providers like Stripe, PayPal, Adyen, etc. If you were using, say, Authorize.net or Stripe in Magento, you can install the Medusa plugin for the same (or use a new provider if desired).
Make sure that your Medusa instance is configured with API keys for the payment gateway and that the frontend integrates the payment provider’s UI or SDK appropriately (for example, Stripe Elements on the frontend and the @medusajs/stripe plugin on the backend).
Set up Shipping and Tax in Medusa
Medusa provides default shipping-option management and integrates with shipping carriers via plugins (if needed). For taxes, Medusa can handle simple tax rules or integrate with tax services. Configure any necessary fulfillment providers (e.g., Shippo or ShipStation) and tax rates or services (like TaxJar) so that the Medusa order flow computes totals correctly.
Migrate or Sync Customer Accounts (if required for checkout)
Depending on how you want to handle customer logins, you might at this point migrate customer accounts to Medusa. Medusa has its own customer management and authentication.
However, if you want logged-in customers to be able to see their profile or use saved addresses during checkout on the new system, you’ll need to move customer data now. Migrating customer accounts means importing users’ basic info and hashed passwords.
Medusa uses bcrypt for hashing passwords by default; Magento (depending on version) may use different hash algorithms (salted MD5 in Magento 1, SHA-256 in Magento 2). One strategy is to migrate all users with a flag requiring a password reset (simplest, but impacts user experience); another is to import the password hashes and adjust Medusa’s authentication to accept Magento’s hash format (advanced; sketched below).
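As an illustration of the advanced route, and assuming Magento 2’s hash:salt:version storage format (version 1 being sha256(salt + password)), legacy hashes could be verified once and upgraded to bcrypt on the user’s first successful login; bcryptjs is an assumed dependency here:

// magento-password.ts - sketch only; verify a legacy hash, then re-hash with bcrypt
import { createHash } from "node:crypto";
import bcrypt from "bcryptjs";

export function verifyMagentoHash(password: string, stored: string): boolean {
  // Magento 2 stores "hash:salt:version"; version 1 is sha256(salt + password)
  const [hash, salt] = stored.split(":");
  return createHash("sha256").update(salt + password).digest("hex") === hash;
}

export async function upgradeToBcrypt(password: string): Promise<string> {
  return bcrypt.hash(password, 10); // store this in place of the legacy hash
}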
Order Management and Fulfillment
Recreate any order processing workflows in Medusa. For example, if Magento was integrated to an ERP or OMS (Order Management System) for fulfillment, now Medusa must integrate with those systems. Medusa can trigger webhooks or you can use its event system to notify external systems of new orders.
If your team uses an admin interface to manage orders (e.g., print packing slips, update order status), the Medusa Admin or a connected OMS should be used. The Medusa Admin dashboard allows viewing and updating orders, creating fulfillments, etc., similar to Magento admin.
You might need to train the operations team to use Medusa’s admin for processing orders (creating shipments, marking orders as shipped, etc.). If any custom post-order logic existed (like custom fraud checks or split-fulfillment logic), implement it either via Medusa’s plugins or an external microservice triggered by Medusa’s order events, as sketched below.
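For instance, a Medusa (v1-style) event subscriber can forward each new order to an external system; the ERP webhook URL is purely illustrative:

// src/subscribers/order-notifier.js - sketch of a v1-style Medusa subscriber
class OrderNotifierSubscriber {
  constructor({ eventBusService, orderService }) {
    eventBusService.subscribe("order.placed", async ({ id }) => {
      const order = await orderService.retrieve(id);
      // Hypothetical OMS/ERP endpoint; replace with your integration
      await fetch("https://erp.example.com/webhooks/orders", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ order_id: order.id, total: order.total }),
      });
    });
  }
}

export default OrderNotifierSubscriber;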
Cutover Cart/Checkout on Frontend
Once Medusa’s checkout is fully implemented and tested in a staging environment, you will switch the production frontend to use it. This is a big milestone: effectively, Magento is removed from the live customer path.
Coordinate the deployment for a quiet period. It can be wise to disable the ability to place orders on Magento shortly before the switch (for instance, put Magento’s checkout in maintenance mode) so no orders land on Magento at the same time.
When you deploy the new frontend that connects to Medusa for cart/checkout, run through a suite of test orders (ideally in a staging environment or with test payment modes on production just before enabling real payments).
Data Migration (Orders)
You may want to migrate historical order data from Magento into Medusa, for continuity in order history. This can be done via script or gradually. However, migrating thousands of old orders might not be strictly necessary for operations; some teams keep Magento read-only for a time for reference or build an archive.
If you do import past orders, you might insert them as Medusa orders via the Admin API or directly in DB. The critical part in Phase 3 is to ensure any ongoing orders (like open carts or pending orders) are either transferred or completed. For example, you might cut off new cart creation on Magento a few hours before, but allow any user who was in checkout to finish (or provide a clear notice to refresh and start a new cart on the new system).
Now that your store is on MedusaJS, leverage the benefits of the new architecture and follow best practices to get the most out of it. Here are some recommended architecture patterns and practices post-migration for an enterprise-scale Medusa setup:
Composable, Microservices-Friendly Architecture
With Medusa as the core commerce service, your overall e-commerce platform is now “composable.” This means you can plug in and replace components at will. Continue to embrace this modular approach.
For example, if you want to add a new capability like AI-driven recommendations, you can integrate a specialized microservice for that via Medusa’s APIs or events, without monolithic constraints. Each piece of the system (CMS, search, payments, etc.) can scale independently and be updated on its own schedule.
Scalability and Cloud Deployment
Deploy Medusa in a cloud-native way to achieve maximal scalability and reliability. Containerize the Medusa server (Docker) and use Kubernetes or similar to manage scaling. Because Medusa is stateless (except the database), you can run multiple instances for load balancing.
Scale the database vertically or use read replicas as needed; e.g., a managed PostgreSQL service that can handle enterprise load. Use auto-scaling for your frontend as well (if using Next.js, consider serverless deployment for dynamic functions and a CDN for static pre-rendered pages).
Monitor resource usage and performance; one benefit of Medusa’s headless setup is you can put a caching layer or CDN in front of certain API calls if needed (though be careful to cache only safe GET requests like product browsing, not cart actions).
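To make the caching point concrete, here is a minimal in-memory cache for read-only product requests. In production you would more likely use a CDN or Redis; the TTL and path prefix are assumptions:

// cache-products.ts - sketch: cache only safe GET requests, never cart or checkout calls
import type { Request, Response, NextFunction } from "express";

const cache = new Map<string, { body: any; expires: number }>();
const TTL_MS = 60_000; // one minute; tune for your catalog's change rate

export function cacheProducts(req: Request, res: Response, next: NextFunction) {
  if (req.method !== "GET" || !req.path.startsWith("/store/products")) return next();
  const hit = cache.get(req.originalUrl);
  if (hit && hit.expires > Date.now()) return res.json(hit.body); // serve from cache
  const json = res.json.bind(res);
  res.json = (body: any) => {
    cache.set(req.originalUrl, { body, expires: Date.now() + TTL_MS });
    return json(body);
  };
  next();
}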
Maintain a Clean Extension Layer
As you add features to your commerce platform, use Medusa’s extension points (plugins, modules, and integrations) rather than modifying core code. This keeps the core stable and upgradable. Medusa’s plugin system supports adding custom routes, middleware, or overriding core service logic in a contained manner.
If an enterprise feature is missing, consider building a plugin and possibly contributing it back to the Medusa community. This way, your platform remains maintainable. For example, if down the road you need a complex promotion engine beyond what Medusa offers, build it as a separate service or plugin that interfaces with orders, rather than forking the Medusa core.
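In practice, that means new capabilities arrive through configuration rather than core patches. A medusa-config.js might register an official payment plugin alongside a hypothetical in-house one:

// medusa-config.js - plugins are registered here instead of modifying core code
module.exports = {
  plugins: [
    {
      resolve: "medusa-payment-stripe",       // official Stripe plugin
      options: { api_key: process.env.STRIPE_API_KEY },
    },
    {
      resolve: "./plugins/promotion-engine",  // hypothetical in-house plugin
      options: {},
    },
  ],
};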
Cost and Maintenance Considerations
Without Magento’s license or heavy hosting requirements, you may find cost savings. However, budget for the new components (hosting for Medusa, any new SaaS services like CMS or search). Keep track of total cost of ownership.
Over time, Medusa’s lower resource footprint can be a win; for example, a Node service might use less memory and CPU under load than Magento did. If you switched from Magento Commerce (paid) to Medusa (free OSS), you’ve eliminated license fees as well.
By approaching the process in phases—starting with the storefront, then moving catalog data, and finally checkout and orders—you minimize risk while steadily unlocking the benefits of a headless, modular architecture.
The result is a faster, more scalable platform that adapts to your business needs instead of limiting them. With MedusaJS in place, your enterprise is better equipped for future growth, innovation, and long-term efficiency.