Mayank Patel
Sep 4, 2025
4 min read
Last updated Sep 9, 2025
When everything is available online, nothing feels discoverable. Shoppers drop off within seconds if they can’t find what they want. That’s why modern merchandising is less about inventory and more about strategy: organizing, personalizing, and presenting products in ways that turn an endless shelf into a guided path.
A taxonomy is simply how products are organized. A clear hierarchical category structure (with intuitive subcategories and product attributes) is important so that shoppers can easily browse and find what they’re looking for.
The way items are grouped directly impacts user experience and conversion rates: if users can’t quickly locate a product, frustration builds and they bounce, often within seconds. For this reason, thoughtful taxonomy isn’t just nice-to-have; it’s a core merchandising strategy.
Designing an effective taxonomy is both art and science. From a data-structure perspective, most e-commerce taxonomies form a tree: broad parent categories branch into more specific child categories, and eventually into individual products.
Each product also carries attributes (metadata like brand, color, size, etc.) that cut across categories. This hierarchical arrangement enables guided discovery. Customers start in broad sections and drill down by selecting subcategories or filtering by attributes.
Importantly, simpler is often better. It’s best practice to use as few top-level categories as possible without sacrificing clarity; too many parallel options at once can quickly overwhelm shoppers and lead to decision fatigue.
In other words, don’t present 50 “aisles” on your homepage and force the shopper to choose. Instead, define a logical, streamlined category structure and let filters do the heavy lifting of refinement.
Also Read: Who Wins Where? Saleor vs MedusaJS vs Vendure
Modern e-commerce sites almost universally employ faceted search and filtering to help users slice through a vast catalog. Faceted search (also called guided navigation) uses those product attributes as dynamic filters, for example, filtering a clothing catalog by size, color, price range, brand, etc., all at once.
This capability is a powerful antidote to the infinite shelf’s chaos. By narrowing the visible options step by step, facets give users a sense of control and progress toward their goal. Each filter applied makes the result set smaller and more relevant.
From an implementation standpoint, faceted search relies on indexing product metadata and often involves clever algorithms to decide which filters to show. With a large catalog, there may be tens of thousands of attribute values across products, so showing every possible filter is neither feasible nor user-friendly.
Instead, e-commerce search engines dynamically present the most relevant filters based on the current query or category context. For example, if a user searches for “running shoes,” the site might immediately offer facets for men’s vs women’s, size, shoe type, etc., instead of unrelated filters like “color of laces” that add little value.
By analyzing the results set, the system can suggest the filters that are likely to matter, essentially reading the shopper’s mind about how they might want to refine the search. This dynamic filtering logic is often backed by data structures like inverted indexes for search and bitsets or specialized databases for fast faceted counts.
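To make those mechanics concrete, here is a minimal sketch of facet counting over an inverted index (the types and names are illustrative, not any particular engine’s API):

type Product = { id: number; attrs: Record<string, string> };

// Build an inverted index: each "key:value" facet maps to the set of
// product IDs that carry it.
function buildFacetIndex(products: Product[]): Map<string, Set<number>> {
  const index = new Map<string, Set<number>>();
  for (const p of products) {
    for (const [key, value] of Object.entries(p.attrs)) {
      const facet = `${key}:${value}`;
      if (!index.has(facet)) index.set(facet, new Set());
      index.get(facet)!.add(p.id);
    }
  }
  return index;
}

// For the current result set, count how many results each facet would keep.
function facetCounts(index: Map<string, Set<number>>, resultIds: Set<number>): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const [facet, ids] of index) {
    let n = 0;
    for (const id of ids) if (resultIds.has(id)) n++;
    if (n > 0) counts[facet] = n; // only offer facets that actually narrow this result set
  }
  return counts;
}

Production systems swap the Set intersections for compressed bitsets or delegate the whole computation to a search engine, but the shape of the work is the same: intersect each facet’s posting list with the current results and surface the facets with meaningful counts.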
Also Read: Migrating from Magento to MedusaJS: The Enterprise Technical Guide
Even with a great taxonomy and strong filters, two different shoppers landing on the same mega-catalog will have very different needs. This is where personalization and recommendation algorithms become indispensable.
Advanced e-commerce platforms now use machine learning to dynamically curate and rank products for each user. By analyzing user data: past purchases, browsing behavior, search queries, demographic or contextual signals, algorithms can determine which subset of products out of thousands will be most relevant to that individual.
Recommendation engines are at the heart of this personalized merchandising. These systems use techniques like collaborative filtering (finding patterns from similar users’ behavior), content-based filtering (matching product attributes to user preferences), and hybrid models to surface products a shopper is likely to click or buy.
For example, a personalization engine might note that a visitor has been viewing hiking gear and thus highlight outdoor jackets and boots on the homepage for them, while another visitor sees a completely different set of featured products.
User behavior analytics feed these models: every click, add-to-cart, and dwell time becomes input to refine what the algorithm shows next. Over time, the site “learns” each shopper’s tastes. The benefit is two-fold: customers are less overwhelmed (since they’re shown a tailored slice of the catalog rather than a random assortment) and more delighted by discovery (since the selection feels relevant).
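As a toy illustration of the collaborative-filtering idea (a real engine would add normalization such as cosine similarity, recency decay, and many more signals), plain co-occurrence counts over sessions already yield a workable “people also bought” list:

// Count, for every product, how often each other product appears in the
// same session (view or purchase history).
function coOccurrence(sessions: string[][]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (const session of sessions) {
    for (const a of session) {
      for (const b of session) {
        if (a === b) continue;
        if (!counts.has(a)) counts.set(a, new Map());
        const row = counts.get(a)!;
        row.set(b, (row.get(b) ?? 0) + 1);
      }
    }
  }
  return counts;
}

// Rank unseen products by how strongly they co-occur with what this
// shopper has already viewed.
function recommend(viewed: string[], counts: Map<string, Map<string, number>>, k = 5): string[] {
  const scores = new Map<string, number>();
  for (const item of viewed) {
    for (const [other, n] of counts.get(item) ?? []) {
      if (!viewed.includes(other)) scores.set(other, (scores.get(other) ?? 0) + n);
    }
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([id]) => id);
}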
Also Read: Content Modeling 101 for Omnichannel Using dotCMS
A smart strategy is to vary the merchandising approach for different contexts and customers. For first-time or anonymous visitors (where no prior data is known), showing the entire endless catalog would be counterproductive.
It’s often better to present curated selections like bestsellers or trending products. This “warm start” gives new shoppers a manageable starting point instead of a blank page or an intimidating browse-all experience.
On the other hand, returning customers or logged-in users can immediately see personalized recommendations based on their history. The key is using data wisely to guide different customer segments toward discovery without ever letting them feel lost.
Modern recommendation systems also use contextual data and advanced algorithms. For instance, some platforms adjust recommendations in real-time based on the shopper’s current session behavior or even the device they use. (Showing simpler, more general suggestions on a mobile device where screen space is limited can outperform overly detailed personalization, whereas desktop can offer more nuanced recommendations.)
Cutting-edge e-commerce architectures are exploring vector embeddings and deep learning models to capture subtle relationships between products and users, enabling features like visual search or chatbot-based product discovery. We can build these algorithms. Talk to us.
UX design choices play a huge role in whether the shopping experience feels inspiring or exhausting. Just because you can display thousands of products doesn’t mean you should dump them all in front of the user at once.
The content at the top of category pages, search results, and homepages is disproportionately influential. Critical items (whether they are popular products, lucrative promotions, or highly relevant personalized picks) should be merchandised in those prime slots. As a case in point, product recommendations or banners shown in the top viewport are roughly 1.7× more effective than those displayed below the fold.
There is an ongoing UX debate in ecommerce about using infinite scrolling versus traditional pagination or curated grouping. Infinite scroll automatically loads more products as the user scrolls down. This can increase engagement time, as users don’t have to click through pages and are continuously presented with new items.
However, infinite scroll can also backfire if not implemented carefully. If shoppers feel they are wading through a bottomless list, they may give up. And once they scroll far, finding their way back or remembering where something was can be difficult. User testing has found that people have a limited tolerance for scrolling: after a certain point, they either find something that catches their eye or they tune out.
A balanced approach is often best. Many sites employ a hybrid: load a substantial chunk of products with an option to “Load More” (giving the user control), or use infinite scroll but with clear segmentation and filtering options always visible.
Also Read: Headless vs Hybrid vs “Universal” CMS: Which Model Fits Multi-Team Delivery?
Aside from search and filters, consider adding guided discovery tools in the UX. This might include features like dynamic product groupings, recommendation carousels, or wizards and quizzes. For example, you can programmatically create curated “shelves” on the fly, e.g., a “Best Gifts for Dog Lovers” collection that appears if the user’s behavior suggests interest in pet products.
These can be powered by the same algorithms we discussed earlier, which can identify meaningful product groupings from trends in data. Such groupings address a common UX gap: a customer may be looking for a concept (“cream colored men’s sweater” or “outdoor kitchen ideas”) that doesn’t neatly map to a single pre-defined category.
Relying solely on static navigation might give them poor results or force them to manually hunt. By dynamically detecting intent clusters and generating pages or sections for them, you improve the chance that every user finds a relevant path. It’s impractical for human merchandisers to pre-create pages for every niche query (there could be effectively infinite intents), so this is an area where algorithmic assistance shines.
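A minimal sketch of that idea, assuming behavioral signals have already been reduced to attribute facets per session (all names here are hypothetical):

type Signal = { sessionId: string; facets: string[] }; // e.g. ["category:sweater", "color:cream", "gender:men"]

function detectIntentClusters(signals: Signal[], minSessions = 50): string[][] {
  // Group sessions by the exact facet combination they converged on.
  const bySignature = new Map<string, Set<string>>();
  for (const s of signals) {
    const signature = [...s.facets].sort().join("|");
    if (!bySignature.has(signature)) bySignature.set(signature, new Set());
    bySignature.get(signature)!.add(s.sessionId);
  }
  // Combinations hit by enough distinct sessions become candidate pages.
  return [...bySignature.entries()]
    .filter(([, sessions]) => sessions.size >= minSessions)
    .map(([signature]) => signature.split("|"));
}

Each surviving cluster can then be rendered as a landing page or carousel: the facet set ["category:sweater", "color:cream", "gender:men"] becomes a generated “Cream Men’s Sweaters” collection.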
Merchandising is no longer a downstream activity that happens after inventory is set; it’s upstream, shaping how catalogs are structured, how data is modeled, and how algorithms are trained. Teams that treat merchandising as a technical capability—not just a marketing function—will be positioned to turn complexity into competitive advantage.
Who Wins Where? Saleor vs MedusaJS vs Vendure
Headless commerce is no longer a niche experiment. More and more brands are converting from a traditional setup, with a clear focus on tech that cuts costs. But once you decide to go headless, you hit the next big question: which platform actually fits your team and your roadmap?
For most developers weighing open-source options, the short list usually comes down to MedusaJS, Saleor, and Vendure. On paper, they look similar (API-first, extensible, and community-driven). In practice, they couldn’t be more different.
|  | MedusaJS (Node/JS) | Saleor (Python/GraphQL) | Vendure (Node/TS) |
| --- | --- | --- | --- |
| Core Tech Stack | Node.js, Express, JavaScript | Python, Django, GraphQL | Node.js, TypeScript, NestJS |
| API Style | REST (API-first) | GraphQL (API-only) | GraphQL (API-first; Admin & Shop APIs) |
| Built-in Features | Multi-currency, discounts, returns, plugins, etc. (add others via plugins) | Multi-language, multi-currency, extensive catalog, many features by default | Multi-currency, custom fields, promotions (features added via plugins/config) |
| Customization | High: modular and plugin-based; easy JS extensions | Moderate: many built-ins, extensible via plugins or separate apps | Very high: plugin system, modular architecture, strong typing |
| Ease of Use | Developer-friendly, quick start; non-tech users need the provided admin UI | Rich features but more complex; steeper learning curve for devs | Developer-focused, requires TS/Nest familiarity; admin UI is functional but less modern UX |
| Community | Small but fast-growing (30k+ stars); very active Discord | Medium-sized established community (20k+ stars); strong enterprise backing | Small but dedicated (6k+ stars); maintainers very responsive |
| Best For | Lean teams, DTC brands needing customization and quick iteration; JS-centric teams | Enterprise or large-scale projects; teams wanting GraphQL and out-of-box completeness | TS/JS developers wanting a tailored framework; medium businesses with specific custom needs |
Before getting into feature-by-feature comparisons, it’s best to understand the background and core focus of each platform. Here’s a brief overview:
MedusaJS is a Node.js-based headless commerce platform first released in 2021. It positions itself as an open-source alternative to Shopify. The core Medusa server is built with JavaScript (on Node and Express). Medusa is API-first (offering a REST API by default) and designed to be modular, lightweight, and easy to extend with plugins.
Saleor is a headless e-commerce platform built with Python and Django, originally released in 2018 (though its roots go back further, with active development for over a decade). It takes a GraphQL-first approach—all functionality is exposed via GraphQL APIs—and is designed for scalability and high-performance operations.
Saleor is often positioned as a modern alternative to enterprise platforms like Magento. In fact, when Magento 1 reached end-of-life, Saleor touted itself as a solution that’s “equally good if not better, and relatively inexpensive” for merchants facing costly Magento 2 upgrades. The platform is maintained by the company Saleor Commerce (previously Mirumee) and is fully open-source (under a BSD 3-clause license) with an option for cloud hosting as a paid service.
Vendure, like Medusa, is API-first and headless, meaning you build your own storefront or integrate with any front-end technology via its API layer. Vendure leverages the NestJS framework under the hood (a popular Node framework inspired by Angular’s architecture) and uses TypeScript end-to-end.
Vendure’s philosophy is to provide a modern, modular, and developer-first foundation for e-commerce, with an emphasis on strong typing, a rich plugin architecture, and GraphQL APIs. It is fully open-source (MIT licensed) and maintained by a core team (Vendure GmbH) with an active community of contributors.
Also Read: Headless vs Hybrid vs “Universal” CMS: Which Model Fits Multi-Team Delivery?
Here’s how they fare:
Medusa exposes a RESTful API by default for both storefront and admin interactions. This straightforward approach often means easier onboarding for developers (REST is ubiquitous and simple to test). Saleor is strictly GraphQL: all queries and mutations go through GraphQL endpoints. Vendure, by design, also uses GraphQL APIs for both its shop and admin endpoints.
(Vendure does allow adding REST endpoints via custom extensions if needed, but GraphQL is the primary interface).
There are pros and cons here: REST is ubiquitous, easy to cache, and trivial to exercise with any HTTP client, while GraphQL lets clients fetch exactly the fields they need in a single round trip at the cost of a more involved query and caching layer.
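As a rough illustration (hosts, ports, and the channel name are placeholders; the request shapes follow each platform’s public APIs), here is the same product-listing call in both styles:

async function fetchBothWays() {
  // REST (Medusa): a fixed-shape resource per endpoint; easy to test with curl.
  const rest = await fetch("http://localhost:9000/store/products?limit=10");
  const { products } = await rest.json();

  // GraphQL (Saleor): the client names exactly the fields it needs in one
  // request, at the cost of a heavier query layer and caching story.
  const gql = await fetch("http://localhost:8000/graphql/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{
        products(first: 10, channel: "default-channel") {
          edges { node { id name } }
        }
      }`,
    }),
  });
  const { data } = await gql.json();
  return { products, data };
}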
All three are headless and API-first, meaning the back-end business logic is decoupled from any front-end. They each allow (or encourage) running additional services for certain tasks:
The architecture is relatively monolithic but modular internally. You run a Medusa server which handles all commerce logic and exposes APIs. Medusa’s philosophy is to keep the core simple and let functionality be added via plugins (which run in the same process).
This design avoids a microservices explosion for small projects; everything is one Node process (plus a database and perhaps a search engine). This is great for smaller teams. Medusa uses a single database (by default Postgres) for storing data, and you can deploy it as a single service (with optional separate services for things like a storefront or an admin dashboard UI).
Saleor’s architecture revolves around Django conventions. It’s also monolithic in the sense that the Saleor server handles everything (GraphQL endpoints, business logic, etc.) in one service, backed by a PostgreSQL database. However, Saleor encourages a slightly different extensibility model: you can extend by writing “plugins” within the core or by building “apps” (microservices) that integrate via webhooks and the GraphQL API.
This dual approach means if you want to alter core behavior deeply, you might write a Python plugin that has access to the database and internals. Or, if you prefer to keep your extension separate (or write it in another language), you can create an app that talks to Saleor’s API from the outside and is authorized via API tokens.
The latter is useful for decoupling (and is language-agnostic), but it means that extension can only interact with Saleor through GraphQL calls and webhooks, not direct DB access. Saleor’s design also supports containerization and scaling; it’s easy to run Saleor in Docker and scale out the services (plus it has support for background tasks and uses things like Celery for asynchronous jobs in newer versions).
Vendure is structured as a Node application with a built-in modular system. It runs as a central server (plus an optional separate worker process for heavy tasks). Vendure’s internal architecture is plugin-based: features like payment processing, search, etc., are implemented as plugins that can be included or replaced.
Developers can write their own plugins to extend functionality without forking the core. Vendure uses an underlying NestJS framework, which imposes a certain organized structure (modules, providers, controllers, etc.) that leads to a clean separation of concerns.
It also means Vendure can benefit from NestJS features like dependency injection and middleware. Vendure includes a separate Worker process capability: for example, sending emails or updating search indexes can be offloaded to a background worker that runs asynchronously. This is great for scalability, as heavy operations don’t block the main API event loop.
Vendure’s use of GraphQL and a strongly typed schema also means frontends can auto-generate typed SDKs (for example, generating TypeScript query hooks from the GraphQL schema).
Medusa provides an admin panel (open source) built with React and GatsbyJS (TypeScript). It’s a separate app that communicates with the Medusa server over REST. You can deploy it separately or together with the server.
The admin is quite feature-rich (products, orders, returns, etc.) and since it’s React-based, it’s relatively straightforward for JS developers to customize or extend with new components. Medusa’s admin UI being a decoupled frontend means it’s optional: if you wanted, you could even build your own admin or integrate Medusa purely via API; but most users will use the provided one for convenience.
Saleor’s admin panel is also decoupled and is built with React (they have a design system called Macaw-UI). It interacts with the Saleor core via GraphQL. You can use the official admin or fork/customize it if needed. Saleor allows creating API tokens for private apps via the admin, so you can integrate external back-office systems easily. Saleor’s admin is quite polished and supports common tasks (managing products, orders, discounts, etc.). As with Medusa, the admin is essentially a client of the backend API.
Vendure’s admin UI comes as a default part of the package; implemented in Angular and delivered as a plugin (AdminUiPlugin) that serves the admin app. By default, a standard Vendure installation includes this admin. Administrators access it to manage catalog, orders, settings, etc.
Even if you’re not an Angular developer, you can still use the admin as provided. Vendure documentation notes that you “do not need to know Angular to use Vendure” and that the admin can even be extended with custom UI extensions written in other frameworks (they provide some bridging for that).
However, major custom changes to the admin likely require Angular skills. Some teams choose to build a custom admin interface (e.g., in React) by consuming Vendure’s Admin GraphQL API, but that’s a bigger effort. So out-of-the-box, Vendure gives you a functioning admin UI which is sufficient for many cases, though perhaps not as slick as Medusa’s or Saleor’s React-based UIs in terms of look and feel.
All three being headless means you’re expected to build or integrate a storefront. To jump-start development, each provides starter storefront projects:
Medusa offers a Gatsby starter that’s impressively full-featured, including typical e-commerce pages (product listings, cart, checkout) and advanced features like customer login and order returns, all wired up to Medusa’s backend. It basically feels like a ready-made theme you can customize, which is great for fast prototyping. Medusa also has starters or example integrations with Next.js, Nuxt (Vue), Svelte, and others.
Saleor provides a React/Next.js Storefront starter (sometimes referred to as “Saleor React Storefront”). It’s a Next.js app that you can use as a foundation for your shop, already configured to query the Saleor GraphQL API. This covers basics like product pages, cart, etc., but might not be as feature-complete out of the box as Medusa’s Gatsby starter (for example, handling of returns or customer accounts might require additional work).
Vendure, as mentioned, has official starters in Remix, Qwik, and Angular. These starter storefronts include all fundamental e-commerce flows (product listing with facets, product detail, search, cart, checkout, user accounts, etc.) using Vendure’s GraphQL API. The Remix and Qwik starters are particularly interesting as they focus on performance (Remix for fast server-rendered React, Qwik for ultra-fast hydration). Vendure thus gives a few choices depending on your front-end preference, though notably, there isn’t an official Next.js starter from Vendure’s team as of 2025. However, the community or third parties might provide one, and in any case, you can build one easily with their GraphQL API.
All modern e-commerce platforms cover the basics: product listings, shopping cart, checkout, order management, etc. However, differences emerge in how features are implemented and what is provided natively vs. via extensions. Let’s compare some key feature areas and note where each platform stands out:
Products in Medusa can have multiple variants (for example, a T-shirt with different sizes/colors) and are grouped into Collections (a collection is essentially a group of products, often used like categories). Medusa also supports tagging products with arbitrary tags for additional grouping or filtering logic.
Medusa’s philosophy is to keep the core product model fairly straightforward, and encourage integration with external Product Information Management (PIM) or CMS if you need extremely detailed product content (e.g., rich descriptions, multiple locale content, etc.). It does provide all the basics like images, description, prices, SKUs, etc., and inventory tracking out of the box.
Saleor’s product catalog is a bit more structured. It supports organizing products by Categories and Collections. A Category in Saleor is a tree structure (like traditional e-commerce categories) and a Collection is more like a curated grouping (similar to Medusa’s collections).
Saleor also has a notion of Product Types and attributes; you can define custom product attributes and assign them to types (for example, a “Shoes” product type might have size and color attributes). These attributes can then be used as filters on the storefront.
This system provides flexibility to extend product data without modifying code, which can be powerful for store owners. Saleor supports multiple product variants per product as well (with the attributes distinguishing them).
As for tagging, Saleor doesn’t offer simple free-form product tags via the admin, but because it has custom attributes and categories, that gap is usually filled by those features.
Saleor’s admin also allows adding metadata to products if needed, and its GraphQL API is quite adept at querying any of these structures.
Vendure combines aspects of both. It has Product entities that can have variants, and it supports a Category-like system through a feature called Collections (Vendure’s Collections are hierarchical and can have relations, effectively serving the role of categories).
Vendure also allows defining Custom Fields on products (and other entities) via configuration, meaning you can extend the data model without hacking the core. For example, if you want to add a “brand” field to products, Vendure lets you do that through config and it will generate the GraphQL schema for it. This is part of Vendure’s extensibility.
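For illustration, a config fragment along these lines is roughly all it takes (the field names here are hypothetical); Vendure then generates the corresponding GraphQL schema and admin inputs:

// vendure-config.ts fragment (sketch): add a "brand" field to Product
// without touching core code. Merge this into your full VendureConfig.
import type { VendureConfig } from '@vendure/core';

export const catalogExtensions: Partial<VendureConfig> = {
  customFields: {
    Product: [
      { name: 'brand', type: 'string' },
      { name: 'warrantyMonths', type: 'int', defaultValue: 12 },
    ],
  },
};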
Vendure supports facets/facet values which can be used as product attributes for filtering (similar to Saleor’s attributes).
Vendure provides a highly customizable catalog structure with a bit of coding, whereas Saleor provides a lot through the admin UI, and Medusa keeps it simpler (with the option to integrate something like a CMS or PIM for additional product enrichment).
Saleor has built-in multi-language support for product data. Product names, descriptions, etc., can be localized in multiple languages through the admin, and the GraphQL API allows querying in a specified language. This is one of Saleor’s selling points (multi-language, multi-currency).
Vendure supports multi-language by marking certain fields as translatable. Internally, it can store translations for product name, slug, description, etc., in different languages. This is configured at startup (you define which languages you support), and the admin UI allows inputting translations. It’s quite robust in that area for an open-source platform.
MedusaJS does not natively have multi-language fields for products in the core. Typically, merchants using Medusa would handle multi-language by using an external CMS to store translated content (for example, using Contentful or Strapi with Medusa, as suggested by Medusa’s docs).
The Medusa backend itself might not store a French and English version of a product title; you’d either store one in the default language or use metadata fields or region-specific products. However, Medusa’s focus on regions is more about currency and pricing differences, not translations.
Recognizing this gap, the community has created plugins to assist with multilingual catalogs (for instance, there’s a plugin that works with MeiliSearch to index products with internationalized fields). Moreover, Medusa’s Admin recently introduced multi-language support for the admin interface (so the admin UI labels can be in different languages), but that’s separate from actual product content translation.
For a primarily single-language store or one with minimal translation needs, Medusa’s approach is fine, but if you have a complex multi-lingual requirement, Saleor or Vendure may require less custom work.
A highlight of Medusa is its multi-currency and multi-region support. In Medusa, you can define Regions which correspond to markets (e.g., North America, Europe, Asia) and each region has a currency, tax rate, and other settings.
For example, you can have USD pricing for a US region and EUR pricing for an EU region, for the same product. Medusa’s admin and API let you manage different prices for different regions easily. This is extremely useful for DTC brands selling internationally. Medusa also supports setting different fulfillment providers or payment providers per region.
Saleor supports multi-currency through its Channels system. You can set up multiple channels (which could be different countries, or different storefronts) each with their own currency and pricing. Saleor even allows differentiating product availability or pricing by channel.
This covers the multi-currency need effectively (Saleor’s demo often shows, for instance, USD and PLN as two currencies for two channels). Tax calculation in Saleor can integrate with services or be configured per channel as well. So, Saleor is on par with Medusa in multi-currency capabilities, and it additionally handles multi-language as mentioned. It’s truly built for multi-market operation.
Vendure has the concept of Channels too. Channels can represent different storefronts or regions (for example, an EU channel and a US channel). Each channel can have its own currency, default language, and even its own payment/shipping settings.
Vendure allows products to be in multiple channels with different prices if needed. This is basically how Vendure supports multi-currency and multi-store scenarios. It’s quite flexible, although configuring and managing multiple channels requires deliberate setup (like creating a channel, assigning products, etc.).
Vendure’s approach is powerful for multi-tenant or multi-brand setups as well (one Vendure instance could serve multiple shops if configured via channels and perhaps some custom logic).
Medusa does not have a full-text search engine built into the core; instead, it provides easy integrations for external search services. You can query products by certain fields via the REST API, but for advanced search (fuzzy search, relevancy ranking, etc.), Medusa leans on plugins.
The Medusa team has provided integration guides or plugins for MeiliSearch and Algolia, two popular search-as-a-service solutions. For example, you can plug in MeiliSearch and have typo-tolerant, fast search on your catalog.
This approach means a bit of setup but results in a better search experience than basic SQL filtering. The trade-off is that search is only as good as the external system you use; if you don’t configure one, you only have simple queries.
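As a sketch of what that wiring looks like (the host and key are placeholders, and the option names follow the community MeiliSearch plugin for Medusa v1, so treat the exact shape as illustrative):

// medusa-config fragment (sketch): register the MeiliSearch plugin so
// product data is indexed for typo-tolerant, ranked search.
const plugins = [
  // ...other plugins
  {
    resolve: "medusa-plugin-meilisearch",
    options: {
      config: { host: "http://127.0.0.1:7700", apiKey: "your-master-key" },
      settings: {
        products: {
          indexSettings: {
            searchableAttributes: ["title", "description", "variant_sku"],
            displayedAttributes: ["title", "description", "thumbnail", "handle"],
          },
        },
      },
    },
  },
];

module.exports = { plugins };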
Saleor’s approach to search (at least until recently) was relatively basic: you could perform text queries on product name or description via GraphQL to implement a simple search bar. It did not include a built-in advanced search engine or ready connectors to one at that time.
Essentially, to get a robust search in Saleor, you might need to use a third-party service or write a plugin/app. Given that Saleor is GraphQL, one could use something like ElasticSearch by syncing data to it, but that requires development work (some community projects likely exist). In an enterprise context, it’s expected you’ll integrate a dedicated search system.
Vendure includes a built-in search mechanism which is pluggable. By default, it uses a simple SQL-based search (with full-text indexing on certain fields) to allow basic product searches and filtering by facets. For better performance or features, Vendure provides an ElasticsearchPlugin, a drop-in module that, when enabled, syncs product data to Elasticsearch and uses that for search queries.
There’s also mention of a Typesense-based advanced search plugin in development. This shows Vendure’s emphasis on modularity: you can start with the default search and later move to Elastic or another search engine by adding a plugin, without changing your storefront GraphQL queries.
Vendure’s search supports faceted filtering (e.g., by attributes, price ranges, etc.), especially when using Elasticsearch. This is great for storefronts with category pages that need filtering by various criteria.
All three platforms handle the full checkout flow including cart, payment processing (via integrations), and order management, but with some nuances:
Each platform provides APIs to manage a shopping cart (often called an “order draft” or similar) and then convert it to a completed order at checkout.
Medusa ships with several payment providers integrated: Stripe, PayPal, Klarna, Adyen are supported. Medusa abstracts payment logic through a provider interface, so adding a new gateway (say Authorize.net or Razorpay) is a matter of either installing a community plugin or writing a small plugin yourself to implement that interface.
Thanks to this abstraction, developers have successfully extended Medusa with many region-specific providers too. Medusa does not charge any transaction fees on top; you use your gateway directly (and with the new Medusa Cloud, the team behind Medusa emphasize they don’t take a cut either).
Saleor supports Stripe, Authorize.net, Adyen out of the box, and through its plugin system, it also has integration for others like Braintree or Razorpay. Being Python, if an API exists for a gateway, you can integrate it via a Saleor plugin in Python.
Saleor’s approach to payments is also abstracted (it had a payment plugins interface). So both Medusa and Saleor cover the common global gateways, with Saleor perhaps having a slight edge in some additional regional ones via community (e.g., Razorpay as mentioned).
Vendure has a robust plugin library that includes payments such as Stripe (there’s an official Stripe plugin), Braintree, PayPal, Authorize.net, Mollie, etc. Vendure’s documentation guides on implementing custom payment processes as well. So Vendure’s coverage is quite broad given the community contributions.
Medusa shines with some advanced features here. It supports full Return Merchandise Authorization (RMA) workflows. This means customers can request returns/exchanges, and Medusa’s admin allows processing returns, offering exchanges or refunds, tracking inventory back, etc. Medusa also uniquely has the concept of Swaps: allowing exchanges where a returned item can trigger a new order for a replacement.
These are sophisticated capabilities usually found in more expensive platforms, and having them in Medusa is a big plus for fashion and apparel DTC brands that deal with returns often. Medusa’s admin and API let you handle order status transitions (payment authorized, fulfilled, shipped, returned, etc.), and it can integrate with fulfillment providers or you can handle it manually via admin.
Saleor covers standard order management. You can see orders, update statuses, process payments (capture or refund), etc. However, a noted difference is that Saleor’s approach to returns/refunds was a bit more manual or basic at least in earlier versions.
There isn’t a built-in automated RMA flow; a store operator might have to mark an order as returned and manually create a refund in the payment gateway. They may improve this over time or provide some apps, but it isn’t as streamlined as Medusa’s RMA feature.
For many businesses, this might be acceptable if returns volume is low or they handle it via customer service processes. But it’s a point where Medusa clearly invested effort to differentiate (likely because Shopify’s base offering lacks easy returns handling too, and Medusa wanted to cover that gap).
Vendure’s core includes order states and a workflow that can be customized. It doesn’t natively have a “magic” RMA module built-in to the same degree, but you can implement returns by leveraging its order modifications.
Vendure does allow refunds (it has an API for initiating refunds through the payment plugins if supported), and partial fulfillments of orders, etc. If a robust returns system is needed, it might require some custom development or use of a community plugin in Vendure. Since Vendure is very modular, one could create a returns plugin that automates some of that.
Medusa supports discount codes and gift cards from within its own functionality. You can create percentage or fixed-amount discounts, limit them to certain products or customer groups, set expiration, etc. Medusa allows product-level discounts (specific products on sale) easily. It also has a gift card system which many platforms don’t include by default.
Saleor also supports discounts (vouchers) and gift cards. Saleor’s discount system can apply at different levels; one interesting note is that Saleor can do category-level discounts (apply to all products in a category), which might be a built-in concept. Saleor, being oriented to marketing needs, has quite an extensive promotions logic including “sales” and “vouchers” with conditions and requirements.
Vendure includes a Promotions system where you can configure promotions with conditions (e.g., order total above X, or buying a certain product) and actions (e.g., discount percentage or free shipping). It’s quite flexible and is done through config or the admin UI. Vendure doesn’t call them vouchers but you can set up coupon codes associated with promotions. Gift cards might not be in the core, but could be implemented or might exist as a plugin.
One of the biggest reasons to choose a headless open-source solution over a SaaS platform is the ability to customize and extend it to fit your business, rather than fitting your business into it. Let’s compare how our three contenders enable extension:
MedusaJS is designed with a plugin architecture from the ground up. Medusa encourages developers to add features via plugins rather than forking the code. A plugin in Medusa is essentially an NPM package that can hook into Medusa’s backend; it can add API endpoints, extend models, override services, etc.
For instance, if you wanted to integrate a third-party ERP, you could write a plugin that listens to order creation events and sends data to the ERP. Medusa also prides itself on allowing replacement of almost any component; you could even swap out how certain calculations work by providing a custom implementation via dependency injection (advanced use-case).
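For a flavor of what that looks like, here is a sketch of a Medusa (v1-style) subscriber that forwards new orders to a hypothetical ERP endpoint; the service names come from Medusa’s dependency-injection container, and the ERP URL is a placeholder:

// src/subscribers/order-sync.ts (sketch)
class OrderSyncSubscriber {
  protected orderService: any;

  constructor({ eventBusService, orderService }: any) {
    this.orderService = orderService;
    // React to the "order.placed" event emitted by Medusa's core.
    eventBusService.subscribe("order.placed", this.handleOrderPlaced);
  }

  handleOrderPlaced = async ({ id }: { id: string }) => {
    const order = await this.orderService.retrieve(id, { relations: ["items"] });
    // Push the order to the ERP; endpoint and payload are placeholders.
    await fetch("https://erp.example.com/api/orders", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(order),
    });
  };
}

export default OrderSyncSubscriber;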
Saleor’s extensibility comes in two flavors as noted: Plugins (in-process, written in Python) and Apps (out-of-process, language-agnostic). Saleor’s plugins are used for things like payment gateways, shipping calculations, etc., and run as part of the Saleor server. If you have a specific business logic (say, a custom promotion rule), you might implement it as a plugin so that it can interact with the core logic and database.
On the other hand, Saleor introduced a concept of Saleor Apps which are somewhat analogous to Shopify apps; they are separate services that communicate via the GraphQL API and webhooks. An app can be hosted anywhere, subscribe to events (like “order created”) via webhook, and then call back to the API to do something (like add a loyalty reward, etc.).
This decouples the extension and also means you could use any programming language for the app. The admin panel allows store staff to install and manage these apps (grant permissions, etc.). The advantage of the app approach is safer upgrades (your app doesn’t hack the core) and more flexibility in tech stack; the downside is a slight overhead of maintaining a separate service and the limitations of only using the public API.
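A sketch of the app shape (URLs, the port, the token variable, and the follow-up action are placeholders; the callback query uses standard Saleor order fields): a tiny Express service receives the webhook and calls back into Saleor’s GraphQL API:

import express from "express";

const app = express();
app.use(express.json());

app.post("/webhooks/order-created", async (req, res) => {
  // Payload delivered by Saleor's webhook; shape depends on your subscription.
  const order = req.body;
  // Call back into Saleor, authenticated with the app's token.
  const saleorRes = await fetch("https://my-shop.saleor.cloud/graphql/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.SALEOR_APP_TOKEN}`,
    },
    body: JSON.stringify({
      query: `query Order($id: ID!) { order(id: $id) { number userEmail } }`,
      variables: { id: order.id },
    }),
  });
  const { data } = await saleorRes.json();
  // ...grant loyalty points in your own system based on data.order, etc.
  res.sendStatus(200);
});

app.listen(3000);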
Vendure takes an extreme plugin-oriented approach. Almost all features in Vendure (payments, search, reviews, etc.) are implemented as plugins internally, and you can include or exclude them in your server setup. Writing a Vendure plugin means writing a TypeScript class that can tap into the lifecycle of the app, add new GraphQL schema fields, override resolvers or services, etc.
The core of Vendure provides the commerce primitives, and you compose the rest. This is why some view Vendure as ideal if you have very custom requirements. The community has contributed plugins for many needs (reviews system, wishlist, loyalty points, etc.). Vendure’s official plugin list includes not only integrations (like payments, search) but also features (like a plugin that adds support for multi-vendor marketplace functionality, which is something a company might need to add to create a marketplace).
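The skeleton of such a plugin, roughly following Vendure’s documented pattern (the query name here is made up for illustration):

import { PluginCommonModule, VendurePlugin } from '@vendure/core';
import { Query, Resolver } from '@nestjs/graphql';
import gql from 'graphql-tag';

// A resolver for the new field; NestJS-style, since Vendure is built on NestJS.
@Resolver()
class GreetingResolver {
  @Query()
  greeting(): string {
    return 'Hello from a Vendure plugin';
  }
}

// The plugin bundles schema extensions and providers into one module.
@VendurePlugin({
  imports: [PluginCommonModule],
  shopApiExtensions: {
    schema: gql`
      extend type Query {
        greeting: String!
      }
    `,
    resolvers: [GreetingResolver],
  },
})
export class GreetingPlugin {}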
As of 2025, Medusa has introduced Medusa Cloud, a managed hosting platform for Medusa projects. This caters to teams that want the benefits of Medusa without dealing with server ops. The Medusa Cloud focuses on easy deployments (with Git integration and preview environments) and transparent infrastructure-based pricing (no per-transaction fees).
This shows that Medusa is evolving to serve more established businesses that might require uptime guarantees and easier scaling. Apart from that, Medusa’s core being open-source means you can self-host on AWS, GCP, DigitalOcean, etc., using Docker or Heroku or any Node hosting. Many early-stage companies go that route to save cost.
Saleor Commerce (the company) offers Saleor Cloud, which is a fully managed SaaS version of Saleor. It’s targeted at mid-to-large businesses with a pricing model that starts in the hundreds of dollars per month. This service gives you automatic scaling, backups, etc., and might be attractive if you don’t want to run your own servers.
However, it’s a significant cost that perhaps only later-stage businesses or those with no devops inclination would consider. Saleor’s open-source can also be self-hosted in containers; some agencies specialize in hosting Saleor. Because Saleor is more complex to set up (with services like Redis, etc., possibly needed), the cloud option is a convenient but pricey offering.
Vendure’s company does not currently offer a public cloud SaaS. They focus on the open-source product and consulting. That said, because Vendure is Node, you can host it similarly easily on any Node-friendly platform. Some third-party hosting or PaaS might even have one-click deployments for Vendure.
From a total cost of ownership perspective: all three being open-source means you avoid licensing fees of traditional enterprise software. If self-hosted, your costs are infrastructure (cloud servers, etc.) and developer time.
For any growing business, the platform needs to handle increased load: more products, more traffic, flash sales, etc. Let’s consider how each platform fares and what it means for your project’s scalability:
MedusaJS (Node/Express, REST) runs as a stateless Node service: you can scale horizontally by running multiple instances behind a load balancer and offload search or caching to dedicated services. Saleor (Python/Django, GraphQL) is built for containerized deployment and pushes heavy work to background tasks via Celery. Vendure (Node/TS, GraphQL) ships an optional separate worker process, so indexing and email jobs don’t block the main API event loop.
Also Read: Content Modeling 101 for Omnichannel Using dotCMS
All three can be customized heavily. If you foresee the need to implement highly unique business logic or integrate unusual systems, consider how you’d do it on each: in Medusa, through in-process JavaScript plugins; in Saleor, through Python plugins or external apps talking to the GraphQL API; in Vendure, through TypeScript plugins that hook into the core.
MedusaJS, Saleor, and Vendure all tick the “headless, open-source, flexible” boxes but each wins in different places.
Your right choice depends less on which is “objectively best” and more on which aligns with your team’s skills, your growth plans, and the trade-offs you’re willing to make. In the end, the winner is the one that fits your context.
Mayank Patel
Sep 2, 2025
13 min read
Migrating from Magento to MedusaJS: The Enterprise Technical Guide
For enterprises running on Magento, the challenges of scaling, maintaining custom code, and optimizing performance can hold back growth. MedusaJS offers a lightweight, API-first alternative that unlocks flexibility, speed, and lower overhead.
In this guide, we’ll walk through why enterprises are making the switch, the technical differences between Magento and Medusa, and how to plan and execute a phased migration with minimal disruption.
Let's walk through some of the biggest differences between the two platforms and see how they stack up.
Modern Architecture and Flexibility
Magento is a monolithic platform (built in PHP) where frontend, backend, and extensions are tightly coupled. Customizing Magento often means dealing with complex XML configurations and overriding core code, which can introduce maintenance issues.
For example, adding a seemingly simple feature like a custom checkout step or gift-wrapping option in Magento can require extensive PHP/XML work and risk breaking things on upgrades. In contrast, MedusaJS has a lightweight, API-first architecture that is fully headless.
Its modular design allows you to replace or extend components without touching core logic. Medusa’s flexibility means you can compose your commerce stack with best-of-breed services and swap things out as needed, rather than being locked into a single rigid system.
For enterprise retailers, site performance is critical. MedusaJS delivers significant performance gains over Magento’s legacy stack. Medusa’s decoupled frontend allows you to use modern frameworks (like Next.js) and techniques (SSR/SSG) to optimize page load times. For example, using Medusa with Next.js server-side rendering can significantly improve initial load speed and SEO.
Scalability is a major consideration for growing DTC brands and retailers. MedusaJS’s headless, stateless architecture scales more easily in modern cloud environments. You can run multiple Medusa server instances behind a load balancer to handle high traffic, and scale your frontend independently as needed.
Because Medusa is modular, you can also scale individual services (for example, the database, search index, or cache) without scaling the entire platform. Magento can certainly scale to enterprise levels, but doing so often requires significant infrastructure (multiple web nodes with sticky sessions or Varnish caching, separate database servers, etc.) and careful tuning.
Magento’s heavy footprint means higher hosting costs and complexity as traffic grows. By contrast, Medusa’s lighter Node.js service can be containerized and deployed on modern orchestration platforms for efficient horizontal scaling.
Maintenance is also smoother with Medusa. Upgrading Magento, especially if you have many custom modules, can be a risky, time-consuming project (with potential regressions caused by core changes or extension incompatibilities). Medusa’s design encourages keeping customizations in isolated plugins or separate services. You avoid the “house of cards” effect often seen in Magento upgrades.
Moreover, Medusa is open-source and free with an active community, so you’re not tied to Adobe’s release cycle or licensing for enterprise features. Many features that are premium add-ons in Magento (or require the Commerce edition) come out-of-the-box in Medusa or via free plugins; e.g. advanced promotions, multi-currency, and so on.
Also Read: Who Wins Where? Saleor vs MedusaJS vs Vendure
Below are the key planning steps and best practices:
Start by auditing your current Magento setup in detail. This involves:
Review how your product catalog, categories, variants, pricing, and customers are structured in Magento, and map these to Medusa’s data models. Medusa has its own schemas for products, variants, orders, customers, etc., which are more straightforward than Magento’s (Magento uses an EAV model for products with attribute sets, which doesn’t directly exist in Medusa).
Identify any custom product attributes or complex product types (e.g. Magento bundle or configurable products) that will need special handling. For example, Magento “configurable products” with multiple options will likely map to a product with multiple variants in Medusa.
Make sure Medusa’s model can accommodate all necessary data (it usually can, via built-in fields or using metadata for custom attributes). Early on, define how each entity (products, SKUs, categories, customers, orders, discount codes, etc.) will translate into the Medusa schema.
Magento installations often have numerous third-party modules and custom extensions providing extra features (from SEO tools to loyalty programs). List out all installed Magento modules and custom code. You can generate a module list via CLI: for example, running
php bin/magento module:status > modules_list.txt
will output all modules in your Magento instance. Using this list, evaluate each module’s functionality: decide whether the feature is still needed, whether a Medusa plugin or external service can replace it, or whether it will require custom development.
Consider the volume of data to migrate (number of SKUs, customers, orders, etc.) and its cleanliness. It’s also a chance to eliminate outdated or low-value data (for example, old customer records, or products that are no longer sold) so you start “clean” on Medusa.
Note: It’s often helpful to create a mapping document that enumerates Magento entities and how each will be handled in Medusa (e.g., Magento customer entity -> Medusa customer, including addresses; Magento reward points -> integrate XYZ loyalty service via API). This becomes your blueprint.
With requirements understood, the next step is to choose a migration approach. For most enterprises, a phased migration strategy is highly recommended over a “big bang” cutover.
In a phased approach, you gradually transition pieces of functionality from Magento to Medusa in stages, rather than switching everything in one night. This greatly reduces risk and complexity. Key benefits of a phased replatforming include the ability to test and fix issues in isolation, minimal downtime, and continuous business operation during the transition. By migrating one component at a time, you can validate that piece (e.g. product catalog) in Medusa while the rest of the system still runs on Magento. If something goes wrong, it’s easier to roll back a single component than a whole system.
Plan out the phases that make sense for your business. A typical plan (detailed in the next section) might be:
Phase 1: Build a new Medusa-based storefront (while Magento remains the backend)
Phase 2: Migrate product/catalog data to Medusa
Phase 3: Migrate cart & checkout (orders) to Medusa
Each phase should be treated as a mini-project with its own design, implementation, and QA. Determine clear exit criteria for each phase (e.g. “new product catalog on Medusa shows all items correctly and inventory syncs with ERP”) before moving on.
Also decide on timing: choose low-traffic periods for cutovers of critical pieces, and ensure business stakeholders are aligned on any necessary content freeze or downtime. For example, when you migrate the product catalog, you may enforce a freeze on adding new products in Magento to avoid divergence while data is copied. Similarly, a final order migration might require a short checkout downtime to ensure no orders are lost. All such events should be scheduled and communicated.
During planning, also outline a data synchronization strategy. In a phased migration, you’ll have a period where Magento and Medusa run in parallel for different functions. You must plan how data will stay consistent between them; for example, inventory levels and order statuses may need to be synced on a schedule or via webhooks until the final cutover.
It’s better to set up Medusa development and staging environments early in the project. Stand up a Medusa instance (or a few) in a sandbox environment and start populating it with sample data. This will be used to develop and test migration scripts. Make sure you have a staging database for Medusa (PostgreSQL, which Medusa runs on) and that the team is familiar with deploying Medusa. Medusa provides a CLI to bootstrap a new project quickly, for example:
npx create-medusa-app@latest
This will create a new Medusa server project (and optionally a Next.js storefront if you choose) on your machine. You can also initialize a Medusa project via the Medusa CLI (the medusa new command) to include a seeded store for testing.
As part of setup, you’ll create an Admin user for the Medusa backend and explore the Medusa Admin dashboard to ensure you know how to manage products, orders, etc., in the new system. Familiarize your ops/administrative staff with the Medusa admin UI early, so they can provide feedback on any critical gaps (for instance, Magento has some specific admin grids or reports you might need to replicate).
Finally, communicate and coordinate the migration plan with all stakeholders. The engineering team, product managers, operations, customer support, and leadership should all understand the phased plan, the timeline, and any expected impacts (like minor UI changes in Phase 1 or slight differences in workflows in the new system). Migration at this scale is as much about change management as it is about technology. With a solid plan in place, you can now proceed to execution.
Also Read: Content Modeling 101 for Omnichannel Using dotCMS
With planning done, it’s time to implement the migration. We will outline a phased step-by-step execution that gradually moves your e-commerce backend, admin, and storefront from Magento to MedusaJS.
Each phase below corresponds to a portion of functionality being migrated, aligned with best practices to minimize risk. Throughout each phase, maintain rigorous testing and quality assurance before proceeding to the next stage.
The first phase is all about decoupling your storefront (UI) from Magento’s integrated frontend. In Magento, the frontend (themes/templates) is tightly coupled with the backend. We’ll replace this with a new headless storefront (for example, a Next.js or Gatsby application) that initially still uses Magento’s backend via APIs.
Phase 1: Introduce a new headless storefront and CMS while Magento remains the backend. The new frontend (e.g., Next.js app) fetches data from Magento’s APIs.
Steps in Phase 1:
Choose a modern frontend framework such as Next.js, Gatsby, or Nuxt to build your storefront. Medusa provides starter templates for Next.js that you can use as a foundation (or you can build from scratch). Design the frontend to consume data from an API rather than directly from a database.
In this phase, the API will be Magento’s. Magento 2 supports a REST API and a GraphQL API out-of-the-box. For example, your new product listing page in Next.js could call Magento’s REST endpoints (or GraphQL queries) to fetch products and categories.
This essentially treats Magento as a headless service. You might build a small middleware layer or utilize Next.js API routes to securely proxy calls to Magento’s API if needed, or call Magento APIs directly from the frontend (taking care of CORS and authentication).
Many enterprise teams opt to implement a BFF (Backend-For-Frontend)—a lightweight Node.js server that sits between the frontend and Magento—to aggregate and format data. This is optional but can help in mapping Magento’s API responses to a simpler format for the UI.
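A sketch of such a BFF route, here as a Next.js API route (the env var names are placeholders; the endpoint and response shape follow Magento’s REST API):

// pages/api/products.ts (sketch): a thin BFF route that proxies the
// storefront's product requests to Magento's REST API, keeping the
// Magento token out of the browser.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const magentoRes = await fetch(
    `${process.env.MAGENTO_URL}/rest/V1/products?searchCriteria[pageSize]=20`,
    { headers: { Authorization: `Bearer ${process.env.MAGENTO_TOKEN}` } }
  );
  const data = await magentoRes.json();
  // Flatten Magento's response into the shape the UI expects.
  res.status(200).json(
    data.items.map((p: any) => ({ sku: p.sku, name: p.name, price: p.price }))
  );
}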
Reimplement your storefront’s design on the new tech stack. Try to keep the user experience consistent with the old site initially, to avoid confusing customers during the transition.
You can, of course, take the opportunity to improve UX, but major changes might be better introduced gradually. Importantly, ensure global elements like header, footer, navigation, and product URL structure remain familiar or have proper redirects, so SEO and usability aren’t hurt.
Use Magento’s API to feed the necessary data. For instance, the product listing page will call an endpoint like /rest/V1/products (Magento’s REST) or a GraphQL query to retrieve products and categories. You will likely need an API authentication token to access Magento’s APIs.
Magento’s REST API can be accessed by generating an integration token, or as in the Medusa migration plugin, by programmatically obtaining an admin token. For example, the Medusa migration module uses a POST to Magento’s V1/integration/admin/token endpoint with admin credentials to get a token:
const response = await fetch(`${magentoBaseUrl}/rest/default/V1/integration/admin/token`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ username: MAGENTO_ADMIN_USER, password: MAGENTO_ADMIN_PASS })
});
const token = await response.json(); // Magento returns the token as a JSON-encoded string
// Use this token in the Authorization header for subsequent Magento API calls
In this phase, Magento still handles all commerce operations (cart, checkout, customer accounts). Your new frontend will simply redirect or proxy those actions. For example, when a user clicks “Add to Cart” or goes to checkout, you might hand off to Magento’s existing pages or send a request to Magento’s cart API.
It’s acceptable if the checkout flow temporarily takes users to Magento’s domain or uses Magento’s UI, as this will be addressed in later phases. The goal of Phase 1 is not to eliminate Magento, but to introduce the new frontend and CMS while Magento underpins it behind the scenes.
(For teams that cannot rebuild the entire frontend in one go, an alternative approach is to do a partial storefront migration. Using tools like Next.js Rewrite rules, you can incrementally replace certain Magento pages with new ones. For example, you could serve product detail pages via the new Next.js app, but keep checkout on Magento until later. This way, you flip portions of the UI to the new stack gradually. While this complicates routing, it offers a very controlled rollout. Many teams, however, would prefer to launch the whole new frontend at once as described above, for a cleaner architecture.)
In Phase 2, the focus shifts to the backend. Here we migrate the core catalog data: products, categories, inventories, prices, from Magento’s database into Medusa. By the end of this phase, Medusa will become the source of truth for all product information, while Magento may still handle the shopping cart and orders until Phase 3.
Steps in Phase 2:
If you haven’t already, install and configure your Medusa backend service. This involves spinning up a Medusa server (Node.js application) connected to a PostgreSQL database (Medusa manages its schema through an ORM).
Medusa comes with a default product module, order module, etc. Make sure your Medusa instance is running and you can access the Medusa Admin panel. In the Admin, you might manually create a couple of sample products to see how data is structured, then delete them, just to get familiar.
Also configure any essential settings in Medusa (currencies, regions, etc.) to align with your business; for example, if your Magento store had multiple currencies or websites, configure Medusa’s regions and currency settings accordingly.
Extract the product catalog data from Magento. There are a few approaches for this:
Use Magento’s REST API to fetch all products, categories, and related data (images, attributes, inventory levels, etc.). Magento’s API allows filtering and pagination to get data in batches.
Alternatively, do a direct database export from Magento’s MySQL. For example, run SQL queries or use Magento’s built-in export tool to get products to CSV. However, Magento’s data schema is quite complex (spread across many tables due to EAV), so using the API (which presents consolidated data) can simplify the process.
In either case, you will likely need to transform the data format to match Medusa. For instance, Magento stores prices in a separate table, whereas in Medusa a price is a field on the product variant or managed via price lists. Plan to capture product names, SKUs, descriptions, category relationships, images, variants/options (size, color, etc.), stock levels, and any custom attributes you identified during planning. A paginated extraction loop against Magento’s API might look like the sketch below.
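This minimal sketch assumes the admin token from Phase 1; the page size is an illustrative default:
// Page through Magento's product catalog via the REST API.
async function fetchAllMagentoProducts(magentoBaseUrl, token) {
  const pageSize = 100;
  let currentPage = 1;
  const products = [];
  while (true) {
    const url = `${magentoBaseUrl}/rest/default/V1/products` +
      `?searchCriteria[pageSize]=${pageSize}&searchCriteria[currentPage]=${currentPage}`;
    const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
    const page = await res.json();
    products.push(...page.items);
    // total_count reports the full catalog size, so stop once it has all been collected.
    if (products.length >= page.total_count) break;
    currentPage++;
  }
  return products;
}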
Insert the exported data into Medusa programmatically, for example through Medusa’s Admin API or a custom import script run against the Medusa server.
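A minimal sketch of one such insert via the Admin API follows; the field mapping is illustrative, and a real script would also create variants, images, and category assignments:
// Create a product in Medusa from a transformed Magento record.
await fetch(`${medusaBaseUrl}/admin/products`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${MEDUSA_ADMIN_TOKEN}`, // assumes token-based admin auth
  },
  body: JSON.stringify({
    title: magentoProduct.name,
    handle: magentoProduct.url_key, // keep Magento's URL key for SEO continuity
    description: magentoProduct.description,
    status: "published",
  }),
});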
Once imported, use the Medusa Admin dashboard to spot-check the catalog. Do all products appear with correct titles, prices, variants, and images? Are categories properly assigned? This is where you may need to adjust some mappings.
For example, Magento product “attributes” that were used for filtering (color, brand, etc.) might be represented in Medusa as product tags or metadata. If so, you might convert Magento attributes to Medusa tags for filtering purposes. Likewise, customer groups or tier pricing in Magento could map to Medusa’s customer groups and price lists (Medusa has a Price List feature for special pricing).
After product data is in Medusa, switch your new frontend to use Medusa’s Store APIs for product and inventory data. Up to now, in Phase 1, the Next.js app was likely calling Magento’s API to list products. Now you will update those API calls to query the Medusa backend instead. Medusa provides a Store API (unauthenticated endpoints) for products, collections, etc.
For example, your product listing page might hit GET /store/products on Medusa (which returns a list of products in JSON). This cutover should be invisible to users: the data is the same conceptually, just coming from a different backend. Because the cart/checkout has not yet moved, ensure product IDs or SKUs align so that when a user adds to cart (still going through Magento), Magento recognizes the product. If you maintain the exact same SKUs and identifiers in Medusa as in Magento, you can cross-reference easily. You might keep a mapping of Magento product ID to Medusa product ID for the interim if needed.
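For instance, a minimal fetch against Medusa’s Store API (v1-style endpoints; the query parameters are illustrative):
// Product listing now reads from Medusa instead of Magento.
const res = await fetch(`${medusaBaseUrl}/store/products?limit=24&offset=0`);
const { products, count } = await res.json(); // paginated product list plus total count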
Phase 3 tackles the transactional components: shopping cart, checkout, payments, and order management. This is usually the most complex part of the migration because it affects the core of your e-commerce operations and customer experience.
Steps in Phase 3:
Since your frontend is already decoupled, you will now integrate it with Medusa’s cart and order APIs instead of Magento’s. Medusa provides a Cart API and an Order API that support the typical e-commerce flow: create a cart, add line items, set shipping and payment, and complete the cart into an order. For example:
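The sketch below walks through that flow using Medusa’s v1-style Store API endpoints; error handling, addresses, and shipping-method selection are omitted for brevity:
// 1. Create a cart.
const { cart } = await (await fetch(`${medusaBaseUrl}/store/carts`, { method: "POST" })).json();

// 2. Add a line item by variant ID.
await fetch(`${medusaBaseUrl}/store/carts/${cart.id}/line-items`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ variant_id: variantId, quantity: 1 }),
});

// 3. Initialize payment sessions for the cart.
await fetch(`${medusaBaseUrl}/store/carts/${cart.id}/payment-sessions`, { method: "POST" });

// 4. Complete the cart; on success Medusa returns the created order.
const completion = await (
  await fetch(`${medusaBaseUrl}/store/carts/${cart.id}/complete`, { method: "POST" })
).json();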
Essentially, you need to replicate in the new frontend all the steps that Magento’s checkout used to provide. Whether you used Magento’s one-page checkout or a multi-step checkout, design a corresponding flow in the new frontend calling Medusa. The heavy lifting of order creation, tax calculation, etc., can be done by Medusa if configured properly.
Set up payment processing within Medusa to replace Magento’s payment integrations. Medusa has a plugin-based system for payments, with support for providers like Stripe, PayPal, Adyen, etc. If you were using, say, Authorize.net or Stripe in Magento, you can install the Medusa plugin for the same (or use a new provider if desired).
Make sure that your Medusa instance is configured with API keys for the payment gateway and that the frontend integrates the payment provider’s UI or SDK appropriately (for example, Stripe Elements on the frontend and Medusa’s Stripe plugin on the backend).
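For instance, with the v1-style medusa-payment-stripe plugin, the backend configuration might look like this in medusa-config.js (the environment variable names are illustrative):
// Register the Stripe payment provider with Medusa.
const plugins = [
  {
    resolve: `medusa-payment-stripe`,
    options: {
      api_key: process.env.STRIPE_API_KEY,
      webhook_secret: process.env.STRIPE_WEBHOOK_SECRET, // used to verify payment webhooks
    },
  },
];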
Medusa provides default shipping option management and integrates with shipping carriers via plugins if needed. For taxes, Medusa can handle simple tax rules or integrate with tax services. Configure any necessary fulfillment providers (e.g., Shippo or ShipStation) and tax rates or services (e.g., TaxJar) so that the Medusa order flow computes totals correctly.
Depending on how you want to handle customer logins, you might at this point migrate customer accounts to Medusa. Medusa has its own customer management and authentication.
However, if you want logged-in customers to be able to see their profile or use saved addresses during checkout on the new system, you’ll need to move customer data now. Migrating customer accounts means importing users’ basic info and hashed passwords.
Medusa uses bcrypt for hashing passwords by default; Magento (depending on version) might use different hash algorithms (salted MD5 in Magento 1, SHA-256 in Magento 2). There are two main strategies: migrate all users with a flag requiring a password reset (simplest, but impacts user experience), or import the password hashes and adjust Medusa’s authentication to accept Magento’s hash format (advanced).
Recreate any order processing workflows in Medusa. For example, if Magento was integrated to an ERP or OMS (Order Management System) for fulfillment, now Medusa must integrate with those systems. Medusa can trigger webhooks or you can use its event system to notify external systems of new orders.
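As one pattern, a Medusa v1-style subscriber can forward new orders to an external system; the OMS endpoint here is hypothetical:
// src/subscribers/order-notifier.js: push newly placed orders to an external OMS.
class OrderNotifierSubscriber {
  constructor({ eventBusService, orderService }) {
    this.orderService_ = orderService;
    eventBusService.subscribe("order.placed", this.handleOrderPlaced);
  }

  handleOrderPlaced = async ({ id }) => {
    const order = await this.orderService_.retrieve(id, { relations: ["items"] });
    // Forward the new order to the external fulfillment system.
    await fetch("https://oms.example.com/api/orders", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(order),
    });
  };
}

export default OrderNotifierSubscriber;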
If your team uses an admin interface to manage orders (e.g., print packing slips, update order status), the Medusa Admin or a connected OMS should be used. The Medusa Admin dashboard allows viewing and updating orders, creating fulfillments, etc., similar to Magento admin.
You might need to train the operations team to use Medusa’s admin for processing orders (creating shipments, marking orders as shipped, etc.). If any custom post-order logic existed (like custom fraud checks, or split fulfillment logic), implement that either via Medusa’s plugins or an external microservice triggered by Medusa’s order events.
Once Medusa’s checkout is fully implemented and tested in a staging environment, you will switch the production frontend to use it. This is a big milestone: effectively, Magento is removed from the live customer path.
Coordinate the deployment for a quiet period on the site. It can be wise to disable order placement on Magento shortly before the switch (for instance, put Magento’s checkout in maintenance mode) to avoid any orders landing on Magento during the cutover.
When you deploy the new frontend that connects to Medusa for cart/checkout, run through a suite of test orders (ideally in a staging environment or with test payment modes on production just before enabling real payments).
You may want to migrate historical order data from Magento into Medusa, for continuity in order history. This can be done via script or gradually. However, migrating thousands of old orders might not be strictly necessary for operations; some teams keep Magento read-only for a time for reference or build an archive.
If you do import past orders, you might insert them as Medusa orders via the Admin API or directly in DB. The critical part in Phase 3 is to ensure any ongoing orders (like open carts or pending orders) are either transferred or completed. For example, you might cut off new cart creation on Magento a few hours before, but allow any user who was in checkout to finish (or provide a clear notice to refresh and start a new cart on the new system).
Also Read: Headless vs Hybrid vs “Universal” CMS: Which Model Fits Multi-Team Delivery?
Now that your store is on MedusaJS, leverage the benefits of the new architecture and follow best practices to get the most out of it. Here are some recommended architecture patterns and practices post-migration for an enterprise-scale Medusa setup:
With Medusa as the core commerce service, your overall e-commerce platform is now “composable.” This means you can plug in and replace components at will. Continue to embrace this modular approach.
For example, if you want to add a new capability like AI-driven recommendations, you can integrate a specialized microservice for that via Medusa’s APIs or events, without monolithic constraints. Each piece of the system (CMS, search, payments, etc.) can scale independently and be updated on its own schedule.
Deploy Medusa in a cloud-native way to achieve maximal scalability and reliability. Containerize the Medusa server (Docker) and use Kubernetes or similar to manage scaling. Because the Medusa server itself is stateless (state lives in the database and, if used, Redis), you can run multiple instances behind a load balancer.
Scale the database vertically or use read replicas as needed; e.g., a managed PostgreSQL service that can handle enterprise load. Use auto-scaling for your frontend as well (if using Next.js, consider serverless deployment for dynamic functions and a CDN for static pre-rendered pages).
Monitor resource usage and performance; one benefit of Medusa’s headless setup is you can put a caching layer or CDN in front of certain API calls if needed (though be careful to cache only safe GET requests like product browsing, not cart actions).
As you add features to your commerce platform, use Medusa’s extension points (plugins, modules, and integrations) rather than modifying core code. This keeps the core stable and upgradable. Medusa’s plugin system supports adding custom routes, middleware, or overriding core service logic in a contained manner.
If an enterprise feature is missing, consider building a plugin and possibly contributing it back to the Medusa community. This way, your platform remains maintainable. For example, if down the road you need a complex promotion engine beyond what Medusa offers, build it as a separate service or plugin that interfaces with orders, rather than forking the Medusa core.
Without Magento’s license or heavy hosting requirements, you may find cost savings. However, budget for the new components (hosting for Medusa, any new SaaS services like CMS or search). Keep track of total cost of ownership.
Over time, Medusa’s lower resource footprint can be a win; for example, a Node service might use less memory and CPU under load than Magento did. If you switched from Magento Commerce (paid) to Medusa (free OSS), you’ve eliminated license fees as well.
By approaching the process in phases—starting with the storefront, then moving catalog data, and finally checkout and orders—you minimize risk while steadily unlocking the benefits of a headless, modular architecture.
The result is a faster, more scalable platform that adapts to your business needs instead of limiting them. With MedusaJS in place, your enterprise is better equipped for future growth, innovation, and long-term efficiency.
Mayank Patel
Aug 26, 2025
5 min read
Content Modeling 101 for Omnichannel Using dotCMS
Customers hop between storefronts, apps, marketplaces, and in-store screens, and they expect the story about your products to match everywhere. The only way to meet that bar is to store content as clean, structured data and serve it to whatever front end needs it. That’s the job of content modeling.
This guide walks through a practical “101” for modeling content in dotCMS. You’ll learn how to define types, fields, and relationships that reflect your business, keep presentation concerns out of the CMS, enable localization and multi-site, and enforce quality with workflow.
Omnichannel content delivery means your messaging remains consistent across every channel while also being optimized for the context of each one. In an omnichannel retail example, a customer might discover a product on a marketplace, check details on your website, receive a promotion via email or SMS, and finally buy in-store.
Traditional web-oriented CMS platforms often struggle here. They were built to push content into tightly-coupled webpages, not to serve as a hub for many diverse outputs. Many legacy CMSs lack the flexibility to easily repurpose content for mobile apps, IoT devices, or third-party platforms, and their rigid page-centric models make omnichannel consistency hard to maintain.
In contrast, modern headless and hybrid CMS solutions promise to meet omnichannel needs. Headless CMS decouples the content repository from any specific delivery channel, exposing content via APIs. This decoupling gives organizations the agility to present content on any channel from a unified backend.
However, pure headless systems sometimes sacrifice the user-friendly authoring experience that content teams desire. This is where hybrid CMS platforms like dotCMS shine. They combine the flexibility of headless architecture with tools for easy authoring and preview. dotCMS enables a true “create once, publish everywhere” paradigm, which drives omnichannel content distribution, reduces duplicate work, and speeds up time-to-market.
What is Content Modeling? (And Why It Matters for Omnichannel)
Content modeling is the process of defining a structured blueprint for your content; identifying what types of content you have, what fields or elements make up each type, and how those pieces relate to each other. Instead of treating a piece of content as a big blob of text (like a lengthy web page), you break it down into meaningful chunks: titles, descriptions, images, metadata, categories, and so on.
For example, a Product content type might include fields for product name, description, price, images, specifications, and related products or categories. A Blog Article content type might include title, author, publish date, body text, and tags. When content is neatly structured and stored, you can present it on a website in one format, in a mobile app with a different layout, or even feed it into a chatbot or voice assistant; all without duplicating content or manual copy-paste.
dotCMS is an enterprise-grade content management system built with omnichannel needs in mind. It advertises itself as a "Hybrid Headless" or "Universal CMS", meaning it blends the API-first flexibility of a headless CMS with the ease-of-use of a traditional CMS for editors. From a content modeling perspective, dotCMS provides a rich toolset to define and manage your content structure without heavy development effort.
Here are some of the dotCMS features to understand:
In dotCMS, you can define unlimited content types to represent each kind of content in your business (products, articles, events, locations, customer testimonials, etc.). Each content type has fields (text, rich text, images, dates, geolocation, etc.) that you configure. Importantly, all content types are completely customizable through a no-code interface; you don’t need a developer to add a new field or change a content model. Content in dotCMS is stored in a central repository and structured by these content types, rather than as unstructured blobs.
dotCMS treats content as data. A single piece of content (say a Product or an Article) lives in one place in the CMS but can be referenced or pulled into any number of front-end presentations. Authors can create content once and then use dotCMS’s features to share it across pages, sites, and channels effortlessly. For example, you might have one canonical product entry that is displayed on your public website, inside your mobile app, and on an in-store screen, all fed from the same content item.
dotCMS provides no-code tools for tagging and categorizing content, as well as defining relationships between content types. For instance, you can relate an “Author” content item to a “Blog Post” item to model a one-to-many relationship, or relate “Products” to “Categories”. These relationships make it possible to build rich, dynamic experiences (e.g., listing all articles by a certain author, or showing related products in the same category). They also improve content discovery and personalization. The ability to create and manage these relations through a friendly UI means your content model can truly reflect the reality of your business (how things connect) without custom development.
dotCMS was built with multi-channel delivery in mind. It allows you to create content once and deliver it anywhere via APIs. Under the hood, dotCMS offers both REST and GraphQL APIs to retrieve content, so your front-end applications (website, mobile app, IoT device, etc.) can query the content they need. The content model you define is enforced across all channels.
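For example, a front end might pull structured content over GraphQL like this minimal sketch; the Product type and its fields are illustrative, and dotCMS exposes its GraphQL endpoint at /api/v1/graphql:
// Query dotCMS for product content over GraphQL.
const query = `
  {
    ProductCollection(limit: 10) {
      title
      description
    }
  }
`;
const res = await fetch(`${dotcmsUrl}/api/v1/graphql`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${DOTCMS_API_TOKEN}`, // needed for protected content
  },
  body: JSON.stringify({ query }),
});
const { data } = await res.json();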
One standout feature of dotCMS is its Universal Editing capabilities for content creators. Even though content may be delivered headlessly, dotCMS provides content teams with a visual editing and preview experience that works across different channel outputs. For example, the dotCMS Universal View Editor allows authors to assemble and preview content as it might appear on various devices or channels, all within the CMS interface. This means marketers can, say, adjust a landing page and see how it will look on a desktop site, a mobile app, or other contexts without needing separate systems.
Large enterprises often serve multiple websites, brands, or regions, each a "channel" of its own. dotCMS supports multi-tenant content management, meaning you can run multiple sites or digital experiences from one dotCMS instance, reusing content where appropriate and varying it where needed. For example, you might have a global site and several country-specific sites; with dotCMS, you can share the core content model and even specific content items across them, while still allowing localization and differences where necessary. This feature amplifies content reusability for omnichannel because not only are you delivering to different device channels, but also to different sites/audiences from the same content hub.
A true omnichannel content strategy rarely lives in a vacuum; it often needs to integrate with other systems (e.g., e-commerce platforms, CRMs, personalization engines, mobile apps). dotCMS is API-first and integrates effortlessly with any tech stack via REST, GraphQL, webhooks, and plugins. This openness means your content model in dotCMS can be the central content service not just for your own channels, but it can feed into other applications as well. For instance, if you have a separate mobile app or a voice assistant platform, they can pull content from dotCMS. If you use a third-party search engine or commerce engine, dotCMS can connect to it. The ability to plug into a composable architecture is important; dotCMS is often deployed alongside best-of-breed solutions (for example, integration with e-commerce engines like Commercetools and Fynd is supported out of the box).
(dotCMS for content · MedusaJS for commerce · Fynd for channel sync)
TL;DR: Use dotCMS as your single source of truth for content, MedusaJS as the headless commerce engine, and Fynd to unify online/offline channels. Each does what it’s best at; together they deliver one seamless customer experience.
Why this matters (for CTOs)
Omnichannel isn’t just “show the same thing everywhere.” It’s a promise that product info, branding, and availability stay consistent across web, app, kiosk, marketplace, and POS without your teams duplicating work. The cleanest way to keep that promise is a composable stack where each system does one job extremely well and everything talks over APIs.
Start with dotCMS as the content brain. Treat product copy, specs, imagery, promos, and SEO as structured content modeled once, governed once, and delivered everywhere through REST or GraphQL. Editors get a friendly, hybrid authoring experience; engineers get predictable schemas and stable APIs. Because content is cleanly separated from presentation, you can render it in any UX—from a React website to a kiosk UI—without rewriting the source.
Pair that with MedusaJS on the commerce side. MedusaJS is a headless Node.js engine for catalogs, variants, pricing, carts, checkout, orders, and payments. It doesn’t prescribe a front end and plays nicely with webhooks and plugins. Think of it as the transactional core that your UIs (and channel tools) can query for the real-time bits: price, stock, and order state.
Now widen the lens with Fynd to unify online and offline channels. Fynd syncs inventory and orders across marketplaces and in-store systems, so the pair of shoes shown on your site matches what’s available at the mall and what’s listed on third-party marketplaces. When Fynd needs rich product content (names, descriptions, images, feature bullets), it can pull that from dotCMS.
Here’s how a typical flow feels. Your content team creates or updates a Product entry in dotCMS (title, short/long descriptions, hero image, spec table, locale variants). Front ends request that content via GraphQL and render it alongside live commerce data from MedusaJS (price, stock by variant). Fynd consumes the same product content from dotCMS and the same inventory signals from MedusaJS to populate marketplaces and POS.
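Concretely, the composition layer for a product page might look like this minimal sketch; the URLs, type names, and the shared SKU join key are all illustrative assumptions:
// Merge dotCMS content with live MedusaJS commerce data for one product.
async function getProductPageData(sku) {
  const contentQuery = `
    {
      ProductCollection(query: "+Product.sku:${sku}", limit: 1) {
        title
        description
      }
    }
  `;
  const [contentRes, commerceRes] = await Promise.all([
    fetch(`${dotcmsUrl}/api/v1/graphql`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: contentQuery }),
    }),
    // Medusa Store API: text search by SKU for price and stock.
    fetch(`${medusaBaseUrl}/store/products?q=${encodeURIComponent(sku)}`),
  ]);
  const content = (await contentRes.json()).data.ProductCollection[0];
  const { products } = await commerceRes.json();
  return { content, commerce: products[0] };
}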
Content modeling is the linchpin. Define types like Product, Category, Brand, Promo, and Store with clear relationships (Product↔Category, Product↔Promo). Add channel-aware fields: short descriptions for mobile cards, alt text for accessibility, locale fields for multi-region delivery. Wrap it all with dotCMS workflows so high-impact edits are reviewed before they propagate to every channel. The result is “create once, deliver everywhere” with actual guardrails.
Today, a CTO’s goal should be to avoid monolithic systems that try to do everything and instead orchestrate best-of-breed platforms. By modeling your content well in dotCMS and integrating it with your commerce and channel platforms, you achieve the coveted omnichannel experience: the customer gets a unified journey, and your internal teams get maintainable, specialized systems.
To ensure we “cover everything,” let’s distill some practical best practices for content modeling in dotCMS, especially geared toward omnichannel readiness:
Begin by listing the types of content your business uses (or will use) across channels. Typical domains include products, articles, landing pages, promotions, user profiles, store locations, FAQs, etc. Don’t forget content that might be unique to certain channels (for example, push notification messages or chatbot prompts). Having a comprehensive view prevents surprises later.
For each content type, define the fields it needs. Ensure each field represents a single piece of information (e.g. separate fields for title, subtitle, body text, instead of one blob). Determine which fields are required, which are multi-valued (like multiple images or tags), and use appropriate field types (dates, numbers, boolean, etc., not just text for everything). In dotCMS, setting this up is straightforward via the Content Type builder. Keep future channels in mind: for instance, if you might need voice assistants to read out product info, having a short summary field could be useful. Example: A News Article content type might include fields for Headline, Summary, Body, Author (related content), Publish Date, Thumbnail Image, and Tags. This way, a mobile news feed can use the Headline and Summary, whereas the full website uses all fields.
Organize your content using categories, tags, or folders offered by dotCMS. Taxonomy is hugely helpful for dynamic content delivery, e.g., pulling “all articles tagged with X for the mobile app home screen” or “all products in category Y for this landing page.” Define a tagging scheme or category hierarchy that makes sense for your domain. dotCMS allows tagging content items and using those tags to assemble content lists without coding. Consistent taxonomy also aids personalization (showing content by user interest) and SEO.
A key headless content modeling principle is to separate content from design. In dotCMS, avoid embedding HTML/CSS styling or device-specific details in your content fields. For example, use plain text fields and let your front-end apply the styling. This keeps the content truly channel-agnostic. If you need variations of content for different channels (like a shorter title for mobile), model that explicitly (e.g., a field “Mobile Title” separate from “Desktop Title”) rather than overloading one field with both.
If you operate in multiple locales or brands, design your content model to accommodate that. dotCMS has multilingual support and multi-site features. Decide which content types will need translation or variation by locale. dotCMS can manage translations of content items side-by-side. Structuring your content well (and not hard-coding any language-specific text in fields) will pay off when you need to roll out in another language or region. Similarly, if running multiple sites, plan which content types are shared globally and which are specific to a site. dotCMS’s multi-tenant capabilities will allow content to be shared or isolated as needed.
As you roll out an omnichannel content hub, establish workflows and approval processes for content changes. This ensures that a change in content (which could affect many channels at once) is reviewed properly. dotCMS allows you to configure custom workflows with steps like review, approve, publish. Especially for large teams, this is a safety net so that your carefully modeled content isn't inadvertently altered and pushed everywhere without checks. It also helps assign responsibility (e.g., legal can approve terms & conditions content, whereas marketing can freely publish blog content).
When designing your content model in dotCMS, test it by retrieving content via the API and rendering it in different channel contexts. Build a simple prototype of a website page and a mobile screen (or whatever channels you plan) to see if the content model fits well. You might discover you need an extra field or a different structure. dotCMS’s content API and even its built-in presentation layer (if you choose to use it in hybrid mode) can be used to do these dry runs.
Technology is only part of the equation. Implementing content modeling for omnichannel success also requires strategy and expertise. Linearloop works on building modern digital platforms and has experience with headless CMS implementation, content strategy, and integrating systems like dotCMS with commerce and other services. Drawing on lessons from past projects can help you avoid common pitfalls and accelerate the adoption of an omnichannel content hub.