How to migrate from Metabase to a modern BI tool: a practical playbook
Max Musing · Founder and CEO of Basedash · May 13, 2026


A Metabase migration is a four-phase project: audit what you actually use, choose a replacement, rebuild questions and dashboards in the new tool against the same data sources, then run both tools in parallel before decommissioning Metabase. Most teams can get the critical 20% of dashboards moved in two to three weeks, and the long tail in another month. The work that breaks migrations is not technical; it is deciding which Metabase content is worth rebuilding at all.
This guide is for product, data, and operations leads who have outgrown Metabase and need a defensible plan to move off it without losing the dashboards their teams rely on. It assumes you are running Metabase Open Source or Metabase Cloud against a production warehouse like PostgreSQL, MySQL, Snowflake, BigQuery, Redshift, or ClickHouse, and that you have a shortlist of candidate replacements.
Metabase is a useful starting point because it is free, self-hostable, and easy to point at a database. Teams typically outgrow it for one of a few specific reasons rather than a vague “we need something better.”
If none of these describe your situation, do not migrate. The right move is to invest in Metabase, fix the specific pain, and reassess in six months.
Before you talk to a single vendor, get an honest picture of what is actually being used. Most migrations fail because teams try to recreate every dashboard, including the ones nobody has opened in a year.
Metabase tracks view counts, question runs, and last-edited timestamps in its application database (H2 by default; Postgres or MySQL in production setups). You can query it directly. The fields you care about live in tables like report_card (questions), report_dashboard, view_log, and query_execution.
A useful starting query for the application database:
```sql
select
  c.id,
  c.name as question_name,
  c.created_at,
  c.updated_at,
  count(v.id) as views_last_90d
from report_card c
left join view_log v
  on v.model_id = c.id
  and v.model = 'card'
  and v.timestamp > now() - interval '90 days'
group by c.id, c.name, c.created_at, c.updated_at
order by views_last_90d desc;
```
Do the same for dashboards. You are looking for three categories:
Most teams discover that more than half their Metabase content is in category three. Cutting that work up front is the single biggest lever you have.
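The triage itself can be mechanized once the audit query has run. A minimal sketch, with illustrative thresholds you should tune to your own usage distribution:

```python
# Rough triage of audited Metabase content by 90-day view counts.
# Thresholds are illustrative, not a recommendation:
#   keep   - actively used, rebuild first
#   review - lightly used, ask the owner before rebuilding
#   drop   - effectively dead, do not migrate

def triage(items, keep_at=20, review_at=1):
    """items: list of (name, views_last_90d) tuples from the audit query."""
    buckets = {"keep": [], "review": [], "drop": []}
    for name, views in items:
        if views >= keep_at:
            buckets["keep"].append(name)
        elif views >= review_at:
            buckets["review"].append(name)
        else:
            buckets["drop"].append(name)
    return buckets

audit = [("Revenue by month", 340), ("Churn cohort", 12), ("Old funnel v2", 0)]
buckets = triage(audit)
```

Anything that lands in the drop bucket goes straight to the archive, not the project plan.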
For each surviving question or dashboard, capture five fields in a spreadsheet:
This list becomes your project plan. It also makes the migration negotiable: when a stakeholder demands you preserve their pet dashboard, you can point at the usage data and ask whether it is worth a week of work to a tool nobody opens.
Before you touch the new tool, extract any logic that lives only in Metabase. The common offenders:
Save the SQL into a git repo. This single step prevents the most painful Metabase failure mode: a critical query lives only inside a Metabase question, the instance dies, and nobody can reconstruct the logic.
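Metabase stores each question's definition as JSON in report_card.dataset_query; native questions carry the raw SQL inside it. A hedged sketch of the extraction, run here against an in-memory SQLite stand-in for the application database (point the connection at your real app DB, and verify the column names against your Metabase version):

```python
# Sketch: extract native SQL from Metabase's report_card table into files
# you can commit to git. The dataset_query JSON shape ({"type": "native",
# "native": {"query": ...}}) is an assumption to verify on your instance.
import json
import sqlite3
import tempfile
from pathlib import Path

def export_native_sql(conn, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    rows = conn.execute(
        "select id, name, dataset_query from report_card"
    ).fetchall()
    written = []
    for card_id, name, dataset_query in rows:
        q = json.loads(dataset_query)
        if q.get("type") != "native":
            continue  # GUI-built questions have no raw SQL to save
        path = out / f"{card_id}_{name.lower().replace(' ', '_')}.sql"
        path.write_text(q["native"]["query"] + "\n")
        written.append(path.name)
    return written

# Demo against a fake application database:
conn = sqlite3.connect(":memory:")
conn.execute("create table report_card (id int, name text, dataset_query text)")
conn.execute(
    "insert into report_card values (1, 'Revenue by month', ?)",
    (json.dumps({"type": "native", "native": {"query": "select 1"}}),),
)
files = export_native_sql(conn, tempfile.mkdtemp())
```

One file per question, committed once, is enough; this is an archive, not a pipeline.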
Resist the urge to skip this phase. Most teams that “just pick the obvious next tool” end up doing a second migration eighteen months later.
The right replacement depends on which Metabase pain points pushed you out. Map your top three pains to capabilities you must verify in any candidate:
The realistic shortlist in 2026 looks like this. None is the right answer for every team:
Use the more detailed Metabase alternatives comparison to narrow further, then run a structured proof of concept. The BI tool POC framework covers a 30-day evaluation in detail.
Score each candidate on six criteria, weighted by your situation:
| Criterion | What you are measuring |
|---|---|
| Time to rebuild a top-5 dashboard | How long it takes one person to recreate a real Metabase dashboard end to end |
| Non-technical user success | Whether a finance or ops teammate can answer a new question without help |
| Permission fit | Whether you can model your real groups and customer-scoped data |
| Warehouse load | Whether the tool pushes more or less work to your database under realistic load |
| Total cost of ownership | All-in cost including seats, viewer add-ons, SSO upcharges, and any required services |
| Migration cost | Estimated person-weeks to move the critical 20% of content |
Whoever owns the decision should write this matrix before vendor demos start. Vendors will reorient toward whatever you score on. That is good; you want them solving your real problems, not selling you generic features.
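The matrix reduces to a weighted sum once the proof of concept produces ratings. A minimal sketch; the weights and 1-to-5 scores below are placeholders, not recommendations:

```python
# Weighted decision matrix over the six criteria above.
# Weights encode which Metabase pains pushed you out; set them before demos.
CRITERIA = {
    "rebuild_speed": 3,          # time to rebuild a top-5 dashboard
    "non_technical_success": 3,  # can finance/ops self-serve?
    "permission_fit": 2,
    "warehouse_load": 1,
    "total_cost": 2,
    "migration_cost": 1,
}

def weighted_score(scores):
    """scores: criterion -> 1..5 rating from the POC."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

candidate_a = {"rebuild_speed": 4, "non_technical_success": 5,
               "permission_fit": 3, "warehouse_load": 4,
               "total_cost": 3, "migration_cost": 4}
candidate_b = {"rebuild_speed": 3, "non_technical_success": 2,
               "permission_fit": 5, "warehouse_load": 3,
               "total_cost": 4, "migration_cost": 3}
score_a, score_b = weighted_score(candidate_a), weighted_score(candidate_b)
```

Writing the weights down before demos is the point: it keeps vendor enthusiasm from silently reweighting your priorities.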
Once you have picked a tool, the work is straightforward but easy to underestimate.
Start by connecting the same databases Metabase uses, in roughly the same configuration. If you used a read replica for Metabase, point the new tool at the same replica. Match the connection user’s permissions exactly so you do not discover halfway through that one dashboard depended on a privileged credential.
For each connection:
Move content in this order:
Aim to rebuild rather than mechanically translate. A dashboard built in Metabase three years ago probably has questions nobody opens, filters nobody uses, and metrics that no longer match how the business measures itself. Migration is the cheapest moment in the next three years to clean that up.
For each rebuilt dashboard, run a side-by-side check:
Differences will surface. Most are good: filter logic that was wrong in Metabase, joins that excluded recent rows, timezones that did not match the warehouse. Document each one and resolve before sign-off.
The most common mistake at this stage is going too fast. The second most common is going too slow.
Keep Metabase running and read-only for at least two weeks after the new tool has the critical content rebuilt. During this time:
If a dashboard has zero Metabase opens for two consecutive weeks, it is safe to retire on the Metabase side. If a dashboard still has heavy Metabase usage after week three, find out why; either it is missing in the new tool or it does not work the same way and someone is quietly suffering.
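The two-week rule is mechanical once you are logging daily Metabase opens during the parallel run. A small sketch, with made-up usage data:

```python
# Apply the retirement rule: zero Metabase opens for 14 consecutive days.
# `usage` maps dashboard name -> opens per day, most recent day last.
def safe_to_retire(daily_opens, window_days=14):
    return sum(daily_opens[-window_days:]) == 0

usage = {
    "Revenue by month": [5, 3, 0, 4] + [2] * 10,  # still in active use
    "Old funnel v2": [1, 0] + [0] * 14,           # dead for two-plus weeks
}
retire = [name for name, opens in usage.items() if safe_to_retire(opens)]
```

Dashboards that fail the check after week three are the ones worth a conversation, not a script.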
Pick a single decommission date, communicate it once, and stick to it. The pattern that drags out migrations is moving the date in response to a stakeholder who has not bothered to migrate their pet dashboard.
A reasonable cadence:
Before shutting Metabase down:
Work through that checklist in order. Anything skipped becomes a source of post-migration confusion.
A migration is real work. It is the right call when Metabase is the bottleneck and a replacement actually fixes the pain. It is the wrong call when:
If any of these describe you, fix the upstream problem first. The migration will be easier and cheaper later.
If the reason you are leaving Metabase is that business users still cannot answer follow-up questions without a SQL owner, put Basedash in the proof of concept early. The strongest migration path is to connect Basedash to the same databases Metabase already uses, rebuild a handful of critical dashboards, then ask the non-technical teams who rely on those dashboards to answer new questions from the same data.
That test is different from asking whether a vendor can reproduce a chart. A useful Metabase replacement should help someone move from “what happened?” to “why did it happen?” without starting a new ticket. In Basedash, that means using the AI-native query flow, visual editor, governed database access, and shareable dashboards together instead of treating AI as a small feature beside the old dashboard workflow.
Basedash is especially worth testing when your migration goals include:
It is not automatically the right choice if your company already has a mature LookML estate, a Microsoft-first analytics stack, or a team that primarily wants notebook-based analysis. But if Metabase was attractive because it was simple and direct, and the problem is that it never became truly self-serve, Basedash is one of the few replacements that keeps that directness while moving the day-to-day experience toward AI-assisted BI.
Critical dashboards take two to three weeks with a dedicated owner. The long tail and decommission run another two to four weeks in parallel. Total elapsed time is typically four to eight weeks for a team of moderate Metabase usage. Teams with hundreds of dashboards or heavy custom SQL should plan for longer.
Not reliably. A few open-source scripts try to translate Metabase questions into other tools by reading the application database, but they rarely handle native SQL questions, custom expressions, or visualizations cleanly. Expect to rebuild manually; this is also where deciding what not to rebuild has the highest leverage.
Usually not. If you are paying for Metabase Cloud or Enterprise, the cost rarely justifies a read-only archive. Snapshot the application database, archive the SQL to git, and shut the instance down. Spin it back up only if you need to investigate a historical dashboard.
Treat embedded analytics as a separate migration with its own validation. The replacement should support the same iframe or SDK pattern, the same per-tenant filtering, and the same SLA expectations. Some teams choose a different tool for embedded use cases than for internal BI; that is fine if the cost is justified. The embedded analytics guide covers the tradeoffs.
No. The point of the migration is to change the BI tool, not the data foundation. If you are also unhappy with your warehouse, separate that decision and sequence it after the BI migration. Doing both at once multiplies risk and makes it impossible to attribute problems.
Two practices help. First, write a small set of “golden numbers” (ARR, active customers, revenue by month, whatever your top KPIs are) into a doc with their expected values, and check them against any dashboard build. Second, treat the new tool’s dashboards like code: ownership, review, and a way to track changes over time.
The hard work of a Metabase migration is not connecting databases or rebuilding charts. It is taking the audit seriously, choosing a tool that solves the specific pain that pushed you out, and being patient enough to let the new tool earn trust before you switch the old one off.
Written by Max Musing, Founder and CEO of Basedash
Max Musing is the founder and CEO of Basedash, an AI-native business intelligence platform designed to help teams explore analytics and build dashboards without writing SQL. His work focuses on applying large language models to structured data systems, improving query reliability, and building governed analytics workflows for production environments.
Basedash lets you build charts, dashboards, and reports in seconds using all your data.