On-demand AI usage gives admins more control
Admins can now decide whether Basedash should keep working after an organization uses up its included AI credits. If on-demand usage is turned off, Basedash now blocks additional AI usage clearly across the product instead of quietly running up overages behind the scenes. That makes billing behavior easier to reason about, especially for teams that want a firm cap on spend. Hitting the limit is a clearer experience too, with better messaging that explains what is blocked and what to do next.

Chats are easier to share and collaborate on
Chats now use the same grant-based sharing model as the rest of Basedash. That makes it much easier to control who can see or manage a chat, and it brings chat sharing in line with how dashboards and other resources already work. Admins can also set default access for new chats, and new organizations now start with chats shared to everyone with full access. For teams that use chat as a shared workspace rather than a private scratchpad, collaboration should feel much more natural from the start.

Insights are easier to generate on demand
You can now generate an insight manually from the Insights page whenever you want one, instead of waiting for the next scheduled run. That makes it easier to pull a fresh insight right after your data changes or when you want to actively explore a question. We also increased the reasoning effort behind Insights, which should make them more thoughtful on harder questions and more useful when you want something that goes beyond a quick summary.

Automations recover better from temporary AI failures
Automations now retry through temporary AI provider issues instead of failing silently. That is especially important for scheduled reports and data-change-driven workflows, where a transient model error used to interrupt runs that users expected to just work. When a run still cannot complete after retries, Basedash now surfaces that outcome and notifies you. We also spread scheduled insight generation out more evenly, which should reduce avoidable spikes and make automation behavior feel steadier overall.

Connector setup feels smoother end-to-end
The connector setup flow got a broad UX cleanup. Forms are easier to read, descriptions render more cleanly, keyboard submission works better, and loading states feel polished instead of looking like unfinished placeholder UI. We also fixed some rough edges after setup: new connectors now take you directly to the connector page, brand-new warehouses no longer look broken before their first sync, and SQL autocomplete handles names with special characters more cleanly.

Fixes and improvements
- Improved dashboard auto-refresh so large dashboards do less unnecessary work and stay more stable under frequent refreshes.
- Fixed mobile dashboards so charts render reliably on smaller screens.
- Fixed the chat composer getting stuck on “Generating…” after an AI response had already finished.
- Preserved chart version history when reopening charts from dashboards and made reverted AI versions show up correctly in the timeline.
- Added character counters and sensible limits across AI context fields so it is easier to tune context without guessing.
- Improved MCP connector OAuth setup so supported servers request better scopes and launch authorization more reliably.
- Enabled automations in embedded sidebars, with an option to hide them when they do not belong in the embed.
- Improved number chart sizing so large metric cards render more consistently.
- Improved the billing AI usage breakdown so system-generated usage is visible alongside user-attributed usage.