The Death of the MSP Dashboard
I have spent a significant chunk of my career building reporting portals and dashboards for MSPs and MSSPs. At Kudelski Security and Ekco, I was deeply involved in creating the kind of branded, data-rich client portals that win deals and drive renewals; I understand the investment that goes into them because I have led that investment myself, and I know how much value they have delivered over the years. I say all of that to make clear I am not observing this from the outside. The portal-only model is reaching the end of its useful life as a standalone strategy, and the providers who recognise that early enough to evolve will be the ones who come out ahead.
Your clients currently log into your portal to check their security posture, review incidents from the past month, or pull together data for a board meeting. The experience involves navigating through multiple tabs, filtering data across date ranges, exporting reports into formats that rarely match what the board actually wants to see, and then spending time manually assembling the narrative around the numbers. Most MSP and MSSP portals were built to display data, and they do that reasonably well, but they were never designed around the way people actually want to consume information, which is by asking a question and getting an answer. Your clients do not wake up wanting to browse a dashboard; they want to know whether their environment is secure, what happened last week, and whether they are compliant with their regulatory obligations. AI is changing this quickly — your more technically aware clients are already using AI tools in other parts of their business, and they are starting to wonder why they cannot simply ask "what is my security posture this month?" and get a straight answer drawn from real data, without logging into anything.
The investment in Power BI reporting was the right call at the time. Custom dashboards gave clients visibility they could not get anywhere else, and they gave sales teams something tangible to demonstrate during pitches. The underlying model, though, assumes your client wants to come to you for data, and that assumption is increasingly wrong. AI agents can now connect to data sources through direct API integrations, SDK plugins, or protocol-based connectors, and whichever mechanism you choose, the end result is the same. Clients query their security and operational data from within their own environment, ask their AI assistant about patch compliance, and get an answer in seconds with source citations showing exactly where each data point came from. The tools exist today.
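To make "an answer with source citations" concrete, here is a minimal sketch of the kind of structured payload an agent integration could return. All of the field and class names here are hypothetical, chosen for illustration; the point is that the answer and its provenance travel together.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    # Hypothetical provenance record: where one data point came from.
    source_system: str   # e.g. the patch-management platform
    record_id: str       # identifier of the underlying record
    retrieved_at: str    # ISO-8601 timestamp of the query

@dataclass
class AgentAnswer:
    # An answer is only as trustworthy as its citations,
    # so the structure keeps them inseparable.
    question: str
    answer: str
    citations: list[Citation] = field(default_factory=list)

reply = AgentAnswer(
    question="What is my patch compliance this month?",
    answer="96% of managed endpoints are fully patched (412 of 429).",
    citations=[Citation("patch-mgmt", "report-2024-06", "2024-06-30T08:00:00Z")],
)
```

A client-side agent can then render the answer conversationally while keeping the citation list available for anyone who asks "where did that number come from?".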
The pressure will come from both directions. Larger clients with internal technology teams will expect you to expose data through agent-friendly interfaces; smaller clients will see competitors adopting conversational data access and start asking why their provider cannot offer the same. The MSP that can say "your AI agent can query our data directly" wins that conversation, and the one still insisting clients log into a portal starts looking outdated.
I think of the evolution as "Portal Plus" rather than replacement. The existing portal still serves a purpose for internal operations, specific client requests that need a custom view, and situations where a structured dashboard is genuinely the right interface; what changes is that the portal stops being the only way clients access their data. In practical terms, that means building an agent-accessible data layer alongside your existing portal — an API, an SDK plugin, or a direct integration — that exposes your client's security metrics, incident data, compliance status, and operational health in a format that AI agents can consume and reason about. Your SOC processes, alerting, and remediation workflows all stay the same. The paradigm is what shifts: you move from software your clients log into to software your clients talk to, and the data meets them where they already are, in their own tools, on their own terms. The portal becomes one channel among several rather than the only channel, and the data and expertise behind it become the real product.
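What makes a data layer "agent-accessible" is less the transport and more the machine-readable description: the agent needs a schema telling it what it can ask and how. The sketch below assumes a single read-only function backed by the same store the portal reads from; the function name, metric names, and sample values are all illustrative, not a real API.

```python
# Hypothetical read-only data-layer call. In production this would query
# the same backing store the portal renders, not an inline sample dict.
def get_security_metrics(client_id: str, metric: str) -> dict:
    sample = {"patch_compliance_pct": 96, "open_incidents": 3}
    return {"client": client_id, "metric": metric, "value": sample[metric]}

# The machine-readable description is what an agent actually consumes:
# it learns from this schema which questions are answerable, so the
# enum doubles as an allow-list of supported queries.
TOOL_DESCRIPTION = {
    "name": "get_security_metrics",
    "description": "Read-only access to a client's security and operational metrics.",
    "parameters": {
        "type": "object",
        "properties": {
            "client_id": {"type": "string"},
            "metric": {
                "type": "string",
                "enum": ["patch_compliance_pct", "open_incidents"],
            },
        },
        "required": ["client_id", "metric"],
    },
}
```

Whether this description is served over a REST API, an SDK plugin, or a protocol-based connector is an implementation detail; the paradigm shift is that the interface is designed to be read by an agent rather than browsed by a person.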
There are genuine engineering challenges in making this work properly, particularly around tenant isolation (ensuring one client's AI agent cannot access another client's data), grounding (making sure the AI only answers from real data and does not fabricate responses), and audit trails (proving where every answer came from). I have written about these in earlier posts and I spend most of my time solving exactly these problems for clients in regulated industries.
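Two of those challenges, tenant isolation and audit trails, can at least be sketched in a few lines. The key design choice in the sketch below is that the tenant is resolved from the caller's credential, never from request input, so one client's agent cannot even name another client's data; each answer is also logged to a hash-chained audit trail so history tampering is detectable. Everything here (the in-memory dicts, the key map, the metric names) is a toy stand-in for what would be scoped database connections and proper credential management in production.

```python
import hashlib
import json
from datetime import datetime, timezone

# Toy per-tenant store; in production this would be row-level security
# or a per-tenant database connection, not a dict.
TENANT_DATA = {
    "tenant-a": {"open_incidents": 3},
    "tenant-b": {"open_incidents": 7},
}

AUDIT_LOG: list[dict] = []

def scoped_query(api_key: str, key_to_tenant: dict[str, str], metric: str) -> dict:
    """Answer one metric for the tenant the credential belongs to."""
    # Isolation: tenant identity comes from the credential, not the request.
    tenant = key_to_tenant[api_key]        # unknown keys raise KeyError
    value = TENANT_DATA[tenant][metric]    # only this tenant's partition is reachable
    entry = {
        "tenant": tenant,
        "metric": metric,
        "value": value,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    # Audit trail: chain each entry's hash to the previous one so any
    # retroactive edit breaks the chain.
    prev = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else ""
    payload = prev + json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    AUDIT_LOG.append(entry)
    return {"answer": value, "audit_ref": entry["hash"]}
```

Grounding is the harder of the three and does not reduce to a snippet; it means constraining the model to answer only from retrieved records like these, with the citations to prove it.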
Most MSPs and MSSPs have not started thinking about this yet, which means the window to move early is still open. The practical starting point is to identify the three or four data queries your clients make most often through your portal and build an agent-friendly endpoint that can serve those same answers programmatically — security posture summaries, incident reports for the last 30 days, patch compliance, and compliance audit status are usually the ones that come up first. The specific integration approach matters less than the principle: make your data queryable by AI agents, not just viewable by humans in a browser.
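Those three or four common queries can be sketched as a small registry of named handlers an agent calls by name, which is roughly the shape most tool-calling integrations take. The handler names mirror the queries listed above; the return values are placeholder data, and in a real system each handler would read from your reporting backend.

```python
# Hypothetical handlers for the questions clients ask most often.
# Placeholder values stand in for real reporting-backend queries.
def security_posture_summary() -> dict:
    return {"score": 82, "critical_findings": 1, "trend": "improving"}

def incidents_last_30_days() -> list[dict]:
    return [{"id": "INC-1042", "severity": "high", "status": "resolved"}]

def patch_compliance() -> dict:
    return {"compliant_pct": 96, "endpoints_total": 429}

def compliance_audit_status() -> dict:
    return {"framework": "ISO 27001", "status": "on-track"}

QUERY_TOOLS = {
    "security_posture_summary": security_posture_summary,
    "incidents_last_30_days": incidents_last_30_days,
    "patch_compliance": patch_compliance,
    "compliance_audit_status": compliance_audit_status,
}

def handle(query_name: str):
    """Dispatch an agent's named query; reject anything off the allow-list."""
    try:
        return QUERY_TOOLS[query_name]()
    except KeyError:
        raise ValueError(f"unknown query: {query_name}") from None
```

Starting with an explicit allow-list like this keeps the first integration small and auditable; you can widen the registry once the pilot proves out.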
Before building anything, it is worth understanding whether your organisation is actually ready for AI-driven data access. I built a free AI Implementation Readiness Assessment that walks through exactly these questions and produces a traffic-light score with practical recommendations; it takes ten minutes and gives you a clear picture of where the gaps are before you commit budget. The natural first move is to find one client who is already using AI tools internally, offer them the integration as a pilot, and let that pilot become your proof point for the rest of the base.