Fabric is expanding. New workloads, new tools, new names. That expansion is genuinely useful. But it also introduces a familiar problem: the more options you have, the easier it becomes to lose sight of what you were actually trying to do.
This page is a companion to a session I submitted for FabCon. It is not exam preparation. It is not a feature overview. It is a look at the structure behind PL-300 and why that structure still offers a clear starting point when working with Fabric.
Four pillars. Still four.
The PL-300 exam is built around a clear sequence: getting data in, preparing it, building something useful from it, and making sure it reaches the right people in a governed way.
That sequence did not change when Fabric arrived. What changed is the number of tools available to support each step.
- Getting data in place: Power Query, Dataflows, pipelines, shortcuts to OneLake
- Preparing the data for reports: Power Query transformations, Dataflow Gen2, notebooks, lakehouse patterns
- Building the reports: Semantic models, DAX, Power BI reports, Direct Lake
- Sharing and governing: Workspaces, apps, Microsoft Purview, row-level security, sensitivity labels
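To make the last pillar concrete: row-level security in a semantic model is just a DAX filter expression attached to a role. A minimal sketch, with illustrative table and column names (`Sales`, `UserRegions`) that are not from any specific model:

```dax
-- Hypothetical RLS filter on a Sales table: each user only sees rows
-- for the region mapped to their sign-in name in a UserRegions table.
-- Table and column names are assumptions for illustration.
Sales[Region]
    = LOOKUPVALUE (
        UserRegions[Region],
        UserRegions[UserEmail], USERPRINCIPALNAME ()
    )
```

The point is that governance lives in the model, not in each report, which is exactly why it belongs in the same pillar as workspaces and sensitivity labels.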
Fabric gives you more paths through the same terrain. The terrain itself has not changed.
More paths, same destination
The value of thinking in pillars is that it slows down the tool-selection decision. Instead of asking "should we use a notebook or Power Query?" you first ask "what are we trying to do at this step?" That question tends to have a clearer answer.
For most reporting projects, the path is simpler than it might look. Power Query or Dataflow Gen2 for getting data in and shaping it. A well-structured semantic model for business logic. Power BI reports for presenting findings. Workspaces and apps for distribution and governance.
Notebooks and pipelines become relevant when scale or complexity genuinely demands it. Not before. That is not a knock on notebooks. It is about choosing the right tool for where you actually are, rather than the tool that sounds most impressive.
The DP-600 exam (Fabric Analytics Engineer) extends this thinking into lakehouses, Dataflow Gen2, and Fabric-specific workloads. The DP-700 exam (Fabric Data Engineer) goes further into data engineering patterns at scale. Both still assume you understand the fundamentals covered in PL-300.
What I see in practice
The session, designed for FabCon Europe, walks through this framework in more detail, with practical examples of how different teams can map their current work onto these four pillars and where Fabric workloads fit naturally into that picture.
Moreover, the session is built around a simple argument: the more tools we have, the more important it becomes to have a clear mental model of what we are actually trying to achieve. PL-300 gives you that model. Fabric gives you more ways to execute on it.
A few practical observations that tend to come up when working through this with teams:
- Use the tools that fit your skills and your problem. Notebooks are useful. They are not always the right starting point.
- Star schema is still the default for reporting. Not because it is traditional, but because it works.
- Keep stable business logic in the backend. Keep analytical flexibility in DAX.
- Choose visuals deliberately. A chart that answers the question is better than one that looks impressive.
- A small-data mindset is a good starting point even when you are working with large data. Start with what you can understand, and scale what you learn.
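The "stable logic in the backend, flexibility in DAX" split can be sketched in a few lines. This is a hedged illustration, not a prescription: the `Sales` fact table, the `NetRevenue` column, and the `'Date'` dimension are hypothetical names, and it assumes a star schema with a marked date table.

```dax
-- The definition of net revenue is settled upstream (a column on the
-- fact table), so the base measure stays trivial and stable.
Net Revenue := SUM ( Sales[NetRevenue] )

-- Analytical variations live in DAX, layered on the stable base.
Net Revenue YoY % :=
VAR PriorYear =
    CALCULATE ( [Net Revenue], DATEADD ( 'Date'[Date], -1, YEAR ) )
RETURN
    DIVIDE ( [Net Revenue] - PriorYear, PriorYear )
```

If the business later redefines net revenue, only the upstream column changes; every time-intelligence measure built on top keeps working unchanged.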
If you are working towards the PL-300 certification or want a solid foundation in Power BI before stepping into Fabric, the PL-300 training is where I would suggest starting. The full picture of what I offer is on the training overview page.
Curious about the session or want to explore what this looks like for your team? Feel free to get in touch.
A question worth sitting with
Are you starting from the tools, or from the structure?
That is not a rhetorical question. Both approaches can work. But in my experience, teams that start from the structure tend to make better tool decisions later. They know what they are optimising for, which turns the choice between Power Query and a notebook into a practical question rather than an ideological one.
PL-300 is still the most relevant certification path for people building reporting solutions with Power BI. Not because the exam is a perfect reflection of every real-world scenario, but because the structure it teaches is genuinely useful. It gives you a shared vocabulary, a logical sequence, and a foundation to build from when Fabric keeps expanding around you.
Most reporting work still ends up as a Power BI report for the business user. Fabric integrates all the options available to get there. Understanding the destination first makes choosing the path much easier.