
Microsoft Fabric promises a lot, but in practice there's still a lot of figuring out to do. What goes where? When do you pick a lakehouse, when a warehouse? And how do you make sure your analytics solution doesn't just work, but is also scalable and manageable? In this training we work through that together.

What you'll learn

The DP-600 exam (Fabric Analytics Engineer Associate) tests whether you can design and implement end-to-end analytics solutions in Microsoft Fabric. From data ingestion and transformation to modelling, visualisation and governance.

In my training we don't treat those topics as separate blocks, but as a coherent story. You learn not just what the options are, but when which choice makes sense and why.

Microsoft Certified Trainer

DP-600 — Official certification training (MCT)

This is an official DP-600 certification training for Microsoft Fabric. We train according to the structure and learning objectives of Microsoft Learn, aligned with the DP-600 exam. As a Microsoft Certified Trainer (MCT) I combine that exam structure with your reality: your own dataflow/ETL pipeline, governance questions and reporting needs.

What Microsoft says you'll learn

  • Managing an analytics solution in Fabric (governance, security, deployment)
  • Preparing and processing data within the Fabric pipeline
  • Designing, building and managing semantic models (Power BI in Fabric)

Who is this for?

  • Data engineers and BI professionals who work with (or are moving to) Fabric
  • Teams making the transition from traditional Power BI to the Fabric ecosystem
  • Professionals who want the DP-600 certificate with real understanding of the material

Basic knowledge of Power BI and data modelling is helpful. Experience with SQL and DAX is a plus.

How this translates to my approach

We work with a mini end-to-end example that resembles real practice: from source → landing → transformation → publication. Concretely: we set up a Fabric environment where you feel the difference between lakehouse, warehouse and semantic model, and we make explicit where you use which workload and why.
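To make the source → landing → transformation → publication flow concrete, here is a rough sketch in plain Python. This is an illustration only, not Fabric code: all names are hypothetical, and each function stands in for a Fabric workload (lakehouse files, notebook/dataflow transformation, semantic model).

```python
# Hypothetical sketch of source → landing → transformation → publication.
# Plain Python standing in for Fabric workloads; none of these names are
# Fabric APIs.

raw_source = [
    {"order_id": 1, "amount": "120.50", "country": "nl"},
    {"order_id": 2, "amount": "80.00", "country": "be"},
]

def land(rows):
    """Landing: store raw rows as-is (in Fabric: files in the lakehouse)."""
    return list(rows)

def transform(rows):
    """Transformation: clean types and values (in Fabric: a notebook or dataflow)."""
    return [
        {"order_id": r["order_id"],
         "amount": float(r["amount"]),
         "country": r["country"].upper()}
        for r in rows
    ]

def publish(rows):
    """Publication: aggregate for reporting (in Fabric: a semantic model measure)."""
    return {"total_amount": sum(r["amount"] for r in rows)}

landed = land(raw_source)
curated = transform(landed)
report = publish(curated)
print(report)  # {'total_amount': 200.5}
```

The point of the exercise is not the code itself but feeling where each step lives: landing belongs to the lakehouse, transformation to a notebook or dataflow, and publication to the semantic model.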

Then we take on one piece of "analytics engineer" reality: security/governance (who can see what, endorsement/sensitivity, RLS/OLS where relevant) and how this flows through to the report. Conceptual or hands-on—but always with a clear "this is how the pipeline works" thread.
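The "who can see what" question of row-level security (RLS) can be illustrated with a small plain-Python sketch. In Fabric and Power BI, RLS is actually defined with DAX filter rules on a role; the code below only models the idea, and the users and regions are made up.

```python
# Hypothetical sketch of row-level security: each user only sees the rows
# their role permits. In Power BI this is a DAX filter on a role, not Python.

orders = [
    {"order_id": 1, "region": "EU", "amount": 120.5},
    {"order_id": 2, "region": "US", "amount": 80.0},
]

# Stand-in for role membership: which regions each user may see.
user_regions = {
    "anna@example.com": {"EU"},
    "bob@example.com": {"US"},
}

def visible_rows(user, rows):
    """Return only the rows the user's role permits (an RLS-style filter)."""
    allowed = user_regions.get(user, set())
    return [r for r in rows if r["region"] in allowed]

print(visible_rows("anna@example.com", orders))
# [{'order_id': 1, 'region': 'EU', 'amount': 120.5}]
```

In the training we then trace how such a rule flows through to the report: the same visual shows different data depending on who is looking.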

You can read more about the setup on the way-of-working page. Curious who you'll be working with?

Want to know more? Get in touch.

Interested in DP-600?

Leave your name and email and tell me a bit about your situation. I'll get back to you personally to see what fits best.

Created by Björn, with support of AI, owned by Dogoda. More disclaimers here.