4 Rules to Follow for Microsoft Fabric Source Control

Power BI and Fabric are implementing source control support. It’s a long-awaited feature for Power BI.

However, it’s important to highlight some basic principles that should be followed as source control best practices. Some of them apply to any project under source control, some are specific to this environment, and some are specific to the current state of the platform and may change as new Fabric features arrive.

Let’s take a look at them!

Rule 1: Break down the objects in different workspaces

It’s becoming increasingly clear that including all the objects of a solution in a single workspace is bad practice. You should break the objects down.

I mentioned this in the video Fabric Monday 09: Workspace Organization.

But what does this have to do with source control?

The problem is the Fabric source control support for each type of object. Fabric objects have different levels of support, and although the support is evolving every day, we need to be careful. This is the current support at the moment I’m writing:

  • Notebooks: Fabric source control supports notebooks, but the default lakehouse attached to a notebook creates the need for deployment pipelines and deployment rules. I explained this in the blog post Fabric Notebooks and Deployment Pipelines
  • Lakehouses: They partially support source control, and some source control operations can break them. I learned this the hard way. This support evolved very recently, but it doesn’t make much sense to expect to synchronize lakehouses across environments using source control.
  • Data Pipelines and Dataflows: These are not supported in source control yet. When support arrives, it will come out of the blue, and they will suddenly get in sync with your repository. Can you imagine the mess if their working rules don’t match the ones already established for Notebooks?

Keeping the objects isolated in different workspaces allows you to manage source control for the ones that already support it without endangering the ones with incomplete or no support.
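As an illustration, here is a minimal sketch of connecting only the notebook workspace to source control through the Fabric REST API, leaving the lakehouse workspace disconnected until its support matures. It assumes the Fabric Git “Connect” endpoint, and all IDs, names, and the token are hypothetical placeholders:

```python
import requests

# Hypothetical values, for illustration only.
NOTEBOOKS_WORKSPACE_ID = "00000000-0000-0000-0000-000000000001"
FABRIC_API = "https://api.fabric.microsoft.com/v1"
token = "<AAD access token for the Fabric API>"

# Connect ONLY the notebooks workspace to Git. The lakehouse workspace
# stays disconnected, so no Git operation can break its objects.
body = {
    "gitProviderDetails": {
        "gitProviderType": "AzureDevOps",
        "organizationName": "myorg",
        "projectName": "myproject",
        "repositoryName": "fabric-solution",
        "branchName": "development",
        "directoryName": "/notebooks",
    }
}

response = requests.post(
    f"{FABRIC_API}/workspaces/{NOTEBOOKS_WORKSPACE_ID}/git/connect",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
response.raise_for_status()
```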

Rule 2: Never mix projects with different SDLC in the same source control repository

Notebooks and semantic models already have special source control requirements: promotion between environments needs to be done through Power BI Deployment Pipelines.

(You can watch my videos about Deployment Pipelines in English or Portuguese)

The branches for Development, Test, and Production can never be merged directly, only through deployment pipelines; otherwise, the branches will be doomed.
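To make the promotion path concrete, here is a minimal sketch that triggers a deployment from the Development stage using the Power BI REST API “Deploy All” operation. The pipeline ID and token are hypothetical placeholders:

```python
import requests

# Hypothetical values, for illustration only.
PIPELINE_ID = "00000000-0000-0000-0000-0000000000aa"
token = "<AAD access token for the Power BI API>"

# Promote everything from Development (stage order 0) to the next
# stage, Test. The same call with "sourceStageOrder": 1 promotes
# Test to Production. No git merge is involved at any point.
body = {
    "sourceStageOrder": 0,
    "options": {
        "allowCreateArtifact": True,
        "allowOverwriteArtifact": True,
    },
}

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
response.raise_for_status()
```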

Other projects that may be related to the same solution, such as an Azure Function, have a different SDLC. The branches for an Azure Function need to be merged using pull requests, from Development to Test and from Test to Production. Deployment pipelines will do nothing for them.

It may be tempting to put all the elements of a solution in the same repository, but if they have different SDLCs (Software Development Lifecycles), your process is doomed.

Rule 3: Don’t create environment-specific object names

It may be tempting to create environment-specific names such as mylakehouse-dev, mylakehouse-test, and mylakehouse-prod.

Do you know what will happen when source control and promotion between environments are applied?

As of today, a complete mess: you could end up with three lakehouses (or their definitions) in a single branch.

This rule applies to any object: notebooks, semantic models, reports. They need to have exactly the same name in all three environments: Development, Test, and Production.
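To see why identical names matter, here is a minimal notebook sketch, assuming a Fabric notebook with a default lakehouse attached and a hypothetical sales table. Because the lakehouse carries the same name in every environment, the deployment pipeline can rebind it per stage and the code is promoted unchanged:

```python
# Anti-pattern: an environment-specific name baked into the code,
# which breaks (or needs editing) after every promotion.
# df = spark.read.table("`mylakehouse-dev`.sales")

# Pattern: the notebook reads through its default lakehouse, and a
# deployment rule decides WHICH lakehouse is attached in each stage.
# Since the lakehouse name is identical in Development, Test, and
# Production, nothing in the notebook changes between environments.
df = spark.read.table("sales")
df.show()
```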

Some people with more expertise in source control may consider this very basic, but it goes against the instincts of anyone who has been developing in the Power BI Portal for a long time.

Rule 4: Keep the objects with the same owner

The relationship between objects and deployment pipelines is not simple: the objects need to have the same owner for the deployment pipeline to work.

For semantic models, it’s possible for one member of the team to take ownership of them and manage the deployment pipelines that way.
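Taking ownership can be done in the portal, or scripted with the Power BI REST API “Take Over” operation, sketched below. The workspace and semantic model IDs are hypothetical placeholders:

```python
import requests

# Hypothetical values, for illustration only.
GROUP_ID = "00000000-0000-0000-0000-0000000000bb"    # workspace ID
DATASET_ID = "00000000-0000-0000-0000-0000000000cc"  # semantic model ID
token = "<AAD access token for the Power BI API>"

# The caller becomes the owner of the semantic model. Running this
# for every model keeps them all under a single owner, which is what
# the deployment pipeline needs.
response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/Default.TakeOver",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
```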

For notebooks, there is no easy solution at this time. You need to plan a work process to achieve this result. I recommended one in my blog post about Notebooks and Deployment Pipelines