
Refresh Semantic Models Every Minute – Now in Fabric (Preview)

  • Writer: Vojtěch Šíma
  • May 5
  • 3 min read

Updated: May 10

tl;dr If you have any type of Power BI Premium, you can use the Refresh Semantic Model activity in a Data Pipeline to refresh your semantic models. This bypasses the default 30-minute refresh interval—no need for code-based solutions or Power Automate. It also offers a centralized way to manage refreshes, with support for dependencies and serialization. This feature is still in preview at the time of writing.

Disclaimer

Even though you'll learn how to refresh your Fabric items every minute, I strongly recommend following the principle of least frequent refresh—ideally, as infrequently as your reporting logic allows. If you're not careful, you can easily burn through your capacity with just a few refreshes. Keep in mind that the number of refreshes is subject to general limitations for API-based refreshes on both Pro and Premium models.


Where's that option?

This is a quick article, so let’s cut to the chase. In any workspace backed by Power BI Premium, Premium Per User, or Power BI Embedded capacity, you can create a Fabric item called a Data Pipeline.


In short, a Data Pipeline (the Fabric version of Azure Data Factory) can orchestrate a series of tasks, such as copying data, running dataflows, notebooks, or stored procedures, and now (in preview) even refreshing semantic models.


This new activity can be found in the Activities ribbon by looking for the semantic model icon or by clicking the three dots under Orchestrate and selecting it from the menu. It's called Semantic model refresh (Preview) if you prefer searching by text.


Semantic model refresh icon

How to set it up?

Setting this up requires zero brain cells; it's just clickity-click. Once you select the activity, it appears on your pipeline canvas.


Refresh the semantic model activity in the pipeline

In the General tab, enter a meaningful name, configure the timeout, and set the retry policy. One advantage of using a Data Pipeline is that, on failure, you can specify a delay and automatically retry the operation—a handy built-in resiliency feature.
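If you're curious what that retry policy amounts to, here's a rough, purely illustrative Python sketch of "retry with a delay". The pipeline handles this for you; nothing below is Fabric's actual implementation or API.

```python
import time

def run_with_retry(operation, retries=3, delay_seconds=60):
    """Run an operation, retrying after a fixed delay on failure.

    Conceptual stand-in for the activity's retry count and retry interval.
    """
    for attempt in range(1, retries + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == retries:
                raise  # out of attempts, surface the failure
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay_seconds}s")
            time.sleep(delay_seconds)
```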


In the Settings tab, configure the Connection (authentication), select the Workspace, and—most importantly—choose the Semantic Model. You also have the option to refresh only specific tables or partitions, but I’ll skip those details for now, as this feature is still in preview and subject to change before general availability.
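For context, refreshing only selected tables or partitions is something the underlying enhanced refresh API expresses through an objects list in the request body. Here is a minimal Python sketch of such a call against the Power BI REST API, purely for illustration; the workspace ID, semantic model ID, access token, and table/partition names are placeholders, and the preview activity may behave differently under the hood.

```python
import requests

# Placeholders: substitute your own workspace/model GUIDs and a valid Azure AD token.
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<semantic-model-guid>"
TOKEN = "<azure-ad-access-token>"

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes"
)

# Supplying a JSON body makes this an "enhanced" (asynchronous) refresh,
# which is also where you can scope the refresh to specific tables or partitions.
body = {
    "type": "full",
    "objects": [
        {"table": "Sales"},                              # whole table (example name)
        {"table": "Orders", "partition": "Orders2024"},  # single partition (example name)
    ],
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.status_code)  # 202 means the refresh request was accepted and queued
```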


Settings Tab

And frankly, that's it. I'll show how to build dependencies later; for now, let's configure the refresh schedule.


Set up the refresh

Here, we have to think one level above—we're going to schedule the run of the Data Pipeline, not the semantic model itself. So we'll stay inside the Data Pipeline, move to the Run ribbon, and select Schedule.


Schedule data pipeline

Now, in the Schedule tab, if we want to refresh every minute, we can configure it accordingly.


Schedule window

Another benefit of using a Data Pipeline is the flexibility to go in the opposite direction—for example, refreshing reports only once a month. The Repeat options are far more flexible compared to the limited scheduling capabilities of native semantic model refresh settings.


Repeat options

Once you have everything set up, save the Data Pipeline and run it.


Save data pipeline

Run data pipeline

Whether it succeeds or fails, you can check its Output or the Run history. In case of failure, you can view the error message as well (you can even rate the error :D).


Output window

Error details

In the semantic model itself, make sure the credentials are correctly configured and that no other scheduled refreshes conflict with the Data Pipeline, to avoid unexpected errors. If you want to verify whether a refresh was triggered by the Data Pipeline rather than a standard scheduled refresh, you can check the refresh history—Data Pipeline-triggered refreshes are labelled as "Via Enhanced Api".
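If you'd rather verify that label programmatically, the same refresh history is exposed by the Power BI REST API. A small sketch (again with placeholder IDs and token) that lists the most recent refreshes and how each one was triggered:

```python
import requests

WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<semantic-model-guid>"
TOKEN = "<azure-ad-access-token>"

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes?$top=5"
)

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# Each entry reports how the refresh was triggered (refreshType) and how it ended (status).
for run in resp.json()["value"]:
    print(run["refreshType"], run["status"], run.get("startTime"))
```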



Refresh History of Semantic Model

Serialization/parallelization and dependencies

If you want to build dependencies and streamline refreshes, you can easily do that within a Data Pipeline. This isn’t new if you're already familiar with how pipelines work.


You can define the condition under which the next activity should run. Typically, you'd choose to proceed on success (or on completion, if the result doesn't matter—though in this case, I’d assume it does). Then, simply connect it to the following activities. Your setup might look something like this:
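For completeness, the same "run the next one only on success" idea can be written out in code. The sketch below serializes two refreshes through the Power BI REST API: trigger the upstream model, wait for it to finish, then refresh the dependent one. The GUIDs and token are placeholders, and inside Fabric the pipeline connectors do all of this for you, so treat it as an illustration of the pattern rather than a recommendation to script it.

```python
import time
import requests

TOKEN = "<azure-ad-access-token>"   # placeholder
WORKSPACE_ID = "<workspace-guid>"   # placeholder
BASE = f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/datasets"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def trigger_refresh(dataset_id: str) -> None:
    """Queue a refresh for one semantic model."""
    resp = requests.post(f"{BASE}/{dataset_id}/refreshes", headers=HEADERS, json={"type": "full"})
    resp.raise_for_status()

def wait_until_done(dataset_id: str, poll_seconds: int = 30) -> str:
    """Poll refresh history until the latest refresh finishes; return its status."""
    while True:
        resp = requests.get(f"{BASE}/{dataset_id}/refreshes?$top=1", headers=HEADERS)
        resp.raise_for_status()
        status = resp.json()["value"][0]["status"]
        if status != "Unknown":   # "Unknown" means the refresh is still running
            return status         # typically "Completed" or "Failed"
        time.sleep(poll_seconds)

# "On success" dependency: refresh the upstream model first, then the dependent one.
trigger_refresh("<upstream-model-guid>")
if wait_until_done("<upstream-model-guid>") == "Completed":
    trigger_refresh("<dependent-model-guid>")
else:
    print("Upstream refresh failed; skipping the dependent model.")
```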


Data pipeline

Summary

With this (still preview) feature in Microsoft Fabric, you can finally refresh your Power BI semantic models straight from a Data Pipeline—no more waiting 30 minutes or hacking around with Power Automate. Everything stays centralized, clean, and under your control. Whether you're refreshing every minute like a maniac or just once a month because you're chill like that, the pipeline's got you covered. Add retries, dependencies, and proper flow control, and you’ve got yourself a seriously flexible setup—all without leaving Fabric.
