Marco Russo has created a great tool for SSAS Tabular that lets you edit measure definitions (if you haven’t done so yet, you should first read about it here).
In this article I’ll show you how you can use it to import multiple measures from different tabular models into your current model.
The DAX Editor works by exporting the existing measures from your model as a text file and importing them back after you’ve made your transformations. My technique automatically appends the measures from the other model to the existing measures so that both can be loaded back into your current model. In addition, you get a UI with a process that guides you through the necessary steps of such a task, which are:
select only those measures that you actually need
check references to existing measures and columns from the import model and manage their handling
allocate the tables into which these measures are going to be imported
If you want to perform incremental loads of your data in Power BI, you should vote for it here.
Please vote before you read on to see which hacky workaround to use until this is implemented.
This workaround is a compromise that’s mainly suited for use cases like:
– performing extensive transformations on imported data only once
– saving data in the model that will not be available at the source later on (like many web-imports)
It works on the principle of self-referencing queries, as I’ve described here, in combination with the hack of your current Power BI data model, which I’ve described here (and which is fully automated now). It will therefore only work in the Desktop version and not in the service (so no scheduled refresh is possible).
So the data that should “sit in the model” and just be merged with new incoming data will actually be imported into the model again. Just not from its original source, but from itself. As this is often slower than importing from the original source, you will have to look closely at your specific needs and decide whether this makes sense for you.
These are the steps to perform:
1) Create a 1st query that imports and transforms the first chunk of data, which shall not be refreshed afterwards, and load it to the data model
2) Go back to the query designer and create a 2nd query that re-imports the 1st query using the function below
3) Create a 3rd query that appends the 2nd query to the 1st query and load it to the model
4) Go back to the query designer and add a filter or remove-duplicates step to the 3rd query that avoids re-importing data that has already been imported
5) Change the source of the 2nd query from the 1st to the 3rd query (as you now want to reference everything that has been imported so far)
6) Disable the load of the first two queries
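Putting the steps above together, the query structure could look roughly like the following M sketch. Everything specific here is a made-up example for illustration: `Query1` and `Query2` stand for the queries from steps 1 and 2 (the self-referencing import function itself is not shown), and the key column `"ID"` is an assumed column that identifies rows which were already imported earlier.

```m
// Query3 - the only query whose load stays enabled.
// Query1: fresh import from the original source (load disabled).
// Query2: re-import of what is already sitting in the model,
//         via the self-referencing function mentioned above.
let
    // step 3: append the freshly imported rows to the existing ones
    Source = Table.Combine({Query2, Query1}),

    // step 4: keep each row only once, so data that has already
    // been imported is not duplicated ("ID" is a made-up key column)
    RemovedDuplicates = Table.Distinct(Source, {"ID"})
in
    RemovedDuplicates
```

Instead of `Table.Distinct`, a filter on a date or key column (step 4) works just as well when you can tell old rows from new ones.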
When querying large databases with Power BI or Power Query, the design of the query itself can cause long waiting times due to background loading activities. To speed this up, I add a filter to my tables as one of the first steps, which I can toggle on or off via a central switch. The switch is a query that is referenced by an additional line of code:
FilterToggle = if DBFilterToggle = 1 then LastQueryStep else FilteredRowStep
Here DBFilterToggle is itself a query that returns just one value: 0 if I want the filter active, or 1 if I want to deactivate the filter and see all the data instead.
The cool thing about this method is that all following steps will work seamlessly, and we can use this toggle in multiple queries at the same time: just filter your other long tables as desired and add the same line referencing DBFilterToggle.
This enables me to work with very little data, which is delivered blazingly fast (thanks to query folding), so I can move on with my design very quickly. It is often even more important during debugging, where you need to step through multiple query steps quickly in order to find the errors.
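Spelled out in a full query, the pattern could look like this M sketch. The SQL source, table name, and the row filter are made-up examples; the only parts taken from above are the `DBFilterToggle` query and the final toggle line.

```m
// DBFilterToggle is a separate query that returns a single number:
// 0 = filter active (small preview), 1 = filter off (all rows).
// It is referenced below like any other query.
let
    // made-up example source and table
    Source = Sql.Database("myServer", "myDb"),
    LastQueryStep = Source{[Schema = "dbo", Item = "FactSales"]}[Data],

    // a cheap filter that still folds back to the server while designing
    FilteredRowStep = Table.SelectRows(LastQueryStep, each [Year] = 2016),

    // central switch: changing the one value in DBFilterToggle
    // flips every query that contains this line
    FilterToggle = if DBFilterToggle = 1 then LastQueryStep else FilteredRowStep
in
    FilterToggle
```

Because the same `DBFilterToggle` query is referenced everywhere, flipping its single value between 0 and 1 switches all filtered tables at once.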
I’m not sure if this is my “invention” or whether I’ve seen it somewhere before; so far I couldn’t find the source. So please post the source in the comments if you find it.