Use Slicers for Query Parameters in PowerBI

Reading Rob Collie’s latest cool blogpost on how to retrieve slicer selections in Power BI, I couldn’t stop thinking how awesome it would be if we could use this technique to pass slicer selections as query parameters to the M-queries in the query editor. Not only would we have a very convenient user interface, but – what’s actually more important at the moment – we could also pass multiple values as parameters to our queries, which is currently not possible at all:

But how to fetch them? Rob’s post simply uses cross-filtering to show the values in a separate visual. In Excel we have cube functions to which we can pass the slicer (selection) as a parameter. Igor Cotruta, who describes beautiful PBI hacks on his blog here, kindly helped me out on this: “Via DMVs. Check $system.discover_sessions for the field sessions_last_command”. This worked perfectly and resulted in the following function, to which you just have to pass the name of the measure as a parameter:
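The function itself isn’t reproduced in this excerpt. Purely to illustrate the DMV part, a stripped-down sketch could look like the following (this is not the function from the download: here the port of the local Analysis Services instance is passed in as a parameter, the column naming of the DMV result may differ slightly, and the parsing of the selections is left out):

// Stripped-down illustration only – not the function from the download.
// PortID: port of the local Analysis Services instance behind Power BI Desktop.
// MeasureName: name of the harvesting measure that has been used on a visual.
(PortID as text, MeasureName as text) =>
let
    Server   = "localhost:" & PortID,
    // the local model is the only database on that instance (its name is a GUID)
    Database = AnalysisServices.Databases(Server){0}[Name],
    // DMV that exposes the last command sent in each session
    Sessions = AnalysisServices.Database(Server, Database,
                   [Query = "SELECT SESSION_LAST_COMMAND FROM $SYSTEM.DISCOVER_SESSIONS"]),
    // keep the DAX statement that contains our harvesting measure
    Matching = Table.SelectRows(Sessions, each [SESSION_LAST_COMMAND] <> null
                   and Text.Contains([SESSION_LAST_COMMAND], MeasureName)),
    // the slicer selections would then have to be parsed out of this statement
    Command  = Matching{0}[SESSION_LAST_COMMAND]
in
    Command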

Make sure that you have used that measure on one of your visuals, as otherwise the function cannot harvest it. Also, you have to save the file first and then push the refresh button in order to trigger the correct refresh. The above function sort of “reads the current PBI file from outside”, so it will only see the saved version.

When you do the first refresh, a dialogue will pop up, where you just have to accept the default values like this:

The example in the file below fetches temperature data: every selected year creates a unique URL, and the results of all those calls are consolidated into one table. But of course, this technique can also be used to pass multiple parameter values to SQL commands or other sources.
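The consolidation step of that example could be sketched roughly like this (illustrative names only: fnGetTemperature stands for a query that builds the year-specific URL and returns its result, and the list of years would come from the harvested slicer selection):

// Illustrative sketch – fnGetTemperature is a hypothetical query that calls one URL per year.
let
    SelectedYears = {2015, 2016, 2017},                              // from the slicer selection
    PerYear       = List.Transform(SelectedYears, each fnGetTemperature(_)),
    Consolidated  = Table.Combine(PerYear)                           // one table for all selected years
in
    Consolidated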

A final note: The query that extracts the slicer parameters from the DAX statement is not particularly robust, and you might have to adjust it if your slicer-selection strings contain special characters.

Download for logged-in subscribers SlicerParameter2.zip

Edit 25-Sep-2017: Adjusted the code to retrieve PortID.

Also: You have to disable privacy-settings for this code to run!

Recap:

  1. If you create a new measure or adjust this file to your settings, you have to use the new measure somewhere in a report and save the file. This is necessary to “initialize” the measure in your model so that it can be seen in the query editor.
  2. From then on, when you use it: Just change the selection in your slicer and click “Refresh”

Enjoy & stay queryious 🙂

Blending data in PowerBI like in Tableau

Today I came across a question in the Power BI forum asking whether blending data is possible in Power BI like in Tableau. Although I wouldn’t necessarily recommend it, it definitely is a nice challenge. So the following function will interlace the rows from 2 tables like the blending function in Tableau does. The only limitation is that we cannot use any aggregators on the attributes and are not able to use measures, as this takes place in the query editor.

In our example we have a table with actual figures and one with budget figures:

We want to add 2 columns from the budget table to the actual table: “Amt” and “Qty” (red). Where budget figures have no match in the actuals, rows need to be added that hold only the values from the budget figures (yellow):

Blended data like in Tableau

So we could do a join in full-outer mode, but then we would need to find a way to put the Date and AccountNo values into the existing columns of the actual figures. Instead, we will identify those rows that need to go below the actuals and then do a join in left-outer mode just to add the values of the 2 new columns.

You need to feed this function the following parameters:

  1. Name of the primary table (“Actuals”)
  2. Name of the secondary table (“Budget”)
  3. Key column names of the primary table (“Date”, “Account”)
  4. Key column names of the secondary table (“Date”, “AccountNo”)
  5. Column names for the value columns (“Amt”, “Qty”)

Table.BlendRows
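The full Table.BlendRows code is in the download below. As a simplified sketch of the approach (left-outer join for the matching rows, anti-join for the budget-only rows, which then get appended), it could look something like this. This is not the original code, and it assumes that the value columns don’t already exist in the primary table:

// Rough sketch only – simplified compared to the Table.BlendRows function in the download.
// Assumes the value columns do not already exist in the primary table.
(PrimaryTable as table, SecondaryTable as table,
 PrimaryKeys as list, SecondaryKeys as list, ValueColumns as list) as table =>
let
    // add the value columns to the matching primary rows via a left outer join
    Joined    = Table.NestedJoin(PrimaryTable, PrimaryKeys, SecondaryTable, SecondaryKeys,
                                 "Secondary", JoinKind.LeftOuter),
    Expanded  = Table.ExpandTableColumn(Joined, "Secondary", ValueColumns),
    // secondary rows without a match in the primary table
    Unmatched = Table.RemoveColumns(
                    Table.NestedJoin(SecondaryTable, SecondaryKeys, PrimaryTable, PrimaryKeys,
                                     "Primary", JoinKind.LeftAnti),
                    {"Primary"}),
    // rename the secondary key columns to the primary names so they land in the same columns
    Renames   = List.Select(List.Zip({SecondaryKeys, PrimaryKeys}), each _{0} <> _{1}),
    Renamed   = if List.IsEmpty(Renames) then Unmatched else Table.RenameColumns(Unmatched, Renames),
    // append them below – columns missing on either side are filled with null
    Result    = Table.Combine({Expanded, Renamed})
in
    Result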


File with sample: BlendDataTableau.zip


Dynamically flatten Parent-Child Hierarchies in DAX and PowerBI

If you use DAX to flatten parent-child hierarchies, you will end up with a table that has a static number of columns (as described here). If you need a dynamic solution instead, which creates just as many level columns as are needed for the current data, you can use DAX’s helper tool Power Query (or Get Data in Excel) or the query editor in Power BI, which uses the language M.

Another advantage of this solution is that you can script the table creation in one step (the only flaw: you still need to manually adjust your hierarchy). It saves time in creating the table, especially if you have many levels.

2 simple steps

  1. copy the following function,
  2. add a new step to your current table where you call this function, filling in the following parameters:
    • table name (which is the name of the previous step in your M-query)
    • name of the column with the child-key
    • name of the column with the parent-key
    • name of the column whose values shall be shown in the levels (can also be the child key)


Call fnFlattenPCHierarchyFunction
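In text form, such a call could look like this (the names are just placeholders for your own query steps and columns):

// Placeholder names only: "Source" is the previous step of your query,
// the column names come from your own parent-child table.
FlattenedHierarchy = fnFlattenPCHierarchy(Source, "EmployeeKey", "ParentEmployeeKey", "EmployeeName")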


And this is the code, which you can also download below:
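The code itself sits behind the download and the Read more link below. Just to illustrate the general idea (build the path for each row, then split it into as many level columns as the data needs), a compact sketch that is not the original fnFlattenPCHierarchy and assumes that top-level nodes carry null in the parent-key column could look like this:

// Compact illustration only – not the original fnFlattenPCHierarchy from the download.
// Assumes that top-level nodes have null in the parent-key column.
(Source as table, ChildKey as text, ParentKey as text, LevelDisplay as text) as table =>
let
    // walk up the hierarchy and collect the display values from top to bottom
    GetPath = (key) =>
        let
            Row    = Table.SelectRows(Source, each Record.Field(_, ChildKey) = key){0},
            Parent = Record.Field(Row, ParentKey),
            Name   = Record.Field(Row, LevelDisplay)
        in
            if Parent = null then {Name} else @GetPath(Parent) & {Name},
    AddPath    = Table.AddColumn(Source, "Path", each GetPath(Record.Field(_, ChildKey))),
    // create only as many level columns as the longest path requires
    MaxLevels  = List.Max(List.Transform(AddPath[Path], List.Count)),
    LevelNames = List.Transform({1..MaxLevels}, each "Level" & Text.From(_)),
    // turn each path into a record and expand it; shorter paths get nulls in the deeper levels
    ToRecords  = Table.TransformColumns(AddPath, {"Path",
                     each Record.FromList(_, List.FirstN(LevelNames, List.Count(_)))}),
    Expanded   = Table.ExpandRecordColumn(ToRecords, "Path", LevelNames)
in
    Expanded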

Read more

Conditions in FirstN, LastN and other xN-functions in M, PowerBI and Power Query

Today I discovered that we can use conditions in many of the N-selecting functions where one would normally expect just a number expression for the N:

Table.RemoveFirstN( table as table, optional countOrCondition as any)

So apart from being able to select a certain number of rows to be removed, we can pass a condition (as a function). This condition will be checked iteratively for every row in the table (from the top or the bottom), and as long as every (next) row returns true, the resulting range will be removed. So as soon as one row breaks the condition, the process stops.

I find that totally awesome, as we can now, for example, remove all top rows that have an empty field in Column3 like this:

Table.RemoveFirstN(<MyTable>, each (_[Column3] = null or _[Column3] = ""))

Yes, this will remove only the first sequence of consecutive nulls in the table. So all other rows with nulls that come later, after a non-null value has “broken in”, will remain.
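A tiny made-up example shows this behaviour:

// Made-up example: only the leading block of null/empty rows is removed,
// the null that appears after "A" stays in the result.
let
    Source  = Table.FromRecords({
                  [Column3 = null], [Column3 = ""], [Column3 = "A"],
                  [Column3 = null], [Column3 = "B"] }),
    Cleaned = Table.RemoveFirstN(Source, each _[Column3] = null or _[Column3] = "")
in
    Cleaned   // returns the rows "A", null, "B"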

This is the list of functions where you can use this M-agic:

Enjoy & stay queryious 🙂

How to store tables longer than 1.1 million rows in an Excel sheet using Power Query and JSON

If you are working with Power Query, you might come into a situation where you would like to make tables that are longer than the 1.1 million rows that fit into an Excel sheet accessible from outside that file. You can do that using the JSON format and compression: you create a table in Excel that contains a JSON text string, which can then be read and decoded again by Excel (or Power BI), using Power Query (or Get & Transform in Excel 2016).

You can download the file, in which a table with 2 million rows is compressed into a table with just 229 rows, here: JSON_BreakRefreshChain4.xlsx

Simple idea:

Convert your table into JSON using the Json.FromValue function. If we could export that JSON file from here, we would be done already. But I haven’t figured out yet if/how one could export or copy it from within the query editor (apart from manual copy & paste, but that also has a limit of a good 1 million characters here). So if you have any idea about this, please share.
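To make the round trip concrete, it could be sketched like this (a minimal demo only; the download file additionally splits the resulting string into chunks so that it fits into the cells of an Excel sheet):

// Minimal demo of the round trip – the download file additionally splits the
// encoded text into chunks so that it fits into Excel cells.
let
    // a small demo table
    Demo    = Table.FromRecords({[A = 1, B = "x"], [A = 2, B = "y"]}),
    // table -> JSON -> compressed binary -> Base64 text (this is what goes into the sheet)
    Encoded = Binary.ToText(Binary.Compress(Json.FromValue(Demo), Compression.Deflate),
                            BinaryEncoding.Base64),
    // and the way back: Base64 text -> decompressed binary -> JSON -> table
    Decoded = Table.FromRecords(
                  Json.Document(
                      Binary.Decompress(Binary.FromText(Encoded, BinaryEncoding.Base64),
                                        Compression.Deflate)))
in
    Decoded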

Workaround:

Read more

Import Measures from different Tabular Models using DAX Editor

Marco Russo has created a great tool for SSAS Tabular that lets you edit measure definitions (you should read about it here first if you haven’t done so yet).

In this article I’ll show you how you can use it to import multiple measures from different tabular models into your current model.

The way the DAX Editor works is that it exports the existing measures from your model as a text file and imports them back after you’ve done your transformations. My technique will automatically add the measures from the other model to the existing measures, so that both can be loaded back into your current model. In addition to that, you get a UI with a process that guides you through the necessary steps that come with a task like this, which are:

  • select only those measures that you actually need
  • check references to existing measures and columns from the import model and manage their handling
  • allocate the tables into which these measures are going to be imported

This will all be done in a Power Query-powered Excel workbook that you can download here:   DaxEditorEditor_Final2.xlsm

These are the steps:

Read more

How to tame Case Sensitivity in Power Query and PowerBI

Have you heard of “Comparer.OrdinalIgnoreCase”?

If the answer is “No” (or something similar), you will be delighted to hear that this is your key to disabling case sensitivity in comparison operations.

So if you want to check if a substring like “car” is contained somewhere, you can write:

Text.Contains(MyText, "car", Comparer.OrdinalIgnoreCase)

and the expression will return true for all casings of “car”, like CAR, Car, caR…
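The same optional comparer argument is accepted by several other text and list functions, for example (illustrative snippets):

// A few more places where Comparer.OrdinalIgnoreCase can be plugged in:
Text.StartsWith("CARPOOL", "car", Comparer.OrdinalIgnoreCase)      // true
Text.EndsWith("Racecar", "CAR", Comparer.OrdinalIgnoreCase)        // true
List.Contains({"CAR", "Bike"}, "car", Comparer.OrdinalIgnoreCase)  // true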

The official documentation looks like this:

Read more

Use R to export data from Power BI

Edit 03 Aug 2016: With the July 2016 release you can now run your R scripts directly in the query editor (so no row limit any more!). No need to go via the visuals. But this limits you to exporting datasets from the query editor, so no DAX.

Edit 22 Jan 2016: Currently this method is limited to 150k rows! It will drop all others without warning!

Edit 25 Jan 2016: 150k rows is the limit on all R-scripts at the moment (link). A warning sign will be shown on the image. Will create a new blogpost if this changes – so stay tuned (by following this blog, Twitter or LinkedIn)

With the December release, Microsoft enabled Power BI’s content to be fetched from within the R feature. So instead of plotting this data to the screen, could we just export it to a file?

Yes we can:

write.table(trim(dataset), file="your filepath & filename.txt", sep = "\t", row.names = FALSE);

write.table has a lot of options, which are explained here. I’ve only used a few here: exactly defining the location where the output file shall be stored (file=), setting tab as the delimiter (sep="\t") and skipping row names, which would be an index in our example here.

In addition to that, I had to get rid of trailing whitespaces that somehow sneaked into the table (trim(dataset)). For this, the gdata package is needed. So the full R code looks like this:

# gdata provides trim(), which removes the trailing whitespaces
require(gdata)
# export the dataset as a tab-delimited text file without row names
write.table(trim(dataset), file="your filepath & filename.txt", sep = "\t", row.names = FALSE)
# plot the dataset as well, so the R visual still has something to display
plot(dataset);

Here you have to make sure to flip the slashes in the file path: so if your path looks like this: C:\Users\Imke\Desktop\Rfile.txt, you need to write it like this: C:/Users/Imke/Desktop/Rfile.txt

Let’s take this as a start that will probably fulfill many users’ need to quickly export data from Power BI to simple text files. Further use cases I see are:

Read more

Visualize dependencies between your DAX measures: DAX-VizArt-Wizard

This DAX-VizArt-Wizard visualizes the dependencies between your DAX measures, shows the definitions of all related measures and shows the differences between the measures of 2 models/versions. It works for Power Pivot, Power BI and Analysis Services Tabular (SSAS).

In the Power BI version you’ll see them in a Sankey chart like this:

VizArt1

If you select measures/nodes, all direct connections will be highlighted:

VizArt2Direct

In the second version, all indirect connections will be highlighted as well & the selected measure definitions will be shown.

Read more