How Power Query can return clickable hyperlinks with friendly names to Excel

When you use Power Query as an Excel automation tool rather than just to feed the data model, you might want to return clickable hyperlinks that carry friendly names. This doesn’t work out of the box, but a little tweak makes it work:

The trick

Return a text string that contains the Excel (!) formula for hyperlinks, preceded by an apostrophe ( ' ). After the data has been loaded to the sheet, select the column and replace '= with = to activate your Excel formula:
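As a rough sketch in M (the column names Url and FriendlyName and the sample row are made up for illustration), the text string could be built like this:

let
    // made-up sample data; in practice Url and FriendlyName come from your own query
    Source = Table.FromRecords({
        [Url = "https://www.thebiccountant.com", FriendlyName = "The BIccountant"]
    }),
    // build the Excel HYPERLINK formula as text, preceded by an apostrophe
    AddLink = Table.AddColumn(Source, "Link",
        each "'=HYPERLINK(""" & [Url] & """,""" & [FriendlyName] & """)",
        type text)
in
    AddLink

After loading, replacing '= with = in that column turns the text into a live HYPERLINK formula.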

Activate the HYPERLINK formula by replacing ‘= with =

You can then format the column to “Hyperlink”:

Read more

How to open a complex JSON record in Power BI and Power Query

Today I’ll show you a very useful technique for dealing with a JSON record that contains a wild mixture of different elements like this:

If you click on one of the expandable elements, its content will be shown, but you’ll lose all the “surrounding” information (metadata) that is visible now. This is often an issue, regardless of whether you want to create multiple tables from it to build a star schema or just need a handful of fields or a denormalized table. But with a little help from M, you’re good to go:

Table.FromRecords( { MyJsonRecord } )

will return this:

With this move, every expansion of one of the expandable elements will keep the existing data in place:

Create one big flat table

Simply expand one element after another to create a denormalized table.

Create star schema

For multiple tables, keep this query and reference it to create your (sub-)tables. Always keep the Id-column as the key (!) so you can combine all the tables in your data model later (provided you use this in a function for multiple entities/series).

The best way to get a feel for it is to play with it, so just paste some code into the advanced editor:
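As a minimal, made-up example (the record and its fields are purely illustrative), something like this will do:

let
    // stand-in for a JSON record with a mixture of scalar fields and nested elements
    MyJsonRecord = [
        Id = 1,
        Name = "Sample",
        Details = [Colour = "blue", Size = "L"],
        Items = {[Product = "A", Qty = 2], [Product = "B", Qty = 5]}
    ],
    // wrap the record in a list and turn it into a one-row table,
    // so the scalar fields stay in place while you expand the nested elements
    Result = Table.FromRecords({ MyJsonRecord })
in
    Result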

 

If your JSON-record has a different structure with “just” header and data in different fields, this technique will be more suitable for you: http://www.thebiccountant.com/2016/04/23/universal-json-opener-for-quandl/

Enjoy & stay queryious 🙂

How to create and use an R-function-library in Power BI

Edit 10-10-2017: There is also a (simpler) way to run a custom function library described here: http://www.thebiccountant.com/2017/10/06/create-a-function-library-in-power-bi-using-m-extensions/ . If you go that route, the only point of interest in the article might be how to create your function library automatically.

Once you’ve discovered the huge potential R gives you to expand your analytical toolbox in Power BI (check some tips & tricks in my previous blogpost if you haven’t already), you might wish to have all your awesome functions conveniently at hand when designing new solutions. And thanks to M, there’s actually nothing easier than that: R-function-library in a record (which works just the same for M-functions 🙂 )

Put your functions into a record (fnr), with the function name as the field name and the function itself as the value: one query to hold them all (without cluttering your editor pane), ready to use as if they were native functions:

R-function-library

Use

This call will export the content of my query “Actuals” to a csv-file on my desktop.

  1. fnr is the name of the record. You can of course give it your own name; I prefer to keep it as short as possible.
  2. The square brackets contain the name of the function (the record field name).
  3. The ordinary brackets contain the function arguments, just like in standard M (the record value); see the sketch after this list.
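As an illustration (the function names, the R script and the file path are made up), such a record and a call against it could look like this:

let
    // the record acts as the function library: field name = function name, field value = the function
    fnr = [
        DoubleIt = (x as number) => x * 2,
        ExportToCsv = (tbl as table, path as text) =>
            R.Execute("write.csv(dataset, file = """ & path & """, row.names = FALSE)", [dataset = tbl])
    ],
    // call a library function as if it were native, e.g.:
    // fnr[ExportToCsv](Actuals, "C:/Users/me/Desktop/actuals.csv")
    Result = fnr[DoubleIt](21)
in
    Result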

Create record

Read more

How to import from Excel with cell coordinates in Power Query and Power BI

There might be occasions where you want to import data from Excel into Power Query or Power BI using cell coordinates like a range from E3 until G9 for example (“A1 cell reference style”). The function I provide below also caters for the potential pitfalls of this task that Maxim Zelensky has described in his article.

Background

If your worksheet has one leading empty row and column, the import will ignore them and automatically return the range starting from B2. So to fetch the range E3:G9 you have to delete the first row and the first 3 columns. But as Maxim found out, leftover formatting on empty cells leads to the import of additional empty rows and columns. So the number of rows and columns to delete varies and is hard or even impossible to predict.

Method

The range that Power Query or Power BI will import is already stored in the sheet data of the Excel file, and the XML looks like this (“Sample3” from Maxim’s data):

The imported range is E1 to J12, as the first rows contain formatting instructions and will therefore be imported as well. In the 3rd row, E3 shows up with the first value, which is surrounded by “<v>”.

This is how it looks in the Xml.Table in the query editor:

The task is to calculate the number of rows and columns to delete, taking into account the individual offset caused by the formatted empty cells.
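To illustrate the idea with a very simplified sketch (file path, sheet name and the hard-coded coordinates are placeholders; the downloadable function below derives the offsets from the sheet XML automatically), the trimming itself is just coordinate arithmetic:

let
    // hypothetical import that came in starting at E1 instead of the wanted E3
    Imported = Excel.Workbook(File.Contents("C:\Temp\Sample.xlsx")){[Item = "Sample3", Kind = "Sheet"]}[Data],
    // wanted range: E3:G9; actually imported top-left cell: E1
    RowsToSkip = 3 - 1,              // wanted first row minus imported first row
    ColumnsToSkip = 5 - 5,           // wanted first column E (5) minus imported first column E (5)
    RowsToKeep = 9 - 3 + 1,          // rows 3..9 = 7 rows
    ColumnsToKeep = 7 - 5 + 1,       // columns E..G = 3 columns
    TrimmedRows = Table.Range(Imported, RowsToSkip, RowsToKeep),
    KeptColumnNames = List.Range(Table.ColumnNames(TrimmedRows), ColumnsToSkip, ColumnsToKeep),
    Result = Table.SelectColumns(TrimmedRows, KeptColumnNames)
in
    Result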

Code

So I’ve cooked these ingredients together into a pretty massive piece of code that you can download here: fnImportFromExcelCellCoordinates.txt

How to use the function

Read more

Should we pipe M?

“Just because you could doesn’t mean you should”… So I’m asking the Power Query and M fans & experts here if we “should” pipe M:

Background: With M you can nest your expressions like in Excel to group commands that belong together. But this has some disadvantages like:

  • Reading:
    • the execution order of the functions doesn’t match the reading order
    • the function name and the arguments are torn apart
  • Writing:
    • if you wrap an additional function around the existing expression and it then fails, it is very laborious to manually delete all the code to get back to the previous state (especially if you have trailing function arguments)
    • if you later reckon that you need an intermediate step of the nested expression and have to split up the statement, the same problems occur

So instead of this code:

we could write it like so:

This code works in M if you have a record (“M”) in your queries that contains Kim Burgess’ cool code and an additional record (“M”) that he has kindly helped me with:
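As a rough stand-in for the idea (this is not the actual record from the download, just a minimal pipe built with List.Accumulate), the nested and piped styles compare like this:

let
    txt = "  hello world  ",
    // nested style: the execution order (Replace -> Trim -> Upper) runs opposite to the reading order
    Nested = Text.Upper(Text.Trim(Text.Replace(txt, "world", "M"))),
    // piped style: fold a list of functions over the start value
    Pipe = (start as any, functions as list) =>
        List.Accumulate(functions, start, (state, current) => current(state)),
    Piped = Pipe(txt, {
        each Text.Replace(_, "world", "M"),
        Text.Trim,
        Text.Upper
    })
in
    [Nested = Nested, Piped = Piped]   // both return "HELLO M"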

All that still folds!

Honestly, it doesn’t look pretty in my eyes (yet), but it works and eliminates the disadvantages mentioned above. With some help from Expression.Evaluate, we could clean it up further to match the magrittr style, for example, but I’ve been warned against using this function, so I’m not sure what to prefer in the end.

Please let me know your thoughts & stay queryious 😉

File to download: ShouldWePipeM.zip

New M-function: Table.TransformColumnTypesToFirstRowsTypes for PowerBI and PowerQuery

The following function automatically transforms all columns to the types detected in the cells of the table’s first row, provided they come as number, date or text (but you can add further type conversions if you need them).

It also has some rough edges: if the first value is empty, the column will be converted to text. Also, it contains the (improved) logic from this article: so if a date is written in a way that it could also be a number, it will be converted to a number. To minimize the room for errors here, I’ve converted the values to text first, but this is still something to watch out for. In very many cases, though, it will just do what you have long been looking for:

Use cases:

  • You don’t want to use the automatic but static/hard-coded type conversion in the 2nd step (because you know your table is going to have more columns in the future and you want to cater for proper type conversion of them as well)
  • You’ve lost your column types due to some other command (like Table.ReplaceValue)

M-Code

Code to download: TableColumnTypesToFirstRowsTypes.txt
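For readers who just want the gist, a simplified sketch of the idea could look like the following (it lacks the text-conversion safeguard described above, so use the downloadable version for real work):

(t as table) as table =>
let
    FirstRow = Table.First(t),
    // detect the type of a single value; extend with more cases if you need them
    DetectType = (value as any) as type =>
        if value is number then type number
        else if value is date then type date
        else type text,
    Transformations = List.Transform(
        Table.ColumnNames(t),
        (name) => {name, DetectType(Record.Field(FirstRow, name))}),
    Result = Table.TransformColumnTypes(t, Transformations)
in
    Result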

 

Enjoy & stay queryious 😉

How to expand a column that cannot be expanded in Power BI and Power Query in Excel

Especially when working with JSON data, you might end up with a column that has elements of mixed types in it. The expand-column arrows will be missing, but some elements still need to be expanded, like here:

But there is an easy way to fix it:

Transform to expandable column

Table.TransformColumns(Source, {{"Column1", each if Value.Is(_, type list) then _ else {_} }} )

It transforms “Column1” of table “Source” by checking whether the content of each row ( _ ) is of type list: if yes, it keeps that value ( _ ); if not, it transforms it into a list (by wrapping it in curly brackets: {_} ).
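A minimal self-contained example (the sample table is made up) shows the effect:

let
    // Column1 holds a mixture of scalar values and lists, so no expand arrow is offered
    Source = Table.FromRecords({
        [Column1 = 1],
        [Column1 = {2, 3}]
    }),
    // after the transformation every cell is a list and the expand arrow appears
    MadeExpandable = Table.TransformColumns(Source,
        {{"Column1", each if Value.Is(_, type list) then _ else {_} }})
in
    MadeExpandable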

Syntax for tables

Table.TransformColumns(Source, {{"Column1", each if Value.Is(_, type table) then _ else #table({"Column1"}, {{_}} ) }} )

Syntax for records

Table.TransformColumns(Source, {{"Column1", each if Value.Is(_, type record) then _ else [a=_] }} )

File for Subscribers to download: HTExpandColumnThatCannotBeExpanded.zip

This technique should be applied to columns where the expandable elements all have the same structure. If that’s not the case, you should use this technique instead.

Enjoy & stay queryious 😉

Table.TransformColumns – alternative in PowerBI and PowerQuery in Excel

This article shows a trick for a little problem that annoyed me for quite some time: how do you get Table.TransformColumns to transform the values of a column using a reference to a value (from the same row) in another column?

1 Replace text by a value from another column

So instead of adding a new column where the “*” is replaced by the value from column “WildcardValue”, I just want to perform the replacement operation in the original “Text” column, so that I don’t have to rename and delete the other columns later:

So far, I had always ended up fighting with the Table.TransformColumns function and got quite frustrated because I couldn’t find a way to reference the (row) value of the other column. As it turns out, I was fighting the wrong target here, because Table.ReplaceValue is actually the saviour for this challenge:

1.1 Check column “Text” -> Transform -> Replace Values

So in (4) we just filled in a dummy-value, which we’re going to replace with a reference to the desired column: “each [WildcardValue]” like this:
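Put together, the adjusted step could look roughly like this (the sample table is made up):

let
    // made-up sample with a "*" placeholder in the Text column
    Source = Table.FromRecords({
        [Text = "Deliver * units", WildcardValue = "50"],
        [Text = "Charge * EUR", WildcardValue = "99"]
    }),
    // replace the dummy value with a per-row reference to the other column
    Replaced = Table.ReplaceValue(Source, "*", each [WildcardValue],
        Replacer.ReplaceText, {"Text"})
in
    Replaced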

Read more

SQL-query folding bug still alive and sucking in PowerBI and PowerQuery in Excel

Edit: There is currently no interest at Microsoft in changing this behaviour: http://community.powerbi.com/t5/Issues/Bug-Query-folding-not-working-with-non-SQL-datasource/idi-p/225100#M11611 .

I found that this workaround also works pretty well (and is faster in most cases), as long as your filter tables are not too long: http://community.powerbi.com/t5/Desktop/Parameterized-SQL-Query-with-query-folding/td-p/171503
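For illustration, that style of workaround could look roughly like this (server, database, table and column names are placeholders; only use it with trusted filter values, because the text is concatenated straight into the SQL statement):

let
    // hypothetical filter values coming from a non-SQL source (e.g. an Excel table)
    FilterKeys = {"A12", "B34", "C56"},
    // build a quoted, comma-separated list for the IN clause
    InList = Text.Combine(List.Transform(FilterKeys, each "'" & _ & "'"), ", "),
    // a single native query is sent to the server, so nothing has to be filtered locally
    Source = Sql.Database("myServer", "myDatabase",
        [Query = "SELECT * FROM dbo.MyFactTable WHERE MyKey IN (" & InList & ")"])
in
    Source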

Are your SQL queries that filter with a non-SQL-table slow in PowerBI and PowerQuery in Excel? Then this might be of interest for you:

Nearly 2 years ago, when I published my first blogpost about the bug that prevents query folding on SQL sources when they are filtered by non-SQL sources, Power BI was still so new that I didn’t even mention it in that post. I had enormous performance problems in some client work and spent a 2-digit number of hours figuring out the reason, with some good help from the TechNet forum.

Today I was notified that “this bug was deemed as not high-pri enough to warrant a fix”. So it’s alive and well and still sucking: in Excel as well as in Power BI. Considering how many people read my article, I found it worth mentioning again and providing an improved solution. It is a function that’s easy to implement and takes the names of the tables and their key columns as parameters:

Code to fix Query-Folding bug in PowerBi and PowerQuery


And the code to copy: FilterSQLTable.txt

Hope this prevents you from wasting precious time.

Non-linear Break-Even Analysis in PowerBI

A break-even analysis tells you at which value of the parameter in question your profit calculation turns positive (link). Here we need to sell at least 173 units at a given price of 20 before we’ve recovered all our costs:

If your variable costs are constant, you can solve it by this formula:

BreakEvenQuantity = Total Fixed Costs / (Unit Sale Price - Unit Variable Costs)
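For illustration (numbers made up): with total fixed costs of 9,000, a unit sale price of 20 and unit variable costs of 2, the break-even quantity is 9,000 / (20 - 2) = 500 units.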

You’ll find tons of examples of how to do this in Excel, like here.

Non-linear cost structure

But in real life, variable costs often depend on quantities, as you get discounts for purchasing large amounts. The following table shows a cost structure with fixed costs in row 1. The 3rd column “FixOrQty” indicates whether the cost item is fixed or dependent on the quantity (Qty). The 2nd row contains a variable cost that is constant at 2 for all quantities. Rows 3 & 4 show a variable cost of 8 for quantities up to 100; if you purchase more than 100, the cost drops to 2 for all additional quantities. Rows 5-7 have a similar structure, but with 3 quantity ranges:

Solve with goal-seek algorithm

Read more