Atlanta MS BI and Power BI Group Meeting on December 6th

Please join us online for the next Atlanta MS BI and Power BI Group meeting on Monday, December 6th, at 6:30 PM ET. To finish the year on a high note, the famous Guy in a Cube duo (Patrick and Adam) will tell us how to keep the data fresh in Power BI. And your humble correspondent will update you on the latest in Power BI. For more details and to sign up, visit our group page.

Presentation: Keeping your Data Fresh in Power BI
Date: December 6th
Time: 6:30 – 8:30 PM ET
Place: Click here to join the meeting
Overview: We all want our data refresh to happen quickly so the most current data is available for our reports. In this session, we will walk you through the options for configuring data refresh, but more importantly, we will help with performance. We’ll look at how to identify bottlenecks and then how to optimize at different points to get the most out of your Power BI refresh.
Speaker: Patrick LeBlanc is currently a Principal Program Manager at Microsoft and a contributing partner to Guy in a Cube. Along with his 15+ years of experience in IT, he holds a Master of Science degree from Louisiana State University. He is the author and co-author of five SQL Server books. Prior to joining Microsoft, he was awarded the Microsoft MVP award for his contributions to the community. Patrick is a regular speaker at many SQL Server conferences and community events.

Adam Saxton is just a guy in a cube doing the work! He is on the Power BI CAT team at Microsoft, working with customers to help them adopt Power BI. He is based in Texas and started with Microsoft supporting SQL Server connectivity and Reporting Services in 2005.

Prototypes without Pizza: Power BI Latest


Power BI Bookmark Navigator – A Better Hack

As a report author, you are constantly pressed to fit more visuals into a single page. The November release of Power BI Desktop introduced the Power BI Bookmark Navigator, which simplifies the process of creating a tabbed interface, such as this one.

Since Power BI doesn’t support visual containers or a “menu” visual, you must resort to the awful hack of hiding and showing UX elements by bookmarking them. This reminds me of the beginning of my career as a developer, when we didn’t have widgets and had to hack our way through implementing a custom navigation “experience” by toggling visibility. Alas, this continues in the 21st century, but at least the hack got simplified. To implement the tabbed interface:

  1. Add two (or more) overlapping visuals.
  2. Add two bookmarks (Bar Chart and Column Chart) that show and hide the appropriate visual. Don’t worry about hidden visuals impacting the report performance because Power BI doesn’t process them.
  3. Add the two bookmarks to a Tabbed Interface bookmark group.
  4. In Report View, go to the Insert ribbon, expand the Buttons menu, and then click Navigators, “Bookmark navigator”.

Currently, Power BI supports two navigators. The “Page navigator” adds a tabbed navigation menu with a tab for each report page to let the user navigate to a given page by clicking the corresponding tab. The navigator that will inspire more interest is the “Bookmark navigator”.

  1. Notice that by default the navigator adds a tab for each bookmark defined in the report, but in this case, you just need to restrict it to the two bookmarks that you previously created. With the navigator selected, expand the Bookmarks section in the “Format navigator” pane, and select the “Tabbed Interface” bookmark group.
  2. Position the navigator above the two visuals. Remember that in Power BI Desktop, you need to press Ctrl when you click the navigator tabs to switch between the visuals.

Limitations and bugs:

  1. The previously selected tab gets stuck in a highlighted state, so you must hover on it to make it appear “unselected”.
  2. Hierarchical navigation is not supported. For example, you might want to build a page navigation experience like in Power BI apps. However, you can’t define a hierarchy, such as to start the user at the bookmark group level and then drill down to bookmarks.
  3. Although you can somewhat customize the tab appearance, it will probably not impress a UX designer. For example, one feature that could be useful to free up more page real estate would be the ability to toggle the navigator’s visibility.

“Serverless” Lessons Learned

I’ve architected and am currently implementing a solution that uses Synapse (my last newsletter has the details, plus the architecture diagram). Synapse Serverless is the Microsoft answer to Amazon Athena, but instead of using open-source tools like Presto, it’s built on SQL Server. In this project, we extract many tables from 1,500 on-prem SQL Server databases and stage them in ADLS.

From there we use Synapse Serverless to virtualize these files as tables that we query with T-SQL to load the source “table” data into a data warehouse hosted in Synapse SQL Pool. I have to tell you that I’m becoming a “serverless” fan.
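To make this more concrete, here is a minimal sketch of how such a virtual table can be defined (the storage account, container, schema, and column names are hypothetical): a view in the Serverless database wraps OPENROWSET over the staged parquet files, and the ETL then queries it with plain T-SQL like a regular table.

-- A minimal sketch with hypothetical storage and object names:
-- virtualize the staged parquet files for one source table as a view in Synapse Serverless
CREATE VIEW staging.Customer
AS
SELECT *
FROM OPENROWSET(
        BULK 'https://mystorage.dfs.core.windows.net/staging/Customer/*.parquet',
        FORMAT = 'PARQUET'
    ) AS src;
GO

-- Downstream ETL then queries the virtual table like any other table
SELECT COUNT(*) FROM staging.Customer;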

Here are a few lessons learned from this project:

  1. Save the files in parquet format in ADLS – Parquet is compressed and columnar-based, so it’s much faster to query. Serverless automatically creates statistics for parquet files on the first query and each time it detects changes.
  2. Fewer files result in better ETL performance – We compared the results of querying a virtual table based on 1,500 files (one file per database) vs. a single file that combines the data from all databases for that table, produced by sending a T-SQL SELECT…UNION ALL SELECT query (see the sketch after this list). The single file outperforms the many files by far. First, the ETL process is much faster because ADF doesn’t have to queue a copy activity per file. Even if each file is small and takes only a few seconds to copy over, the time quickly adds up, so you might find that you have to scale up your ADF self-hosted runtime and increase the parallelism of the ADF loops. For example, uploading all these files took an hour vs. 40 seconds for a single file.
  3. Fewer files result in better query performance – We observed similar results when querying a virtual table in Synapse Serverless. When the table was virtualized on top of many files, it took about 15 seconds just to count the rows in the table and even longer to execute a query with a single WHERE clause. By contrast, a virtual table on top of a single file was almost instantaneous.
  4. Don’t be afraid of schema differences – Chances are that different databases have slightly different schemas, such as data type mismatches or extra columns in some tables. A great feature of Synapse Serverless is that the columns of the virtual table are the superset of all possible columns in the source files. If a file doesn’t include a column, an empty column is returned.
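To illustrate the consolidated extract from the second lesson, here is a sketch of the SELECT…UNION ALL SELECT query that produces a single file per table instead of one file per database. The database and column names are hypothetical, and in practice the statement is generated dynamically for all 1,500 databases.

-- Hypothetical database and column names; generated dynamically for all source databases
SELECT 'TenantDB001' AS SourceDatabase, CustomerID, FirstName, LastName
FROM TenantDB001.dbo.Customer
UNION ALL
SELECT 'TenantDB002', CustomerID, FirstName, LastName
FROM TenantDB002.dbo.Customer
-- ...one SELECT per remaining database
;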

To make my joy complete, I hope that at some point Microsoft will support native integration between SQL Pool and Serverless so we don’t have to copy the data over. Although both are SQL Server-based, SQL Pool and Serverless are currently two separate sources. In our case, we had to use ADF to extract data from Synapse Serverless and stage it in the SQL Pool before the final transformation to the data warehouse.

Atlanta MS BI and Power BI Group Meeting on November 1st

Please join us online for the next Atlanta MS BI and Power BI Group meeting on Monday, November 1st, at 6:30 PM ET. Sandeep Pawar will explain how to use the Power BI AI visuals for predictive insights. And your humble correspondent will show you how to use the Power BI REST APIs. For more details and to sign up, visit our group page.

Presentation: Demystifying Power BI AI Visuals
Date: November 1st
Time: 6:30 – 8:30 PM ET
Place: Click here to join the meeting
Overview: Power BI has several powerful AI visuals that allow business analysts to create insightful reports with predictive capabilities without writing any code. In this session, we will take a deeper look at these visuals, discuss exactly how they work, when and how to use them effectively, and, importantly, when not to use them. We will look at the algorithms driving them and understand how to use them in your reports. We will cover forecasting, the key influencers visual, clustering, the decomposition tree, and the anomaly detector in detail. We will also look at how to validate the outputs of these visuals.
Speaker: Sandeep Pawar is a data science professional. He currently works as a Data Analytics Engineer at Cree Lighting in Wisconsin. He has experience creating data analytics solutions using BI and ML tools.


PolyBase Adventures

I’m setting up SQL Server 2019 PolyBase for ODBC-to-JDBC access to a vendor data lake to virtualize entities as SQL tables. Overall, a smooth experience with a few gotchas:

Data type mappings

The vendor lake uses the Oracle data types TIMESTAMP AT TIME ZONE and BOOLEAN, which Java doesn’t know how to map. The solution was to set up a view in the data lake (luckily, the vendor supports that) to cast these data types to NVARCHAR and INTEGER.
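For reference, here is a minimal sketch of the PolyBase setup with hypothetical names, locations, and driver details; the external table targets the vendor-side view that performs the casts.

-- Hypothetical names and connection details; a sketch of the PolyBase ODBC setup
CREATE DATABASE SCOPED CREDENTIAL LakeCredential
WITH IDENTITY = 'lake_user', SECRET = '<password>';  -- requires a database master key

CREATE EXTERNAL DATA SOURCE VendorLake
WITH (
    LOCATION = 'odbc://lake.vendor.com:443',
    CONNECTION_OPTIONS = 'Driver={Vendor ODBC Driver}',
    PUSHDOWN = ON,
    CREDENTIAL = LakeCredential
);

-- The external table points to the vendor view that casts
-- TIMESTAMP AT TIME ZONE and BOOLEAN to NVARCHAR and INTEGER
CREATE EXTERNAL TABLE dbo.SalesOrder
(
    OrderID      INT,
    OrderDateUtc NVARCHAR(50),
    IsClosed     INT
)
WITH (
    LOCATION = 'LAKE.V_SALESORDER',
    DATA_SOURCE = VendorLake
);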

NullPointerException

Once the table was finally set up, what did we get when querying it?

105082;Generic ODBC error: java.lang.NullPointerException .

How do we fix this horrible issue? Upgrade SQL Server and PolyBase to the latest cumulative update (CU).

The final mystery that I haven’t been able to crack yet is that, for some obscure reason, PolyBase adds quite a bit of performance overhead to the query execution. So, if a query takes eight seconds in DBeaver connected directly to the lake (or in Power BI Desktop connected directly to the ODBC driver), PolyBase stretches it to a minute. Examining the DMVs shows that the actual remote query executes in line with DBeaver, but there is some additional overhead from PolyBase that would require a support case with Microsoft to explain.
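If you want to chase the overhead yourself, here is a sketch of the kind of PolyBase DMV queries that show where the time goes (the execution id is hypothetical; take it from the first query):

-- Recent PolyBase (distributed) requests and their total elapsed time
SELECT execution_id, status, start_time, end_time, total_elapsed_time
FROM sys.dm_exec_distributed_requests
ORDER BY end_time DESC;

-- Per-step breakdown for one request; in our case the remote query step ran
-- in line with DBeaver, so the extra time was spent in the other PolyBase steps
SELECT step_index, operation_type, status, total_elapsed_time, row_count
FROM sys.dm_exec_distributed_request_steps
WHERE execution_id = 'QID1234'  -- hypothetical id taken from the query above
ORDER BY step_index;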

Atlanta MS BI and Power BI Group Meeting on October 4th

Please join us online for the next Atlanta MS BI and Power BI Group meeting today (Monday, October 4th), at 6:30 PM ET. Reza Rad (RADACAD) will share best practices on semantic modelling with Power BI. And your humble correspondent will show you two new features: Get Insights and Power BI Goals. For more details and to sign up, visit our group page.

Presentation: Power BI Modeling 101
Date: October 4th
Time: 6:30 – 8:30 PM ET
Place: Click here to join the meeting
Overview: Getting started with a report in Power BI is easy. However, soon you will face the challenges of having multiple tables, relationships between tables, the direction of relationships, active or inactive relationships, and so on. You will also soon realize that you need a proper data model, called a star schema, which is a combination of fact and dimension tables. But wait a second, you never learned all these fundamentals. What should you do? This session is built exactly for you, to help you understand the fundamentals of Power BI modelling and start from a good foundation.
Speaker: Reza Rad is a Microsoft Regional Director, an author, trainer, speaker, and consultant. He has a BSc in Computer Engineering and more than 20 years of experience in data analysis, BI, databases, programming, and development, mostly on Microsoft technologies. He has been a Microsoft Data Platform MVP for 11 continuous years (from 2011 till now) for his dedication to Microsoft BI. Reza is an active blogger and co-founder of RADACAD. Reza is also a co-founder and co-organizer of the Difinity conference in New Zealand and the Power BI Summit (the biggest Power BI conference).


Drillthrough Paginated Reports

I’m helping a client convert a few SSRS reports from SharePoint to Power BI Premium Per User (PPU). SSRS is of course near and dear to my heart because of all the work I’ve done around it circa 2004-2010 (yep, it’s been that long): books, MVP awards, etc. Since its humble beginnings, SSRS has had a solid architecture that excelled in extensibility. You’d be hard-pressed to face a requirement that you couldn’t meet with SSRS back then.

Unfortunately, most of these extensibility features, such as custom assemblies, custom security, custom delivery extensions, and custom renderers (essentially everything related to custom code), didn’t make it to paginated reports in Power BI Premium. Not many companies use these features, so they probably won’t be a showstopper for your migration. To its credit, Microsoft is closing the gap between SSRS and paginated reports. As of now, the feature limitations that you might run into are:

Missing feature – Workaround
Shared data sources/datasets – Report-specific (embedded) data sources/datasets (yep, a maintenance nightmare)
Drillthrough report actions – Change to a URL action and provide the URL to the drillthrough report, passing filters on the URL
Document map – No workaround

Instead of drillthrough report actions, implement URL-based actions. Assuming you use “embed for your organization” (secure embed), here are the high-level steps:

  1. Deploy the drillthrough report to Power BI. Again, you must deploy to a Premium or PPU workspace. Run the report to make sure it executes successfully. Go to the File, Embed menu and copy the iframe code (assuming you want to embed the report in your company’s portal). The iframe code should look like this:
    <iframe width="800" height="600" src="https://app.powerbi.com/rdlEmbed?reportId=b82ed928-d9d9-42b5-b6cc-a6e8f3e9dc4d&autoAuth=true&ctid=<your tenant id>" frameborder="0" allowFullScreen="true"></iframe>
  2. Open the main report in Visual Studio or Power BI Report Builder. In the report properties, Code tab, define a public constant that holds the URL of the drillthrough report(s). Alternatively, you can define internal parameters.

  3. Now find all instances of drillthrough actions in the main report. If there are many, it might be easier to open the report RDL in Visual Studio and search for <Drillthrough>. For each textbox with a drillthrough action, replace the Drillthrough section in the <ActionInfo> element with a Hyperlink, such as:
<ActionInfo>
  <Actions>
    <Action>
      <Hyperlink>=String.Format("{0}&amp;rp:ProfileID={1}&amp;rp:Month1={2}&amp;rp:Month2={3}", Code.reportMigrationDrillthroughUri, Parameters!ProfileID.Value, Parameters!Month1.Value, Parameters!Month2.Value)</Hyperlink>
    </Action>
  </Actions>
</ActionInfo>
  4. In the example above, the positional string format replaces the {0} placeholder with the drillthrough report constant you defined earlier. Then, you provide values for the drillthrough report parameters (in this case, the drillthrough report takes three parameters).
  5. Test the main report and the drillthrough links. If all is well, publish the main report to Power BI. Obtain its embedded iframe code and add it to your app page.

Use the secure embed iframe URL for the main report so that it’s embedded in the app instead of opening in a new browser tab. Unfortunately, the drillthrough reports will open in a separate tab, and I couldn’t find a workaround to render them inside the iframe of the main report.

Chasing SSAS Connection Timeouts

Suppose you have a Tabular model and you send it a massive DAX query that could run for hours, such as calculating many measures (in our case, hundreds) for each customer overnight so that you can cache the results and delight the user with super-fast lookups. This issue could also apply to Multidimensional, although in our case Tabular was used. The server sporadically times out the query after a random execution time. You have changed all possible connection timeout options (SSAS ServerTimeout, SSIS connection timeout, etc.) to no avail. In fact, if you have scheduled an Agent job that calls an SSIS package that executes the query, the package doesn’t register the exception and continues executing indefinitely, but a Profiler trace (or XEvents session) shows that the server raises a Connection Timeout error.

How do we fix this horrible issue? Change these two undocumented settings in the MSMDSRV.INI file from 60,000 to 600,000:

<ServerSendTimeout>600000</ServerSendTimeout>

<ServerReceiveTimeout>600000</ServerReceiveTimeout>

Azure Data Factory is Getting Better All the Time

Three years ago, I wrote that it would probably take a decade for ADF to mature and close the gap with SSIS.

To its credit, though, while it’s still lagging in the area of extensibility, ADF has added features that we don’t have in SSIS, so I’m developing a taste for it:

  1. Schema drift – Suppose you want to automatically stage new columns as they are added to a source table. Or columns might be deleted from the source, but your ETL shouldn’t fail. You can’t do these things with SSIS, which is tightly coupled to the data source schema. ADF data flows, however, can handle this.
  2. Parallel loops – Want to loop through some tables but load them in parallel? The ADF ForEach loop can be parallelized up to 50 concurrent threads.
  3. Source partitioning – Let’s say you have a big source table and you want to speed up staging. You can configure the Copy Activity to automatically create multiple threads to load the table in parallel, such as by partition (if the source table is already partitioned) or by buckets based on a primary key.
  4. Scalability – As I mentioned in the blog above, this was the main driver for Microsoft to start from scratch with ADF. You’d be hard pressed to run out of resources with ADF, so it wins hands down against SSIS in this area.
  5. Managed VNET on the horizon – Currently in preview, you can ask your helpful System Services to set up a managed virtual network and thus avoid the need to set up self-hosted integration runtimes for accessing on-prem data sources. Make sure you install a Linux VM as the tutorial demonstrates, since we found out the hard way that Windows won’t work.

ADF is not without idiosyncrasies, of course. Why would data flows have a separate expression language and require a separate runtime? Speaking of data flows, I almost used them for their automatic file partitioning until I found that I don’t have control over the partition names to support both full and incremental loads. Long live the copy activity!

Atlanta MS BI and Power BI Group Meeting on August 2nd

Please join us online for the next Atlanta MS BI and Power BI Group meeting on Monday, August 2nd, at 6:30 PM ET. Avi Singh (LearnPowerBI.com) will discuss how you can achieve a successful Power BI career. For more details and to sign up, visit our group page.

Presentation: How to Create a Successful Power BI Career Without the Struggle (By “Niching Down”)
Date: August 2nd
Time: 6:30 – 8:30 PM ET
Place: Click here to join the meeting
Overview: Your Grandma was right! You cannot please everyone. And when you go out there and wave your flag as a Power BI Professional, that’s exactly what you are trying to do. And it doesn’t work. Either you get no results or you have to work really hard for every inch of progress. The problem is that most professionals either miss or mess up the first crucial step of their Power BI career – niching down! In this session we’ll talk about:

– Why Niching down is the crucial first step towards a successful Power BI Career

– How you can come up with the Niche that you should focus on

– Address some of the fears you may have about niching down (a classic one is: “Oh, but I don’t want to miss out on all the other opportunities out there by niching down”)

Speaker: If you are searching for a more meaningful work life as a Power BI Consultant ➔ where you do what you love, create an impact by helping others, and create a life of freedom for yourself ➔ then Avi is dedicated to helping you achieve those goals.

Hint: The classic wisdom of “Learning More” and “Working Hard” actually does NOT help you get to these goals. Avi leads a five-step program for creating successful Power BI Consultants, starting with the first step… you guessed it 😉 “Niching Down”.
