Synapse Serverless: The Good, The Bad, and The Ugly

When you provision a Synapse workspace, you get a serverless endpoint for free (or almost free). This endpoint represents Synapse Serverless: a query service for ad-hoc exploration of data in CSV, Parquet, and JSON files stored in Azure Data Lake.

The Good

Being able to query files using SQL is great. Some of you might remember the U-SQL language that was introduced a few years ago alongside Azure Data Lake Storage (ADLS) Gen 1, which is now deprecated. It never caught on because it looked like SQL but wasn’t (it was actually closer to C#). Now we’re talking about real SQL. To query files! This opens the possibility of implementing a logical warehouse (the emphasis is on logical because, to my knowledge, everyone who has tried to replace a data warehouse with a data lake has failed). Or, you can connect Power BI to the serverless endpoint and start querying all these files in DirectQuery mode, which enables real-time BI on top of file extracts.
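
For example, the logical warehouse can start as a set of serverless views over the files. Here is a minimal sketch, assuming a hypothetical folder of Parquet sales extracts in the same storage account (adjust the path, database, and columns to your data):

    -- Run in a user database on the serverless endpoint (views can't be created in master)
    CREATE VIEW dbo.Sales
    AS
    SELECT *
    FROM OPENROWSET(
            BULK 'https://dvdls01.dfs.core.windows.net/dvdlsfs01/sales/*.parquet',
            FORMAT = 'PARQUET'
        ) AS [result];

Power BI can then connect to the serverless endpoint and query such views in DirectQuery mode like any other SQL views.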

Behind the scenes, Synapse Serverless uses a distributed query engine to divide the work and process files in parallel. For more information, read the “POLARIS: The Distributed SQL Engine in Azure Synapse” whitepaper. I ran some limited performance testing and I’m impressed. I have yet to test joins between large tables (I’d love to hear from someone on this subject).

Pricing is based on the amount of data processed, but it’s only $5 per TB. So, export your data as *.parquet files, which are smaller and faster to query (TIP: you can use Azure Data Factory to output Parquet files). Furthermore, Serverless automatically updates statistics for Parquet files (CSV files require you to update statistics manually), which, together with proper partitioning, can improve performance for joins and filters because Serverless doesn’t have to read all the files.
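
For CSV files, a minimal sketch of creating statistics manually on a single column with the sp_create_openrowset_statistics procedure might look like this (the path and column reuse the sample query later in this post; adjust them to your files):

    -- Statistics are created one column at a time from an OPENROWSET query
    EXEC sys.sp_create_openrowset_statistics N'
        SELECT [Year]
        FROM OPENROWSET(
            BULK ''https://dvdls01.dfs.core.windows.net/dvdlsfs01/google/*.csv'',
            FORMAT = ''CSV'',
            PARSER_VERSION = ''2.0'',
            HEADER_ROW = TRUE
        ) WITH ([Year] int) AS [result]
    ';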

The Serverless feature that I like most, though, is the flexible schema.

Consider the following query, which defines the expected schema using the WITH clause. Will the query break if schema drift is detected, e.g. some of the files have additional columns or the column names don’t match? Nope. The query still executes successfully, and if a column specified in the WITH clause doesn’t exist in a file, it simply comes back empty. This opens the opportunity to do analytics on top of files with different schemas, which is something you can’t do with your trusted SQL Server!

SELECT
    COUNT([Month]), COUNT([Month1])
FROM
    OPENROWSET(
        BULK 'https://dvdls01.dfs.core.windows.net/dvdlsfs01/google/*.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        HEADER_ROW = TRUE
    )
    WITH (
        [Year] int,
        [Month] NVARCHAR(200),
        [Month1] NVARCHAR(200),
        [Day] int,
        [Ecommerce Conversion Rate] NVARCHAR(50),
        [New Users] int
    )
    AS [result]

The Bad

Currently, Serverless is limited to accessing data stored in ADLS only. I wish Microsoft would extend Serverless to other vendors, such as Amazon (S3 and Athena) or Google. Because Serverless is a PaaS offering, there is no option to plug in additional drivers. The same limitation applies to the Synapse SQL Pool, by the way. Adding ORC to the list of supported file formats would be a good addition, as would AVRO, which is generated by Event Hub Capture.

You can find more crowdsourced improvement ideas at the Synapse Forum.

The Ugly

Now that you have a logical data warehouse, a common requirement is to get that data and load it into a physical data warehouse hosted in the Synapse SQL Pool. Unfortunately, the current architecture keeps the two pools separate (think of them as two different SQL servers). Views saved in a Serverless database are not accessible by the SQL Pool. The SQL Pool doesn’t support OPENROWSET either, so the approach outlined here won’t work. If you follow the ELT pattern, you must use Azure Data Factory or another ETL tool to stage the data in the SQL Pool. I’d love to see the SQL Pool extended to support OPENROWSET, as Azure SQL Managed Instance does, to avoid yet another data movement and staging step.
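
In the meantime, if you stage Parquet extracts in the dedicated SQL Pool, the COPY statement can load them straight from the lake. Here is a minimal sketch, assuming a pre-created dbo.StageSales staging table, a hypothetical sales folder, and a managed identity that has access to the storage account:

    -- The target table must already exist with a schema matching the Parquet files
    COPY INTO dbo.StageSales
    FROM 'https://dvdls01.dfs.core.windows.net/dvdlsfs01/sales/*.parquet'
    WITH (
        FILE_TYPE = 'PARQUET',
        CREDENTIAL = (IDENTITY = 'Managed Identity')
    );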

Atlanta MS BI and Power BI Group Meeting on July 5th

Please join us online for the next Atlanta MS BI and Power BI Group meeting on Monday, July 5th, at 6:30 PM. Reid Havens (Consultant and MVP) will show us how to get the most out of Power BI bookmarks and buttons. And I’ll cover the latest Power BI enhancements. For more details and to sign up, visit our group page.

Presentation: Bedazzling Your Bookmarks & Buttons
Date: July 5th
Time: 6:30 – 8:30 PM ET
Place: Click here to join the meeting
Overview: Bookmarks and interactive buttons are two of the best ways to add web-like user experiences to Power BI reports. Bookmarks help create a more seamless filter and navigation experience. Buttons can be used with bookmarks, but can also be leveraged for a lot of other report functionality and navigation.
Speaker: Reid Havens is the founder of Havens Consulting Inc. and a Microsoft Most Valuable Professional (MVP). He has an extensive background in technology and organizational management and holds a Master’s Degree in Organizational Development and Business Analytics. He has experience as a consultant working with many Fortune 50, 100, and 500 companies. Additionally, he teaches Business Intelligence, reporting, and data visualization courses at the University of Washington and other universities.
Prototypes without pizza: Power BI Latest

Prologika Newsletter Summer 2021

Business Performance Management (BPM) is a methodology that helps a company predict its performance. An integral part of a BPM strategy is creating and monitoring a scorecard with Key Performance Indicators (KPIs). In this newsletter, I’ll discuss how the newly released Power BI Goals can help you augment your BPM strategy. But before that, I’d like to share my excitement that Microsoft has recently awarded me the FastTrack Recognized Solution Architect – Power BI distinction! I’m one of 33 individuals worldwide who were nominated by the Power Platform product engineering team for consistently exhibiting deep architecture expertise and creating high-quality solutions for customers during project engagements.

Introducing Power BI Goals

A vital BI practice for every organization, performance management ensures that important metrics, such as Key Performance Indicators (KPIs), meet established goals. The typical artifact for doing so is a scorecard: a report that compares the current state of these metrics with their desired state. You might have also heard the term “balanced scorecard”, which is an organization-wide scorecard that tracks several subject areas, such as Finance, Customer, and Operations. In the past, organizations would use different tools, such as the now deprecated PerformancePoint (included in SharePoint Server), to implement balanced scorecards. Realizing the importance of scorecards, Power BI introduced Goals, which aims to simplify the process of implementing departmental and organizational scorecards. For more information on how Goals works, watch the “Goals in Power BI” presentation from the Microsoft Business Application Summit.

What’s to like
As with anything Power BI, Microsoft has democratized scorecards so business users with no reporting experience can quickly assemble them from existing reports. Think of a goal as a line (or KPI) in the scorecard. Here is a scorecard with two goals:

Currently, Power BI supports two goal types:

  • Static – The goal creator manually enters and tracks the goal properties, such as current value, target value, and status. This could be useful for quick-and-dirty KPIs that are not backed by a data source, such as launching a new promotion campaign. In the scorecard above, I created the Revenue goal by entering 5M as the current value and 5.5M as the target.
  • Data-driven – The goal’s current value and/or target value can be data-driven and bound to metrics from existing report(s). Coming from Analysis Services, I was initially surprised that Power BI doesn’t require implementing KPI measures, but I get it: Microsoft decided to source the metrics from reports so business users can easily apply filters. If the goal owner chooses a metric from a visual that has a Date field, such as a time-series chart, Power BI automatically shows a sparkline for the goal’s progress over time. And, of course, when the report dataset is refreshed, the goal values are updated.

So, no modeling or Power BI Desktop is required, assuming someone else, such as a data analyst, has delivered functional reports with the metrics. Even better, the goal’s current and target values can come from different reports (even a report in a different workspace if you have permissions), e.g. a report with actuals and another report with targets. So, there is plenty of flexibility here. To mimic a balanced scorecard that spans multiple subject areas, the owner can create subgoals. For example, the main goal could be Finance with subgoals Revenue, Margin, etc. Because, like dashboard tiles, goals are “pinned” from reports, the end user can navigate to the underlying report to examine the data in more detail. Users can also add notes to explain the goal behavior to their teammates.

A scorecard is a first-class Power BI citizen, and as such, it can be secured, endorsed, tagged with sensitivity labels, annotated, and shared, such as sharing the scorecard to a Microsoft Teams channel. The scorecard data is saved in a Power BI dataset that users can connect to for building custom reports. Moreover, Power BI automatically adds daily snapshots to the dataset, allowing users to build up a history of the goals. For example, if the underlying report is refreshed daily, the updated goal values will be appended to the dataset. Developers can use the Power BI REST APIs to implement programmatic scorecard management solutions.

What’s not to like

Besides navigating to the underlying report, a goal is just a one-liner in the scorecard. I can’t define a goal that shows me a metric sliced by dimension members, such as business unit. Further, subgoals currently don’t aggregate, such as summing or averaging values when rolling up to the main goal. As with dashboards, there is no way to apply a global filter to the scorecard, e.g. to filter all goals for the prior month.

Besides the current and target values, no other goal properties can be data-driven. For example, unlike Analysis Services KPIs, the goal status can’t currently be bound to a DAX measure. Changing the status requires proactive manual “check-ins”, although Microsoft mentions a forthcoming feature that will let users define rules to change the status, similar to how you can define rules for dashboard tile alerts. Speaking of data-driven properties, I don’t understand why you must use a Date field to get the progress sparkline as opposed to any other field, such as Month, in your Date table.

Finally, and unfortunately, Goals requires Power BI Premium. If we really want to democratize features, shouldn’t we make them available in Pro?

Conclusion

Goals is a Power BI Premium feature aimed at making it easier to create scorecards and monitor metrics from existing reports. It promotes a “bottom-up” culture, where business users can create departmental scorecards without reliance on IT. Microsoft plans more features by the end of the year to make Goals more appealing, such as integration with Power Automate to trigger actions, rolling up subgoals, changing the goal tracking cycle (DoD, MoM, YoY), custom goal formatting, a Power BI Mobile experience optimized for phones, a scorecard visual, and cascaded goals (a hierarchy of goals).

If you find Power BI Goals somewhat inflexible or you don’t have the budget to upgrade to Premium, you don’t have to use the Goals feature to implement scorecards. You can define KPIs and create dashboard-looking reports where you have complete control over the scorecard presentation.


Teo Lachev
Prologika, LLC | Making Sense of Data
Microsoft Partner | Gold Data Analytics

Top 5 Reasons to Switch to Power BI Gen2

You should definitely switch your Power BI Premium capacities to Gen2, although, if you’re risk-averse, you might want to wait until it goes GA because Gen2 is currently in preview. I switched an enterprise client’s P2 node to Gen2 a few months ago, and here are the top benefits we observed:

    1. Lower CPU utilization
      The Gen1 P2 node was under CPU pressure, so the client was considering upgrading to P3. This pressure went away after switching to Gen2, as the graph below shows. That’s because each operation has access to all the cores on the underlying node (which is essentially a P3 node), which can also give you a nice boost to performance. This doesn’t mean you get more cores for free, though. If Power BI detects that the CPU used by the capacity (across all its datasets, dataflows, etc.) exceeds the CPU you have purchased, subsequent operations will be throttled (delayed).
    2. More memory
      Imported models are memory-resident, so memory is usually the most constraining factor. With Gen2, the capacity’s maximum memory applies to each resource and not collectively across all resources in the capacity. Let’s say you are on a P1 plan, which has a maximum memory capacity of 25GB. With Gen1, you won’t be able to have two datasets of, say, 20GB and 10GB loaded at the same time. However, Gen2 applies the 25GB limit to each dataset, so each resource (dataset, report, dataflow) is boxed within 25GB. This feat is possible because Gen2 uses a SaaS approach, which means datasets are scattered across multiple cluster nodes instead of being associated with a dedicated capacity. A potential downside, however, could be a “noisy neighbor” effect because a P3 cluster node may co-host datasets from different customers.
    3. Fewer “out of memory” refreshes
      Related to item 2, dataset refreshes now have more room. Assuming P1, a 10GB dataset is likely to refresh successfully regardless of the other datasets loaded in the same capacity. As a rule of thumb, a dataset requires at least twice its size in memory to fully refresh, so a full refresh of the 20GB dataset is likely to run out of memory. However, you should be able to use less memory if you process specific partitions in large tables or configure them for incremental refresh.
    4. Less management overhead
      All activities running on the cluster are metered, so it’s easy to understand when the capacity is overutilized by using the Gen2 utilization app. This removes the need to proactively monitor Gen1 capacities for signs of overutilization. Further, capacity admins can subscribe to notifications.
    5. No additional cost
      The best for last. You get all the above without paying more!

What I’d like to see improved in the future:

    1. Customized capacity limits, such as when more memory but fewer cores are needed.
    2. More granular auto-scaling. The current auto-scaling mechanism keeps the “ad hoc” cores for 24 hours. Ideally, I’d like to see the same auto-scaling mechanism as Azure SQL Database Serverless, where the system scales within minutes and can pause the database when it’s not in use. Of course, this should also apply to removing provisioned cores, such as a P1 plan scaling down to fewer than 8 cores.
    3. A better utilization monitoring app. The current app leaves vast areas for improvement. For example, it currently doesn’t report memory utilization at all. The app should report memory utilization per dataset and per refresh so that you can answer the question “why did my refresh run out of memory?”

Power BI Aggregations: Limitations and Workarounds

Power BI aggregations are meant to speed up queries to large DirectQuery tables, much as a DBA would create summarized tables to speed up queries to large tables. The most appealing aspect of telling Power BI about these aggregations is that Power BI will automatically redirect a query to the aggregation cache if it determines that the query’s dimensionality matches the dimensionality of the aggregated table, as explained in the documentation. However, there are a couple of limitations worth emphasizing that will prevent this from happening:

  1. Power BI requires regular relationships with 1:M cardinality and a uni-directional filter between the dimension table and the aggregation table. Many-to-many cardinality (aka “limited” relationships in the documentation) won’t work. For example, you might have a Customer table related with 1:M cardinality to a CustomerFilter table. Queries involving the CustomerFilter table won’t hit the aggregation cache.
  2. Dynamic relationships won’t hit the cache either. For example, as a workaround for the first limitation, you might attempt creating measures, such as Measure1 = CALCULATE([SomeMeasure], TREATAS(VALUES(CustomerFilter[Selection]), Customer[CustomerName])), but this won’t work either.

Aggregation hits require active relationships or mappings based on replicated dimension values.

These limitations were showstoppers for using Power BI aggregations in a recent project. Instead, we rolled out a custom aggregation approach along the following lines:

  1. We introduced an aggregation table, just like with Power BI aggs, and joined it to the related dimensions.
  2. We created two sets of measures:
    1. On top of the aggregation table, e.g. Sales (Agg) = SUM(FactSalesAgg[SalesAmount])
    2. On top of the DQ table, e.g. Sales (DQ) = SUM(FactSales[SalesAmount])
  3. For reporting, when it was clear which report pages, such as dashboards, would hit only the aggregation table, we used the (Agg) measures. Detail pages, such as drill-through pages, use the (DQ) measures. To simplify the measure choice for end users interested in their own reporting, you can introduce measure wrappers, such as:
    Sales = IF(ISCROSSFILTERED(FactSales[PrimaryKey or other low granularity column]), [Sales (DQ)], [Sales (Agg)])

Atlanta MS BI and Power BI Group Meeting on June 7th

Please join us online for the next Atlanta MS BI and Power BI Group meeting on Monday, June 7th, at 6:30 PM. Stacey Jones (Principal Data Solutions Architect with Microsoft) will introduce us to Azure Purview. And I’ll cover the latest Power BI enhancements. For more details and to sign up, visit our group page.

Presentation: Data Governance with Azure Purview
Date: June 7th
Time: 6:30 – 8:30 PM ET
Place: Click here to join the meeting
Overview: Have auditors asked for a report detailing sensitive data in your company? Are you having trouble finding out where all your data resides? Are data scientists and analysts wasting time finding the right data? What will be impacted if I change this column in a table? Where is personnel data stored? Are we in compliance with the information security standards that apply to us? If you answered yes to any of these or related questions, you need an enterprise data discovery and cataloging tool. Come find out how Microsoft’s new Azure Purview tool can create this valuable perspective on your data, making it much more useful and valuable!
Speaker: Stacey Jones specializes in mentoring and guiding firms in their efforts to build modern Data, AI & BI governance programs that empower their business with self-service BI and data science capabilities. He currently serves as the Principal Data Solutions Architect with Microsoft at the Atlanta Microsoft Technology Center (MTC).
Prototypes without pizza: Power BI Latest by Teo Lachev
