Where is Teo in September?

I have a few public events in September (all in Atlanta).

These events are also on my calendar.

Power BI Dataset Size Limits

Question: What’s the maximum dataset size Power BI can support?

Answer: As of now (Sep 1, 2018), the maximum dataset size that Power BI Pro supports is 1 GB. The maximum dataset size with Power BI Premium and Power BI Embedded is 10 GB, but it depends on the SKU purchased.

Power BI Premium Plan | Power BI Embedded Plan | Max Dataset Size and Memory
P1                    | A1                     | 3 GB
P2                    | A2                     | 5 GB
P3 and higher         | A3 or higher           | 10 GB

The important point is “as of now”; as with everything cloud, these limits will probably increase soon. Also, recall that Power BI compresses data well, so don’t jump to conclusions about how many rows a dataset can hold based solely on its original size. For example, a 100 MB Excel file might compress to 10 MB.

The limits apply to datasets after compression. So, if the file size fits within the limit, Power BI will let you upload it. But what happens if the dataset’s memory footprint expands after it’s loaded, or if it continues to grow on refresh? For example, a 10 GB pbix file might translate to a much larger memory footprint once loaded (due to abf compression, dictionary expansion, assorted data structures, etc.). Even if the dataset loads, there might not be enough memory left to do anything useful with it, such as run queries or refresh data, because all the physical memory is used up. So there are also limits based on the actual memory used. The documentation mentions some extra memory that Power BI grants the dataset to allow expansion up to a point, such as up to 12 GB for a 10 GB dataset.

Power BI Sharing Is Getting Better

A Power BI training or assessment isn’t complete until I get hammered on the Power BI sharing limitations. So far, I could only mumble something to the effect of “I agree that Power BI sharing sucks big time”, look for the exit, and suggest an unplanned break. Fortunately, it looks like quantitative accumulations of feedback from unhappy corporate customers have resulted in qualitative changes, as Microsoft is getting serious about addressing these limitations. To me, as it stands today, Power BI sharing is a classic example of overengineering. Something that could have been easily solved with conventional and simple folders a la SSRS morphed into a farfetched “cool” vision of workspaces and apps without much practical value. Let’s revisit the current Power BI sharing limitations to understand why change was due:

  • Workspace dependency on Office 365 groups – This results in an explosion of workspaces because Power BI is not the only app that creates groups.
  • Any Power BI Pro user can create a workspace – IT can’t put boundaries in place. I know of an organization that resorted to an automated script that nukes workspaces which aren’t on the approved list.
  • You can’t add security groups as members – As I discussed in my blog “Power BI Group Security”, different Power BI features have a different degree of support for groups. For example, workspaces didn’t support AD security groups.
  • Coarse content access level – A workspace could be configured for “edit” or “view only” access at the workspace level only. Consequently, it wasn’t possible to grant some members view access and others edit permissions.
  • (UPDATE 6/9/2019: A dataset can now be shared across workspaces) Content sharing limitations – Content can’t be copied from one workspace to another. Worse yet, content can’t be reused among workspaces. For example, if Teo deploys a self-service semantic model to the Sales workspace, reports must be saved in the same workspace.
  • No nesting support – Workspaces can’t be nested, such as to have a global Sales workspace that breaks down into Sales North America, Sales Europe, etc. Again, this is a fundamental requirement that any server-side reporting tool supports but Power BI doesn’t. You can’t organize content hierarchically and there is no security inheritance. Consequently, you must resort to a flattened list of workspaces.
  • (UPDATE 6/9/2019: A dataset can now be promoted and certified) No IT oversight – There is no mechanism for IT to certify workspace content, such as a dataset, and no mechanism for users to discover workspace content.
  • (UPDATE 6/27: The new Viewer role supports sharing with viewers) Sharing with “viewers” – You can’t just add Power BI Free users to share content in a premium workspace. That would have been too simple. Instead, you must share content out either through individual report/dashboard sharing or through apps.
  • One-to-one relationship with apps – Since broader sharing requires apps (why?), an app needs to be created (yet another sharing layer) to share out workspace content. But you can’t publish multiple apps from a workspace, such as to share some reports with one group of users and another set with a different group.

Enough limitations? Fortunately, the new workspace experience immediately solves the first four issues!

For example, the screenshot below shows how I can add all the O365 group types as members of a workspace (Student is an individual member, Security is a security group, and DIAD is an O365 group). Moreover, two new roles, Contributor and Viewer (the Viewer role is not yet available), will further limit member permissions. So, we finally have member-level security instead of workspace-level security.

What about the other limitations? Microsoft promises to fix them all, but it will take a few more months. Meanwhile, watch the “Distribute insights across the organization with Microsoft Power BI” presentation. Hopefully, by the end of the year I’ll have a much better sharing story to tell.

Atlanta MS BI and Power BI Group Meeting on August 28

MS BI fans, join us for the next Atlanta MS BI and Power BI Group meeting on Tuesday, August 28th, at 6:30 PM at the Microsoft office in Alpharetta (our regular venue room is being renovated). If you haven’t been to the new MS office, prepare to be impressed! Mark Tabladillo, Ph.D., from Microsoft will show you how to do advanced analytics with Power BI. Accelebrate will sponsor the meeting. For more details, visit our group page and don’t forget to RSVP (use the RSVP survey on the main page) if you’re planning to attend.

Presentation: Advanced Analytics with Power BI
Date: August 28, 2018, Tuesday
Time: 6:30 – 8:30 PM ET
Place: Microsoft Office (Alpharetta)

8000 Avalon Boulevard Suite 900
Alpharetta, GA 30009

Overview: Power BI has become an increasingly important data analytics tool. This presentation focuses on the advanced analytics options currently available in Power BI. Attendees will see:

  • Microsoft’s perspective on advanced analytics development: the Team Data Science Process
  • What the general options are for advanced analytics on Azure
  • What the specific native advanced analytics capabilities are in Power BI
  • Some ideas on pairing Power BI with other technologies in advanced analytics architectures

See https://powerbi.microsoft.com/en-us/blog/power-bi-expands-self-service-prep-for-big-data-unifies-modern-and-enterprise-bi/ for the latest announcements.

Speaker: Mark Tabladillo, Ph.D., is a data scientist at Microsoft. His career has focused on industry application of advanced analytics, using a variety of analytics tools including SAS, SQL Server Analysis Services, Cortana Intelligence (including Microsoft R Server and Microsoft Machine Learning Services), R, and Python. He was a founding member of the Atlanta Microsoft BI User’s Group eight years ago and was this group’s second presenter.
Sponsor: Don’t settle for “one size fits all” training. Choose Accelebrate, and receive hands-on, engaging training precisely tailored to your goals and audience! Our on-site training offerings range from ASP.NET training and SharePoint training to courses on ColdFusion, Java™, C#, VB.NET, SQL Server, and more.
Prototypes with Pizza: Featured Power BI enhancements



Conducting Two General Enrollment Workshops in Atlanta

I rarely teach classes open to the public, so take advantage of this exclusive opportunity to increase your data IQ! After course completion and before the end of the year, all attendees will receive a free, one-hour, follow-up consultation with me on any Power BI or SQL Server related topic. Register today for these in-person, instructor-led training events as seating is limited.

2-Day Applied Power BI Workshop

  • Date: Sep 11 and 12, 2018
  • Time: 8:30 AM – 5 PM
  • Location: Microsoft Office in Alpharetta
  • Catering: lunch provided
  • Syllabus: available here
  • Price: $999 (use coupon POWERBI20180911 for an instant 5% discount if you sign up two or more attendees from your company)
  • For more information and to register, go to http://bit.ly/pbiworkshop201809

2-Day Applied DAX with Power BI Workshop

  • Date: Oct 15 and 16, 2018
  • Time: 8:30 AM – 5 PM
  • Location: Microsoft Office in Alpharetta
  • Catering: lunch provided
  • Syllabus: available here
  • Price: $999 (use coupon SQL201810 for an instant 5% discount if you sign up two or more attendees from your company)
  • For more information and to register, go to http://bit.ly/daxworkshop201810


Finding Data Movement Bottlenecks

As you’ve heard, data volumes are increasing, putting pressure on BI practitioners to do more with less within ever-shrinking processing windows. I’ve shared some ETL best practices in various posts, such as my “Is ETL (E)ating (T)hou (L)ive?” newsletter. A few additional considerations about data movement bottlenecks came up in two recent projects targeting SQL Server.

Slow Network

If you’re loading data from data sources on a Wide Area Network (WAN), network speed is probably going to be a major concern. While a LAN enjoys speeds of around 1,000 Mbps, you’ll probably find WAN speeds to be 10x less. You can use the network information in the Windows Task Manager to find the network throughput. The next step is to determine how much the network slows down data movement using a simple approach:

  1. Remote in to the remote (source) server.
    1. Open SSMS and create a query with the SELECT statement from your ETL data flow task.
    2. Before running the query, you might want to go to the query options (Grid settings) and check “Discard results after execution” to prevent loading thousands of rows in the query grid and running SSMS out of memory.
    3. Run the query and record how long it takes. This is your baseline speed, which doesn’t involve the network at all.
  2. Now remote in to the server where ETL is running and repeat the test. The delta between the two durations tells you how much time is spent moving the data across the wire (a simple way to time the query consistently is sketched below).
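
Here is a minimal T-SQL sketch of one way to time the extraction query in both tests (the table and column names are hypothetical; substitute the actual SELECT your ETL data flow issues):

-- Run first on the source server (baseline, no network involved) and then from
-- the ETL server; the difference is the time spent moving data across the wire.
DECLARE @start DATETIME2 = SYSDATETIME();

SELECT SalesOrderID, OrderDate, SalesAmount   -- the columns your ETL task extracts
FROM dbo.FactSales;                           -- hypothetical source table

SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS ElapsedSeconds;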

You now know how much the network slows down ETL, but reducing network latency might be out of your control. Hence, incremental extraction and parallel ETL execution become important.
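
For example, here is a minimal sketch of incremental extraction, assuming a hypothetical FactSales source table with a ModifiedDate column and an etl.LoadLog table that records the last successful load:

-- Pull only the rows changed since the last successful load instead of the full table.
DECLARE @LastLoadDate DATETIME2 =
    (SELECT MAX(LastLoadDate) FROM etl.LoadLog);    -- hypothetical load-tracking table

SELECT SalesOrderID, OrderDate, SalesAmount, ModifiedDate
FROM dbo.FactSales                                  -- hypothetical source table
WHERE ModifiedDate > @LastLoadDate;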

Slow DML Operations

It’s not uncommon for BI ETL tasks to load huge datasets. Sure, you have SSD drives that have fast I/O covered, right? Well, there are other things to check. In a recent project, we had a two-node Always On cluster for high availability of an OLTP database. To reduce SQL Server licensing cost, the data warehouse was hosted on the same cluster. Because availability group replication requires the Full recovery model, the data warehouse was configured for Full recovery. This effectively disables minimal logging, which is otherwise easily attainable with SQL Server 2016 or higher. Here are some stats from running a simple batch insert of 10 million rows:

  • Data warehouse database in Simple recovery mode: 1:20 min
  • Data warehouse database in Full recovery mode: 1:34 min (no minimal logging)
  • Data warehouse database synchronized (participating in the availability group): 6:58 min!

As you can see, Full recovery adds about 18% overhead (besides generating huge logs during massive DML operations). However, the worst culprit is the synchronization between the primary and secondary replicas, which slows down the overall execution about five times! You’ll know that you are waiting for replica synchronization when you see the HADR_SYNC_COMMIT wait type. You can follow the steps in this blog to try to reduce the cluster synchronization latency, but the best strategy might be to remove the DW database from synchronization and change its recovery model to Simple, assuming the DW is not a mission-critical resource.
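
Here is a minimal sketch of how you might check for that wait type and, if appropriate, take the DW database out of the availability group (the availability group and database names are hypothetical):

-- Check whether replica synchronization is a significant wait.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_sec
FROM sys.dm_os_wait_stats
WHERE wait_type = 'HADR_SYNC_COMMIT';

-- If the DW is not mission critical, remove it from the availability group
-- (run on the primary replica) and switch it to the Simple recovery model.
ALTER AVAILABILITY GROUP [MyAG] REMOVE DATABASE [DW];
ALTER DATABASE [DW] SET RECOVERY SIMPLE;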

All projects start simple. Adventure Works-type data loads are fast and require no optimization. But many of us are not that lucky, and tackling hundreds of millions (if not billions) of rows is becoming the norm. You must bend over backwards to optimize data movement and reduce ETL processing windows before you discover that ETL tasks are running back to back 24×7.

A First Look at Power BI Aggregations

During the “Building a data model to support 1 trillion rows of data and more with Microsoft Power BI Premium” presentation at the Business Applications Summit, Microsoft discussed the technical details of how the forthcoming Aggregations feature can help you implement fast summarized queries on top of huge datasets. Following incremental refresh and composite models, aggregations are the next “pro” feature to debut in Power BI, aiming to make it a more attractive option for deploying organizational semantic models. In this blog, I summarize my initial observations of this feature, which should be available for preview in the September release of Power BI.

Aggregations are not a new concept to BI practitioners tackling large datasets. Ask a DBA what the next course of action is after all tricks to speed up massive queries are exhausted, and the answer will be summarized tables. That’s what aggregations are: predefined summaries of data, aimed at speeding up queries at the expense of more storage. BI pros will recall that Analysis Services Multidimensional (MD) has supported aggregations for a long time. Once you define an aggregation, MD maintains it automatically. When you process the partition, MD rebuilds the partition aggregations. An MD aggregation is tied to the source partition and summarizes all measures in the partition. You might also recall that designing proper aggregations in MD isn’t easy, and that MD intra-dependencies could cause some grief, such as when processing a dimension invalidates the aggregations in the related partitions, requiring you to reprocess their indexes to restore the aggregations. On the other hand, as it stands today, Analysis Services Tabular (Azure AS and SSAS Tabular) doesn’t support aggregations. Power BI takes the middle road. Like MD, Power BI will search for suitable aggregations to answer summarized queries, but it requires more work on your part to set them up.

Consider the following model, which has a FactInternetSales fact table and three dimensions: DimCustomer, DimProduct, and DimDate. Suppose that most queries request data at the Product and Date levels (not Customer), but such queries don’t give you the desired performance. Perhaps FactInternetSales is imported (cached) but it’s huge. Or, FactInternetSales could be configured for DirectQuery but the underlying data source is not efficient for massive queries. This is where aggregations might help (by contrast, compensating for a bad design or inefficient DAX is not a good reason to use aggregations).


As a first step for setting up aggregations, you need to add a summarized table. Unlike MD, with Power BI you are responsible for designing, populating, and maintaining this table. Why not the MD way? The short answer is flexibility. It’s up to you how you want to design and populate the summarized table: ETL or DAX. It’s also up to you which measures you want to aggregate. And the summarized table doesn’t have to be imported (it could be left in DirectQuery). You can also have multiple aggregation tables (more on this in a moment). In my case, I decided to base the FactInternetSalesSummary table on a SQL view that aggregates the FactInternetSales data, but I could have chosen to use a DAX calculated table or to load it during ETL. FactInternetSalesSummary aggregates sales at the Product and Date level, assuming I want to speed up queries at that grain. In real life, FactInternetSalesSummary would be hidden from end users so that they are not confused about which table to use.
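
To give an idea, here is a minimal sketch of the kind of view FactInternetSalesSummary could be based on (AdventureWorksDW-style names; the exact columns and measures you aggregate are up to you):

-- Aggregate FactInternetSales at the Product and Date (OrderDateKey) grain.
CREATE VIEW dbo.FactInternetSalesSummary
AS
SELECT ProductKey,
       OrderDateKey,
       SUM(SalesAmount)   AS SalesAmount,
       SUM(OrderQuantity) AS OrderQuantity,
       COUNT_BIG(*)       AS SalesCount
FROM dbo.FactInternetSales
GROUP BY ProductKey, OrderDateKey;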

Note that a composite model has specific requirements for configuring dimension tables, which I discussed in my blog “Understanding Power BI Dual Storage”. For example, if FactInternetSalesSummary is imported but FactInternetSales is DirectQuery, DimProduct and DimDate must be configured in Dual storage mode. Once the aggregation table is defined, the next step is to define the actual aggregations. Note that this must be done on the aggregation table (not the detail table), so in my case this is FactInternetSalesSummary. Configuring aggregations involves specifying the following details in the “Manage aggregations” window:

  • Aggregation table – the aggregation table that you want to use for the aggregation design.
  • Precedence – in the case of multiple aggregation tables, you can define which aggregation table takes precedence (the server probes the aggregation table with the highest precedence first). Another feature that opens interesting scenarios.
  • Summarization function – the supported functions are Count, GroupBy, Max, Min, and Sum. If the aggregation table has relationships to the dimension tables, there is no need to specify GroupBy. However, if the aggregation table can’t be joined to the dimension tables in a Many:1 relationship, GroupBy is required. For example, you might have a huge DirectQuery table where all dimension attributes are denormalized, plus a summarized table. Because aggregations currently don’t support M:M relationships, you must use GroupBy. Another usage scenario for GroupBy is speeding up DistinctCount measures. If the column on which the distinct count is performed is defined as GroupBy, the query should result in an aggregation hit. Finally, note that derivative DAX calculations that directly or indirectly reference the aggregated measure also benefit from the aggregation.
  • Detail table – the table that should answer the query on aggregation misses. Note that Power BI can redirect to a different detail table for each measure. Even more flexibility!
  • Detail column – the underlying detail column.


The next step is to deploy the model and schedule the aggregation table for refresh if it imports data. Larger aggregation tables will probably require incremental refresh. Once the manual part is done, the server takes over. In the presence of one or more aggregation tables, the server tries to find a suitable table for aggregation hits. You can’t discover aggregation hits in SQL Server Profiler by monitoring the existing MD “Getting Data from Aggregation” event. Instead, a new “Query Processing\Aggregate Table Rewrite Query” event was introduced:

{
  "table": "FactInternetSales",
  "mapping": {
    "table": "FactInternetSalesSummary"
  },
  "matchingResult": "matchFound",
  "dataRequest": [
    …
  ]
}

To get an aggregation hit at the joined dimension’s granularity, the query must involve one or more of the actual dimensions. For example, this query results in an aggregation hit because it involves the DimDate dimension, which joins FactInternetSalesSummary.

EVALUATE
SUMMARIZECOLUMNS (
    'DimDate'[CalendarYear],
    "Sales", SUM ( FactInternetSales[SalesAmount] )
)

However, this query won’t result in an aggregation hit because it groups by a column from the FactInternetSales table, even though this column is used for the relationship to DimDate and the aggregation is at the OrderDateKey grain.

EVALUATE
SUMMARIZECOLUMNS (
    FactInternetSales[OrderDateKey],
    "Sales", SUM ( FactInternetSales[SalesAmount] )
)

Aggregations are a new Power BI feature for speeding up summarized queries over large or slow datasets. Microsoft has designed aggregations with flexibility in mind, allowing the modeler to support different scenarios.

Understanding Power BI Dual Storage

The July release of Power BI Desktop introduced composite models, which I wrote about in my blog “Power BI Composite Models: The Good, The Bad, The Ugly”. Composite models make data acquisition much more flexible but also more complex. A Power BI table can now be configured in one of these storage modes:

  • Import – The table caches the data. All Power BI features work.
  • DirectQuery – The table has only metadata, while the data is left at the data source. When users interact with the report, Power BI generates DAX queries to the Analysis Services server hosting the model, which in turn generates native queries to the data source. DirectQuery supports limited Power Query transformations and DAX functions in calculated columns, as described in more detail here. Besides the Power Query and DAX limitations, DirectQuery and Live Connections have feature limitations, as I discussed in my blog “Power BI Feature Discrepancies for Data Acquisition”.
  • Live Connection – This is a special case where Power BI connects directly to multidimensional data sources, such as Analysis Services, published Power BI datasets, and SAP HANA. The fact that Power BI doesn’t support DirectQuery against these data sources is the reason composite models aren’t supported for them either.
  • Dual – This is a new storage mode that deserves more attention.

As the name implies, the Dual storage mode is a hybrid between Import and DirectQuery. Like Import, Dual caches the data in the table. However, it leaves it up to Power BI to determine the best way to query the table depending on the query context. Consider the following schema, where all tables come from the same data source, with the following configuration:

  • FactInternetSales is imported
  • FactResellerSales is DirectQuery
  • DimDate and DimProduct are Dual


Power BI will attempt the most efficient way to resolve the table joins. For example, if a query involves FactInternetSales and one or more of the Dual dimension tables, such as DimDate, the query uses the DimDate cache. However, if the query involves FactResellerSales, Power BI passes the join through to the data source. That’s because it’s much more efficient to let the data source join the two tables in DirectQuery than to bring in the entire FactResellerSales table at the join granularity and then join it to the DimDate cache. In fact, Power BI will force you to change related dimension tables to Dual when it discovers hybrid joins among tables in the same data source. The following storage mode configurations are allowed between any two tables participating in a M:1 join from a single data source:

Table on Many Side Is In | Table on One Side Must Be In
Dual                     | Dual
Import                   | Import or Dual
DirectQuery              | DirectQuery or Dual

Therefore, a conformed dimension table must be configured for a Dual mode to join a fact table in Import mode and a fact table in DirectQuery mode.

When Power BI Desktop detects an incompatible configuration, it may disallow the change because currently Power BI can’t switch a table from Import to any other storage mode.

Or, it may ask you to switch storage modes if it finds a way to perform the change on its own.

Dual mode is important for tables coming from a single data source. Heterogeneous joins spanning two data sources always result in a M:M relationship, where the Power BI Mashup Engine resolves the join. Specifically, when the joined table cardinality exceeds a threshold undisclosed by Microsoft, Power BI performs the join on its own. For example, if DimDate is imported from Data Source A but FactResellerSales is DirectQuery against Data Source B, Power BI sends a direct query to group FactResellerSales in B at the join cardinality (e.g. DateKey), gets the data, and then joins the tables internally.
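
To illustrate, the query Power BI pushes to Data Source B might look something like this sketch (the exact query Power BI generates will differ; the table and column names follow the AdventureWorksDW convention):

-- Group the DirectQuery fact table at the join cardinality (DateKey) so the
-- Mashup Engine can join the much smaller result to the imported DimDate.
SELECT OrderDateKey,
       SUM(SalesAmount) AS SalesAmount
FROM dbo.FactResellerSales
GROUP BY OrderDateKey;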

When I asked why Power BI doesn’t handle the Dual storage mode on its own by selecting the most efficient join, Microsoft replied that there are two reasons to delegate this task to the modeler and make the Dual storage configuration explicit:

  • Like Import, Dual requires refresh, whereas DirectQuery doesn’t.
  • Apart from being able to revert to DirectQuery mode, Dual is subject to the same restrictions as DirectQuery.

Therefore, the modeler needs to be aware that the “switch” may require a data refresh or may result in data loss.

The Power BI Dual storage mode is important when creating 1:M joins between tables in different storage modes from the same data source, and for the forthcoming Aggregations feature, which I’ll cover in another blog. Although you need to know how Dual works, you don’t need to memorize the rules, as Power BI Desktop will detect incompatible configurations. Composite models, heterogeneous M:M relationships, Dual storage mode, and aggregations are Power BI-only features. At this point, Microsoft hasn’t announced when they will be available in Azure Analysis Services or SSAS Tabular.

Great Atlanta MS BI and Power BI Group Meeting Last Night

We had a good turnout last night for our Atlanta MS BI and Power BI Group July meeting, with some 60+ attendees at the MTC office at Avalon. Kudos to Scott Fairbanks for the great presentation and to CCG for sponsoring the meeting. Thanks to Microsoft for hosting us. We’ll make the slides available shortly at http://atlantabi.pass.org/Resources.aspx.

Power BI News from the Microsoft Business Applications Summit

If you haven’t done so, watch the “Power BI and the Future for Modern and Enterprise BI” presentation from the Business Applications Summit (July 22-24) to find out where Microsoft is taking Power BI in the next few months.

Power BI Service Scale and Adoption

  • 19M data models hosted
  • 58M monthly publish-to-web views
  • 8M+ report and dashboard queries/hour
  • Power BI is a top 10 fastest-growing skill according to Upwork

Microsoft will extend Power BI by investing in three main areas:

  • Unified platform for both self-service and enterprise BI
  • Intuitive experiences that drive a data culture
  • Pervasive Artificial Intelligence for BI


Here are the most important forthcoming features across these three areas:

Unified platform for both self-service and enterprise BI

  • Make Power BI Desktop a better tool for larger models — new diagram view with perspectives, multi-select fields to set common properties (it’s about time to have this feature), display folders as in SSAS
  • Composite models (delivered in July 2018 release) – I covered this killer feature in my blog “Power BI Composite Models: The Good, The Bad, The Ugly”
  • Aggregations – Ability to define aggregations as we can do in SSAS MD. The demo showed an impressive response time for summarized views on top of a 1 trillion row dataset.
  • Multi-geo deployments

Intuitive experiences that drive a data culture

  • Certified datasets
  • Application lifecycle management (DEV, QA, PROD environments)
  • Dataflows – dataflows are what was announced as “datapools” and the Common Data Model for Analytics (review my comments here).
  • Modernized Visualizations pane (delivered in July 2018 release)
  • Report wallpaper (delivered in July 2018 release)
  • Out of box themes
  • Visual headers (delivered in July 2018 release)
  • Snap visual with other visuals
  • Expression-based formatting for every property in the Visualizations pane – This is huge and my users would rejoice to have more control over report properties!
  • Excel-like pivot tables
  • Export to PDF – the most-voted feature will finally be delivered

Pervasive Artificial Intelligence for BI

  • Quick Insights on right-click
  • Natural questions with suggestions
  • Quick insights inside natural questions results
  • Integration with Python – Connector and custom visuals like R
  • Sentiment analysis
  • Integration with Microsoft Cognitive Services – The demo showed a model deducing from images that air conditioners in a hotel were broken
  • Better integration with Azure Machine Learning Studio

An impressive list of features indeed!