Posts

Atlanta Microsoft BI Group Meeting on November 4th (Accelerating your Fabric Data Estate with AI & Copilot)

Atlanta BI fans, please join us in person for our next meeting on Monday, November 4th at 6:30 PM ET. Stacey Jones (Principal Data & AI Cross-Solution Architect at Microsoft) and Elayne Jones (Solutions Architect at Coca-Cola Bottlers Sales and Services) will explore the AI and Copilot capabilities within Microsoft Fabric. And I'll help you catch up on the Microsoft BI latest. I will sponsor the event, which marks the 14th anniversary of the Atlanta Microsoft BI Group! For more details and sign up, visit our group page.

Details

Presentation: Accelerating your Fabric Data Estate with AI & Copilot
Delivery: In-person
Date: November 4th, 2024
Time: 18:30 – 20:30 ET
Level: Beginner to Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (events, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Venue
Improving Office
11675 Rainwater Dr
Suite #100
Alpharetta, GA 30009

Overview: In this presentation, we will explore the groundbreaking AI and Copilot capabilities within Microsoft Fabric, a comprehensive platform designed to enhance productivity and collaboration. By leveraging advanced machine learning algorithms and natural language processing, Microsoft Fabric’s AI/Copilot not only streamlines workflows but also provides intelligent insights and automation, empowering users to achieve more with less effort. Join us as we delve into the features and functionalities that make Microsoft Fabric an indispensable tool for modern enterprises.

Sponsor: CloudStaff.ai


Fabric Direct Lake: Memory Utilization with Interactive Operations

As I mentioned in my "Power BI and Fabric Capacities: Thinking Outside the Box" post, the memory limits of Fabric capacities can be rather restrictive for large semantic models with imported data. One relatively new option to combat out-of-memory scenarios, and one that deserves to be evaluated and added to your list if Fabric is in your future, is semantic models configured for Direct Lake storage. This blog covers the results of limited testing I did comparing side by side the memory utilization of two identical semantic models: the first configured to import data and the second to use Direct Lake storage. If you need a Direct Lake primer, Chris Webb has done a great job covering its essentials here and here. As a disclaimer, the emphasis is on limited, as these results reflect my personal observations based on some isolated tests I've done lately. Your results may, and probably will, vary considerably.

Understanding the Tests

My starting hypothesis was that Direct Lake on-demand loading would utilize memory much more efficiently for interactive operations, such as Power BI report execution. This is a bonus on top of the fact that Direct Lake models don't require refresh. Eliminating refresh alone could save a tremendous amount of memory, even if you apply advanced techniques such as incremental refresh or hybrid tables to models with imported data. Therefore, the tests that follow focus on memory utilization with interactive operations.

To test my hypothesis, I imported the first three months of year 2016 from the NY yellow taxi Azure open dataset into a lakehouse backed by a Fabric F2 capacity. This resulted in 34.5 million rows distributed across several Delta Parquet files. I limited the data to three months because F2 ran out of memory around the 50-million-row mark with the error "This operation was canceled because there wasn't enough memory to finish running it. Either reduce the memory footprint of your dataset by doing things such as limiting the amount of imported data, or if using Power BI Premium, increase the memory of the Premium capacity where this dataset is hosted. More details: consumed memory 2851 MB, memory limit 2851 MB, database size before command execution 220 MB".

Descriptive enough and in line with the F2 limit of a maximum of 3 GB of memory per semantic model. I used Power BI Desktop to import all that data into a YellowTaxiImported semantic model, which I published to Power BI Service and configured for the large semantic model storage format. Then, online, I created a second YellowTaxiDirectLake semantic model configured for Direct Lake storage and mapped directly to the data in the lakehouse. I went back to Power BI Desktop to whip up a few analytical (aggregate) queries and a few detail-level queries. Finally, I ran a few tests using DAX Studio.
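If you want to reproduce the setup, the following Fabric notebook sketch shows one way I could land the source data in the lakehouse. Treat it as a rough outline rather than my exact steps: the public Azure Open Datasets storage path, the puYear/puMonth filter columns, and the nyc_yellowtaxi table name are assumptions, so adjust them to your environment.

# PySpark sketch (Fabric notebook, where the spark session is predefined): load the
# first three months of 2016 from the NYC yellow taxi open dataset into a lakehouse
# Delta table. Assumes the public Azure Open Datasets blob container is reachable
# and the target table is named nyc_yellowtaxi.
from pyspark.sql import functions as F

blob_path = "wasbs://nyctlc@azureopendatastorage.blob.core.windows.net/yellow/"
# The container is publicly readable; an empty SAS token is enough
spark.conf.set("fs.azure.sas.nyctlc.azureopendatastorage.blob.core.windows.net", "")

df = (
    spark.read.parquet(blob_path)
    .where((F.col("puYear") == 2016) & (F.col("puMonth").isin(1, 2, 3)))  # first three months of 2016
)

# Save as a Delta table so a Direct Lake semantic model can map directly to it
df.write.mode("overwrite").format("delta").saveAsTable("nyc_yellowtaxi")
print(df.count())  # roughly 34.5 million rows in my case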

Analyzing Import Mode

Even after a capacity restart, the YellowTaxiImported model immediately reported 1.4 GB of memory. My conclusion was that the primary focus of Power BI Premium on-demand loading, which was introduced a while back, was to speed up the first query after the model is evicted from memory. Indeed, I saw that many segments were memory resident and many weren't, but using queries to touch the non-resident columns didn't increase the memory footprint. The following table lists the query execution times with "Clear On Run" enabled in DAX Studio (to avoid skewing due to cached query data).

Naturally, the more detailed the queries get, the slower they run because VertiPaq is a columnar database. However, the important observation is that the memory footprint remains constant. Please note that Fabric allocates additional memory to execute the queries, so the memory footprint should grow as the report load increases.

//Analytical query 1

EVALUATE
ROW(
    "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])),
    "SumtollsAmount", CALCULATE(SUM('nyc_yellowtaxi'[tollsAmount])),
    "SumtipAmount", CALCULATE(SUM('nyc_yellowtaxi'[tipAmount]))
)

Duration: 124 ms

//Analytical query 2

DEFINE
    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            'nyc_yellowtaxi'[puMonth],
            "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount]))
        )
    VAR __DS0BodyLimited =
        SAMPLE(3502, __DS0Core, 'nyc_yellowtaxi'[puMonth], 1)

EVALUATE
    __DS0BodyLimited

ORDER BY
    'nyc_yellowtaxi'[puMonth]

Duration: 148 ms

//Analytical query 3

DEFINE
    VAR __SQDS0Core =
        SUMMARIZECOLUMNS(
            'nyc_yellowtaxi'[doLocationId],
            "AveragetripDistance", CALCULATE(AVERAGE('nyc_yellowtaxi'[tripDistance]))
        )

    VAR __SQDS0BodyLimited =
        TOPN(50, __SQDS0Core, [AveragetripDistance], 0)

    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            'nyc_yellowtaxi'[doLocationId],
            __SQDS0BodyLimited,
            "AveragetripDistance", CALCULATE(AVERAGE('nyc_yellowtaxi'[tripDistance]))
        )

    VAR __DS0PrimaryWindowed =
        TOPN(1001, __DS0Core, [AveragetripDistance], 0, 'nyc_yellowtaxi'[doLocationId], 1)

EVALUATE
    __DS0PrimaryWindowed

ORDER BY
    [AveragetripDistance] DESC, 'nyc_yellowtaxi'[doLocationId]

Duration: 114 ms

//Analytical query 4

DEFINE
    VAR __SQDS0Core =
        SUMMARIZECOLUMNS(
            'nyc_yellowtaxi'[doLocationId],
            "AveragetripDistance", CALCULATE(AVERAGE('nyc_yellowtaxi'[tripDistance]))
        )

    VAR __SQDS0BodyLimited =
        TOPN(50, __SQDS0Core, [AveragetripDistance], 0)

    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            'nyc_yellowtaxi'[rateCodeId],
            __SQDS0BodyLimited,
            "AveragetipAmount", CALCULATE(AVERAGE('nyc_yellowtaxi'[tipAmount]))
        )

    VAR __DS0BodyLimited =
        SAMPLE(3502, __DS0Core, 'nyc_yellowtaxi'[rateCodeId], 1)

EVALUATE
    __DS0BodyLimited

ORDER BY
    'nyc_yellowtaxi'[rateCodeId]

Duration: 80 ms

//Detail query 1

DEFINE
    VAR __DS0FilterTable =
        FILTER(KEEPFILTERS(VALUES('nyc_yellowtaxi'[startLat])), 'nyc_yellowtaxi'[startLat] <> 0)

    VAR __DS0FilterTable2 =
        FILTER(
            KEEPFILTERS(VALUES('nyc_yellowtaxi'[tpepPickupDateTime])),
            'nyc_yellowtaxi'[tpepPickupDateTime] < DATE(2016, 1, 2)
        )

    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            ROLLUPADDISSUBTOTAL(
                ROLLUPGROUP('nyc_yellowtaxi'[startLat], 'nyc_yellowtaxi'[startLon]), "IsGrandTotalRowTotal"
            ),
            __DS0FilterTable,
            __DS0FilterTable2,
            "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])),
            "SumtipAmount", CALCULATE(SUM('nyc_yellowtaxi'[tipAmount])),
            "SumtollsAmount", CALCULATE(SUM('nyc_yellowtaxi'[tollsAmount]))
        )

    VAR __DS0PrimaryWindowed =
        TOPN(
            502,
            __DS0Core,
            [IsGrandTotalRowTotal],
            0,
            'nyc_yellowtaxi'[startLat],
            1,
            'nyc_yellowtaxi'[startLon],
            1
        )

EVALUATE
    __DS0PrimaryWindowed

ORDER BY
    [IsGrandTotalRowTotal] DESC, 'nyc_yellowtaxi'[startLat], 'nyc_yellowtaxi'[startLon]

Duration: 844 ms

//Detail query 2

DEFINE
    VAR __DS0FilterTable =
        FILTER(KEEPFILTERS(VALUES('nyc_yellowtaxi'[startLat])), 'nyc_yellowtaxi'[startLat] <> 0)

    VAR __DS0FilterTable2 =
        FILTER(
            KEEPFILTERS(VALUES('nyc_yellowtaxi'[tpepPickupDateTime])),
            'nyc_yellowtaxi'[tpepPickupDateTime] < DATE(2016, 1, 2)
        )

    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            ROLLUPADDISSUBTOTAL(
                ROLLUPGROUP('nyc_yellowtaxi'[startLat], 'nyc_yellowtaxi'[startLon]), "IsGrandTotalRowTotal"
            ),
            __DS0FilterTable,
            __DS0FilterTable2,
            "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])),
            "SumtipAmount", CALCULATE(SUM('nyc_yellowtaxi'[tipAmount])),
            "SumtollsAmount", CALCULATE(SUM('nyc_yellowtaxi'[tollsAmount]))
        )

    VAR __DS0PrimaryWindowed =
        TOPN(
            502,
            __DS0Core,
            [IsGrandTotalRowTotal],
            0,
            'nyc_yellowtaxi'[startLat],
            1,
            'nyc_yellowtaxi'[startLon],
            1
        )

EVALUATE
    __DS0PrimaryWindowed

ORDER BY
    [IsGrandTotalRowTotal] DESC, 'nyc_yellowtaxi'[startLat], 'nyc_yellowtaxi'[startLon]

Duration: 860 ms

//Detail query 3

DEFINE
    VAR __DS0FilterTable =
        FILTER(KEEPFILTERS(VALUES('nyc_yellowtaxi'[startLat])), 'nyc_yellowtaxi'[startLat] <> 0)

    VAR __DS0FilterTable2 =
        FILTER(
            KEEPFILTERS(VALUES('nyc_yellowtaxi'[tpepPickupDateTime])),
            'nyc_yellowtaxi'[tpepPickupDateTime] < DATE(2016, 1, 2)
        )

    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            ROLLUPADDISSUBTOTAL(
                ROLLUPGROUP('nyc_yellowtaxi'[startLat], 'nyc_yellowtaxi'[startLon]), "IsGrandTotalRowTotal"
            ),
            __DS0FilterTable,
            __DS0FilterTable2,
            "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])),
            "SumtipAmount", CALCULATE(SUM('nyc_yellowtaxi'[tipAmount])),
            "SumtollsAmount", CALCULATE(SUM('nyc_yellowtaxi'[tollsAmount]))
        )

    VAR __DS0PrimaryWindowed =
        TOPN(
            502,
            __DS0Core,
            [IsGrandTotalRowTotal],
            0,
            'nyc_yellowtaxi'[startLat],
            1,
            'nyc_yellowtaxi'[startLon],
            1
        )

EVALUATE
    __DS0PrimaryWindowed

ORDER BY
    [IsGrandTotalRowTotal] DESC, 'nyc_yellowtaxi'[startLat], 'nyc_yellowtaxi'[startLon]

Duration: 1,213 ms

//Detail query 4 (All Columns)

DEFINE
    VAR __DS0FilterTable =
        FILTER(KEEPFILTERS(VALUES('nyc_yellowtaxi'[startLat])), 'nyc_yellowtaxi'[startLat] <> 0)

    VAR __DS0FilterTable2 =
        FILTER(
            KEEPFILTERS(VALUES('nyc_yellowtaxi'[tpepPickupDateTime])),
            'nyc_yellowtaxi'[tpepPickupDateTime] < DATE(2016, 1, 2)
        )

    VAR __ValueFilterDM1 =
        FILTER(
            KEEPFILTERS(
                SUMMARIZECOLUMNS(
                    'nyc_yellowtaxi'[startLat],
                    'nyc_yellowtaxi'[startLon],
                    'nyc_yellowtaxi'[paymentType],
                    'nyc_yellowtaxi'[vendorID],
                    'nyc_yellowtaxi'[improvementSurcharge],
                    'nyc_yellowtaxi'[doLocationId],
                    'nyc_yellowtaxi'[puLocationId],
                    'nyc_yellowtaxi'[storeAndFwdFlag],
                    'nyc_yellowtaxi'[tpepDropoffDateTime],
                    'nyc_yellowtaxi'[tpepPickupDateTime],
                    __DS0FilterTable,
                    __DS0FilterTable2,
                    "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])),
                    "SumtipAmount", CALCULATE(SUM('nyc_yellowtaxi'[tipAmount])),
                    "SumtollsAmount", CALCULATE(SUM('nyc_yellowtaxi'[tollsAmount])),
                    "SumpassengerCount", CALCULATE(SUM('nyc_yellowtaxi'[passengerCount])),
                    "SumtripDistance", CALCULATE(SUM('nyc_yellowtaxi'[tripDistance])),
                    "SumendLat", CALCULATE(SUM('nyc_yellowtaxi'[endLat])),
                    "SumendLon", CALCULATE(SUM('nyc_yellowtaxi'[endLon])),
                    "Sumextra", CALCULATE(SUM('nyc_yellowtaxi'[extra])),
                    "SumfareAmount", CALCULATE(SUM('nyc_yellowtaxi'[fareAmount])),
                    "SummtaTax", CALCULATE(SUM('nyc_yellowtaxi'[mtaTax])),
                    "SumpuMonth", CALCULATE(SUM('nyc_yellowtaxi'[puMonth])),
                    "SumpuYear", CALCULATE(SUM('nyc_yellowtaxi'[puYear])),
                    "CountrateCodeId", CALCULATE(COUNTA('nyc_yellowtaxi'[rateCodeId]))
                )
            ),
            [SumtollsAmount] > 0
        )

    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            ROLLUPADDISSUBTOTAL(
                ROLLUPGROUP(
                    'nyc_yellowtaxi'[startLat],
                    'nyc_yellowtaxi'[startLon],
                    'nyc_yellowtaxi'[paymentType],
                    'nyc_yellowtaxi'[vendorID],
                    'nyc_yellowtaxi'[improvementSurcharge],
                    'nyc_yellowtaxi'[doLocationId],
                    'nyc_yellowtaxi'[puLocationId],
                    'nyc_yellowtaxi'[storeAndFwdFlag],
                    'nyc_yellowtaxi'[tpepDropoffDateTime],
                    'nyc_yellowtaxi'[tpepPickupDateTime]
                ), "IsGrandTotalRowTotal"
            ),
            __DS0FilterTable,
            __DS0FilterTable2,
            __ValueFilterDM1,
            "SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])),
            "SumtipAmount", CALCULATE(SUM('nyc_yellowtaxi'[tipAmount])),
            "SumtollsAmount", CALCULATE(SUM('nyc_yellowtaxi'[tollsAmount])),
            "SumpassengerCount", CALCULATE(SUM('nyc_yellowtaxi'[passengerCount])),
            "SumtripDistance", CALCULATE(SUM('nyc_yellowtaxi'[tripDistance])),
            "SumendLat", CALCULATE(SUM('nyc_yellowtaxi'[endLat])),
            "SumendLon", CALCULATE(SUM('nyc_yellowtaxi'[endLon])),
            "Sumextra", CALCULATE(SUM('nyc_yellowtaxi'[extra])),
            "SumfareAmount", CALCULATE(SUM('nyc_yellowtaxi'[fareAmount])),
            "SummtaTax", CALCULATE(SUM('nyc_yellowtaxi'[mtaTax])),
            "SumpuMonth", CALCULATE(SUM('nyc_yellowtaxi'[puMonth])),
            "SumpuYear", CALCULATE(SUM('nyc_yellowtaxi'[puYear])),
            "CountrateCodeId", CALCULATE(COUNTA('nyc_yellowtaxi'[rateCodeId]))
        )

    VAR __DS0PrimaryWindowed =
        TOPN(
            502,
            __DS0Core,
            [IsGrandTotalRowTotal],
            0,
            'nyc_yellowtaxi'[startLat],
            1,
            'nyc_yellowtaxi'[startLon],
            1,
            'nyc_yellowtaxi'[paymentType],
            1,
            'nyc_yellowtaxi'[vendorID],
            1,
            'nyc_yellowtaxi'[improvementSurcharge],
            1,
            'nyc_yellowtaxi'[doLocationId],
            1,
            'nyc_yellowtaxi'[puLocationId],
            1,
            'nyc_yellowtaxi'[storeAndFwdFlag],
            1,
            'nyc_yellowtaxi'[tpepDropoffDateTime],
            1,
            'nyc_yellowtaxi'[tpepPickupDateTime],
            1
        )

EVALUATE
    __DS0PrimaryWindowed

ORDER BY
    [IsGrandTotalRowTotal] DESC,
    'nyc_yellowtaxi'[startLat],
    'nyc_yellowtaxi'[startLon],
    'nyc_yellowtaxi'[paymentType],
    'nyc_yellowtaxi'[vendorID],
    'nyc_yellowtaxi'[improvementSurcharge],
    'nyc_yellowtaxi'[doLocationId],
    'nyc_yellowtaxi'[puLocationId],
    'nyc_yellowtaxi'[storeAndFwdFlag],
    'nyc_yellowtaxi'[tpepDropoffDateTime],
    'nyc_yellowtaxi'[tpepPickupDateTime]

Duration: 4,240 ms

Analyzing Direct Lake

After another restart, the resident memory footprint of the YellowTaxiDirectLake model was only 22.4 KB! Indeed, the $System.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS DMV showed that only system-generated RowNumber columns were memory resident.

For each query, I recorded two runs to understand how much time is spent on loading columns into memory on demand. The Import Mode column is included for convenience, so you can compare the second-run duration with the corresponding query duration from the import mode tests. Finally, the Model Resident Memory column records the memory footprint of the Direct Lake model.

Query | First Run (ms) | Second Run (ms) | Import Mode (ms) | Model Resident Memory (MB)
Analytical query 1 | 79 | 75 | 124 | 14
Analytical query 2 | 79 | 76 | 148 | 14.3
Analytical query 3 | 382 | 133 | 114 | 68.1
Analytical query 4 | 209 | 130 | 80 | 68.13
Detail query 1 | 7,763 | 1,023 | 844 | 669.13
Detail query 2 | 1,484 | 1,453 | 860 | 670.53
Detail query 3 | 1,881 | 1,463 | 1,213 | 670.6
Detail query 4 | 9,663 | 3,668 | 4,240 | 1,270
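
By the way, if you prefer scripting over DAX Studio, the two-run pattern above can be approximated from a Fabric notebook. The sketch below is only an assumption on my part: it relies on the semantic-link (sempy) package and measures end-to-end call time, which includes client overhead, so the numbers won't match DAX Studio's server timings exactly.

# Rough sketch: time each test query twice against the Direct Lake model.
# Assumes the semantic-link (sempy) package available in Fabric notebooks.
import time
import sempy.fabric as fabric

DATASET = "YellowTaxiDirectLake"  # semantic model name used in this post

queries = {
    "Analytical query 1": """
EVALUATE
ROW("SumtotalAmount", CALCULATE(SUM('nyc_yellowtaxi'[totalAmount])))
""",
    # ...add the remaining analytical and detail queries here
}

for name, dax in queries.items():
    timings = []
    for run in (1, 2):  # the first run pays the on-demand column loading cost
        start = time.perf_counter()
        fabric.evaluate_dax(dataset=DATASET, dax_string=dax)
        timings.append((time.perf_counter() - start) * 1000)
    print(f"{name}: first run {timings[0]:.0f} ms, second run {timings[1]:.0f} ms")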

Conclusion

To sum up this long post, the following observations can be made:

  1. As expected, the more columns a query touches, the higher the Direct Lake memory footprint. For example, the last query requested all the columns, and the resulting memory footprint was on par with import mode.
  2. It's important to note that when Fabric is under memory pressure, such as when the report load increases, Direct Lake will start paging out columns with low temperature. The exact thresholds and rules are not documented, but I'd expect the eviction mechanism to be much more granular and intelligent than evicting entire datasets, as happens with import mode.
  3. The reason I didn't see Direct Lake paging out memory is that I still had plenty left (1.27 GB consumed out of 3 GB). It doesn't make sense to evict data if there is no memory pressure, since memory is the fastest storage.
  4. You’ll pay a certain price the first time a column is loaded on demand with Direct Lake. The more columns, the longer the wait. Subsequent runs, however, will be much faster if the column is still mapped in memory.
  5. Some queries will execute faster in import mode and some will execute slower. Overall, queries touching memory-resident columns should be comparable.

Therefore, if Direct Lake is an option for you, it should be at the forefront of your efforts to combat out-of-memory errors with large datasets. On the downside, more than likely you'll have to implement ETL processes to synchronize your data warehouse to a Fabric lakehouse, unless your data is in Fabric to start with or you use Fabric database mirroring for the currently supported data sources (Azure SQL DB, Cosmos DB, and Snowflake). I'm not counting the data synchronization time as a downside because it could replace the time you currently spend on model refresh.


Fabric Capacity Limits

Here is a table that is getting more and more difficult to find, as searching for Fabric capacity limits returns results about capacity units (CUs), which are for the most part meaningless in my opinion. I embed it in a searchable format below before it vanishes from the Internet. The most important column for semantic modeling is Max memory, which denotes the upper limit of memory Fabric will grant a semantic model.

SKU | Max memory (GB) | Max concurrent DirectQuery connections (per semantic model) | Max DirectQuery parallelism | Live connection (per second) | Max memory per query (GB) | Model refresh parallelism | Direct Lake rows per table (in millions) | Max Direct Lake model size on OneLake (GB)
F2 | 3 | 5 | 1 | 2 | 1 | 1 | 300 | 10
F4 | 3 | 5 | 1 | 2 | 1 | 2 | 300 | 10
F8 | 3 | 10 | 1 | 3.75 | 1 | 5 | 300 | 10
F16 | 5 | 10 | 1 | 7.5 | 2 | 10 | 300 | 20
F32 | 10 | 10 | 1 | 15 | 5 | 20 | 300 | 40
F64 | 25 | 50 | 4-8 | 30 | 10 | 40 | 1,500 | Unlimited
F128 | 50 | 75 | 6-12 | 60 | 10 | 80 | 3,000 | Unlimited
F256 | 100 | 100 | 8-16 | 120 | 10 | 160 | 6,000 | Unlimited
F512 | 200 | 200 | 10-20 | 240 | 20 | 320 | 12,000 | Unlimited
F1024 | 400 | 200 | 12-24 | 480 | 40 | 640 | 24,000 | Unlimited
F2048 | 400 | 200 |  | 960 | 40 | 1,280 | 24,000 | Unlimited
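
To put the Max memory column to work, here is a trivial sketch that picks the smallest F SKU whose per-model memory limit can accommodate a given model. Doubling the footprint for a full refresh of an import-mode model is a rule of thumb from my "Thinking Outside the Box" post, not an official number.

# Sketch: find the smallest Fabric F SKU whose max memory per semantic model fits a model.
# The 2x refresh factor for import-mode models is a rule of thumb, not an official limit.
MAX_MODEL_MEMORY_GB = {  # from the capacity limits table above
    "F2": 3, "F4": 3, "F8": 3, "F16": 5, "F32": 10, "F64": 25,
    "F128": 50, "F256": 100, "F512": 200, "F1024": 400, "F2048": 400,
}

def smallest_sku(model_size_gb: float, full_refresh: bool = True) -> str | None:
    needed = model_size_gb * 2 if full_refresh else model_size_gb
    for sku, limit in MAX_MODEL_MEMORY_GB.items():  # dict preserves insertion order (smallest first)
        if limit >= needed:
            return sku
    return None  # doesn't fit any SKU; consider Direct Lake or further model optimization

print(smallest_sku(45))   # F256 - a 45 GB import model needs roughly 90 GB at refresh time
print(smallest_sku(1.4))  # F2 - the YellowTaxiImported model fits the smallest SKU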

The same page lists another important table that shows the backend CPU cores assigned to a capacity. Although bursting, overages, smoothing, and throttling make Fabric capacity compute resources a whole lot more difficult to figure out, think of your capacity as a VM that has that many cores for backend loads, including loads from interactive operations, such as report queries, and from background operations, such as dataset refreshes. I'm not sure why it shows N/A for F2 and F4. If memory serves me right, I've previously seen 0.25 cores for F2 and 0.5 cores for F4.

SKU | Capacity Units (CU) | Power BI SKU | Power BI v-cores
F2 | 2 | N/A | N/A
F4 | 4 | N/A | N/A
F8 | 8 | EM1/A1 | 1
F16 | 16 | EM2/A2 | 2
F32 | 32 | EM3/A3 | 4
F64 | 64 | P1/A4 | 8
F128 | 128 | P2/A5 | 16
F256 | 256 | P3/A6 | 32
F512 | 512 | P4/A7 | 64
F1024 | 1,024 | P5/A8 | 128
F2048 | 2,048 | N/A | N/A


 

Atlanta Microsoft BI Group Meeting on September 3rd (Create Code Copilots with Large Language Models)

Atlanta BI fans, please join us in person for the next meeting on Monday, September 3rd at 6:30 PM ET. Your humble correspondent will show you how to use Large Language Models, such as ChatGPT, to create your own copilots for Text2SQL and Text2DAX. I'll also help you catch up on the Microsoft BI latest. I will sponsor the event, which marks the 14th anniversary of the Atlanta Microsoft BI Group! For more details and sign up, visit our group page.

Details

Presentation: Create Code Copilots with Large Language Models
Delivery: In-person
Date: September 3rd, 2024
Time: 18:30 – 20:30 ET
Level: Beginner to Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (events, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Venue
Improving Office
11675 Rainwater Dr
Suite #100
Alpharetta, GA 30009

Overview: Resistance is futile! Instead of fearing that AI will take over our jobs, embrace it and apply it to outsource mundane work and create a new class of applications that were not possible before. In this session, I'll introduce you to the fascinating world of Large Language Models (LLMs) and one of their practical applications: creating Text2SQL and Text2DAX copilots. I'll demonstrate how LLMs open new opportunities for intelligent exploration. As an optional challenge, bring your laptop, download the code from my website (use the download link below), and follow along using your favorite AI chat, such as ChatGPT (which is what I'll use for the demos), Microsoft Copilot, Meta AI, Google Gemini, or Perplexity.ai. You'll also discover how you can automate LLM-powered copilots using Python and Azure OpenAI.

Code download link: Create Code Copilots with Large Language Models

Speaker: Teo Lachev is a BI consultant, author, and mentor. Through his Atlanta-based company Prologika (https://prologika.com) he designs and implements innovative solutions that bring tremendous value to his clients and help them make sense of data. Teo has authored and co-authored many books, and he has been leading the Atlanta Microsoft Business Intelligence group since he founded it in 2010. Microsoft has recognized Teo’s contributions to the community by awarding him the prestigious Microsoft Most Valuable Professional (MVP) Data Platform status for 15 years. Microsoft selected Teo as one of only 30 FastTrack Solution Architects for Power Platform worldwide.

Sponsor: Prologika (https://prologika.com)


Power BI and Fabric Capacities: Thinking Outside the Box

I'm conducting an assessment for a client facing memory pressure in Power BI Premium. You know, those pesky out-of-memory issues when refreshing a biggish dataset. They started with P1, moved to P2, and are now on P3, but still more memory is needed. The runtime memory footprint of the problematic semantic model with imported data is 45 GB, and they've done their best to optimize it.

Since its beginning, Power BI Pro per-user licensing (and later Premium Per User (PPU) licensing) has been very attractive. Many organizations with a limited number of report users flocked to Power BI to save cost. However, organizations with more BI consumers gravitated toward premium licensing, where they could have an unlimited number of report readers for a fixed monthly fee starting at a listed price of $5,000/mo for P1. Sounds like a great deal, right?

I must admit that I detest the premium licensing model because it boxes you into certain resource constraints, such as 8 backend cores and 25 GB RAM for P1. There are no custom configurations to let you balance between compute and memory needs. And while there is an auto-scale compute model, it's very coarse. The memory constraints are especially problematic given that imported models are memory resident and require more than twice their size in memory for a full refresh. From the outside, these memory constraints seem artificially low, as if to force clients into perpetual upgrades. The new Fabric F capacities that supersede the P plans are even more expensive, justifying the price increase with the added flexibility to pause the capacity, which is often impractical.

It looks to me like premium licensing is a pretty good deal for Microsoft. Outgrown the 25 GB of RAM in P1? Time to shell out another $5K per month for 25 GB more, even if you don't need more compute power. Meanwhile, the price of 32 GB of RAM is less than $100 and falling.

So, what should you do if you are strapped for cash? Consider evaluating and adopting one or more of the following techniques:

  • Switching to PPU licensing with a limited number of report users. PPU is the equivalent of P3 and grants 100 GB of RAM per dataset.
  • Optimizing aggressively the model storage when possible, such as removing high-cardinality columns
  • Configuring aggressive incremental refresh policies with polling expressions
  • Moving large fact tables to a separate semantic model (remember that the memory constraints are per dataset and not across all the datasets in the capacity)
  • Implementing DirectQuery features, such as composite models and hybrid tables
  • Switching to a hybrid architecture with on-prem semantic model(s) hosted in SQL Server Analysis Services, where you can control the hardware configuration and you're not charged for more memory.
  • Lobbying Microsoft for much larger memory limits or to bring your own memory 🙂

It would be great if at some point Power BI introduced customized capacities. Even better, how about auto-scaling, where the capacity resources scale up and down on demand within minutes, such as adding more memory during refresh and releasing it when the refresh is over?


 

Atlanta Microsoft BI Group Meeting on August 5th (Elevate Program Management with Power BI & DevOps)

Atlanta BI fans, please join us in person for the next meeting on Monday, August 5th at 6:30 PM ET. Elayne Jones and Matt Kim (Solutions Architects at Coca-Cola) will show us how to bring Azure DevOps data to life by creating data models and interactive reports in Power BI. Your humble correspondent will help you catch up on Microsoft BI latest. CloudStaff.ai will sponsor the event. For more details and sign up, visit our group page.

Details

Presentation: Elevate Program Management with Power BI & DevOps
Delivery: In-person
Date: August 5, 2024
Time: 18:30 – 20:30 ET
Level: Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (events, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Venue
Improving Office
11675 Rainwater Dr
Suite #100
Alpharetta, GA 30009

Overview: Have you ever opened Azure DevOps and felt overwhelmed by the vast sea of program management options? In large organizations, tracking progress across disparate projects and work items can be challenging. In this session, find out how to bring Azure DevOps data to life by creating data models and interactive reports in Power BI. Sleek Power BI visuals make even the most technical DevOps content both accessible to executives and actionable for project managers.

Speaker: Elayne Jones and Matt Kim are both Solutions Architects at Coca-Cola Bottlers Sales and Services. Elayne and Matt specialize in developing solutions that drive efficiency within organizations by utilizing the full set of Power Platform technologies. Elayne and Matt work together on a team focusing on designing and implementing automated solutions to enhance both internal and external stakeholders’ user experiences and to enforce consistency in reporting data.

Sponsor: Cloudstaff.ai


Atlanta Microsoft BI Group Meeting on July 1st (Commenting Power Query with Azure OpenAI)

Atlanta BI fans, please join us in person for the next meeting on Monday, July 1st at 6:30 PM ET. John Kerski (Microsoft MVP) will show us how to integrate ChatGPT with Power BI. Your humble correspondent will help you catch up on the Microsoft BI latest. CloudStaff.ai will sponsor the event. For more details and sign up, visit our group page.

Details

Presentation: Commenting Power Query with Azure OpenAI
Delivery: In-person
Date: July 1, 2024
Time: 18:30 – 20:30 ET
Level: Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (events, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Venue
Improving Office
11675 Rainwater Dr
Suite #100
Alpharetta, GA 30009

Overview: Large Language Models (such as ChatGPT) can greatly enhance the way you develop and deliver Power BI solutions. In this session, I will show you how to integrate Azure OpenAI into Power BI using prompt engineering techniques.

Speaker: John Kerski has over a decade of experience in technical and government leadership. He specializes in managing Data Analytics projects and implementing DataOps principles to enhance solution delivery and minimize errors. John’s expertise is showcased through his ability to offer patterns and templates that streamline the adoption of DataOps with Microsoft Fabric and Power BI. His in-depth knowledge and hands-on approach provide clients with practical tools to achieve efficient and effective data operations.

Sponsor: CloudStaff.ai


Prologika Newsletter Summer 2024

I've written in the past about the dangers of blindly following "modern" data architectures (see "Are you modern yet?" and "Data Lakehouse: The Good, the Bad, and the Ugly"), but a recent assessment inspired me to write about this topic again. This newsletter advocates a hybrid and cautionary approach to data integration to avoid overdoing data lakes, and it warns about the pitfalls of over-staging source data to files. It recommends instead following the "Discipline at the core, flexibility at the edge" methodology, with emphasis on implementing an enterprise data warehouse and organizational semantic models.

Data Lake Overstaging

How did the large vendor attempt to solve these horrible issues? With a Modern Data Warehouse (MDW) architecture, of course. Nothing wrong with that, except that the EDW and organizational semantic model(s) are missing, and most of the effort went into implementing the data lake medallion architecture, where all the incoming data ended up staged as Parquet files. It didn't matter that 99% of the data came from relational databases. Further, to solve a data change tracking requirement, the vendor decided to create a new file each time the ETL runs. So even if nothing has changed in the source feed, the data is duplicated should a user one day want to go back in time and see what the data looked like then. There are of course better ways to handle this that don't even require ETL, such as SQL Server temporal tables, but I digress.

At least some cool heads prevailed, and the Silver layer got implemented as a relational ODS to serve the needs of home-grown applications, so the apps didn't have to deal with files. What about the EDW and organizational semantic models? Not there, because the project ran out of budget and time. I bet if that vendor got hired today, they would have gone straight for a Fabric lakehouse and Fabric premium pricing (nowadays Microsoft treats partners as an extension of its sales force and requires them to meet certain revenue targets, as I explain in "Dissolving Partnerships"), which alone would have produced the same outcome.

What did the vendor accomplish? Not much. Not only did the implementation fail to address the main challenges, but it introduced new ones, such as overcomplicated ETL and redundant data staging. Although there might be good reasons for file staging (see the second blog above), in most cases I consider it lunacy to stage perfectly good relational data to files, along the way losing metadata, complicating ETL, ending up serverless, and then reloading the same data into a relational database (an ODS in this case).

I’ve heard that the vendor justified the lake effort by empowering data scientists to do ML one day. I’d argue that if that day ever comes, the likelihood (pun not intended) of data scientists working directly on the source schema would be infinitely small since more than likely they would require the input datasets to be shaped in a different way which would probably require another ETL pipeline altogether.

Better Data Staging

I don't subject my clients to excessive file staging. My file staging litmus test is the source data format: if I can connect to a server and get the data in a tabular (relational) format, I stage it directly to a relational database (ODS or DW). However, if the data is provided as files (downloaded or pushed, reference data, or unstructured data), then obviously there is no other way. That's why we have lakes.


Fast forward a few years, and your humble correspondent got hired to assess the damage and come up with a strategy. Data lakes won't do it. Lakehouses and Delta Parquet (a poor attempt to recreate and replace relational databases) won't do it. Fabric won't do it, and it's too bad that Microsoft pushes the Lakehouse when the main focus should have been on Fabric Data Warehouse, which unfortunately is not ready for prime time (fortunately, we have plenty of other options).

What will do it? Going back to the basics and embracing the "Discipline at the core, flexibility at the edge" ideology (kudos to Microsoft for publishing their lessons learned). From a technology standpoint, the critical pieces are the EDW and organizational semantic models. If you don't have these, I'm sorry, but you are not modern yet. In fact, you aren't even classic, considering that they have been around for a long, long time.


Teo Lachev
Prologika, LLC | Making Sense of Data

 

 

Fabric Analytics Engineer Certification Snagged

Just passed DP-600, and therefore I'm officially a Fabric Analytics Engineer. Besides renewals, I intend this to be my last Microsoft certification. After more than 30 years of certifying (12 certifications and 30 exams) and helping others certify with training and books, I think it's time to close this chapter. Call me certifiable…

Atlanta Microsoft BI Group Meeting on June 3rd (Power BI Direct Lake storage mode)

Atlanta BI fans, please join us in person for the next meeting on Monday, June 3rd at 6:30 PM ET. Shabnam Watson (Consultant and Owner of ABI Cube) will discuss the benefits of using the Direct Lake storage mode in Microsoft Fabric. Your humble correspondent will help you catch up on Microsoft BI latest. CloudStaff.ai will sponsor the event. For more details and sign up, visit our group page.

Presentation: Power BI Direct Lake storage mode: How to achieve blazing fast performance without importing data
Delivery: In-person
Time: 18:30 – 20:30 ET
Level: Beginner/Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (events, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Venue
Improving Office
11675 Rainwater Dr
Suite #100
Alpharetta, GA 30009

Overview: The Power BI engine in Microsoft Fabric has been significantly revamped to work directly with Delta files in OneLake. This brand-new storage mode is called Direct Lake, and it allows Power BI to achieve super-fast query performance on billion-row datasets without having to import the data into Power BI. Join this session to learn how you can work with Direct Lake with just a few clicks.

Speaker: Shabnam is a business intelligence consultant and owner of ABI Cube, a company that specializes in delivering data solutions using the Microsoft Data Platform. She has over 20 years of experience and is recognized as a Microsoft Data Platform MVP for her technical excellence and community involvement. She is passionate about helping organizations harness the power of data to drive insights and innovation. She has a deep expertise in Microsoft Analysis Services, Power BI, Azure Synapse Analytics, and Microsoft Fabric. She is also a speaker, blogger, and organizer for SQL Saturday Atlanta – BI version, where she shares her knowledge and best practices with the data community.

Sponsor: CloudStaff.ai
