Atlanta BI fans, please join us in person for our next meeting on Monday, May 4th at 18:30 ET. Jackie Kiadii will show you how you can use the Copilot capabilities in Power BI. CloudStaff will sponsor the meeting. For more details and sign up, visit our group page.
Delivery: In-person | Level: Beginner/Intermediate | Food: Pizza and drinks will be provided
Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (news, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A
Overview: Copilot in Power BI is generating excitement — and significant confusion. Between multiple Copilot experiences, Fabric capacity requirements, and differences across Desktop, Service, and Fabric, many Power BI professionals struggle to explain what Copilot actually does and when it makes sense to use it.
This session provides a clear, practical overview of:
Copilot use cases that exist today
Where Copilot works (and where it doesn’t)
Licensing and capacity requirements
Current limits that impact real‑world adoption
Rather than a technical deep dive or demo, the focus is on clarity and expectation‑setting — helping attendees evaluate Copilot realistically and explain it confidently to Excel users, business stakeholders, and clients.
Attendees will leave with a framework they can use to make informed decisions about Copilot and confidently explain it to others.
Speaker: Jackie Kiadii is a Power BI trainer specializing in helping Excel users successfully transition to Power BI and adopt Microsoft analytics tools with confidence. She is a retired Microsoft Excel MVP, Microsoft Certified Trainer (MCT), Microsoft Data Analyst Associate (PL‑300), and Microsoft Office Specialist: Excel Expert. Jackie focuses on turning complex Microsoft BI topics into clear, practical guidance that supports real‑world adoption. Her work emphasizes licensing clarity, user expectations, and helping teams avoid costly or unnecessary decisions when implementing Power BI and Copilot.
Prologika – Teo Lachev | 2026-04-28 | Atlanta Microsoft BI Group Meeting on May 4th (Making Sense of Copilot in Power BI)
Coming back from a long vacation, I almost missed this interesting Power BI enhancement: unmaterialized calculated columns. Normally, I avoid traditional DAX calculated columns for a variety of reasons, such as confusion about where business logic is applied, limited support across storage modes (for example, Direct Lake doesn’t support them), longer refresh times, etc. This is not to say that calculated columns can’t be useful, such as when you need to flatten a parent-child hierarchy. But unmaterialized calculated columns could open interesting scenarios that go beyond the content translation to other languages that Microsoft mentioned in the April 2026 update.
Understanding unmaterialized columns
To start with, the announcement does a good job of confusing the audience by implying that unmaterialized columns apply only to Direct Lake storage mode. I found the documentation page more useful for understanding them (specifically this table). The important takeaway is that they are also available in import storage mode.
Historically, DAX calculated columns are materialized during data refresh, meaning that once the engine calculates the formula, the output is saved (materialized). From this point on, a calculated column behaves like a regular column. However, the calculated column expression can’t reference runtime report conditions, such as the identity of the interactive user or filter selection. By contrast, like a DAX measure, the expression of the unmaterialized calculated column is evaluated at runtime. Why would you ever want to do this if we have DAX measures? Let’s consider an example.
Using unmaterialized columns
The Adventure Works DW schema has a DimSalesTerritory dimension. Suppose that the sales rep responsible for a given sales region would like to see their region as “My Region” on reports. This is probably a somewhat contrived scenario, but I’m sure that once you understand it, you will find other scenarios that can benefit from unmaterialized columns.
Implementing this without unmaterialized columns presents a challenge. You can come up with a DAX measure, but you will run into report limitations, such as the fact that a measure can’t be used as a dimension to slice other measures by. Or you can go down the path of extending the model with additional tables, but that increases complexity and user confusion. Unmaterialized columns open a new possibility by evaluating the expression dynamically, such as implementing runtime lookups. In my case, the expression of the PersonalizedRegion column is simple, but it could look up the assigned region at runtime from another table in the model, such as DimUser.
PersonalizedRegion = if (USERPRINCIPALNAME() = "<my email>" && SalesTerritory[SalesTerritoryRegion] = "Southeast", "My Region", SalesTerritory[SalesTerritoryRegion])
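The runtime lookup mentioned above can be sketched as well. This is a hedged sketch only: a DimUser table with UserEmail and AssignedRegion columns is a hypothetical addition to the model, not part of Adventure Works DW.

```dax
-- Hedged sketch: DimUser with UserEmail and AssignedRegion columns is hypothetical
PersonalizedRegion =
VAR AssignedRegion =
    LOOKUPVALUE (
        DimUser[AssignedRegion],
        DimUser[UserEmail], USERPRINCIPALNAME ()
    )
RETURN
    IF (
        SalesTerritory[SalesTerritoryRegion] = AssignedRegion,
        "My Region",
        SalesTerritory[SalesTerritoryRegion]
    )
```

This variant removes the hardcoded email and region, so each rep sees their own assigned region personalized.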
As you can see, the column expression can reference any DAX function, just like a measure. For this to work, you must flag the column expression context to User Context in the advanced column properties in Model View. Consequently, the column data is no longer materialized.
But the most important point is that you can continue using the column as a dimension, such as by adding it to the Rows or Columns wells in a Matrix visual. You can’t do this with a measure and that makes all the difference.
Summary
In summary, unmaterialized calculated columns bridge two previously distinct DAX worlds: calculated columns and measures. Like measures, they can reference runtime report conditions, such as the interactive user identity and report filters. Like columns, they can be used as dimensions. On the downside, like measures, complex formulas might impede report performance.
I’ve mentioned previously in the “A Couple of Direct Lake Gotchas” post that, unlike Power BI reports, Excel surprised me by not showing user-defined hierarchies in a Direct Lake model. Direct Lake comes with other gotchas, such as not supporting DAX calculated columns and SQL views. I normally don’t use calculated columns, but they can come in handy, such as for flattening parent-child hierarchies outside ETL. And I like SQL views as an insurance policy for making quick transforms or filters on top of loaded tables to avoid ETL changes.
Recently, Microsoft introduced composite Direct Lake models, which I demonstrated as part of a pilot project, mainly to preserve the Excel report experience for financial users.
Direct Lake Only vs Composite Direct Lake
I view composite Direct Lake models as the best of both worlds, and this table summarizes their characteristics.
| Characteristic | Direct Lake Only | Composite (Direct Lake + Import), Public Preview |
| --- | --- | --- |
| Storage mode | Direct Lake | Some tables, such as dimensions, in Import mode; others, such as fact tables, in Direct Lake. Imported tables must be refreshed, such as overnight for dimension changes |
| Memory consumption | Columns used in reports are paged in and out | Refresh requires at least twice the memory of the imported objects |
| SQL views | No | Yes |
| Calculated columns | No | Yes, such as PATH and PATHITEM to flatten parent-child hierarchies outside ETL |
| User-defined hierarchies | Power BI reports | Power BI, Excel, and Anaplan |
| Power Query | No | Yes |
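The PATH/PATHITEM approach to flattening a parent-child hierarchy mentioned above can be sketched with calculated columns. This is a hedged sketch that assumes an Employee table with EmployeeKey and ParentEmployeeKey columns (the classic parent-child setup); column names are illustrative.

```dax
-- Sketch assuming an Employee table with EmployeeKey and ParentEmployeeKey columns
HierarchyPath = PATH ( Employee[EmployeeKey], Employee[ParentEmployeeKey] )

-- One calculated column per level to flatten the hierarchy
Level1 = PATHITEM ( Employee[HierarchyPath], 1, INTEGER )
Level2 = PATHITEM ( Employee[HierarchyPath], 2, INTEGER )
Level3 = PATHITEM ( Employee[HierarchyPath], 3, INTEGER )
```

The flattened Level columns can then be combined into a user-defined hierarchy for reporting.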
Therefore, composite Direct Lake models could be a good compromise between real-time BI and flexibility. For example, now you can implement the following configuration:
1. Put dimensions in Import mode and refresh them overnight, since they probably don’t change frequently anyway.
2. Keep large fact tables, or tables requiring real-time BI, in Direct Lake without refresh.
Lessons Learned
If composite models sound appealing, you might be eager to convert an existing Direct Lake model to composite. Here are some issues/gotchas that I ran into doing so:
The web modeling experience (currently the only way to add imported tables using Microsoft tooling) showed the Get Data buttons disabled. After some reverse-engineering of a brand-new model, I fixed it by changing the connection expression in the *.bim file to use OneLake (previously, it was pointing to a Fabric warehouse).
In my case, since there weren’t that many tables, I converted the dimension table partitions manually to “M” partitions, as in this example (change bolded text for each table):
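The original example isn’t reproduced here, but a TMSL-style sketch of what such an “M” partition might look like in the *.bim file follows. The server, database, and table names are placeholders to change for each table; treat this as an illustration, not the exact definition from the post.

```json
{
  "name": "DimDate-partition",
  "mode": "import",
  "source": {
    "type": "m",
    "expression": [
      "let",
      "    Source = Sql.Database(\"<server>\", \"<database>\"),",
      "    DimDate = Source{[Schema = \"dbo\", Item = \"DimDate\"]}[Data]",
      "in",
      "    DimDate"
    ]
  }
}
```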
After switching a partition from Direct Lake to Import in a PBI Desktop project and synchronizing it to the connected published model, Fabric rejected the change, complaining that an existing Direct Lake table can’t be switched to Import storage. As a workaround, I dropped the connected model.
Being in public preview, composite Direct Lake is rough around the edges. I got various complaints about missing credentials, which I fixed in the dataset settings.
Although the documentation says that web modeling is the only tooling experience, Power BI Desktop worked for me just as with the Direct Lake-only counterpart. However, Power Query and Get Data are currently available only on the web (unless you add the tables directly in the *.bim file).
A “Limited” Performance Note
I know everyone is interested in performance. I did some limited performance tests by tracing a massive query against equivalent Direct Lake-only and composite Direct Lake models. On a cold cache, composite outperformed Direct Lake-only by some 20%. On a warm cache, surprisingly, I saw the reverse: Direct Lake-only outperformed composite five to six times. Please don’t take these numbers at face value; more than likely, your results will vary. For example, I previously blogged that I saw much better performance with SWITCH…CASE in Import vs. Direct Lake. Test!
Atlanta BI fans, please join us in person for our next meeting on Monday, March 2nd at 18:30 ET. Shabnam Watson will show you how you can apply your SQL skills in Microsoft Fabric. And your humble correspondent will walk you through some of the latest Power BI and Fabric enhancements. I will sponsor the meeting. For more details and sign up, visit our group page.
Delivery: In-person | Level: Beginner/Intermediate | Food: Pizza and drinks will be provided
Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (news, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A
Overview: New to Microsoft Fabric? Don’t worry—you already know more than you think. In this beginner-friendly session, we’ll explore how your existing SQL skills translate directly into Fabric without needing to learn Spark, Python, or unfamiliar engineering tools. You’ll see how SQL can be applied across Fabric items to explore, shape, and analyze data with confidence. If you’re just beginning your Fabric journey, this session offers a simple, approachable path to success using the skills you already have.
Speaker: Shabnam is a business intelligence consultant and owner of ABI Cube, a company that specializes in delivering data solutions using the Microsoft Data Platform. She has over 20 years of experience and is recognized as a Microsoft Data Platform MVP for her technical excellence and community involvement. She is passionate about helping organizations harness the power of data to drive insights and innovation. She has a deep expertise in Microsoft Analysis Services, Power BI, Azure Synapse Analytics, and Microsoft Fabric. She is also a speaker, blogger, and organizer for SQL Saturday Atlanta – BI version, where she shares her knowledge and best practices with the data community.
Sponsor: Prologika (https://prologika.com) helps organizations of all sizes to make sense of data by delivering tailored BI solutions that drive actionable insights and maximize ROI. Your BI project will be your best investment, we guarantee it!
Prologika – Teo Lachev | 2026-02-25 | Atlanta Microsoft BI Group Meeting on March 2nd (Your First Steps in Microsoft Fabric Using Just SQL)
I’m helping an enterprise client modernize their data analytics estate. As part of this exercise, an SSAS Multidimensional financial cube must be converted to a Power BI semantic model. The challenge is that business users ask for near-real-time BI during the forecasting period, where a change in the source forecasting system must be quickly propagated to the reporting layer so users don’t sit around waiting to analyze the impact. An important part of this architecture is Fabric Direct Lake storage to eliminate refresh latency, but it came with a couple of gotchas.
Performance issues with calculated accounts
Financial MD cubes are notoriously difficult to convert to Tabular/Power BI because of advanced features that aren’t supported in the new world, such as Account Intelligence, scope assignments, parent-child hierarchies, and calculated dimension members. The latter presented a performance challenge. Consider the following MDX construct:
CREATE MEMBER CURRENTCUBE.[Account].[Accounts].&[1].[Calculations List].[ROI %]
AS IIF([Account].[Accounts].&[1].[Calculations List].[Average Invested Capital] = 0, NULL, ..,
FORMAT_STRING = "#,##0.0 %;-#,##0.0 %",
This construct adds an expression-based account as though the account is physically present in the chart of accounts. MD evaluates the MDX expression only for that account.
No such construct exists in Tabular. To provide a similar reporting experience, I attempted to overwrite the Value measure conditionally based on the “current” account, such as:
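The attempt looked roughly like the following. This is a hedged sketch only: [Value Base], [Average Invested Capital], and [Operating Income] are placeholders for the actual model’s measures, and the account column name is illustrative.

```dax
-- Hedged sketch: base measures and account/column names are placeholders
Value =
IF (
    SELECTEDVALUE ( 'Account'[Account] ) = "ROI %",
    VAR Capital = [Average Invested Capital]
    RETURN IF ( Capital = 0, BLANK (), DIVIDE ( [Operating Income], Capital ) ),
    [Value Base]
)
```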
However, no matter what I tried, report performance took a big hit (from milliseconds to 10+ seconds), even when the Calculations List account was excluded. Interestingly, report performance in Direct Lake fared two to three times worse than an equivalent Power BI imported model.
So, we had to scrap this approach in favor of one of these workarounds:
Pre-calculating the calculated accounts values (materializing)
Pros: same reporting behavior as MD, faster performance compared to MDX expressions
Cons: effort shifted to ETL, potentially impacting real-time forecasting if calculations must be recomputed with each change.
Separate DAX measures
Pros: formulas applied at runtime as in MD, no impact on ETL
Cons: different report experience
Excel dropping user-defined hierarchies
Excel never fails to disappoint me. Sad, considering its potential as an alternative reporting client, especially for financial users.
This time Excel pivots decided not to show user-defined hierarchies, which turns out to be a documented limitation for DirectQuery and Direct Lake. Microsoft provides no explanation, and I’m sure the Excel team has no plans to fix it, or to finally embrace DAX and Power BI semantic models.
Luckily, the client uses a third-party Excel-based tool, which provides a better report experience and supports user-defined hierarchies. If the Excel limitation becomes an issue, Fabric Direct Lake is expected to support composite models soon. This will let you implement models with hybrid storage, such as importing dimensions, which don’t change frequently, while leaving fact tables in Direct Lake. Excel does support user-defined hierarchies with imported tables.
Prologika – Teo Lachev | 2026-02-24 | A Couple of Direct Lake Gotchas
Atlanta BI fans, please join us in person for our next meeting on Monday, February 2nd at 18:30 ET. Sukhwant Kaur (Product Manager at Microsoft) will show you how to supercharge your Power BI reports with translytical taskflows. And your humble correspondent will walk you through some of the latest Power BI and Fabric enhancements. CloudStaff.ai will sponsor the meeting. For more details and sign up, visit our group page.
Delivery: In-person | Level: Beginner/Intermediate | Food: Pizza and drinks will be provided
Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (news, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A
Overview: Join us for an engaging session exploring how to build powerful translytical applications using Power BI, Functions, and SQL Database within Microsoft Fabric. We’ll discuss best practices for integrating analytics and transactional workloads, demonstrate real-world use cases, and provide actionable tips for leveraging Fabric’s unified platform. This talk is ideal for data professionals interested in bridging analytics and operations for enhanced business value.
Speaker: Sukhwant has served as a Product Manager at Microsoft for the past few development cycles. During this time, she’s focused on the entire product management lifecycle, from working with development teams and user experience to collaborating with cross-functional teams to drive customer satisfaction in ensuring our products not only meet but exceed customer expectations.
Sponsor: At CloudStaff.ai we’re making work MORE. HUMAN. We believe in the power of technology to enhance human potential, not replace it. Our innovative AI and automation solutions are designed to make work easier, more efficient, and more meaningful. We help businesses of all sizes streamline their operations, boost productivity, and solve real-world challenges. Our approach combines cutting-edge technology with a deep understanding of human needs, creating solutions that work the way people do! https://cloudstaff.ai
Prologika – Teo Lachev | 2026-01-28 | Atlanta Microsoft BI Group Meeting on February 2nd (Power BI Translytical Taskflows)
Happy New Year with oldies but goodies…especially given that this technique might be hard to find elsewhere! How do you implement a tooltip page that shows only the measure the end user hovers over? Consider this matrix with three measures on columns grouped by product categories on rows. The user wants to see a trend chart for the measure they hover over.
Understanding the challenge
Power BI has long supported custom tooltip pages. Passing member context, such as the selected product category, is not an issue. However, passing the selected measure won’t work without some blackbelt magic because measures are not field members.
And so, we will have to turn the measures used in the matrix into a dimension. Then, instead of adding measures to the matrix as you typically would, you use that dimension. You could probably follow a similar implementation approach using Power BI field parameters, but since I’m a purist, I went with a regular table.
As a disclaimer, on a real-life dashboard I’d rather show the trends in separate graphs side by side, so the end user can see everything at a glance instead of popping up tooltips, and because I’d have more control and flexibility over the visualization setup. But just in case you must use tooltips…
Adding a measure “dimension”
Start by adding a new table to the model any way you want. In this case, I used the Power BI Enter Data feature to create a MatrixMeasures table. This table lists the measures that will be added to the matrix. The Format column will be used for dynamic formatting later on.
Then, I added a DAX measure to the MatrixMeasures table as follows:
This formula dynamically switches the DAX measure based on the “current” matrix column. For example, the Sales column will use the existing [Sales] measure. If nothing is selected, it piggybacks on the [Sales] measure. This default provides data to work with when you’re configuring the tooltip chart (otherwise, nothing will be shown because the matrix doesn’t pass the context).
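Since the original formula isn’t reproduced here, a sketch of what the SelectedMeasure measure described above could look like follows. Only [Sales] is named in the text; [Orders] and [Margin] are assumed additional base measures for illustration.

```dax
-- Sketch: falls back to [Sales] when no measure is selected, as described above
SelectedMeasure =
SWITCH (
    SELECTEDVALUE ( MatrixMeasures[Measure], "Sales" ),
    "Sales", [Sales],
    "Orders", [Orders],   -- assumed measure
    "Margin", [Margin],   -- assumed measure
    [Sales]
)
```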
While we are in the DAX rabbit hole, let’s add one more measure for dynamic tooltip chart title:
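A sketch of that title measure might look like this; the “ by Month” suffix is an assumption about the tooltip chart, so adjust it to match your visual.

```dax
-- Sketch: builds the tooltip chart title from the selected measure name
SelectedTitle =
SELECTEDVALUE ( MatrixMeasures[Measure], "Sales" ) & " by Month"
```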
Next, remove all the individual measures from the matrix and add only the SelectedMeasure measure to the matrix Values well. At this point, the matrix should look like before, except that the measure format settings would be lost.
Tip: If the measures need to be sorted in a specific way, add a new column to the MatrixMeasures table that defines the custom sort order. Then, sort the Measure column by this new column.
To fix formatting:
Select the SelectedMeasure field in the Data pane.
In the Measure Tools ribbon, change the Format dropdown to Dynamic.
In the formula bar, enter this formula to dynamically change the measure format using the format specified in the Format column of the MatrixMeasures table:
SELECTEDVALUE(MatrixMeasures[Format])
Then, implement the tooltip page and hook it up to the page with the matrix, as explained here. To change the chart title dynamically in the tooltip page, bind it to the SelectedTitle measure.
There you have it. With some wizardry and DAX, you have a measure context-aware tooltip.
Atlanta BI fans, please join us in person for our next meeting on Monday, January 5th at 18:30 ET. Dean Jurecic will show you how Power BI visual calculations can simplify the process of writing DAX. And your humble correspondent will walk you through some of the latest Power BI and Fabric enhancements. Key2 Consulting will sponsor the meeting. For more details and sign up, visit our group page.
Delivery: In-person | Level: Beginner/Intermediate | Food: Pizza and drinks will be provided
Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (news, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A
Overview: Do you sometimes get lost in a sea of complicated DAX and wonder if there is an easier way? Is it difficult to drive self-service reporting in your organization because business users aren’t familiar with the nuances of DAX and Semantic Models? Visual Calculations might be able to help!
Introduced in 2024 and currently in preview, this feature is designed to simplify the process of writing DAX and combines the simplicity of calculated columns with the on-demand calculation flexibility of measures. This session is an overview of Visual Calculations and how they can be used to quickly produce results including:
• Background
• Example Use Cases
• Performance
• Considerations and Limitations
Speaker: Dean Jurecic is a business intelligence analyst and consultant specializing in Power BI and Microsoft Fabric with experience across diverse industries, including utilities, retail, government, and education. Dean is a Fabric Community Super User who holds a number of Microsoft certifications and has participated in the “Ask the Experts” program for Power BI at the Microsoft Fabric Community Conference.
Sponsor: Key2 Consulting is a cloud analytics consultancy that helps business leaders maximize their data. We are a Microsoft Gold-Certified Partner and our specialty is the Microsoft cloud analytics stack (Azure, Power BI, SQL Server).
Prologika – Teo Lachev | 2025-12-30 | Atlanta Microsoft BI Group Meeting on January 5th (Visual Calculations in Power BI)
Telegraph sang a song about the world outside
Telegraph road got so deep and so wide
Like a rolling river…
— “Telegraph Road,” Dire Straits
At Ignite in November 2025, Microsoft introduced Fabric IQ. I made a note to go beyond the marketing hype and check whether Fabric IQ makes any sense. The next thing I know, around the holidays I’m talking about ontologies with an enterprise strategy manager from an airline company and a McKinsey consultant.
Ontology – A branch of philosophy, ontology is the study of being that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. In computer science and AI, ontology refers to a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.
So, what better way to spend the holidays than to play with new shaky software?
What is Fabric IQ?
According to Microsoft, Fabric IQ is “a unified intelligence platform developed by Microsoft that enhances data management and decision-making through semantic understanding and AI capabilities.” Clear enough? If not, consider this: if you view Fabric as Microsoft’s answer to Palantir’s Foundry, then Fabric IQ is the Microsoft equivalent of the Foundry Ontology, whose success apparently inspired Microsoft.
Therefore, my unassuming layman definition of Fabric IQ is a metadata layer on top of data in Fabric that defines entities and their relationships so that AI can make sense of and relate the underlying data.
For example, you may have an organizational semantic model built on top of an enterprise data warehouse (EDW) that spans several subject areas. And then you might have some data that isn’t in EDW and therefore outside the semantic model, such as HR file extracts in a lakehouse. You can use Fabric IQ as a glue that bridges that data together. And so, when the user asks the agent “correlate revenue by employee with hours they worked”, the agent knows where to go for answers.
Following this line of thinking, Microsoft BI practitioners may view Fabric IQ as a Power BI composite semantic model on steroids. The big difference is that a composite model can only reference other semantic models while Fabric IQ can span data in multiple formats.
The Good
Palantir had a head start of a decade or so compared to Microsoft Fabric, yet even in its preview stage, I like a thing or two about Fabric IQ from what I’ve seen so far:
Its ontology can span Power BI semantic models (with caveats explained in the next section), powered by best-in-class technology. As I mentioned before, this lets you bridge all the business logic and calculations you carefully crafted in a semantic model to the rest of your Fabric data estate.
Fabric IQ integrates with other Microsoft technologies, such as Real-Time Intelligence (eventhouses), Copilot Studio, and Graph. This tight integration turns Fabric into a true “intelligence platform,” reducing duplicated logic, one-off models, and maintenance while enabling multi-hop reasoning and real-time operational agents.
Democratized and no-code friendly – Visual tools allow business users to build and evolve the ontology, lowering barriers compared to more engineering-heavy alternatives. Making it easy to use has always been a Microsoft strength.
Groundbreaking semantics for AI Agents: Fabric IQ elevates AI from pattern-matching to true business understanding, allowing agents to reason over cascading effects, constraints, and objectives—leading to more reliable, auditable decisions and automation.
Compared to Palantir, I also like that Fabric OneLake has standardized on the open Delta Parquet format and embraced data movement tools that Microsoft BI pros and business users already know, such as dataflows and pipelines, to bring data into Fabric and therefore into Fabric IQ.
The Bad
I hope some of these limitations will be lifted after the preview but:
Only Direct Lake semantic models are accessible to AI agents. Import and DirectQuery models are not currently supported for entity and relationship binding. Not only does this limitation rule out pretty much 99.9% of existing semantic models, but it also prevents useful business scenarios, such as accessing the data where it lives with DirectQuery instead of duplicating it in OneLake.
No automatic ontology building – it requires cross-functional agreement on business definitions, workshops, and governance, which is labor-intensive for organizations without mature semantic models. I hope Microsoft will simplify this process, similar to how Purview has automated scans.
Risk of overhype vs. delivery gap – We’ve seen this before when new products got unveiled with a lot of fanfare, only to be abandoned later.
The Ugly
OneLake-centric dependency. Except for shortcuts to Delta Parquet files, which can remain external, your data must be in OneLake. What about the enterprises with investments in Google BigQuery, Teradata, Snowflake, and even SQL Server or Azure SQL DB? Gotta bring that data over to OneLake. Even shortcut transformations of CSV, Parquet, and JSON files in OneLake, S3, or Google Cloud Storage will copy the data to OneLake. By contrast, Palantir has at least limited support for virtual tables over some popular file formats, such as Parquet, Iceberg, and Delta.
What happened to all the investments Microsoft has made in data virtualization and logical data warehouses over the years, such as PolyBase and the deprecated Polaris engine in Synapse Serverless? What’s this fascination with copying data and having all the data in OneLake? Why can’t we build Fabric IQ on top of true data virtualization?
This is where I thought semantic models with DirectQuery could be used as a workaround to avoid copying data from supported data sources, but alas, Fabric IQ doesn’t support them yet.
Summary
Microsoft Fabric IQ is a metadata layer on top of Fabric data for building ontologies and exposing relevant data to AI reasoning. It will undoubtedly be appealing to enterprise customers with complex data estates and existing investments in Power BI and Fabric. However, as it stands, Fabric IQ is OneLake-centric. Expect Microsoft to invest heavily in Fabric and Fabric IQ to compete better with Palantir.
Prologika – Teo Lachev | 2025-12-27 | First Look at Fabric IQ: The Good, The Bad, and The Ugly
What performance and cost considerations should you keep in mind if you are currently on Power BI Pro/PPU, but Fabric looks increasingly enticing and you want to upgrade an existing workspace to Fabric? For example, let’s say you’ve started with a pay-per-user workspace, but now you want that workspace to have Fabric features, such as Copilot, Lakehouse, etc. Or, as a typical use case for small to mid-size companies, you could have a Corporate BI workspace with org semantic model(s) that you want to transition to Fabric, such as to take advantage of DirectLake.
Performance
Performance is difficult to translate because Power BI Pro/PPU run on shared capacity, meaning compute resources (v‑cores) are pooled across many tenants and dynamically allocated, whereas Fabric capacities are dedicated, meaning Microsoft grants specific resources expressed as a number of cores and memory. Therefore, Fabric performance is predictable while Pro/PPU might not be, although I’m yet to hear from a client complaining about unpredictable performance.
Also, keep in mind that Power BI Pro limits you to 1 GB per dataset, PPU to 100 GB per dataset, while Fabric starts at 3 GB per dataset with F2 and doubles the grant up the chain. This is important for semantic models with imported data.
Although the tool wasn’t designed for estimating upgrade scenarios, you could start with the Fabric Capacity Estimator (preview) to get an initial ballpark estimate for the Fabric capacity. Start low, then monitor the capacity performance using the Microsoft Fabric Capacity Metrics app and be prepared to upgrade if necessary, such as when more parallelism is needed.
Cost
This is easier. Here are the advertised, undiscounted and unreserved prices:
Power BI Pro: $14/user/month (free with M365 E5 plan)
PPU: $24/user/month ($14 discount with M365 E5 plan)
Fabric: Starts at $262.80 per month with F2 and doubles in price up the chain. Finding the capacity you need requires evaluating the workloads you will run to ensure you have enough resources.
It’s important to note that Fabric capacities lower than F64 require a Power BI Pro license for every user who accesses shared content, whether they view or create it. Microsoft Copilot and ChatGPT got this wrong by adamantly claiming that viewers don’t require a Pro license, while Grok got it right, so be careful which agent you use when researching. The Fabric Capacity Estimator also correctly identifies the required Pro licenses.
Of course, Fabric gives you features that are unfortunately not available in the pay-per-user licensing plans, so the actual decision in favor of Fabric will probably transcend just performance and cost. When evaluating the performance of the lower Fabric SKUs, you might find the following blogs I wrote on this subject helpful:
Prologika – Teo Lachev | 2025-12-24 | Performance and Cost Considerations from Power BI Pro/PPU to Fabric