Power BI Dynamic Measure Tooltip

Happy New Year with oldies but goodies…especially given that this technique might be hard to find elsewhere! How do you implement a tooltip page that shows only the measure the end user hovers over? Consider this matrix with three measures on columns grouped by product categories on rows. The user wants to see a trend chart for the measure they hover over.

Understanding the challenge

Power BI has long supported custom tooltip pages. Passing member context, such as the selected product category, is not an issue. However, passing the selected measure won’t work without some blackbelt magic because measures are not field members.

And so, we will have to turn the measures used in the matrix into a dimension. Then, instead of adding measures to the matrix as you typically would, you use the dimension. You could probably follow a similar implementation approach using Power BI field parameters, but since I’m a purist, I went with a regular table.

As a disclaimer, I’d rather show the trends in separate graphs side by side on a real-life dashboard, so the end user can see everything at a glance instead of popping up tooltips, and because I’ll have more control and flexibility over the visualization setup. But, just in case you must use tooltips…

Adding a measure “dimension”

Start by adding a new table to the model any way you want. In this case, I used the Power BI Enter Data feature to create a MatrixMeasures table. This table lists the measures that will be added to the matrix. The Format column will be used for dynamic formatting later on.
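If you prefer to script the table instead of typing it in, a calculated table produces the same result. Here is a minimal sketch; the actual format strings are assumptions inferred from the measure names:

```
// Calculated-table equivalent of the Enter Data table (format strings assumed)
MatrixMeasures =
DATATABLE (
    "Measure", STRING,
    "Format", STRING,
    {
        { "Sales", "#,0" },
        { "PY Sales", "#,0" },
        { "% Growth", "0.0%" }
    }
)
```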

Then, I added a DAX measure to the MatrixMeasures table as follows:

SelectedMeasure =
VAR _measure = SELECTEDVALUE(MatrixMeasures[Measure])
RETURN
SWITCH(TRUE(),
_measure = "Sales", [Sales],
_measure = "PY Sales", [PY Sales],
_measure = "% Growth", [% Growth],
[Sales]
)

This formula dynamically switches the DAX measure based on the “current” matrix column. For example, the Sales column will use the existing [Sales] measure. If nothing is selected, it piggybacks on the [Sales] measure. This default provides data to work with when you’re configuring the tooltip chart (otherwise, nothing will be shown because the matrix doesn’t pass the context).

While we are in the DAX rabbit hole, let’s add one more measure for a dynamic tooltip chart title:

SelectedTitle = SELECTEDVALUE(MatrixMeasures[Measure])
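As with SelectedMeasure, you may want a design-time default so the title isn’t blank before the matrix passes context. A hedged variant, assuming “Sales” is an acceptable fallback, uses the second argument of SELECTEDVALUE:

```
// Falls back to "Sales" when no single measure is in filter context
SelectedTitle = SELECTEDVALUE ( MatrixMeasures[Measure], "Sales" )
```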

Making report changes

Next, remove all the individual measures from the matrix and add only the SelectedMeasure measure to the matrix Values well. At this point, the matrix should look the same as before, except that the measure format settings will be lost.

Tip: If the measures need to be sorted in a specific way, add a new column to the MatrixMeasures table that defines the custom sort order. Then, sort the Measure column by this new column.

To fix formatting:

  1. Select the SelectedMeasure field in the Data pane.
  2. In the Measure Tools ribbon, change the Format dropdown to Dynamic.
  3. In the formula bar, enter this formula to dynamically change the measure format using the format specified in the Format column of the MatrixMeasures table:
SELECTEDVALUE(MatrixMeasures[Format])
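SELECTEDVALUE returns BLANK when no single measure is in context, which leaves the design-time default value unformatted. A slightly more defensive variant, assuming “#,0” is the right format for the default [Sales] measure:

```
// Falls back to a whole-number format (assumed) when no measure is selected
SELECTEDVALUE ( MatrixMeasures[Format], "#,0" )
```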

Then, implement the tooltip page and hook it up to the page with the matrix, as explained here. To change the chart title dynamically in the tooltip page, bind it to the SelectedTitle measure.

There you have it. With some wizardry and DAX, you have a measure context-aware tooltip.

Atlanta Microsoft BI Group Meeting on January 5th (Visual Calculations in Power BI)

Atlanta BI fans, please join us in person for our next meeting on Monday, January 5th at 18:30 ET. Dean Jurecic will show you how Power BI visual calculations can simplify the process of writing DAX. And your humble correspondent will walk you through some of the latest Power BI and Fabric enhancements. Key2 Consulting will sponsor the meeting. For more details and sign up, visit our group page.

Delivery: In-person
Level: Beginner/Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (news, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Overview: Do you sometimes get lost in a sea of complicated DAX and wonder if there is an easier way? Is it difficult to drive self-service reporting in your organization because business users aren’t familiar with the nuances of DAX and Semantic Models? Visual Calculations might be able to help!

Introduced in 2024 and currently in preview, this feature is designed to simplify the process of writing DAX and combines the simplicity of calculated columns with the on-demand calculation flexibility of measures. This session is an overview of Visual Calculations and how they can be used to quickly produce results including:
• Background
• Example Use Cases
• Performance
• Considerations and Limitations

Speaker: Dean Jurecic is a business intelligence analyst and consultant specializing in Power BI and Microsoft Fabric with experience across diverse industries, including utilities, retail, government, and education. Dean is a Fabric Community Super User who holds a number of Microsoft certifications and has participated in the “Ask the Experts” program for Power BI at the Microsoft Fabric Community Conference.

Sponsor: Key2 Consulting is a cloud analytics consultancy that helps business leaders maximize their data. We are a Microsoft Gold-Certified Partner and our specialty is the Microsoft cloud analytics stack (Azure, Power BI, SQL Server).


First Look at Fabric IQ: The Good, The Bad, and The Ugly

Telegraph sang a song about the world outside
Telegraph road got so deep and so wide
Like a rolling river…

Telegraph Road, Dire Straits

At Ignite in November 2025, Microsoft introduced Fabric IQ. I made a note to go beyond the marketing hype and check if Fabric IQ makes any sense. The next thing I know, around the holidays I’m talking about ontologies with an enterprise strategy manager from an airline company and a McKinsey consultant.

Ontology – A branch of philosophy, ontology is the study of being that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. In computer science and AI, ontology refers to a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.

So, what better way to spend the holidays than to play with new shaky software?

What is Fabric IQ?

According to Microsoft, Fabric IQ is “a unified intelligence platform developed by Microsoft that enhances data management and decision-making through semantic understanding and AI capabilities.” Clear enough? If not, consider this: if you view Fabric as Microsoft’s answer to Palantir’s Foundry, then Fabric IQ is the Microsoft equivalent of Palantir’s Foundry Ontology, whose success apparently inspired Microsoft.

Therefore, my unassuming layman definition of Fabric IQ is a metadata layer on top of data in Fabric that defines entities and their relationships so that AI can make sense of and relate the underlying data.

For example, you may have an organizational semantic model built on top of an enterprise data warehouse (EDW) that spans several subject areas. And then you might have some data that isn’t in EDW and therefore outside the semantic model, such as HR file extracts in a lakehouse. You can use Fabric IQ as a glue that bridges that data together. And so, when the user asks the agent “correlate revenue by employee with hours they worked”, the agent knows where to go for answers.

Following this line of thinking, Microsoft BI practitioners may view Fabric IQ as a Power BI composite semantic model on steroids. The big difference is that a composite model can only reference other semantic models while Fabric IQ can span data in multiple formats.

The Good

Palantir had a head start of a decade or so compared to Microsoft Fabric, yet even in its preview stage, I like a thing or two about Fabric IQ from what I’ve seen so far:

  • Its ontology can span Power BI semantic models (with caveats explained in the next section), powered by best-in-class technology. As I mentioned before, this allows you to bridge all the business logic and calculations you carefully crafted in a semantic model to the rest of your Fabric data estate.
  • Fabric IQ integrates with other Microsoft technologies, such as real-time intelligence (eventhouses), Copilot Studio, Graph. This tight integration turns Fabric into a true “intelligence platform,” reducing duplicated logic, one-off models, and maintenance while enabling multi-hop reasoning and real-time operational agents.
  • Democratized and no-code friendly – Visual tools allow business users to build and evolve the ontology, lowering barriers compared to more engineering-heavy alternatives. Making it easy to use has always been a Microsoft strength.
  • Groundbreaking semantics for AI Agents: Fabric IQ elevates AI from pattern-matching to true business understanding, allowing agents to reason over cascading effects, constraints, and objectives—leading to more reliable, auditable decisions and automation.
  • Compared to Palantir, I also like that Fabric OneLake has standardized on the open Delta Parquet format and embraced data movement tools Microsoft BI pros and business users are already familiar with, such as Dataflows and pipelines, to bring data into Fabric and therefore Fabric IQ.

The Bad

I hope some of these limitations will be lifted after the preview but:

  • Only DirectLake semantic models are accessible to AI agents. Import and DirectQuery models are not currently supported for entity and relationship binding. Not only does this limitation rule out pretty much 99.9% of the existing semantic models, but it also prevents useful business scenarios, such as accessing the data where it is with DirectQuery instead of duplicating it in OneLake.
  • No automatic ontology building – It requires cross-functional agreement on business definitions, workshops, and governance—labor-intensive for organizations without mature semantic models. I hope Microsoft will simplify this process like how Purview has automated scans.
  • Risk of overhype vs. delivery gap – We’ve seen this before when new products got unveiled with a lot of fanfare, only to be abandoned later.

The Ugly

OneLake-centric dependency. Except for shortcuts to Delta Parquet files, which can be kept external, your data must be in OneLake. What about the enterprises with investments in Google BigQuery, Teradata, Snowflake, and even SQL Server or Azure SQL DB? Gotta bring that data over to OneLake. Even shortcut transformations to CSV, Parquet, and JSON files in OneLake, S3, or Google Cloud Storage will copy the data to OneLake. By contrast, Palantir has limited support for virtual tables for some popular file formats, such as Parquet, Iceberg, and Delta.

What happened to all the investments in data virtualization and logical warehouses that Microsoft has made over the years, such as PolyBase and the deprecated Polaris in Synapse Serverless? What’s this fascination with copying data and having all the data in OneLake? Why can’t we build Fabric IQ on top of true data virtualization?

This is where I thought semantic models with DirectQuery could be used as a workaround to avoid copying data over from supported data sources, but alas, Fabric IQ doesn’t like them yet.

Summary

Microsoft Fabric IQ is a metadata layer on top of Fabric data to build ontologies and expose relevant data to AI reasoning. It will be undoubtedly appealing to enterprise customers with complex data estates and existing investments in Power BI and Fabric. However, as it stands, Fabric IQ is OneLake-centric. Expect Microsoft to invest heavily in Fabric and Fabric IQ to compete better with Palantir.

Performance and Cost Considerations from Power BI Pro/PPU to Fabric

What performance and cost considerations should you keep in mind if you are currently on Power BI Pro/PPU, but Fabric looks increasingly enticing and you want to upgrade an existing workspace to Fabric? For example, let’s say you’ve started with a pay-per-user workspace, but now you want that workspace to have Fabric features, such as Copilot, Lakehouse, etc. Or, as a typical use case for small to mid-size companies, you could have a Corporate BI workspace with org semantic model(s) that you want to transition to Fabric, such as to take advantage of DirectLake.

Performance

Performance is difficult to translate because Power BI Pro/PPU run in a shared capacity, meaning compute resources (v‑cores) are pooled across many tenants and dynamically allocated, whereas Fabric capacities are dedicated, meaning that Microsoft grants specific resources expressed as a number of cores and memory. Therefore, Fabric performance is predictable while Pro/PPU might not be, although I’ve yet to hear a client complain about unpredictable performance.

Also, keep in mind that Power BI Pro limits you to a quota of 1 GB per dataset, PPU to 100 GB per dataset, and Fabric starts at 3 GB per dataset with F2 and doubles the grant up the chain. This is important for semantic models with imported data.

Although the tool wasn’t designed for estimating upgrade scenarios, you could start with the Fabric Capacity Estimator (preview) to get an initial ballpark estimate for the Fabric capacity. Start low, then monitor the capacity performance using the Microsoft Fabric Capacity Metrics app and be prepared to upgrade if necessary, such as when more parallelism is needed. 

Cost

This is easier. Here are the advertised, undiscounted and unreserved prices:

  • Power BI Pro: $14/user/month (free with M365 E5 plan)
  • PPU: $24/user/month ($14 discount with M365 E5 plan)
  • Fabric: Starts at $262.80 per month with F2 and doubles the price up the chain. Finding what capacity you need requires evaluating what workloads you will be running to ensure you have enough resources.

It’s important to note that Fabric capacities lower than F64 require a Power BI Pro license for every user who accesses shared content, regardless of whether they view or create content. Microsoft Copilot and ChatGPT got this wrong by adamantly claiming that viewers don’t require a Pro license, while Grok got it right, so be careful which agent you use when researching. The Fabric Capacity Estimator also correctly identifies the required Pro licenses.

Of course, Fabric gives you features unfortunately not available in the pay-per-user licensing plans, so the actual decision in favor of Fabric will probably transcend just performance and cost. When evaluating the performance of the lower Fabric SKUs, you might find the following blogs I wrote on this subject helpful:

Notes on Fabric F2 Performance: Warehouse ETL

Notes on Fabric F2 Performance: Report Load

 

Prologika Newsletter Winter 2025


If Microsoft Fabric is in your future, you need to come up with a strategy to get your data in Fabric OneLake. That’s because the holy grail of Fabric is the Delta Parquet file format. The good news is that all Fabric data ingestion options (Dataflows Gen 2, pipelines, Copy Job and notebooks) support this format and the Microsoft V-Order extension that’s important for Direct Lake performance. Fabric also supports mirroring data from a growing list of data sources. This could be useful if your data is outside Fabric, such as EDW hosted in Google BigQuery, which is the scenario discussed in this newsletter.

Avoiding mirroring issues

A recent engagement required replicating some DW tables from Google BigQuery to a Fabric Lakehouse. We considered the Fabric mirroring feature for Google BigQuery (back then in private preview, now in public preview) and learned some lessons along the way:

1. 400 Error during replication configuration – Caused by attempting to use a read-only GBQ dataset that is linked to another GBQ dataset, but the link was broken.

2. Internal System Error – Again caused by GBQ linked datasets, which are read-only. Fabric mirroring requires GBQ change history to be enabled on tables so that it can track changes and mirror only incremental changes after the first initial load.

3. (Showstopper for this project) The two permissions that raised security red flags are bigquery.datasets.create and bigquery.jobs.create. To grant those permissions, you must assign one of these BigQuery roles:

• BigQuery Admin
• BigQuery Data Editor
• BigQuery Data Owner
• BigQuery Studio Admin
• BigQuery User

All these roles grant other permissions, and the client was cautious about data security. In the end, we ended up using a nightly Fabric Copy Job to replicate the data.

Fabric Copy Job Pros and Cons

The client was overall pleased with the Fabric Copy Job.

Pros

  • 250 million rows replicated in 30-40 seconds!
  • A single job can replicate all tables in Overwrite mode.
  • In the simplest case, you don’t need to create pipelines.

Cons

The Copy Job is a work in progress and subject to various limitations.

  • No incremental extraction
  • You can’t mix different load options (Append and Overwrite), so you must split tables into separate jobs
  • No custom SQL SELECT when copying multiple tables
  • (Bug) Lost explicit column bindings when making changes
  • Cannot change the job’s JSON file
  • The user interface is clunky and difficult to work with
  • No failure notification mechanism. As a workaround, add the Copy Job to a data pipeline or call it via the REST API

Summary

In summary, the Fabric Google BigQuery built-in mirroring could be useful for real-time data replication. However, it relies on GBQ change history which requires certain permissions. Kudos to Microsoft for their excellent support during the private preview.


Teo Lachev
Prologika, LLC | Making Sense of Data

Power BI Workspace Identity Authentication

What credentials do you use to refresh your Power BI semantic models from Azure SQL SKUs, such as Azure SQL Database? Probably your credentials or a designated Entra account? Neither is ideal for a variety of reasons, including requiring a password. More advanced users might be using service principals, which are more secure but require secret renewal after a maximum of 24 months, which is a hassle.

Somewhere along the way, without me noticing, Microsoft added a better authentication option for refreshing Power BI semantic models: workspace identity. This option lets the Power BI workspace use its own managed identity to authenticate to the data source. And it’s available in all Power BI and Fabric SKUs!

What’s not clear from the documentation is how to grant the workspace identity permissions to read data from Azure SQL SKUs, but it’s no different than granting access to the Azure Data Factory managed identity.

  1. Create the workspace identity as explained in the documentation. It has to be done for each workspace that has your published model(s).
  2. In SSMS, connect to your Azure SQL Database using Entra credentials that have permissions to manage security (a SQL login won’t work).
  3. Open a new query connected to your database.
  4. Execute the following script assuming you want to grant read permissions to the workspace identity:
CREATE USER [<workspace name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [<workspace name>];
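If db_datareader grants more access than you’re comfortable with, you could scope the grant to a single schema instead of using the role. This is a sketch under the assumption that the model reads only from the dbo schema:

```sql
-- Least-privilege alternative: grant read access to one schema only
CREATE USER [<workspace name>] FROM EXTERNAL PROVIDER;
GRANT SELECT ON SCHEMA::dbo TO [<workspace name>];
```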

Then, back in Power BI, configure your semantic model for workspace identity authentication:

  1. Navigate to the semantic model settings and click “Edit credentials”.
  2. Select “Workspace identity” as the authentication method.


That’s it. Using the workspace identity to read data during model refresh is more secure and easier to manage.