Kudos to Tabular Editor!

What tool do you use for Analysis Services Tabular development? SSDT, right? What else? Here is a little secret: I almost don't use SSDT anymore, except for limited tasks, such as importing new tables and visualizing relationships. I switched to a great community tool – Tabular Editor – and you should too if you're frustrated with the SSDT Tabular Designer. Back in 2012, Microsoft ported the Power Pivot designer to SSDT to let BI practitioners implement Tabular models. This is why you still get weird errors stating that Excel has encountered some error. Microsoft hasn't made any "professional" optimizations despite all the attention that Tabular gets. As a result, developers face:

  • Performance issues – As your model grows in complexity, even simple changes, such as renaming columns, get progressively slower. The problem, of course, is that any change results in a commit operation to the workspace database. SSDT requires a workspace database for the Data View, but the workspace database slows down all tasks even if it doesn't have data. While the Data View is useful for data analysts, I'd personally rather sacrifice it to gain development speed.
  • The horrible measure grid – Enough said. To Microsoft's credit, the Tabular Explorer helps somewhat, but it still doesn't support the equivalent of the SSAS MD script editor.
  • No automation for repetitive tasks – It's not unusual to create many measure variants, such as YTD and QTD. SSDT doesn't help much with automating them.

Tabular Editor to the rescue! It will probably take you some time to get used to it and, as the author, Daniel Otykier, admits, "…the tool is probably not well-suited for first time Tabular developers". But once you get used to it, you probably won't go back to SSDT. No more workspace databases, and lightning-fast performance! And the cherry on top is that you can write scripts to automate repetitive tasks. Say, for example, you have a bunch of base numeric columns that you want to convert to explicit measures to make Excel happy (don't get me started on the Excel support for Tabular). What is a developer to do? Write some code, of course. With some C# knowledge, you whip up the following script:

Selected.Columns.ForEach( c => {
    // Name the measure after the column, stripping the " Base" suffix (5 characters)
    var myM = Selected.Table.AddMeasure(
        c.Name.Substring(0, c.Name.Length - 5),
        // The measure sums the underlying base column
        String.Format("SUM('{0}'[{1}])", Selected.Table.Name, c.Name));
    myM.FormatString = @"#,##0";
});

This script enumerates the selected columns and creates a measure for each column that sums the underlying column. For example, if the base column is Sales Amount Base in the Reseller Sales table, the script will create a measure Sales Amount = SUM('Reseller Sales'[Sales Amount Base]). Then, you can save the script as a custom action (not to be confused with AS actions). Next, select all the "base" columns, invoke the custom action, and voila! All the measures are autogenerated! You can then select all these measures and change their properties, such as assigning them all to a display folder. You also have much better control over deploying your changes to the server. Everything is simple with Tabular Editor.
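That last step is itself scriptable. Here is a minimal sketch, assuming Tabular Editor's Selected collection and a made-up folder name:

// Assign every selected measure to a display folder in one pass
// ("Base Measures" is a hypothetical folder name; change it to suit your model)
Selected.Measures.ForEach( m => {
    m.DisplayFolder = "Base Measures";
});

You can save this as another custom action and apply it to any selection of measures going forward.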


Tabular Editor is a great community tool that can help you implement and maintain Tabular models much faster than SSDT. I highly recommend it if you’re frustrated with the SSDT developer experience. Kudos to Daniel Otykier and the other community members who contributed to this tool.

ADF or SSIS, That Is the Question

Microsoft introduced Azure Data Factory (ADF) in 2015 to handle a specific scenario: tumbling-window loads for Hadoop and other big data systems for internal Microsoft usage. Then a need arose for a PaaS public-facing data integration platform, which led to an interesting dilemma: SSIS or something else? Since SSIS wasn't designed to scale out in the cloud, ADF rose to the challenge. I could argue that a better approach would have been migrating the SSIS design experience to the cloud to retain existing skills and make thousands of BI developers feel at home, as Microsoft did with the SQL Server Database Engine and SSAS, but it's a moot point now. So, ADF was incorrectly positioned as "SSIS for the Cloud", and unfortunately, once that message made it out there, Microsoft has been fighting a messaging problem ever since. Like Azure ML, on the glory road to the cloud, things that were difficult with SSIS (installation, projects, deployment) became simple, and things that were simple became difficult. Naturally, Microsoft took a lot of criticism from customers and the community, including from your humble correspondent. ADF, of course, has nothing to do with SSIS, thus leaving many data integration practitioners with a difficult choice: should you take the risk and take the road less traveled with ADF, or continue with the tried-and-true SSIS for data integration on Azure?

To Microsoft's credit, ADF v2 has made significant enhancements in features, usability, and maintainability. There is also a "lift and shift" option to run SSIS inside ADF, but since this architecture requires a VM, I consider it a niche scenario, such as when you need to extend ADF with SSIS features that it doesn't have. Otherwise, why would you start new development with SSIS hosted under ADF, if you could provision and license the VM yourself and have full control over it?

SSIS has been evolving for 13 years (or 20 years if you count DTS), and it has a rich ecosystem of add-ons to tackle challenging data integration needs. ADF, on the other hand, has a humble three-year history. It would probably take Microsoft another decade to get ADF to where SSIS is today. Here are some limitations that you'll probably run into:

  • ADF supports a limited set of connectors (even Excel is not on the list).
  • There is practically no extensibility, except for custom activities (which require .NET coding).
  • The ADF Execute Pipeline activity doesn't support dynamic content for the pipeline name. This means that you can't dynamically bind the activity to execute pipelines (an ADF pipeline fulfills the same role as a package in the SSIS world) from a list stored in a database table. Although this sounds trivial, it won't let you implement a comprehensive ETL framework. An ETL framework is important from an architectural standpoint, as any self-respecting data integration developer would agree. For instance, our SSIS ETL framework complements the SSIS catalog with features that SSIS doesn't have, such as declarative parallelism, restartability, actual vs. target duration monitoring, logging, and others that I discussed in my "Is ETL (E)ating (T)hou (L)ive?" newsletter. Unfortunately, it's not possible to migrate or implement such frameworks in ADF because of the above-mentioned Execute Pipeline restriction.
  • Continuing with limitations, ADF lags considerably behind SSIS in the areas of troubleshooting and monitoring. For example, if you misconfigure a previous step, you'll get "Activity failed because an inner activity failed" during debugging, with no way to troubleshoot it. This took hours to troubleshoot on a recent project! Of course, ADF is a cloud technology, so I hope the momentum behind it continues and the feature gap narrows to bring ADF on a par with SSIS.

As of now, consider Azure Data Factory for pure PaaS solutions (sometimes, this is mandated by upper management and you don't have much latitude). ADF would also be a good choice when your data integration loads need to scale, such as when you run a lot of ETL jobs that might be difficult to scale on a VM running SSIS. If you decide on ADF, simplify your data integration pipeline as much as possible. Strongly consider the ELT pattern instead of ETL (see that newsletter again if you don't know what that means). Although all Microsoft cloud architecture diagrams prominently show ADF, my personal preference for cloud data integration projects is still SSIS on a VM with the Azure Feature Pack.

I’d love to hear your feedback.

Power BI Features Report

Want to know what Power BI features were released in a certain time period? Check out the Power BI Features report. After some delay, you should see the report embedded on the page, but please be patient. If you don't have the patience, you can also download the pbix file from the same page. Then, use the slicer on the first page to filter your date range. In the "Count of Category" bar chart, right-click the category and then click See Records to see the actual features. Once you drill through the category, there is a link next to each feature that redirects you to the corresponding blog post to learn more.


The Cost of xVelocity Relationships

How expensive is an xVelocity relationship? The answer depends on several factors, such as column cardinality, DAX calculations, and the query itself. But in general, a relationship can add significant overhead. Consider two tables, Customer (7.1 million rows) and Account Snapshot (187 million rows), and a relationship 'Account Snapshot'[CustomerKey] -> Customer[CustomerKey]. I'll use two queries for the test. The Relationship query uses the 'Account Snapshot'[CustomerKey] -> Customer[CustomerKey] relationship, while the No Relationship query scans the Account Snapshot table directly, bypassing the relationship.
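The model itself isn't shown, but judging by the storage engine call later in this post, the [Balance] measure used by both queries is presumably a simple sum over the snapshot table (an assumption on my part):

Balance := SUM ( 'Account Snapshot'[Balance Base] )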

The Relationship query:

EVALUATE
CALCULATETABLE (
    ADDCOLUMNS (
        FILTER (
            VALUES ( 'Customer'[CustomerKey], … ),
            'Customer'[CustomerKey] IN { 731102730822895922 }
        ),
        "Balance", [Balance]
    ),
    FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = DATE ( 2018, 3, 31 ) )
)

The No Relationship query:

EVALUATE
CALCULATETABLE (
    ADDCOLUMNS (
        FILTER (
            VALUES ( 'Account Snapshot'[CustomerKey], … ),
            'Account Snapshot'[CustomerKey] IN { 731102730822895922 }
        ),
        "Balance", [Balance]
    ),
    FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = DATE ( 2018, 3, 31 ) )
)

Storage engine times in milliseconds (total, with the breakdown of the two SE queries in parentheses):

| Customers | Relationship | No Relationship |
| 1 | 198 (181/17) | 141 (84/57) |
| 1,000 | 901 (547/354) | 1,194 (315/877) |

Each of these DAX queries results in two significant queries sent to the storage engine. The first calculates the measure for each customer, and the second returns the selected customers (the filter clause in the query). In the case of the Relationship query, this is what the first SE call looks like:

SELECT
    'Customer'[CustomerKey],
    SUM ( 'Account Snapshot'[Balance Base] )
FROM 'Account Snapshot'
    LEFT OUTER JOIN 'Customer' ON 'Account Snapshot'[CustomerKey] = 'Customer'[CustomerKey]

And, of course, there is no join for the No Relationship query. The statistics below the queries show the query execution for 1 and 1,000 customers (customer identifiers were comma-separated and added to the IN clause). The first number shows the overall execution time in the storage engine, while the numbers in parentheses show the breakdown between the two SE queries.

As we can see, for one customer the relationship adds more than twice the overhead (181 vs. 84 milliseconds). But for 1,000 customers, the overall execution time for the No Relationship query is higher (1,194 vs. 901 milliseconds). How come? The first storage engine query is still faster (almost 2x), but the second SE query is way slower. As it turns out, when it executes the filter query, the storage engine locates 1,000 rows in the smaller table (Customer) much faster than in the snapshot table. However, if we add more measures, then at some point the No Relationship query becomes faster. In a real-life project where the query requested some 200 measures, queries without the relationship executed 3x faster than the ones with relationships.
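To visualize why, here is roughly what the second SE call (the filter query) looks like in each case. This is a sketch of the xmSQL shape rather than captured profiler output. For the Relationship query, the filter resolves against the 7.1-million-row Customer table:

SELECT 'Customer'[CustomerKey]
FROM 'Customer'
WHERE 'Customer'[CustomerKey] IN ( … )

while for the No Relationship query it has to scan the 187-million-row snapshot table:

SELECT 'Account Snapshot'[CustomerKey]
FROM 'Account Snapshot'
WHERE 'Account Snapshot'[CustomerKey] IN ( … )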

xVelocity relationships between large tables can be expensive, especially when the query requests many measures. If your query semantics allow it, consider denormalizing attributes from large dimension tables into fact tables to eliminate relationships.

One case where the query semantics might require a relationship is when it requests measures from multiple fact tables. But if you find that eliminating the relationship results in a significant performance boost, consider breaking down the query to request the measures from each table separately and then unioning the results.
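Here is a minimal sketch of that pattern, assuming a hypothetical second fact table Trades with its own [Trade Volume] measure; each branch groups by its own fact table's CustomerKey, so neither needs the relationship:

EVALUATE
UNION (
    -- Balance comes from Account Snapshot, grouped by its own CustomerKey
    SUMMARIZECOLUMNS ( 'Account Snapshot'[CustomerKey], "Measure", "Balance", "Value", [Balance] ),
    -- Trade Volume comes from the hypothetical Trades fact table
    SUMMARIZECOLUMNS ( 'Trades'[CustomerKey], "Measure", "Trade Volume", "Value", [Trade Volume] )
)

UNION matches the branches by position, so both must return the same number of columns (here: the key, the measure name, and the value).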

Another Successful SQL Saturday!

Kudos to all the hard-working individuals who made SQL Saturday BI (Atlanta) a success! It was great meeting many BI fans.


Prologika Newsletter Fall 2018

Semantics relates to discovering the meaning of the message behind the words. In the context of data and BI, semantics represents the user's perspective of data: how the end user views the data to derive knowledge from it. A modeler translates the machine-friendly database structures and terminology into a user-friendly semantic model that describes the business problems to be solved. In my "Why Semantic Layer?" newsletter, I explained the advantages of an organizational semantic model. In this newsletter, I'll discuss how Power BI extends semantic modeling and converges it on a single platform. But before I go into details, and speaking of semantic models, I'm excited to announce the availability of my new "Applied DAX with Power BI" workshop and its first in-person, public-enrollment class on Oct 15-16 in Atlanta! Space is limited, so RSVP today.

Semantic Model Flavors

In Microsoft BI, you can implement a semantic model using Power BI Desktop, Excel (Power Pivot), and Analysis Services (Multidimensional and Tabular). The first two are typically used by data analysts, while Analysis Services is considered a professional technology. Thus, we can classify semantic models into two broad categories: personal (self-service) and organizational. Behind the scenes, Power BI Desktop, Power Pivot, and Analysis Services Tabular use the same foundation and storage engine. Nevertheless, up to now there have been feature differences and a strict division between these two types.

| | Personal | Organizational |
| Author | Data analyst | BI Pro |
| Tool | Power BI Desktop, Excel (Power Pivot) | SSDT and Analysis Services |
| Scope | Narrow (usually to solve a specific need) | Wide (multiple subject areas) |
| Implementation effort | Short | Longer (data warehouse, ETL, model, testing) |
| Data capacity | Limited (up to a few million rows) | Larger (millions or billions of rows) |
| Data quality | Trust author | Trust modeler and testers |
| Data centralization | May lead to data duplication | Single version of truth |
| Deployment | Power BI Service, Power BI Report Server | SSAS (on premises), Azure Analysis Services (cloud) |
| Consumers | Department, project | Potentially entire organization |

How Power BI Changes Semantic Modeling

Power BI will blur the boundary between the personal and organizational aspects of semantic modeling. First, we've already seen Microsoft introduce the following "pro" features in Power BI that don't even exist, or are more difficult to implement, in Analysis Services:

  • Incremental refresh – Delivers the ability to refresh portions of a larger dataset, such as the last 7 days. Currently, the largest dataset size supported by Power BI Premium is 10 GB, but Microsoft has already announced that Power BI will soon support larger datasets. What this means for you is that you'll be able to deploy organizational semantic models to Power BI Premium and schedule them for incremental refresh. My blog "Notes on Power BI Incremental Refresh" provides the details on this feature.
  • Composite semantic models – A composite model has heterogeneous storage, where some tables are imported and some use DirectQuery, as I discussed in my blog "Power BI Composite Models: The Good, The Bad, The Ugly". This brings a lot of flexibility to how you connect to data.
  • Aggregations – Aggregations are predefined data summaries that speed up queries over very large models. My blog "A First Look at Power BI Aggregations" covers Power BI aggregations.

On the tooling side of things, Power BI Desktop will also pick up "pro" features, such as perspectives and display folders. Microsoft hopes that in time Power BI Desktop will serve the needs of both data analysts and BI pros. However, the lack of extensibility and source control, as well as performance issues caused by committing every model change to the background Analysis Services instance, makes me skeptical that pros will embrace Power BI Desktop. But because Microsoft announced plans to open the Power BI Tabular management endpoint (recall that published Power BI datasets are hosted in a "hidden" Tabular server), pros can still use SSDT and community tools, such as Tabular Editor, to design and deploy their models to Power BI Premium.

In time, Power BI Premium will become a single cloud platform for hosting Microsoft BI artifacts (semantic models and reports) and facilitating the continuum from personal to organizational BI. This is great news for BI practitioners frustrated by tooling and deployment differences. In the end, the personal and organizational paths will converge without feature discrepancies. The only difference will be the scope of your model and how far you want it to go to become "organizational".

Teo Lachev
Prologika, LLC | Making Sense of Data
Microsoft Partner | Gold Data Analytics


2-Day Applied DAX with Power BI Workshop in Atlanta (Oct 15-16)

Struggling with DAX? Data Analysis Expressions (DAX) is the expression language of Power BI, Power Pivot, and Analysis Services Tabular. It's very powerful, but it's usually perceived as complex and requiring a steep learning curve. Taught by an established expert, this two-day workshop is designed to help you become proficient with DAX. Think of this workshop as advanced Power BI training. Reserve your seat today to attend this insightful two-day workshop for only $999, where Teo Lachev (CEO of Prologika and Microsoft Data Platform MVP) teaches you the DAX skills that you can immediately apply to your job.

Event Summary

  • Date: October 15 and 16
  • Time: 8:30 AM – 5 PM
  • Location: Microsoft Office in Alpharetta
  • Catering: lunch provided
  • Syllabus: available here
  • Price: $999 (use coupon SQL201810 for an instant 5% discount if you sign up two or more attendees from your company)
  • For more information and to register, go to http://bit.ly/daxworkshop201810

Key Benefits

  • Understand best practices for model design
  • Create advanced calculated columns
  • Master measures and evaluation contexts
  • Apply time intelligence
  • Work with advanced relationships and data security
  • Learn how to troubleshoot performance issues
  • Brainstorm your DAX puzzles
  • and much more…


New “Applied DAX with Power BI” Workshop

Data Analysis Expressions (DAX) is the expression language of Power BI, Power Pivot, and Analysis Services Tabular. It's very powerful, but it's perceived as complex and requiring a steep learning curve. I'm excited to announce a new two-day "Applied DAX with Power BI" workshop that I designed to help you become proficient with DAX. You'll learn practical skills that will help you tackle a wide range of reporting requirements. We'll start with DAX fundamentals, such as calculated columns and measures, and then progress to more advanced concepts, including context transitions, variables, filters, time intelligence, advanced relationships, row-level security, query optimization, and much more. Think of this workshop as Advanced Power BI and the next level after my "Applied Power BI" class. The target audience is data analysts and BI developers wanting to hone their DAX skills with Power BI, Power Pivot, or Tabular.

Here is my entire training catalog with a brief description and link to each course page.

  • Applied Power BI (2 days) – This two-day workshop is designed to help you become proficient with Power BI and acquire the necessary skills to work with online and on-premises data, implement data models on a par with professional models created by BI pros, unlock the power of data by creating interactive reports and dashboards, and share insights with other users. No prior data modeling or reporting knowledge is assumed. Students are welcome to bring their own data to the second day of the class.
  • Applied DAX with Power BI (2 days) – NEW! Data Analysis Expressions (DAX) is the expression language of Power BI, Power Pivot, and Analysis Services Tabular. It's very powerful, but it's usually perceived as complex and requiring a steep learning curve. This two-day class is designed to help you become proficient with implementing business calculations with DAX.
  • Applied BI Semantic Model (5 days) – Targeting BI developers, this intensive five-day onsite class is designed to help you become proficient with Analysis Services and acquire the necessary skills to implement Tabular and Multidimensional semantic models. Use the opportunity to ask questions and study best practices that will help you achieve a single version of the truth by implementing scalable and secure organizational models. Bring your organizational BI to the next level by learning these two powerful BI technologies in one class!
  • Applied SQL Fundamentals (2 days) – SQL Server is the most deployed and popular database today. Different types of users need to query data stored in SQL Server data structures. This two-day instructor-led course provides you with the necessary skills to query Microsoft SQL Server databases with Transact-SQL. The course starts with the basics of the SELECT statement and its syntax, and progresses to teach you how to join, aggregate, and convert data.
  • Applied Microsoft BI (End to End) (4 days) – This four-day class is designed to help you become proficient with the Microsoft BI toolset and acquire the necessary skills to implement an organizational BI solution. You'll learn how to design a star schema, use SQL Server Integration Services to transform data, and implement a Tabular semantic model. Depending on the students' skillset, the class can be customized, such as to reduce coverage of specific technologies, replace them with other topics of interest, such as Multidimensional instead of Tabular, or cover additional topics, such as Power BI or Reporting Services.
  • Applied Reporting Services (3 days) – Microsoft SQL Server Reporting Services has evolved into a sophisticated reporting platform that lets you present and analyze data consistently, quickly, and reliably. This intensive three-day class is designed to help you become proficient with Reporting Services and acquire the necessary skills to author, manage, and deliver reports.
  • Applied Analysis Services-Multidimensional (4 days) – This intensive four-day class is designed to help you become proficient with Analysis Services (Multidimensional) and acquire the necessary skills to implement OLAP and data mining solutions. Learn how to build a cube from scratch. Use the opportunity to ask questions and study best practices!
  • Applied Excel and Analysis Services (2 days) – If your organization has Analysis Services Multidimensional cubes or Tabular models and you want to gain valuable insights from them in Excel, then this course is for you. Designed as a step-by-step tour, this course teaches business users how to become data analysts and unlock the hidden power of data. You'll learn how to apply the Excel desktop BI capabilities to create versatile reports and dashboards for historical and trend analysis.
  • Applied Power BI with Excel (2 days) – Power BI is a suite of products for personal business intelligence (BI). It brings the power of Microsoft's Business Intelligence platform to business users. At the same time, Power BI lets IT monitor and manage published models to track their usage and security, and estimate hardware and software resources. With Power BI, anyone can easily build personal BI models using the most popular tool, Excel, and share them on premises or in the cloud.
  • Applied Microsoft Visualization Tools (2 days) – This two-day class is designed to help you learn the visualization tools included in the Microsoft Data Analytics Platform. We'll start by exploring the Excel reporting capabilities, which include pivot (PivotTable and PivotChart) and Power View reports. Then, you'll learn how to explore data interactively with Power BI Desktop. During the second day of the class, we'll focus on creating paginated reports with Reporting Services. The class can be customized to discuss other tools, such as SSRS mobile reports or Power Map reports.
  • Applied Master Data Management (2 days) – This two-day class is designed to help you become proficient with Master Data Services (MDS) and Data Quality Services (DQS). IT and business users learn how to design MDS models and extend them with business rules, attribute groups, and hierarchies. IT will learn how to integrate MDS with upstream and downstream systems and how to enforce secured access. Business users will learn how to use Excel to manage data with the tool they love most!

DAX Variables to the Rescue

Scenario: DAX has its own share of idiosyncrasies that can humble both novice and experienced users. Consider a common example where a measure attempts to return sales for the last date in the Date table. What makes this common is that many real-life calculations require measures that evaluate as of a user-specified date (an "as of" date), which is what MAX('Date'[Date]) returns.

=CALCULATE( SUM(Sales[SalesAmount]), 'Date'[CalendarYear] = MAX('Date'[CalendarYear]) )

As innocent and logically correct as this measure looks, it fails with "A function 'MAX' has been used in a True/False expression that is used as a table filter expression. This is not allowed." It's also amusing to see even Microsoft struggling to explain the reason for this error, which at some point was on the internal top 10 Microsoft list for DAX "ease of use" issues. This is what the documentation states about this error:

“The filter expression, MAX(‘Date'[CalendarYear]) attempts to return the largest numeric value in the CalendarYear column. However, in context of the measure expression, it cannot be passed as a table filter expression to the CALCULATE function, causing an error.”

This, of course, is incorrect. First, the MAX function doesn't return a table but a scalar value. Second, the CALCULATE function is perfectly capable of taking Boolean expressions. The actual issue is that the Boolean expression is surrounded by a hidden CALCULATE, and it's ambiguous in what context the maximum date should be evaluated. To be consistent with the way filters propagate, it should be evaluated in the filter context outside CALCULATE, but in the row context of the as-of date, which becomes a filter context with the MAX expression. Because this is probably not what you would expect, DAX fails safe with the error.

The workaround suggested by the documentation is to filter the Date table and pass it as a table filter to CALCULATE. This requires ignoring the filter context on the Date table first, only to overwrite it later with the "as of" date.

=CALCULATE (
    SUM ( Sales[SalesAmount] ),
    FILTER ( ALL ( 'Date'[CalendarYear] ), 'Date'[CalendarYear] = MAX ( 'Date'[CalendarYear] ) )
)

Solution: A better solution is to use a variable. This example defines an EOP (End of Period) variable.

=
VAR EOP = MAX ( 'Date'[CalendarYear] )
RETURN
    CALCULATE ( SUM ( Sales[SalesAmount] ), 'Date'[CalendarYear] = EOP )

Because the EOP variable is evaluated where it's defined, there is no hidden context, and the measure works. Unfortunately, as it stands today, DAX doesn't allow us to create a global session variable for such scenarios. So, you'd need to include this variable in every measure that requires it.
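For example, a second measure over a hypothetical Sales[TransactionCount] column would have to repeat the same variable (a sketch; the column name is made up):

Transaction Count EOP :=
VAR EOP = MAX ( 'Date'[CalendarYear] )
RETURN
    -- Same pattern as above; only the aggregated column differs
    CALCULATE ( SUM ( Sales[TransactionCount] ), 'Date'[CalendarYear] = EOP )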

Speaking of issues with the DAX documentation and variables, here is another example (last example on that page) that doesn’t work:

YoY% =
VAR Sales = SUM ( SalesTable[SalesAmount] )
VAR SalesLastYear = CALCULATE ( Sales, SAMEPERIODLASTYEAR ( 'Calendar'[Date] ) )
RETURN
    IF ( Sales, DIVIDE ( Sales - SalesLastYear, Sales ) )

This doesn't work because the SalesLastYear variable attempts to overwrite the context of the Sales variable. In other words, it attempts to treat Sales as a measure, but since a variable is evaluated once where it's defined, SalesLastYear would always return this year's sales. The correct example should be:

YoY% =
VAR Sales = SUM ( SalesTable[SalesAmount] )
VAR SalesLastYear = CALCULATE ( SUM ( SalesTable[SalesAmount] ), SAMEPERIODLASTYEAR ( 'Calendar'[Date] ) )
RETURN
    IF ( Sales, DIVIDE ( Sales - SalesLastYear, Sales ) )

The documentation should say that although one variable can reference another, it can’t overwrite its context. Also, to avoid a logical bug, the example should actually be:

YoY% =
VAR Sales = SUM ( SalesTable[SalesAmount] )
VAR SalesLastYear = CALCULATE ( SUM ( SalesTable[SalesAmount] ), SAMEPERIODLASTYEAR ( 'Calendar'[Date] ) )
RETURN
    IF ( AND ( Sales, SalesLastYear ), DIVIDE ( Sales - SalesLastYear, Sales ) )

This is because it's meaningless to calculate YoY if this year's or last year's sales are missing.


Power BI Conversations

As Microsoft announced in the "Distribute insights across the organization with Microsoft Power BI" presentation (forward to Nikhil Gaekwad's part, starting at the 32-minute mark), they've been releasing the following set of features to improve the end-user experience in Power BI Service:

  • Home (not yet available) – Personalized landing page
  • Personal bookmarks (not yet available) – End users can create their own bookmarks
  • Persistent filters (already available) – Remembers filters and slicers set by end users
  • Conversations (dashboards conversations are available) – Dashboard and report discussions
  • Sharing with filters (not yet available) – Propagates the current filters when sharing individual reports and dashboards

I've noticed that dashboard conversations are now available. Just open a Power BI dashboard and click the Comments menu. This opens a Comments pane where you can post comments related to the entire dashboard. You can also post comments for a specific tile by clicking the tile's ellipsis menu and then choosing "Add a comment". You know that a tile has comments when you see the "Show tile conversations" button that floats on the tile. Clicking this button brings up the Comments pane, where you can see and participate in the discussion.


For tile-related comments, you can click the icon below the person in the Comments pane to navigate to the specific visual that the comments are associated with. Conversations are available in Power BI Service and Power BI Mobile. To avoid posting a comment and waiting for someone to see it and act on it, you can @mention people, as you can do on Twitter. For example, in my first comment I referenced @Prologika Info. That user will then get an email that looks like this:


Collaboration features are not new to Power BI. Workspaces backed by O365 groups (the old-style workspaces) have supported email-based conversations. Power BI Mobile lets users annotate reports or tiles with text and emoticons, and then send a screenshot to another user. Dashboard and report conversations take collaboration a step further by allowing end users to collaborate in the context of a specific report, dashboard, or even a visual.

| Feature | Usage scenario | Limitations |
| Workspace conversations | Workspace-level email-based collaboration | The workspace needs to be backed by an O365 group. |
| Power BI Mobile annotations | Send annotated screenshots to other users | Power BI Mobile only; no discussion thread |
| Dashboard and report conversations | Discussion thread on a dashboard/report/tile | Comments don't save the state of the visual or include a screenshot of the visual if it changes after data refresh |