Understanding AAS Synchronization

None of the Multidimensional and Tabular semantic models I've implemented required scaling out, but I understand that some scenarios might benefit from it, such as B2B or B2C reporting. Depending on the implementation approach, another benefit could be a highly available farm. Implementing a scaled-out, on-premises SSAS farm is not easy. There are different techniques, but all of them involve a compromise between availability and latency. Azure Analysis Services sought to make this easier. It's as easy as changing a slider and AAS synchronizes everything somehow, right? Wrong. Here are the high-level notes on how AAS synchronization works:

  1. As a part of configuring synchronization, you can specify a designated processing node. If you do specify a processing node, processing takes place on that node only and the updated cache is distributed to the query nodes. If you don’t specify a processing node, one of the query nodes handles both processing and querying. The node that handles processing is referred to as a primary node.
  2. AAS synchronization is not automatic. You're responsible for initiating the synchronization step, either manually from the Azure portal, or programmatically by using the Sync-AzureAnalysisServicesInstance PowerShell cmdlet or the Azure AS REST API (see the sketch after this list). This means that every time you process the model, you need to use one of these mechanisms to sync it to the query nodes.
  3. When the sync command is initiated, AAS copies the entire AS database from the primary node to blob storage. That's because the primary (read/write) node works on a different copy than the query nodes so that it can be updated without affecting end-user queries. The file copy happens inside the blob storage (no network traffic is involved). Additional details about the file copy process:
    1. Multi-threaded (unlike the SSAS sync wizard)
    2. Hydrates multiple replicas from a single data folder
    3. The data folder is striped for better I/O
  4. Once the updated database (all files) is copied to the blob storage, the query nodes detach the old copy and attach to the new one.
  5. AAS waits until the model is fully loaded into memory and available for queries before bringing the new copy (replica) online.
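
To illustrate the sync step in item 2, here is a minimal sketch of automating it from PowerShell. It assumes the Azure Analysis Services PowerShell module (Azure.AnalysisServices, if I recall correctly), and the region, server, and database names are hypothetical; you'd run this after processing completes:

# Sign in interactively and point the session at the AAS region (hypothetical region).
Add-AzureAnalysisServicesAccount -RolloutEnvironment "southcentralus.asazure.windows.net"

# Sync the processed database from the primary (processing) node to the query replicas.
Sync-AzureAnalysisServicesInstance -Instance "asazure://southcentralus.asazure.windows.net/myaasserver" `
                                   -Database "MyModel"

In an automated ETL pipeline, a call like this would be the last step after processing, so the query replicas pick up the refreshed data.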

How fast is the sync process? We measured with a 50 GB database, and it took 20 minutes for the sync process to complete. I suspect most of the time was spent in the last step. So, as it stands, AAS synchronization is more suitable for predictable processing windows, such as the "classic" case where ETL processes and synchronizes the model once per day, outside working hours. It might not work well for real-time solutions, where new data might arrive throughout the day and business requirements call for minimum latency, because you might not have enough time for the file copy to complete.

Another gotcha we ran into is an apparent issue with metrics reporting on a scale-out farm. During load testing, we saw that the average QPU of a two-node farm (S4 performance tier with a max of 400 QPU) showed saturation at 200 average QPU. It looks like the Azure portal collects 400 from each node and then divides by the number of nodes. We raised the issue with Microsoft, and it will probably be addressed soon.

Scaling out is not about reducing query times. It's about handling more load when scaling up doesn't go far enough. When you scale out to a two-node farm, you're still limited by the resource constraints (memory, CPU) of each node. Unless you are after a highly available BI solution, scaling out should be your last option. Time spent optimizing your model design, data storage, and queries is usually much more productive than taking the easy way out and throwing more hardware (and money) at the problem.

This blog explained how AAS synchronization works. I hope Microsoft will offer more options in the future to make it more useful. A welcome enhancement would be the ability to configure an automatic and more granular synchronization, such as copying only the files that have changed.

Demystifying Tabular Object Level Security (OLS)

Object-level security (OLS) is a frequently requested security feature when implementing semantic models. For example, a current project requires preventing some salespeople from seeing sensitive measures, such as Margin and Profit. In Multidimensional, modelers could use cell security to meet such requirements, at the risk of compromising performance. Starting with SQL Server 2017 (compatibility level 1400), Tabular supports object-level security to disallow access to entire tables or specific columns without a performance penalty. As it stands, OLS has the following limitations (for a full list, review the documentation):

  1. There is no user interface for defining OLS. Instead, after defining your roles and row filters as you'd typically do, you need to take the extra step of opening the model.bim source code and entering the OLS definition manually (a deployment sketch follows this list). For example, this Users role definition disallows access to the Gross Profit column. Fortunately, SSDT preserves OLS when you make changes to the role in the Role Editor and save.
    "roles": [
      {
        "name": "Users",
        "description": "All allowed users to query the model",
        "modelPermission": "read",
        "tablePermissions": [
          {
            "name": "Reseller Sales",
            "columnPermissions": [
              {
                "name": "Gross Profit",
                "metadataPermission": "none"
              }
            ]
          }
        ]
      }
    ]

  2. Unlike Multidimensional cell security, Tabular OLS doesn't support specifying additional or different conditions that apply to OLS only within the same role. For example, you can't have a single role that applies one condition (row filter) for data-level security and another condition for object-level security. In other words, OLS inherits the row filters defined in the role. Because of this limitation, security requirements might force you to create multiple roles, such as a "Sales RLS" role for salespeople who can see all columns and a "Sales OLS" role for salespeople who are disallowed access to some columns, even if the same row filters apply.
  3. OLS applies to physical columns only. You can't secure measures directly. If you need to secure a measure, you need to secure a base column that the measure uses either directly or indirectly.
  4. There is a current bug that we ran into regarding drillthrough and OLS. If a user belongs to a role that defines OLS and the user drills through any column in the secured table, Tabular generates this error:

The '<sensitivecolumnname>' column cannot be found in the '$<tablename>' table.

Microsoft has promised a fix in the next cumulative update. UPDATE 4/24/2018 – This issue was fixed in CU6 of SQL Server 2017.
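
Going back to item 1: since there is no UI, here is a hedged sketch of one way to deploy a role (including its OLS definition) outside SSDT, by sending a TMSL createOrReplace script with the Invoke-ASCmd cmdlet from the SqlServer module. The server and database names are hypothetical, and keep in mind that createOrReplace replaces the entire role, so the real script would also include the members and row filters:

$tmsl = @'
{
  "createOrReplace": {
    "object": { "database": "AdventureWorks", "role": "Users" },
    "role": {
      "name": "Users",
      "modelPermission": "read",
      "tablePermissions": [
        {
          "name": "Reseller Sales",
          "columnPermissions": [
            { "name": "Gross Profit", "metadataPermission": "none" }
          ]
        }
      ]
    }
  }
}
'@
Invoke-ASCmd -Server "asazure://southcentralus.asazure.windows.net/myaasserver" -Query $tmsl

In practice you'd keep model.bim as the source of truth and deploy the whole model, but this shows the shape of the OLS metadata on the wire.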

Use Tabular object-level security to protect sensitive columns or exclude entire tables. Coupled with row-level security (RLS), OLS allows you to implement both data and metadata restrictions, subject to the limitations I discussed in this blog.

Another Successful Power BI Workshop

Thank you to everyone attending the Power BI workshop by Prologika yesterday. We had a great turnout, with more than 50 people from 30 companies attending the event at the Microsoft Technology Center in Alpharetta! The key takeaways were:

  • Many companies transition to Power BI because of its great value-to-cost proposition.
  • Power Query is a fantastic tool for data cleansing and transformation that is missing from competing offerings.
  • Power BI can deliver rich visualization capabilities that delight end users and surpass by far the capabilities of traditional reporting tools.
  • Instead of being just a stand-alone BI tool, Power BI is part of a rich ecosystem of products and services that allows you to extend Power BI in versatile ways.

Notes on Analysis Services Tabular Performance and Parallelism (Part 2)

In my "Notes on Analysis Services Performance and Parallelism" blog, I shared limited performance results about scaling up Azure Analysis Services Tabular. As I mentioned, I expect the same findings to apply to SQL Server Analysis Services Tabular for on-premises deployments. Now that the official load test results are in, I'm ready to provide more info and general recommendations. We load tested the AAS semantic model mentioned in the previous blog (about 2 billion rows) by capturing a representative subset of DAX queries, including slow, medium, and fast queries. We used a stepped load, starting with one user and increasing the number of concurrent users by one every 30 seconds. We considered the server saturated when 90% of the maximum QPU was reached on average. For example, the S4 performance level has 400 QPU (20 cores), so the saturation level would be 360 QPU on average.

Note: It takes a while for the Azure portal to refresh the graphs with metrics, with 30 seconds or more lag time being the norm. You'll probably find that one test controller is sufficient to saturate Tabular. Also, don't focus too much on the number of concurrent users alone. In real life, you might not even have two concurrent users (users executing queries at the same time), but you might have a dashboard that runs queries in parallel, such as a Power BI report with multiple visuals. It all depends on what performance goal you've established prior to running the test. You established a performance goal, right? If you don't know how, my "Can Your BI Solution Scale" presentation should help. I personally prefer to express the goal as queries/sec instead of concurrent users, as it's very difficult to determine concurrency.
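
If it helps to translate between the two goals, here is a back-of-the-envelope rule I find handy (a standard queueing approximation, not something from the load test itself): throughput is roughly the number of concurrent users divided by the sum of the average response time and think time. All numbers below are hypothetical.

# Rough translation of a concurrency goal into queries/sec (back-of-the-envelope only).
$users       = 10    # concurrent users (hypothetical goal)
$responseSec = 5     # average query response time in seconds
$thinkSec    = 25    # average think time between queries in seconds

$queriesPerSec = $users / ($responseSec + $thinkSec)
"{0} concurrent users ~ {1:N2} queries/sec" -f $users, $queriesPerSec   # ~0.33 queries/sec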

The results were consistent with my previous observations. As you can see in the table below, in this case scaling up to a higher price tier increases the system throughput almost linearly.

|                                                | S2 (200 QPU) | S4 (400 QPU) | S9 (640 QPU) |
| Number of cores                                | 10           | 20           | 32           |
| Max concurrent users                           | 4            | 8            | 12           |
| Time to reach 90% QPU capacity from test start | 2 min        | 3.5 min      | 6 min        |

To recap, scaling up Tabular would benefit two primary scenarios:

  1. When queries are storage engine bound – This would typically happen with larger models when significant time is spent scanning RAM.
  2. High concurrency – As you scale up and the number of cores increases, the system throughput will increase as well. Remember that if you max out the highest tier (S9), you can also scale out AAS, giving you the ability to scale almost linearly.

The scenarios that I don’t expect to benefit from scaling up fall into two general categories:

  1. Smaller models – Remember that Tabular allocates one core per segment (the default segment size is 8 million rows); see the back-of-the-envelope sketch after this list. So, if you have a model of 100 million rows, the S2 level of parallelism (10 cores) might be good enough already. In this case, increasing the parallelism won't help, as the model just doesn't have enough data to scan in parallel.
  2. Complex queries – Because the formula engine is single-threaded, formula engine-bound queries would be poor candidates for scaling up.
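
Here is the back-of-the-envelope sketch referenced above. Under the stated assumptions (one core per segment, 8-million-row default segment size), the segment count caps how much storage engine parallelism a single scan can use; the row count below is hypothetical.

# Rough estimate of useful storage-engine parallelism for one column scan.
$rows        = 100e6   # rows in the largest table (hypothetical)
$segmentSize = 8e6     # default Tabular segment size

$segments = [math]::Ceiling($rows / $segmentSize)
"{0} segments -> scanning this table can keep at most {0} cores busy" -f $segments   # 13 segments

So for a 100-million-row model, going from S2 (10 cores) to S9 (32 cores) buys little for the storage engine, which matches the point above.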

Plan for a load-testing effort when in doubt whether your semantic model will scale. Every model is different, so take the above notes as general guidelines. Before scaling up, if possible, consider an alternative design to reduce the number of rows and the column data size and cardinality (if the queries are storage engine-bound), or optimize DAX queries relentlessly (if the queries are formula engine-bound). Needless to say, fast hardware doesn't justify or compensate for a bad design or inefficient queries.

Notes on Analysis Services Performance and Parallelism

As a part of a semantic model assessment, I got to experiment with different configurations of Azure Analysis Services. In this case, the largest fact table has almost 2 billion rows, so it's a good-size model. I was particularly interested in the correlation between a higher performance tier and query performance. One thing that I like about PaaS is that it's easy to change and experiment with different configurations. The thing that I don't like is that I can't peek under the hood. This is probably done on purpose to keep people like me out, just like those green nets surrounding construction zones. So, I don't know what hardware AAS is running on or how it's configured, and I failed to get clarification from Microsoft. I might be out of line here, but I thought my findings are worth sharing, albeit not officially vetted.

AAS Basic and Standard tiers are advertised as dedicated tiers running on dedicated virtual machines, preconfigured with a certain number of cores (the tier's QPUs divided by 20). For example, the S2 tier (200 QPUs) comes with 10 cores while S9 (640 QPUs) comes with 32 cores. If you examine the advanced properties of the AAS instance in SSMS, you'll see a Group Affinity mask that confirms this. For example, the mask for S2 is 0x3FF (or 0xFFC), which converted to binary is ten 1's, whereas the mask for S9 is 0xFFFFFFFF, or 32 1's. I don't know why masks are used given that the VM already has a predefined number of cores. One (and most probable) explanation is that Microsoft co-hosts multiple Analysis Services instances on the same VM. When you provision a new AAS instance, Azure finds a VM that has enough capacity and spins up the new service there. If you upgrade AAS, it could get relocated to a different VM if the old one doesn't have enough capacity. Or, Microsoft might allocate more cores to the VM and assign only a subset to AAS.
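
If you want to decode the affinity mask you see in SSMS yourself, a quick sketch (my own, not an official tool): each 1 bit is a core the instance is allowed to use.

$mask = 0x3FF                           # S2 affinity mask from the SSMS advanced properties
$bits = [Convert]::ToString($mask, 2)   # "1111111111"
"{0} -> {1} cores" -f $bits, ($bits.ToCharArray() | Where-Object { $_ -eq '1' }).Count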

Anyone who's done parallel programming knows that it's not easy. Locks and other synchronization primitives are needed to protect shared resources. The AS xVelocity engine (aka VertiPaq) is a multi-threaded application that accesses RAM as a shared resource. As each thread competes for a global lock for memory allocation/deallocation, scalability decreases. Because processor cores reserve memory in chunks, called cache lines, additional synchronization is required when threads access memory locations that are close to each other. Before SQL Server 2016 SP1, this was a serious issue with Tabular. As we've learned, starting with SQL Server 2016 SP1, Microsoft switched to using the Intel Threading Building Blocks (TBB) C++ library. Specifically, Tabular now uses the TBB scalable memory allocator. Mind you, TBB is not a CPU-specific extension, so any modern Intel CPU should get these benefits. "Analysis Services SP1 uses an Intel TBB-based scalable allocator that provides separate memory pools for every core. As the number of cores increases, the system can scale almost linearly" and "The Intel TBB-based scalable allocator is also expected to help mitigate performance problems due to heap fragmentation that have been shown to occur with the Windows Heap". Further, starting with SP1, Tabular is NUMA-aware (up to 4-node NUMA systems), but I don't know if the AAS VMs are preconfigured for NUMA.

These changes seem to be very beneficial, and I do see the promised linear scalability (or close to it) as the number of cores increases. After all, how would you explain to a customer that query times remain the same when they switch to a higher tier? I used the following query to measure memory scan performance (the filter is there to prevent Tabular from short-circuiting the count by using internal statistics):

EVALUATE CALCULATETABLE ( ROW ( "result", DISTINCTCOUNT ( 'Table'[day_id] ) ), 'Table'[day_id] > 0 )

The day_id column has a total column size of 2.5 GB with 875 distinct values. The query execution time decreased 2.5 times when switching from S2 to S9, while the storage engine scan time decreased 3 times. Of course, more complicated queries would carry additional overhead in the formula engine, which, as we know, is single-threaded.

| AAS Tier      | Storage Engine Parallelism | Storage Engine Time (ms) | Total Execution Time (ms) |
| S2 (200 QPUs) | 9x                         | 1,563                    | 1,922                     |
| S9 (640 QPUs) | 29x                        | 516                      | 812                       |
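
The storage engine and total times above came from server-side traces. If you just want a crude wall-clock measurement of a query like the one above, a sketch along these lines works; it assumes the SqlServer PowerShell module, the server and database names are hypothetical, and as far as I can tell Invoke-ASCmd accepts DAX in addition to XMLA:

Import-Module SqlServer

$server   = "asazure://southcentralus.asazure.windows.net/myaasserver"   # hypothetical
$database = "MyModel"                                                    # hypothetical
$dax      = "EVALUATE CALCULATETABLE ( ROW ( ""result"", DISTINCTCOUNT ( 'Table'[day_id] ) ), 'Table'[day_id] > 0 )"

$elapsed = Measure-Command {
    # You may need to add -Credential with an Azure AD account for AAS.
    Invoke-ASCmd -Server $server -Database $database -Query $dax | Out-Null
}
"Total execution time: {0:N0} ms" -f $elapsed.TotalMilliseconds

This gives only the total execution time; for the storage engine/formula engine split you still need server-side traces.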

Tabular does an excellent job parallelizing the storage engine queries and maxing out all available cores in short bursts of time. Starting with SQL Server 2016 SP1, you should see substantial performance gains as the number of cores increases, if the query is storage engine bound (which it will probably be with larger models).

I plan to update this blog when results from official load testing we plan to do with this semantic model are available.

Power BI Models Scaling Up

Chris Webb wrote a great post today “Power BI Roadmap Announcements In the Dynamics 365 Spring ’18 Release Notes” citing the many exciting new features coming to Power BI in the next few months and highlighting the huge momentum behind Power BI. I want to bring your attention to a couple of them:

  1. Incremental data refresh: Organizations can use incremental data refresh to scale datasets to even larger sizes. With incremental data refresh, users can load only new or changed data. When creating a dataset in Power BI Desktop, a user configures a refresh policy for each table in the model that is to load incrementally, and then publishes it to a Power BI Premium workspace. Thereafter, each scheduled refresh loads only new or changed data.
  2. Query acceleration for large datasets (Public Preview): Users can create DirectQuery models over data of any size in sources such as Spark and Azure SQL Data Warehouse, and then accelerate common queries by building in-memory aggregations over some of the data. Common queries use the aggregated cache to return results in a fraction of a second instead of directly querying the source. Users can create datasets of massive size and still provide interactive querying.

Why is this important? Because important professional features have crossed the Analysis Services-Power BI line, and now business users can build large-scale models using Power BI Desktop. In fact, it looks like in-memory aggregations will come to Power BI first, as we don't have them in Analysis Services (yet). Now, I'm not a big proponent of business users building "massive" models (such datasets should be centralized and sanctioned by professionals), but apparently there is a good case for it. In fact, in my "Why Business Like Yours Choose Power BI Over Sisense" blog, I mentioned that SiSense uses this as a competitive advantage. At that time, the maximum Power BI dataset size was 1 GB. If you wanted to build larger models, you had to use Analysis Services, which has a separate price tag and requires Visual Studio for development. Not to mention that you had to migrate your models from Power BI Desktop to Analysis Services (supported in AAS but not officially supported with on-premises SSAS) and learn new skills.

A lot has changed since then. Notably, Power BI Premium raised the maximum dataset size to 10 GB, and it's likely that this limit will be raised further. While I'm not ready to embrace Power BI Desktop for organizational semantic models yet, soon Power BI Premium users will have the option to use a business-user-friendly tool (Power BI Desktop) for implementing larger semantic models. And, of course, another important motivation to take this path is reducing licensing cost.

Atlanta MS BI and Power BI Group Meeting on March 26th

MS BI fans, join us for the next Atlanta MS BI and Power BI Group meeting on Monday, March 26th at 6:30 PM. Leo Furlong will introduce you to implementing organizational semantic models with Azure Analysis Services. BlueGranite will sponsor the meeting. And your humble correspondent will demo several of the latest Power BI features: persistent filters, tooltip pages, and share to anyone. For more details, visit our group page, and don't forget to RSVP (use the RSVP survey on the group page) if you're planning to attend.

Presentation: Introduction to Azure Analysis Services
Date: March 26, 2018
Time: 6:30 – 8:30 PM ET
Place: South Terraces Building (Auditorium Room)

115 Perimeter Center Place

Atlanta, GA 30346

Overview: Azure Analysis Services (AAS) is an exciting tool within the Azure Data Services tool set. In this session, we'll learn about the basics of AAS, including how to create the service in the Azure portal, options for developing and deploying solutions, processing data, scaling up and out, security, pricing, and where AAS fits in an overall modern data architecture. At the end of this session, you'll have a basic understanding of how to get started with the tool and know how it could fit in your organization.
Speaker: Leo Furlong is an experienced Data and Analytics Principal at BlueGranite with extensive experience implementing Digital Transformation Strategy using on-premises and Azure cloud technologies.
Sponsor: At BlueGranite, we help our clients utilize data as a strategic asset, delivering data architecture and data integration solutions to drive insights and analytics across your organization. Our solutions monitor trends, measure performance, and provide real-time analytics for rapid, informed decision making. We leverage industry-leading technology from Microsoft to build modern data platforms that reduce operating costs, increase market share, improve workforce efficiency, and so much more.



Too Many Measures?

I'm doing an assessment of a data mart and semantic layer for an organizational BI solution. I was given the source of the Analysis Services Tabular model. Upon opening it locally, it took Visual Studio several minutes to open the project. Even worse, after the project was finally loaded, my laptop CPU utilization went to 80-90% and all 16 GB of memory got exhausted rather quickly. With an empty workspace database! What's going on?

As it turned out, a business requirement asked for all measures to be in the same table. So, the developers introduced a Metrics DAX calculated table and assigned it as the home table for all measures, organized in display folders. Nothing wrong with that. However, the number of DAX measures was 2,774! It's common practice to "flatten" DAX measures, such as having separate measures for time calculations: SalesAmount, SalesAmountQTD, SalesAmountYTD, and so on. This can easily result in a measure explosion, but I haven't experienced such a performance hit before with other clients.

As it turned out, each measure requires a data structure, even with no data. When you select a table in Visual Studio, Visual Studio sends a DAX query asking for the value of each measure in the table: EVALUATE CALCULATETABLE ( ROW ( "Measure1", [Measure1], "Measure2", [Measure2], … ) ). It does so because the Measure Grid shows not only the measure formulas but also their values. Currently, there is no way to suppress this query, not even if you hide the Measure Grid. When the server that hosts the workspace database receives the query, it allocates a data structure for each measure. When the query asks for thousands of measures, the server allocates a lot of CPU and memory resources just to evaluate the query. In this case, the server would exhaust all the memory on my laptop and return an "Out of memory" exception to the client. Visual Studio doesn't give up and then switches to evaluating each measure separately, sending a separate EVALUATE CALCULATETABLE query measure by measure! Once the project is finally loaded, design performance is bad, especially in Diagram view. Every task, even selecting a table or moving the Visual Studio window from one monitor to another, results in a CPU utilization spike and takes a few seconds until the UI becomes responsive.

I asked Microsoft to introduce a switch to suppress the EVALUATE CALCULATETABLE query, such as when the Measure Grid is disabled. I personally don't care about measure values in the grid, or about the measure grid at all for that matter. These values are meaningless in most cases anyway, such as when they show blank values for time calculations because there is no date context at design time.

While Microsoft is investigating and working on improving the SSDT performance, you can mitigate the performance issue with many measures by doing the following:

  1. If possible, assign measures to multiple home tables.
  2. Switch your project to Manual Calculation Mode (Model menu, Calculation Options, Manual Calculation). Note that manual calculation won't process calculated columns and relationships automatically. You'd have to switch back to automatic calculation or explicitly process the model with Process Default when you make design changes.
  3. Create perspectives to reduce the number of tables in Diagram view.
  4. Give the Tabular Editor community tool a try.
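
On a related note, here is a hedged sketch for gauging how measure-heavy a model is before you hit this wall: the MDSCHEMA_MEASURES DMV reports each measure and its home table, and you can run it through Invoke-ASCmd from the SqlServer module. The server and database names are hypothetical, and the exact shape of the returned XML rowset may vary:

Import-Module SqlServer

$server   = "localhost\TABULAR"   # or an asazure://... URI; hypothetical
$database = "MyModel"             # hypothetical

$dmv = "SELECT [MEASURE_NAME], [MEASUREGROUP_NAME] FROM `$SYSTEM.MDSCHEMA_MEASURES"
[xml]$result = Invoke-ASCmd -Server $server -Database $database -Query $dmv

# Count measures by home table (the XMLA rowset comes back as XML).
$result.return.root.row |
    Group-Object MEASUREGROUP_NAME |
    Sort-Object Count -Descending |
    Select-Object Name, Count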


Prologika Newsletter Spring 2018

Extending Power BI with PowerApps and Flow

With hundreds of vendors out there, choosing a BI tool isn't easy. If you've been following me for a while, you know that I love Power BI. One of Power BI's most prominent strengths is that it's a part of the much broader Microsoft Data Platform that spans various on-premises and cloud offerings for data storage, processing, and analysis. Thanks to this integration, you can implement real-time dashboards, extend your Power BI reports with predictive capabilities, integrate with Big Data, and much more. In this newsletter, I'll talk about an exciting integration option: extending Power BI with business applications, thanks to its integration with PowerApps and Flow. And if you happen to be in the Microsoft office in Alpharetta, feel free to ask for a demo of the Power BI Writeback solution that Prologika prepared and deployed to the Microsoft environment. You'll be directed to one of the big screens, where you can test the report that I mention in this newsletter.

What’s PowerApps?

Every organization requires some sort of custom data-driven apps. And every organization is under pressure to do more with less. Have you used Microsoft Access? If so, you probably remember that Access makes it easy to create data-driven forms. But they were available only in Access and could access only Access databases. You might also remember InfoPath – an app for designing, distributing, filling in, and submitting electronic forms containing structured data, which Microsoft discontinued in 2014. Think of PowerApps as the evolution and replacement of InfoPath. PowerApps is an Azure-hosted service that lets you build data-driven business apps that run in a browser or on mobile devices. Unlike Access and InfoPath, PowerApps supports many cloud and on-premises data sources. Connectivity to on-premises data sources requires installing a data gateway, which is the same software that Power BI uses to tunnel into your corporate network.

Targeting power users and developers, PowerApps requires minimal code and has its own expression language. In the process of building an app, you specify where you want your data to come from. Then, you choose pre-defined layouts for your forms and link them together. You can develop the app online or by downloading a Windows-based desktop app. Developing the app is free. Sharing the app requires a plan, but if you use Office 365, chances are that your O365 business plan already includes PowerApps.

What’s Flow?

As I explained in my “Going with the Flow” blog, Microsoft Flow is another cloud service for creating automation flows without writing code, similar to Zapier’s “zaps” and IFTTT’s “recipes”. Like PowerApps, Flow targets business users and developers willing to create basic multistep flows for automating tasks. For example, you might want to start a workflow when an email is received from specific recipients or containing specific words. Microsoft Flow makes it possible to automate this scenario and many, many more. Business users can connect their accounts, take advantage of pre-built templates, and create their own flows in an easy visual designer. And your PowerApps app can start a flow thanks to the fact that PowerApps and Flow work together.

How Can PowerApps and Flow Enrich BI?

After the integration with Visio, Power BI just took another leap forward with the introduction of the PowerApps custom visual. You can use this visual in your Power BI reports to bridge the Data Analytics and Developer worlds. Now your Power BI apps can integrate with Line of Business (LOB) applications in versatile ways, and the app can trigger workflows. All this with minimum code! Suppose you have a Power BI report that shows sales by customers. As the user browses the data, he realizes that some changes need to be made. Granted, the user can open the appropriate business app, make the changes there, and then go back to the report to see the changes. But read-only reports are so 20th century. Instead, why can't we allow the user to make the changes on the report by integrating it with PowerApps? This scenario is commonly referred to as writeback.

This is exactly the approach my Customer Writeback app demonstrates. The user selects a customer in the table and the customer details show up in the Change Customer visual to the right. This is the PowerApps visual that references my Customer Writeback PowerApps app. The user can make changes in the PowerApps form and save the changes back to the underlying database. Then the user can refresh the report to see these changes. Bringing this one step further, the app can start a workflow, such as when a formal approval is required to approve the changes.


If you have experience with PowerApps, implementing this scenario takes minutes. You can find the implementation steps and source code in my “Power BI Writeback” blog.

The Power BI integration with PowerApps and Flow opens exciting new possibilities and redefines the meaning of a report. These three technologies let you integrate your reports with "smart" applications that you can implement with almost no code! With more than 100 data sources, PowerApps and Flow can serve as an integration hub to mash up data from almost any place and create workflows, and you can then embed PowerApps forms in Power BI reports.

Teo Lachev
Prologika, LLC | Making Sense of Data
Microsoft Partner | Gold Data Analytics


Using Profiler to Trace Azure Analysis Services

Scenario: You use Azure Analysis Services and you want to trace events in SQL Server Profiler. However, you run Profiler, connect to AAS, and get greeted with "Unable to locate trace definition file Microsoft Analysis Services TraceDefinition 15.0.0.xml". The prompt gives you an option to download the trace definition from the server, but nothing happens. You'll encounter the same issue if you attempt to access a higher version of AAS or the Database Engine from a lower version of Profiler, such as when you install SSMS 2012 on your laptop but connect to SQL Server 2016 or higher.

Workaround: While we're waiting for Microsoft to provide the Azure AS trace definition, you can use this workaround (a scripted version follows the list):

  1. Navigate to the Profiler trace definition folder. In my case this folder is C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Profiler\TraceDefinitions. Copy the latest trace definition file (in my case it was “Microsoft Analysis Services TraceDefinition 14.0.800.xml”) and paste it as Microsoft Analysis Services TraceDefinition 15.0.0.xml. You need admin privileges to do this.
  2. Start Notepad or Notepad++ with admin rights and open Microsoft Analysis Services TraceDefinition 15.0.0.xml from that folder.
  3. Change the Major version to 15, minor version to 0, and build number to 0 at the top of the file and save the file.
  4. If you want the default trace templates so you don't have to configure the trace from scratch, navigate to the Profiler's Templates folder (C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Profiler\Templates\Microsoft Analysis Services). Copy the 140 folder and paste it as a 150 folder. This folder contains the Replay and Standard template definitions.
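
For convenience, here is a scripted sketch of the same workaround. Run it from an elevated PowerShell prompt; the paths and the 14.0.800 file name match my installation, and the version element names inside the trace definition are an assumption, so edit the file manually (step 3 in the list) if they differ in your copy:

$profilerDir = "C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Profiler"

# 1. Clone the latest AS trace definition as the 15.0.0 file Profiler asks for.
$source = Join-Path $profilerDir "TraceDefinitions\Microsoft Analysis Services TraceDefinition 14.0.800.xml"
$target = Join-Path $profilerDir "TraceDefinitions\Microsoft Analysis Services TraceDefinition 15.0.0.xml"
Copy-Item $source $target

# 2. Bump the version numbers at the top of the file (element names assumed).
(Get-Content $target -Raw) `
    -replace '<MAJOR>\d+</MAJOR>', '<MAJOR>15</MAJOR>' `
    -replace '<MINOR>\d+</MINOR>', '<MINOR>0</MINOR>' `
    -replace '<BUILDNUMBER>\d+</BUILDNUMBER>', '<BUILDNUMBER>0</BUILDNUMBER>' |
    Set-Content $target

# 3. Copy the default trace templates so you don't start from scratch.
Copy-Item (Join-Path $profilerDir "Templates\Microsoft Analysis Services\140") `
          (Join-Path $profilerDir "Templates\Microsoft Analysis Services\150") -Recurse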

As Analysis Services evolves, the version number will increase. You can connect SSMS to AAS and check its version in the server properties. In this case, version 15.0.1.245 corresponds to 150, which is why the folder name is 150.