Prologika Business Intelligence Consulting and Training in Atlanta | https://prologika.com/feed/atom/

LLM Adventures: xAI Grok
Prologika - Teo Lachev | https://prologika.com/?p=9359 | 2025-02-18

I’ve recently been impressed with xAI Grok compared to Microsoft Copilot. Grok performs better than Copilot, which seems to be crippled with political correctness to the point of absurdity, although it has been getting more permissive of late. But the most important Grok advantage is that it’s always current… and it doesn’t crash every now and then.

A couple of examples:

  1. Question: Where is this image from?

(Image: a group of people walking in a cave)

    1. Microsoft Copilot – what exactly is inappropriate is not clear.
      (Image: a white box with black text)
    2. Grok (impressive!) – This image depicts the rock formations and cave dwellings of Cappadocia, a historical region in central Turkey. The unique rock structures and the cave homes carved into them are iconic features of this area, particularly around the town of Göreme.
  2. Question: What are the most important political events to watch for tomorrow (as of 2/18/2025)?
    1. Microsoft Copilot (a chaotic mix of past and current events):
      1. Presidential Primaries: Former President Donald Trump and President Joe Biden have both secured enough delegates to become their parties’ presumptive nominees for president. The primaries in Georgia, Mississippi, and Washington state have been particularly significant.
      2. Senate Hearings: There are ongoing confirmation hearings for various cabinet positions, including the Department of Homeland Security and the Department of Housing and Urban Development.
      3. International Relations: Discussions and negotiations on international issues, such as the ongoing conflict in Ukraine and relations with China, are likely to be in the spotlight.
      4. Domestic Policy Debates: Debates on domestic policies, including immigration reform, climate change, and economic policies, will continue to be important topics
    2. Grok – Based on the available information, here are some of the most important political events to watch for on February 19, 2025:
      1. Pat King’s Sentencing in Canada: This event is noted in posts on X, indicating its significance as people are discussing its impact. Pat King was involved in the Freedom Convoy protests, and his sentencing could have broader implications on discussions around civil liberties and protest laws in Canada.
      2. Global Political Dynamics: Although not specific to February 19, the context from web sources suggests a focus on ongoing geopolitical tensions and diplomatic engagements. For instance, the broader political calendar highlights events like the NATO defense ministers’ meeting in Brussels, which might influence or be influenced by daily developments in international relations.

I’m looking forward to Grok 3!

Atlanta Microsoft BI Group Meeting on February 3rd (Fabric Warehouse vs. Fabric Lakehouse)
Prologika - Teo Lachev | https://prologika.com/?p=9355 | 2025-01-28

Atlanta BI fans, please join us in person for our next meeting on Monday, February 3rd at 18:30 ET. Jeff Levy (Data Architect @ Protiviti) will present the core concepts, architectures, and use cases of the Fabric Warehouse and Fabric Lakehouse. Your humble correspondent will walk you through the most significant Power BI and Fabric enhancements of late. For more details and sign up, visit our group page.

Presentation: Fabric Warehouse vs. Fabric Lakehouse: Choosing the Right Architecture for Your Environment
Delivery: In-person
Level: Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (news, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Overview: The modern data landscape demands scalable, flexible, and efficient architectures to support diverse business needs. With Microsoft Fabric, two leading paradigms have emerged to address these challenges: the Fabric Warehouse and the Fabric Lakehouse. While both tools aim to provide robust solutions for data storage, processing, and analytics, their approaches, strengths, and trade-offs differ significantly.

This presentation explores the core concepts, architectures, and use cases of the Fabric Warehouse and Fabric Lakehouse. I will compare their performance in areas such as data integration, scalability, and cost-efficiency. Attendees will gain insights into how these approaches align with specific business objectives and workloads, enabling informed decisions about which model best suits their organization’s data strategy.

Sponsor: Protiviti (www.protiviti.com) is a global consulting firm that delivers deep expertise, objective insights, a tailored approach and unparalleled collaboration to help leaders confidently face the future. Protiviti and its independent and locally owned member firms provide clients with consulting and managed solutions in finance, technology, operations, data, digital, legal, HR, risk and internal audit through a network of more than 90 offices in over 25 countries.


TMDL View in Power BI Desktop
Prologika - Teo Lachev | https://prologika.com/?p=9351 | 2025-01-22

I had the privilege to participate in the early preview program of the new TMDL View in Power BI Desktop, which is currently in public preview in the latest January release of Power BI Desktop. Without reiterating what was said in the announcement, I’d like to mention three main benefits of this feature:

  1. Ability to access the entire model metadata – This includes features that don’t have a user interface in Power BI Desktop. Traditionally, BI developers have been relying on Tabular Editor to do so. Now you have another option, although it requires knowing the TMDL language. Alas, TMDL doesn’t come with a user interface, although it does support autocomplete.
  2. Ability to copy specific model features from one Power BI Desktop file to another – For example, I have scripted a calculation group (a rough sketch of what such a script looks like follows this list). Now, I can open another Power BI Desktop file, copy the script and apply it. Of course, the target model must include the referenced entities, otherwise I’ll get an error.
  3. Automating tasks – Hopefully, in the near future the feature will support creating add-ins to automate certain tasks, much like macros automate Excel by programming the Excel VBA object model. For example, a developer should be able to use the Tabular Object Model (TOM) API to create TMDL scripts and apply them to a semantic model.
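
To give a rough idea of what a scripted calculation group looks like, here is a minimal TMDL sketch. This is an illustration under assumptions, not output copied from the tool: the table, column, and calculation item names are hypothetical, the exact properties Power BI Desktop generates may differ, and TMDL is indentation-sensitive (the tool emits tabs).

createOrReplace

    table 'Time Intelligence'

        calculationGroup
            precedence: 1

            calculationItem 'Current' = SELECTEDMEASURE()

            calculationItem 'YTD' = CALCULATE(SELECTEDMEASURE(), DATESYTD('Date'[Date]))

        column 'Time Calculation'
            dataType: string
            sourceColumn: Name
            sortByColumn: Ordinal

        column Ordinal
            dataType: int64
            isHidden
            sourceColumn: Ordinal

Pasting a script like this into the TMDL view of another model and applying it should create the calculation group there, provided the referenced 'Date'[Date] column exists in the target model.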

Prologika Newsletter Winter 2024
Prologika - Teo Lachev | https://prologika.com/?p=9337 | 2024-12-14

I recently conducted an assessment for a client facing memory pressure in Power BI Premium. You know, those pesky out-of-memory errors when refreshing a biggish dataset. They started with P1, moved to P2, and are now on P3, but still more memory is needed to satisfy the memory appetite of a full refresh. The runtime memory footprint of the problematic semantic model with imported data is 45 GB, and they’ve done their best to optimize it. This newsletter outlines a few strategies to tackle excessive memory consumption with large semantic models. Unfortunately, given the current state of Power BI boxed capacities, no option is perfect, and in the end a compromise will probably be needed somewhere between latency and performance.

Why I don’t like Premium licensing

Since its beginning, Power BI Pro per-user licensing (and later Premium Per User (PPU) licensing) has been very attractive. Many organizations with a limited number of report users flocked to Power BI to save cost. However, organizations with more BI consumers gravitated toward premium licensing, where they could have an unlimited number of report readers for a fixed monthly fee starting at a list price of $5,000/month for P1. Sounds like a great deal, right?

I must admit that I detest the premium licensing model because it boxes you into certain resource constraints, such as 8 backend cores and 25 GB of RAM for P1. There are no custom configurations to let you balance compute and memory needs. And while there is an auto-scale compute model, it’s very coarse and applies only to processing cores. The memory constraints are especially problematic given that imported models are memory resident and require more than twice the memory for a full refresh. From the outside, these memory constraints seem artificially low to force clients into perpetual upgrades. The new Fabric F capacities that supersede the P plans are even more expensive, justifying the price increase with the added flexibility to pause the capacity, which is often impractical.

It looks to me like premium licensing is a pretty good deal for Microsoft. Outgrown the 25 GB of RAM in P1? Time to shell out another $5K per month for 25 GB more, even if you don’t need more compute power. Meanwhile, the price of 32 GB of RAM is less than $100 and falling.

It would be great if at some point Power BI introduced custom capacities. Even better, how about auto-scaling, where the capacity resources (both memory and CPU) scale up and down on demand within minutes, such as adding more memory during refresh and reducing it when the refresh is over?

Strategies to combat out-of-memory scenarios

So, what should you do if you are strapped for cash? Consider evaluating and adopting one or more of the following memory-saving techniques:

  • Switching to PPU licensing with a limited number of report users. PPU is the equivalent of P3 and grants 100 GB of RAM per dataset.
  • Optimizing aggressively the model storage when possible, such as removing high-cardinality columns
  • Configuring aggressive incremental refresh policies with polling expressions
  • Moving large fact tables to a separate semantic model (remember that the memory constraints are per dataset and not across all the datasets in the capacity)
  • Implementing DirectQuery features, such as composite models and hybrid tables
  • Switching to a hybrid architecture with on-prem semantic model(s) hosted in SQL Server Analysis Services, where you can control the hardware configuration and you’re not charged for more memory.
  • Lobbying Microsoft for much larger memory limits or to bring your own memory (good luck with that but it might be an option if you work for a large and important company)

Considering Direct Lake storage

If Fabric is in your future, one relatively new option to tackle out-of-memory scenarios that deserves to be evaluated and added to the list is semantic models configured for Direct Lake storage. Direct Lake on-demand loading should utilize memory much more efficiently for interactive operations, such as Power BI report execution. This is a bonus on top of the fact that Direct Lake models don’t require refresh. Eliminating refresh could save a tremendous amount of memory to start with, even if you apply advanced techniques such as incremental refresh or hybrid tables to models with imported data.

I did limited testing to compare performance of import and Direct Lake and posted detailed results in the “Fabric Direct Lake: Memory Utilization with Interactive Operations” blog.

I concluded that if Direct Lake is an option for you, it should be at the forefront of your efforts to combat out-of-memory errors with large datasets.

On the downside, more than likely you’ll have to implement ETL processes to synchronize your data warehouse to a Fabric lakehouse, unless your data is in Fabric to start with, or you use Fabric database mirroring for the currently supported data sources (Azure SQL DB, Cosmos, and Snowflake). I’m not counting the data synchronization time as a downside.


Teo Lachev
Prologika, LLC | Making Sense of Data

Handling Data Quality and Data Enrichment
Prologika - Teo Lachev | https://prologika.com/?p=9343 | 2024-12-07

Some of the most difficult and tedious issues in almost every BI project are handling data quality issues (aka exceptions) and data enrichment. Data quality issues typically originate from wrong data entries or violated business rules, such as misspelled and misclassified products. Data enrichment tasks may go beyond the design of the original data source, such as introducing new taxonomies for products or customers.

In a recent project, a client needed guidance on where to handle data quality and enrichment processes. Like cakes and ogres, a classic BI solution has layers. Starting upstream in the data pipeline and moving downstream are data sources, ODS (data staging and change data tracking), EDW (star schema) and semantic models.

In general, unless absolutely necessary, I’m against master data management (MDM) systems because of their complexity and cost. Instead, I believe that most data quality and enrichment tasks can be addressed efficiently without formal MDM. I’m also a big believer in tackling these issues as far upstream as possible, ideally in the data source.

Consider the following guidelines:

  1. There are different types of exceptions and enrichment requirements, and they range in complexity and resolution domain. Each case must be triaged by the team to determine how to best handle it by examining the data pipeline in the upstream-to-downstream direction and traversing the layers.
  2. As a best practice, exceptions and enrichment tasks should be dealt with as far upstream as possible, because the further downstream the correction is made, the narrower its availability to consumers will be and the more implementation effort might be required.
    1. Data Source – examples include wrong entries, such as a wrong employee job start date, or adding custom fields to implement categories required for standard hierarchies. The team should always start with the source system in an attempt to address data quality issues at the source.
    2. ODS – examples include corrections not possible in the upstream layers but necessary for other consumers besides EDW.
    3. EDW – examples include corrections applicable only to analytics, such as an IsAttendance flag to indicate which attendance events should be evaluated.
    4. Semantic layer – examples include calculated columns for data enrichment that are best done in DAX, and additional fields the end user needs in a composite model.
  3. Shortcuts – When resolving the change upstream is time prohibitive, it should be permissible to handle the change further downstream while waiting for changes to an upstream system. For example, if fixing an issue in the data source is rather involved, a temporary fix can be done in ODS to prototype the change and expedite it to consumers. However, the team will remove that fix once the source system is enhanced.
  4. When consumers prefer access to uncorrected data, a pair of fields could be introduced as far upstream as possible (a minimal sketch follows this list), such as:
    1. EmployeeId – raw field
    2. EmployeeIdCorrected – corrected field. Joins to the related dimension table will be done on this field.
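
As a minimal illustration of the last guideline, here is a hypothetical T-SQL sketch. The schema, table, and column names are made up, and it assumes an existing ods.EmployeeFact table; the idea is that the ODS layer keeps the raw value alongside a corrected value sourced from an override table maintained by data stewards.

-- Hypothetical override table with steward-approved corrections
CREATE TABLE ods.EmployeeIdOverride (
    EmployeeId          VARCHAR(20) NOT NULL PRIMARY KEY, -- raw value as received from the source
    EmployeeIdCorrected VARCHAR(20) NOT NULL              -- corrected value to be used for joins
);

-- View exposing both the raw and the corrected field to downstream consumers
CREATE VIEW ods.vEmployeeFact
AS
SELECT f.FactId,
       f.EmployeeId,                                                         -- raw field, kept for auditing
       COALESCE(o.EmployeeIdCorrected, f.EmployeeId) AS EmployeeIdCorrected  -- joins to DimEmployee use this field
FROM ods.EmployeeFact AS f
LEFT JOIN ods.EmployeeIdOverride AS o
    ON o.EmployeeId = f.EmployeeId;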


LLM Adventures: Microsoft Copilot Studio (The Good, The Bad, and The Ugly)
Prologika - Teo Lachev | https://prologika.com/?p=9322 | 2024-11-28

“A momentary lapse of reason
That binds a life to a life
You won’t regret, you will never forget
There’ll be no sleep in here tonight”
“One Slip”, Pink Floyd

Happy Thanksgiving! What better way for me to spend Thanksgiving week than doing more AI? After a year of letting it (and other Microsoft LLM offerings) simmer, and awakened by the latest AI hoopla from the Ignite conference, I took another look at Microsoft Copilot Studio. For the uninitiated, Copilot Studio lets you implement AI-powered smart bots (“agents”) for deriving knowledge from documents or websites. Basically, Copilot Studio is to retrieval-augmented generation (RAG) apps what Power BI is to self-service BI.

Copilot Studio licensing starts at $200 per month for up to 25,000 messages (interactions between user and agent) although at Ignite Microsoft hinted that pay-as-you-go licensing will be coming.

The Good

A few months ago, when I discussed RAG apps, it was obvious that a lot of custom code had to be written to glue the services together and implement the user interface. Microsoft Copilot Studio has the potential to change and simplify this. It offers a Power Automate-like environment for no-code, low-code implementation of AI agents and therefore opens new possibilities for faster implementation of various specialized AI agents across the enterprise. I was impressed by how easy the process was and how capable the tool was at creating more complex topics, such as conditional branches based on user input.

Like Power BI, the tool gets additional appeal from its integration with the Microsoft ecosystem. For example, it can index SharePoint and OneDrive documents. It can integrate with Power Automate, Azure AI Search and Azure Open AI.

I was impressed by how easy it is to use the tool to connect to and intelligently search an existing website. For now, I see this as its main strength. Organizations can quickly implement agents to help their employees or external users derive knowledge from intranet or Internet websites.

To demonstrate this, I implemented an agent to index my blog and embedded it below for you to try out before my free trial expires. Please feel free to ask more sophisticated questions, such as “What’s the author’s sentiment toward Fabric?”, “What are the pros and cons of Fabric?”, or “I need help with Power BI budget” (I got innovative here and implemented a conditional topic with branches depending on the budget you specify). I instructed the tool to stay only within the content of my website, so the answers are not diluted by other public sources. Given that no custom code was written, Copilot Studio is pretty impressive.

The Bad

Everyone wants to be autonomous, and AI agents are no exception. In fact, “autonomous agent” is the buzzword of the AI world today. Not to be outdone, Copilot Studio claims that it can “build agents that operate independently to dynamically plan, learn, and escalate on your behalf”. However, as the tool stands today, I don’t think there is much to this claim. Or it could be that my definition of “autonomous” is different from Microsoft’s.

To me, an autonomous agent must be capable of making decisions and taking actions on its own. It’s like telling your assistant that you plan a trip, giving her some constraints, such as how much to spend on hotel and air, and letting her make the travel reservations. As it stands, Copilot Studio offers none of this. It follows a workflow you specify. Again, its output is more or less a smarter bot than the ones you see on many websites.

However, at Ignite Microsoft claimed that autonomy is coming, so it will be interesting to see how the tool evolves. Don’t get me wrong. Even as it stands, I believe the tool has enormous potential for more intelligent search and retrieval of information.

The Ugly

My basic complaint as of now is performance. It took the tool 10 minutes to index a PDF document. Then, in a momentary lapse of reason, I connected it to an Azure SQL Database with Adventure Works containing 15 tables (the maximum number of tables currently supported), and it was still not done indexing after a day. Given that many implementations would require searching data in relational databases, I believe this is not acceptable. Not to mention there isn’t much insight into how far along the indexing is, or a way to limit the number of fields it should index.

Therefore, I believe most real-world architectures for implementing AI agents will take the path Copilot Studio -> Azure AI Search -> Azure OpenAI, where Copilot Studio is used for implementing the UI and workflows (topics and actions), while the data indexing is done by Azure AI Search with semantic ranking, in conjunction with Azure OpenAI for vector embeddings.

 

Atlanta Microsoft BI Group Meeting on December 2nd (Semantic Modeling as Code)
Prologika - Teo Lachev | https://prologika.com/?p=9320 | 2024-11-26

Atlanta BI fans, please join us online for our next meeting on Monday, December 2nd at 5 PM ET (please note the change to our usual meeting time to accommodate the presenter). Rui Romano (Product Manager at Microsoft) will discuss how the new TMDL language for Power BI models can unlock new scenarios that previously weren’t possible. For more details and sign up, visit our group page.

Presentation: “Semantic Modeling as Code” with TMDL using Power BI Desktop Developer Mode (PBIP) and VS Code
Delivery: Online
Level: Intermediate to Advanced

Overview: The landscape for developing enterprise-scale models has never been more exciting than it is now! Developer mode in Power BI Desktop and the new TMDL language unlock new scenarios that previously weren’t possible, such as great source control and co-development experiences with Git integration. Additionally, the TMDL Visual Studio Code extension offers a new, powerful and efficient, code-first semantic modeling experience. Join us to discover the new and powerful ways you can leverage TMDL to accelerate your model development and get a sneak peek into the TMDL roadmap from the Power BI product team.

Speaker: Rui Romano is an experienced Microsoft professional with a deep passion for data and analytics. He has spent the last decade helping companies make better data-driven decisions and is known for his innovative and practical solutions to complex problems. He currently works as a Product Manager at Microsoft on the Power BI product team, focusing on Pro-BI experiences.


Temporal Tables
Prologika - Teo Lachev | https://prologika.com/?p=9313 | 2024-11-10

I like SQL Server temporal tables for implementing ODS-style tables and change data tracking for three main reasons:

  1. SQL Server maintains the system versioning. By contrast, I have witnessed erroneous Start/End dates in pretty much all homegrown implementations. Further, SQL Server tracks the changes at millisecond granularity.
  2. There is a clean separation between the current state of data and historical data. SQL Server separates the historical changes to a history table.
  3. You can establish a flexible data retention policy. A retention policy can be established at the database or table level. SQL Server takes care of purging the expired data.

At the same time, temporal tables are somewhat more difficult to work with. For example, you must disable system versioning before you alter the table. Here is the approach recommended by the documentation for altering the schema:

BEGIN TRANSACTION
ALTER TABLE [dbo].[CompanyLocation] SET (SYSTEM_VERSIONING = OFF); -- temporarily disable system versioning
ALTER TABLE [CompanyLocation] ADD Cntr INT IDENTITY (1, 1);        -- make the schema change
ALTER TABLE [dbo].[CompanyLocation] SET                            -- re-enable system versioning
(
    SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = [dbo].[CompanyLocationHistory])
);
COMMIT;

However, if you follow these steps, you will be greeted with the following error when you attempt to restore the system versioning in the third step:

Cannot set SYSTEM_VERSIONING to ON when SYSTEM_TIME period is not defined and the LEDGER=ON option is not specified.

Instead, the following works:

ALTER TABLE <system versioned table>  SET (SYSTEM_VERSIONING = OFF) -- disable system versioning temporarily
-- make schema changes, such as adding new columns
ALTER TABLE <system versioned table>  ADD PERIOD FOR SYSTEM_TIME (StartDate, EndDate); -- restore time period
ALTER TABLE [mulesoft].[Employee] SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [mulesoft].[EmployeeHistory])) -- restore system versioning

Atlanta Microsoft BI Group Meeting on November 4th (Accelerating your Fabric Data Estate with AI & Copilot)
Prologika - Teo Lachev | https://prologika.com/?p=9310 | 2024-10-29

Atlanta BI fans, please join us in person for our next meeting on Monday, November 4th at 6:30 PM ET. Stacey Jones (Principal Data & AI Cross-Solution Architect at Microsoft) and Elayne Jones (Solutions Architect at Coca-Cola Bottlers Sales and Services) will explore the AI and Copilot capabilities within Microsoft Fabric. And I’ll help you catch up on the latest in Microsoft BI. I will sponsor the event, which marks the 14th anniversary of the Atlanta Microsoft BI Group! For more details and sign up, visit our group page.

Details

Presentation: Accelerating your Fabric Data Estate with AI & Copilot
Delivery: In-person
Date: November 4th, 2024
Time: 18:30 – 20:30 ET
Level: Beginner to Intermediate
Food: Pizza and drinks will be provided

Agenda:
18:15-18:30 Registration and networking
18:30-19:00 Organizer and sponsor time (events, Power BI latest, sponsor marketing)
19:00-20:15 Main presentation
20:15-20:30 Q&A

Venue
Improving Office
11675 Rainwater Dr
Suite #100
Alpharetta, GA 30009

Overview: In this presentation, we will explore the groundbreaking AI and Copilot capabilities within Microsoft Fabric, a comprehensive platform designed to enhance productivity and collaboration. By leveraging advanced machine learning algorithms and natural language processing, Microsoft Fabric’s AI/Copilot not only streamlines workflows but also provides intelligent insights and automation, empowering users to achieve more with less effort. Join us as we delve into the features and functionalities that make Microsoft Fabric an indispensable tool for modern enterprises.

Sponsor: CloudStaff.ai


Implementing Role-playing Dimensions in Power BI
Prologika - Teo Lachev | https://prologika.com/?p=9303 | 2024-10-11

Role-playing dimensions are a popular business requirement, yet they are challenging to implement in Power BI (and Tabular) due to a long-standing limitation that two tables can’t be joined multiple times with active relationships. Declarative relationships are both a blessing and a curse and, in this case, we are confronted with their limitations. Had Power BI allowed multiple active relationships, the user would have to be prompted for which path to take. Interestingly, a long time ago Microsoft considered a user interface for the prompting but dropped the idea for unknown reasons.

Given the existing technology limitations, you have two choices for implementing subsequent role-playing dimensions: duplicating the dimension table (either in the DW or the semantic model) or denormalizing the dimension fields into the fact table. The following summarizes the pros and cons of each option:

  1. Duplicate the dimension table in the semantic model or DW
    Pros:
      • No or minimal impact on ETL
      • Minimal maintenance in the semantic model
      • All dimension attributes are available
    Cons:
      • Metadata complexity and confusion (potentially mitigated with perspectives that filter the metadata for a specific subject area)
  2. Denormalize fields from the dimension into the fact table
    Pros:
      • Avoids role-playing dimension instances
      • More intuitive model for business users
    Cons:
      • Increased fact table size and memory footprint
      • Impact on ETL
      • Limited number of dimension attributes
      • Changes to the visited dimension must be tracked as Type 2 with incremental extraction (while they could otherwise be Type 1)
      • If applicable, inability to reuse the role-playing dimension for another fact table and do cross-fact-table analysis

So, which approach should you take? The middle path might make sense. If you need only a limited number of fields for the second role-playing dimension, you could add them to the fact table to avoid another dimension and confusion. For example, if you have a DimEmployee dimension and you need a second instance for the person making the changes to the fact table, you can add the administrator’s full name to the fact table assuming you need only this field from DimEmployee.

By contrast, if you need most of the fields in the role-playing instances, then cloning might make more sense. For example, analyzing fact data by shipped date or due date, which requires the established hierarchies in DimDate, could be addressed by cloning DimDate. Then, to avoid confusion, consider using Tabular Editor to create perspectives for each subject area, where each perspective includes only the role-playing dimensions applicable to that subject area.

Yet, a narrow-case third option exists when you need only role-playing measures, such as SalesAmountByShipDate and SalesAmountByDueDate. This scenario can be addressed by forcing DAX measures to “travel” the inactive relationship by using the USERELATIONSHIP function, as shown in the sketch below.
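
For illustration, here is a minimal DAX sketch of such a measure. The table and column names (FactSales, SalesAmount, ShipDateKey, DimDate, DateKey) are hypothetical; substitute the ones in your model, and make sure the corresponding inactive relationship exists.

SalesAmountByShipDate =
CALCULATE (
    SUM ( FactSales[SalesAmount] ),                               -- base aggregation
    USERELATIONSHIP ( FactSales[ShipDateKey], DimDate[DateKey] )  -- activate the inactive relationship for this calculation
)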

(Image: dimensions connected to a fact table in a database)

 
