I’ll be presenting “Personal BI with PowerPivot v2” for the Atlanta BI Group on Monday, February 27th. Join us to learn when personal BI makes sense and why PowerPivot is the best tool on the market when it does. The focus will be on the new features of PowerPivot version 2, which launches officially in a week or so.
My new book, Applied Microsoft SQL Server 2012 Analysis Services (Tabular Modeling), will start shipping in a week at all popular resellers, such as Amazon and B&N. I’ve been working on it for a few months and I’m excited to have it done. While you wait for the book, check out the following resources to get started with PowerPivot and Tabular:
– A sample chapter – “Introducing Business Intelligence Semantic Model”
– Video demos – I recorded over three hours of video content for selected exercises in the book
An insightful tour that provides an authoritative yet independent view of this exciting technology, this guide introduces the Tabular side of the innovative Business Intelligence Semantic Model (BISM) that promotes rapid professional and self-service BI application development. Business analysts and power users will learn how to integrate data from multiple data sources, implement self-service BI applications with Excel, and deploy them to SharePoint. Business intelligence professionals and database administrators will discover how to build corporate solutions powered by BISM Tabular, delivering supreme performance with large data volumes, and how to implement a wide range of solutions for the entire spectrum of personal-team-organizational BI needs.
WHAT’S INSIDE:

Chapter 1: Introducing Business Intelligence Semantic Model

PART I: PERSONAL BI WITH POWERPIVOT FOR EXCEL
Chapter 2: Personal BI Basics
Chapter 3: Importing Data
Chapter 4: Refining the Model
Chapter 5: Analyzing Data
Chapter 6: Implementing Calculations

PART II: TEAM BI WITH POWERPIVOT FOR SHAREPOINT
Chapter 7: Team BI Basics
Chapter 8: SharePoint Insights
Chapter 9: Managing PowerPivot for SharePoint

PART III: ORGANIZATIONAL BI WITH ANALYSIS SERVICES TABULAR
Chapter 10: Organizational BI Basics
Chapter 11: Designing Storage and Security
Chapter 12: Managing Tabular Models
Enjoy!
I’m working on minimizing the processing time for a cube with some two billion rows in its fact table. I put together an ETL package that processes the dimensions with ProcessUpdate, followed by processing the latest partitions. I noticed that processing one of the dimensions, which happens to be the largest (some 1 million rows) and most complicated, resulted in partition scans. SQL Profiler showed the scans with the following events:
Finished processing the ‘<partition name>’ partition
These partition scans don’t result in SQL queries to the database and should normally execute pretty fast. In this case, however, the scans were taking plenty of time, resulting in some 15 minutes of delay in the incremental processing flow. With some help from T.K. Anand, it turned out that I had a design issue with that dimension. I discovered the issue by changing the KeyDuplicate setting in the dimension ErrorConfiguration so duplicate key errors are reported instead of ignored, and then fully processing the dimension. Dimension processing failed because the attribute relationships implied a one-to-many relationship between two attributes, but the data didn’t support it. Somewhere along the design cycle, I had turned off KeyDuplicate, probably to get around this very issue and process the dimension successfully.
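To find where the data breaks an implied one-to-many relationship before reprocessing, you can run a quick check against the relational source. This is a minimal sketch with hypothetical names (a DimCustomer table where each CityKey should map to exactly one StateKey); substitute the two attributes from your own attribute relationship:

-- Any CityKey returned here maps to more than one StateKey,
-- violating the one-to-many rule the attribute relationship implies.
SELECT CityKey
FROM DimCustomer
GROUP BY CityKey
HAVING COUNT(DISTINCT StateKey) > 1;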
So, the moral of this story is:
– Always check your dimensions with the Dimension Health Check feature of BIDS Helper.
– Don’t turn off KeyDuplicate.
BTW, the problem with duplicate key errors is that the server can move members around, which would cause indexes and aggregations to be rebuilt. For example, if you have the following rows in the source data:
Attr1Key Attr2Key
55 32
55 35
The first time, the engine may store 32 as the related member, but the next time it sees the rows during ProcessUpdate, it could pick 35 instead. And that’s going to cause indexes and aggregations to need rebuilding.
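If the duplicates turn out to be data quality issues, one way to make ProcessUpdate deterministic is to collapse the “many” side to a single related member in the source query. Here is a hedged sketch using the sample columns above (DimTable is a placeholder name, and MIN is just one arbitrary but stable tie-breaker; the right fix depends on which value is actually correct):

-- Collapse each Attr1Key to one stable Attr2Key so the engine always
-- sees the same related member on every ProcessUpdate.
SELECT Attr1Key, MIN(Attr2Key) AS Attr2Key
FROM DimTable
GROUP BY Attr1Key;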
We had the pleasure of having some 70 people attend our January 30th, 2012 meeting of the Atlanta BI Group. Our sponsor, Matrix Resources, was kind enough to give us the auditorium, and FisionIO sponsored the meeting. Phil Per-Lee gave a “Prototypes with Pizza” presentation titled Connecting the Dots, and Carlos Rodrigues rocked the stage with the main presentation about dimensional modeling.
I’ve uploaded pictures to the Photo Gallery section of our website and the slides to the Resources section. We’ve got some cool presentations lined up for the next few months. Check our Calendar section to see what’s coming.