Posts

Happy New Year 2012!

As 2011 is winding down, it’s time to reflect on the past and plan for the future. 2011 has been a very exciting year for Microsoft BI and me.

  1. Gartner positioned Microsoft as a leader in the 2011 Magic Quadrant for Business Intelligence Platforms.
  2. Although SQL Server 2012 will technically ship early next year, we can say it’s a done deal as it’s currently in the release candidate phase. The most important news from a BI perspective is the evolution of the Business Intelligence Semantic Model (BISM), which is an umbrella name for both the Multidimensional and Tabular models.
  3. The Tabular model gives us a nice continuum on a single platform: personal (PowerPivot for Excel), team (PowerPivot for SharePoint), and organizational (Analysis Services Tabular).
  4. Power View extends the BI reporting toolset with a sleek web-based reporting tool for authoring highly interactive and presentation-ready reports.
  5. In its second release, Master Data Services (MDS) comes of age and now allows end users to use Excel to manage master data. The newcomer, Data Quality Services (DQS), complements MDS nicely in the never-ending pursuit of clean and trusted data. Integration Services also has nice enhancements. Finally, columnstore indexes will help aggregate large datasets, such as in the scenario I mentioned in this blog (see the sketch below this list).
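To illustrate the columnstore point, here is a minimal sketch, assuming a hypothetical FactSales table in a SQL Server 2012 database. A nonclustered columnstore index stores the listed columns as compressed column segments, which can speed up scan-and-aggregate queries over large datasets considerably.

  -- Minimal sketch, assuming a hypothetical dbo.FactSales table (SQL Server 2012).
  -- The nonclustered columnstore index stores these columns as compressed segments,
  -- which the engine can scan and aggregate much faster than row storage.
  CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
  ON dbo.FactSales (DateKey, ProductKey, SalesAmount);

  -- An aggregation like this can now be answered from the columnstore index.
  -- (In SQL Server 2012 the table becomes read-only while the index exists.)
  SELECT DateKey, SUM(SalesAmount) AS TotalSales
  FROM dbo.FactSales
  GROUP BY DateKey;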

Looking forward to 2012 and beyond, here is my top 5 BI wish list:

  1. Extending the Tabular capabilities with more professional features, such as scope assignments, role-playing dimensions, MDX query support, and so on, to further extend its reach in the corporate space. Ideally, I expect at some point in the future a unification of Multidimensional and Tabular so BI pros don’t have to choose a model.
  2. Extending Power View to support multidimensional cubes. Further, in the reporting area, I expect an embeddable web-based OLAP browser (it’s time for Dundas Chart to come back to life) and an improved MDX query designer (no, I haven’t lost hope for this one).
  3. Enhanced Excel BI capabilities so Excel becomes the BI tool of choice. This includes supporting PowerPivot natively and overhauling the reporting capabilities beyond the venerable PivotTable and PivotChart. Ideally, what I am hoping for is decoupling Power View from SharePoint and integrating it with Excel and custom web applications. Power View is too cool to be confined only to SharePoint.
  4. Extending Microsoft Azure with BI capabilities to let solution providers host BI models in the cloud.
  5. Bringing BI to mobile devices.

On the personal side of things, I’ve been fortunate to stay healthy and busy (very busy). The Atlanta BI group, which I am leading, has grown in size and we now enjoy having 40-50 people attending our monthly meetings. For the past few months, I’ve been working on my next book, Applied Microsoft SQL Server 2012 Analysis Services (Tabular Modeling), which I expect to get published in March. And, my consulting business has been great!

I wish you a healthy and prosperous year! I hope to meet many of you in 2012. Meanwhile, you can find me online at the usual places: www.prologika.com | blog | linkedin | twitter.

Happy New Year!

MVP For Another Year!

Just got the news that my MVP status has been extended for another year! This makes it six consecutive years as an MVP and as a member of an elite group of professionals that I am proud to belong to.

Most Requested Features

You can use the Microsoft Connect website to find the most requested features. Unfortunately, the search doesn’t let you specify a product, so the results may include other products. For example, searching on “reporting services” may also bring in results related to Analysis Services. Nevertheless, it was quite interesting to find the top-voted suggestions. For example, the following query shows the top suggestions for Reporting Services (flip to the Suggestions tab):

https://connect.microsoft.com/SQLServer/SearchResults.aspx?KeywordSearchIn=2&SearchQuery=%22reporting%22+AND+%22services%22&FeedbackType=1&Scope=0&SortOrder=10&TabView=0&wa=wsignin1.0

  1. Reporting Services-Recognize multiple result sets returned from a stored procedure – (50 votes)
  2. Merging / Linking datasets on report level (50 votes) – No 6 on my SSRS Top 10 wish list.
  3. SQL Reports should support stylesheets (43 votes) – No 9 on my list.
  4. Support for XML Paper Specification (XPS) Output Format (29 votes) – I am personally surprised about this one.
  5. Reporting Services Security Using Membership and Roles (29 votes), and so on

Microsoft Live Labs Pivot

In case you’ve missed this, the Pivot era has begun. After Excel PivotTable and PivotChart, we’ll have PowerPivot in SQL Server 2008 R2. But Pivot evolves… A co-worker showed me today a glimpse of the Pivot future, which I guess is the Microsoft Live Labs Pivot. Since grids and charts are not cool anymore, we now have pictures and animation. It’s hard for me to understand at this point how this would apply to Business Intelligence, but the Silverlight app with all these pictures sure looks catchy.

Long live Pivot!

Cumulative Update 3 for SQL Server 2008 Service Pack 1

Microsoft today released Cumulative Update 3 for SQL Server 2008 Service Pack 1. Among other things, it fixes the Report Builder 2.0 ClickOnce deployment issue on x64 which I reported here.

Google – The Best Thing that Ever Happened to Microsoft

On a different subject, you’ve probably heard the news: Google will release an operating system called Google Chrome OS which will challenge Windows and instill fear into the Microsoft camp. To the contrary, I think Google is the best thing that ever happened to Microsoft. How come?

I remember reading somewhere that after perestroika, Mikhail Gorbachev supposedly told Ronald Reagan, “We’ll now do the worst thing to you (the USA); we’ll leave you without an enemy.” Perhaps too extreme, but paraphrased to business, competition is a good thing. When challenged, good companies become better, products improve, and consumers benefit. So, I hope Google will continue expanding its ambitions. Similarly, I hope Microsoft gives Google a run for its money by competing relentlessly to increase its share of the search market. I’d personally love to see business intelligence features added to the search results. Why not, the first two letters of Bing are B and I, right? How come I can’t even sort the search results by date to see the most recent matches on top? A chart showing the popularity of the search term over time? People who searched for X also searched for Y?

It will be interesting to see how this mega competition evolves over time. As the Chinese saying goes, “may you live in interesting times.”

Transmissions from TechEd USA 2009 (Day 3)

Today was a long day. I started by attending Richard Tkachuk’s A First Look at Large-Scale Data Warehousing in Microsoft SQL Server Code Name “Madison” session. Those of you familiar with Analysis Services will probably recognize the presenter’s name, since Richard came from the Analysis Services team and maintains the www.sqlservernanalysisservices.com website. He recently moved to the Madison team. Madison is a new product based on a product by DATAllegro, which Microsoft acquired some time ago. As the session name suggests, it’s all about large-scale databases, such as those exceeding 1 terabyte of data. Now, this is an enormous amount of data that not many companies will ever amass. I’ve been fortunate (or unfortunate) that I never had to deal with such data volumes. If you do, then you may want to take a look at Madison. It’s designed to maximize sequential querying of data by employing a shared-nothing architecture where each processor core is given dedicated resources, such as a table partition. A controller node orchestrates the query execution. For example, if a query spans several tables, the controller node parses the query to understand where the data is located. Then, it forwards the query to each computing node that handles the required resources. The computing nodes are clustered in a SQL Server 2008 failover cluster which runs on Windows Server 2008. The tool provides a management dashboard where the administrator can see the utilization of each computing node.
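To make the shared-nothing idea more concrete, here is a conceptual sketch only (the Node1/Node2 linked servers and the FactSales table are hypothetical, and Madison performs this decomposition internally): an aggregate query is split into partial aggregates that each computing node answers from its own partition, and the controller node merges the partial results.

  -- Conceptual sketch of scatter-gather aggregation in a shared-nothing design.
  -- Node1/Node2 are hypothetical linked servers standing in for computing nodes;
  -- in Madison the controller node performs this decomposition transparently.
  SELECT Region, SUM(PartialSales) AS TotalSales
  FROM (
      SELECT Region, SUM(SalesAmount) AS PartialSales
      FROM Node1.SalesDW.dbo.FactSales        -- partition owned by computing node 1
      GROUP BY Region
      UNION ALL
      SELECT Region, SUM(SalesAmount) AS PartialSales
      FROM Node2.SalesDW.dbo.FactSales        -- partition owned by computing node 2
      GROUP BY Region
  ) AS PartialResults
  GROUP BY Region;                            -- controller merges the partial aggregates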

Next, I attended the Fifth Annual Power Hour session. As its name suggests, TechEd has been carrying out this session for the past five years. The session format was originally introduced by Bill Baker, who’s no longer with Microsoft. If you’ve ever attended one of these sessions, you know the format. Product managers from all BI teams (Office, SharePoint, PerformancePoint, and SQL Server) show bizarre demos and throw t-shirts and toys at everything that moves (OK, sits). The Office team showed an Excel Services demo where an Excel spreadsheet ranked popular comics characters. Not to be outdone, the PerformancePoint team showed a pixel-based image of Mona Lisa. I am not sure what PerformancePoint capabilities this demonstrated, since I don’t know PerformancePoint that well, but it looked great.

The Reporting Services team showed a cool demo where the WinForms ReportViewer control would render a clickable map (the map control will debut in SQL Server 2008 R2) that re-assigns the number of Microsoft sales employees across the US states. For me, the coolest part of this demo was that there was no visible refresh when the map image was clicked, although there was probably round-tripping between the control and the server. Thierry D’Hers later clued me in that there is some kind of buffering going on, which I have to learn more about. This map control looks cool! Once I get my hands on it, maybe with some tweaking I’ll be able to configure it as a heat map that is not geospatial.

Finally, Donald Farmer showed another Gemini demo, which helped me learn more about Gemini. I realized that 20 million+ rows were compressed to a 200 MB Excel file. However, the level of compression really depends on the data loaded in Excel. Specifically, it depends on the redundant values in each column. I learned that the in-memory model constructed in Excel is implemented as an in-process DLL whose code was derived from the Analysis Services code base. The speed of the in-memory model is phenomenal! 20 million rows sorted within a second on Donald’s notebook (not even a laptop, mind you). At this point, Microsoft hasn’t decided yet how Gemini will be licensed and priced.

As usual, after lunch I decided to hang around the BI learning center and help with questions. Then, it was show time for my presentation! I don’t know why, but every TechEd I get one of these rooms that I feel intimidated just looking at. How come Microsoft presenters who demo cooler stuff than mine, such as features in the next version, get smaller rooms and I get these monstrous rooms? It must be intentional; I have to ask the TechEd organizers. The room I got was next to the keynote hall and could easily accommodate 500-600 people, if not more. Two years ago, I actually had a record of 500+ people attending my session, which was scheduled right after the keynote.

This year, the attendance was more modest. I don’t have the final count yet, but I think about 150+ folks attended my session, so there was plenty of room to scale up. I think the presentation went very well. The preliminary evaluation reports confirm this. I demoed report authoring, management, and delivery tips sprinkled with real-life examples. We had a good time and I think everyone enjoyed the show.

It’s always good to know that your work is done. I look forward to enjoying the rest of TechEd and LA.

Transmissions from TechEd USA 2009 (Day 2)

I started the day by attending Donald Farmer and Daniel Yu’s session “Creating the Right Cubes for Microsoft Excel and Excel Services”, hoping I’d get a sneak preview of Excel 2010. Alas, it was all about refining the cube definition with display folders, perspectives, hierarchies, etc. so it appears more user-friendly in Excel. Later, I learned that Office 2010 (or whatever it will be called) is under strict NDA, which explains the lack of demos. The most interesting thing about that session was that I finally understood why the SSAS team decided to scale down the cube wizard in SSAS 2008 to generate basic dimensions only. The reason was performance. You see, the BIDS 2005 cube wizard would oftentimes suggest non-optimal dimension hierarchies, and the modeler wouldn’t revise the design, leading to bad performance.

Next, I attended Thierry D’Hers’ session Top Ten Reasons for Using Microsoft SQL Server 2008 Reporting Services. It’s always good to watch RS-related sessions. Thierry listed community support as one of the reasons for organizations to consider moving to SSRS. I agree with this, and the great community support is applicable to all MS technologies. Speaking about BI only, a few years ago there wasn’t a single book about Cognos, for example. Granted, the last time I looked on Amazon there was one book. In comparison, Microsoft has a vibrant community of book authors, MVPs, trainers, etc. Almost every publisher has a book about SSRS 2008. This is of course good for the community, though not so good for authors, as the competition is tough :)

Thierry listed Report Builder 2.0 as the #1 reason to move to SSRS 2008, followed by data visualization and tablix. Based on my real-life projects, I’d personally have listed them in the reverse order, with tablix being #1. This session officially announced that the map control will make it into the Kilimanjaro release (now officially called SQL Server 2008 R2). Later on, Thierry was kind enough to show me a demo of the map pre-release bits. One of the data modes uses the SQL Server 2008 geospatial data types, which let you map any region in the world. Thierry showed a cool report of the worldwide sales of SQL Server, where each country had a different color gradient based on the sales volume.

After lunch, I hung around the BI area in the learning center to answer questions and rub shoulders with Microsoft employees and peers. I was surprised to learn that there are only two Reporting Services-related sessions for the entire TechEd, and one of them is mine! All of a sudden, I felt 2″ taller. Later on, feeling adventurous enough to learn something completely new, I attended a SQL Server 2008 Failover Clustering session only to realize how much I don’t know about it, since I’ve never used it. BTW, there are great advancements in SQL Server 2008 failover clustering, such as the ability to upgrade or patch a cluster node without stopping it.

Transmissions from TechEd USA 2009 (Day 1)

Day 1 of TechEd 2009 is almost over, with the exception of the Community Influencers Party tonight. I heard that this year they expect 7,000 attendees. This is a huge scale-down from previous years. For instance, we had 16,000 attendees at TechEd USA 2007. The economy is hitting everything hard.

I thought the keynote was kind of lame. Judging by it, Microsoft has only three products: Windows 7 (officially announced to ship around the holidays, although Microsoft didn’t say which holidays), Windows Server 2008 (the buzz is now the forthcoming R2 release), and Exchange Server 2010. Unlike previous TechEds, there wasn’t a single announcement about other products. SQL Server KJ, Office 2010, Azure, dev tools? Nope, apparently not worth mentioning. Sure, Mark Russinovich, whom I respect very much, did some cool Windows 7 demos, but they were not enough to pique my interest. I understand that the OS and Exchange Server are bedrock for every business and that, after the sad Vista saga, Microsoft has to show the world that it will now do things right with Windows 7, but the BI soul in me was thirsty for more.

After lunch, I hung around the BI area of the Learning Center, where I answered questions and met with other peers, including Nick Barclay (MVP), whom I had wanted to meet in person for a while. Then, I attended Donald Farmer and Kamal Hathi’s excellent session Microsoft Project Code Name “Gemini”: Self-Service Analysis and the Future of BI, and I had the chance to see Gemini, which I blogged briefly about before without knowing too much, in action for the first time and gain more in-depth knowledge.

Gemini is an end-user-oriented Excel add-in that will let the user acquire data from a variety of data sources, including SSRS reports (SSRS KJ will expose reports as data feeds) and SharePoint lists, and load it into an Excel spreadsheet. The tool crunches data very fast even on a modest computer (the demo showed a notebook computer working with millions of rows) thanks to its ability to compress column-level data. This works because a dataset column would typically contain redundant values.

Once data is loaded in Excel, the tool will attempt to automatically determine the relationships between datasets (loaded in separate spreadsheets) to create a hidden dimensional model consisting of in-memory fact and dimension tables. The user will be able to manually specify the dataset relationships by telling the tool which column should be used to join the datasets (very much like joining relational tables). Moreover, the user will be able to define calculated columns using Excel-style formulas. Finally, as the add-in builds an in-memory cube behind the scenes, the user will be able to slice and dice data in a PivotTable report. So, no Analysis Services is needed if all the user wants is to manipulate data on the desktop.
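Gemini’s in-memory engine isn’t queried with T-SQL, of course, but conceptually an auto-detected relationship behaves much like a relational join on the shared column. The sketch below, with hypothetical Sales, DimDate, and DimProduct datasets, illustrates the kind of join the engine establishes behind the scenes so a PivotTable can aggregate the fact data by dimension attributes.

  -- Conceptual illustration only: hypothetical datasets imported into Gemini.
  -- The inferred relationships on DateKey and ProductKey act like these joins,
  -- letting a PivotTable slice SalesAmount by date and product attributes.
  SELECT d.CalendarYear,
         p.Category,
         SUM(f.SalesAmount) AS TotalSales
  FROM Sales AS f                                      -- fact dataset (e.g. from a data feed)
  JOIN DimDate AS d ON d.DateKey = f.DateKey           -- relationship inferred on DateKey
  JOIN DimProduct AS p ON p.ProductKey = f.ProductKey  -- relationship inferred on ProductKey
  GROUP BY d.CalendarYear, p.Category;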

Where things get more interesting is deploying the models on the server. To do so, the end user would deploy the Excel spreadsheet to the MOSS Report Library. Note that MOSS is required for server-side deployment. When other users request the spreadsheet, an Analysis Services redirector will understand that this is a Gemini model and service the requests from a server cube. At this point it’s not clear how exactly the server cube will be built and whether it can be managed in SSMS. Once the cube returns data, Excel Services will kick in to return the data in HTML. A Reporting Services client can also connect to the server cube by its URL. This is no different than connecting to a regular cube, as Reporting Services will launch the familiar MDX query designer.

So, where is IT in the new Gemini world? IT will use a cool MOSS dashboard to understand who has deployed which models and how the models are used, such as when the datasets were refreshed, which models are the most popular, what resources these models take on the server, etc.

What’s my personal take on Gemini? It’s not up to me to decide how useful it is, since it’s a business-oriented tool, like Report Builder 2.0. Business users will have the final word. Based on my personal experience, though, the data analytics problems that I need to solve with traditional Analysis Services cubes surpass Gemini’s capabilities by far. So, don’t throw your MDX knowledge out the door yet. In my line of work, I can see Gemini being useful as a cube prototyping tool, especially in the early stages of requirement gathering, where data can be typed in Excel and I can demonstrate to users what a cube can do for them. Of course, Microsoft’s plans for Gemini are much more ambitious than that. In an ideal world, all business users would upgrade to Office 2010 and create cool Gemini models to give IT folks a long-deserved break ;-). Or so the fairytale goes….

To wrap up the day, I attended the What’s New in Microsoft SQL Data Services presentation by Rick Negrin, to find out that SQL Data Services is nothing more than SQL Server running in Microsoft data centers. SQL Data Services will support two application connectivity modes: a “code near” mode, where the application (typically a web application) is deployed to Azure, and a “code far” mode, where the application connects to SQL Server over the Internet using the TDS protocol. Microsoft’s role is to provide scalability and failover. Not all SQL Server features will be available in version 1. For example, CLR will not make the cut.

A long and tiring day. I am off to the party now.

SQL Server 2005 Service Pack 3 is Out

Yesterday, Microsoft released SQL Server 2005 Service Pack 3. Reporting Services Report Builder added support for Teradata. Note that some manual configuration is required to turn things on, and you need to install the .NET provider from Teradata.