Maintaining State in Reporting Services 2008

Sometimes, more advanced reporting needs require maintaining state with custom code. Recently, I had to implement a rather involved report consisting of two sections: a detail section that used a recursive sum, Sum(Fields!SomeField.Value, , True), and a summary section with additional calculations that reference some of the aggregated values in the detail section. Since Reporting Services is not Excel, you cannot reference arbitrary cells on a report. So, I had to cache the values from the detail section in a hashtable using custom code so that the summary section could obtain these values when needed. The following embedded code gets the job done:

Friend Shared _hashtable As New System.Collections.Hashtable()

' Builds a user-specific key so that concurrent users don't overwrite each other's values.
Function GetKey(ByVal id As Long, ByVal column As String) As String
    Return Report.User!UserID & id & column
End Function

' Caches a value under the given id/column combination and returns it,
' so the function can be called directly from a textbox expression.
Function AddValue(ByVal id As Long, ByVal column As String, ByVal value As Object) As Object
    Dim key As String = GetKey(id, column)
    If _hashtable.ContainsKey(key) = False Then
        _hashtable.Add(key, value)
    End If
    Return value
End Function

' Returns a previously cached value, or Nothing if the key is not found.
Function GetValue(ByVal id As Long, ByVal column As String) As Object
    Dim key As String = GetKey(id, column)
    If _hashtable.ContainsKey(key) Then
        Return _hashtable(key)
    End If
    Return Nothing
End Function

' Runs before the report is processed; removes the current user's cached values
' so each report run starts with a clean slate.
Protected Overrides Sub OnInit()
    Dim keys As String() = New String(_hashtable.Keys.Count - 1) {}
    _hashtable.Keys.CopyTo(keys, 0)
    For Each key As String In keys
        If key.StartsWith(Report.User!UserID) Then
            _hashtable.Remove(key)
        End If
    Next key
End Sub

In my scenario, tablix cells in the detail section would call the AddValue function to cache the dataset field values. Then, cells in the summary section would call GetValue to obtain these cached values. This technique is not new and you’ve probably used it to solve similar challenges.
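To give an idea of the calling convention, the textbox expressions might look something like this. The field names, group name, and id values below are hypothetical placeholders, so adjust them to your dataset:

Detail cell:  =Code.AddValue(Fields!AccountID.Value, "Amount", Sum(Fields!Amount.Value, "DetailGroup", True))

Summary cell: =Code.GetValue(Fields!AccountID.Value, "Amount")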

What’s new is that when moving to Reporting Services 2008 you may need to revisit your custom code and make changes. First, in prior versions of Reporting Services, you probably declared the hashtable variable as an instance variable and that worked just fine. However, Reporting Services 2008 introduced a new on-demand processing engine where each page is processed separately. Consequently, you may find that the report “loses” values. Specifically, if the report fits on one page, everything works as expected. But if the report spills onto more pages, the hashtable collection loses the values loaded on the first page. The solution is to declare the hashtable object as a static (Shared in Visual Basic) object so it survives paging.

Friend Shared _hashtable As New System.Collections.Hashtable()

But because a static object is shared by all users running the report, you need to make the hashtable key user-specific, such as by using the User!UserID built-in variable, which returns the identity of the user running the report.

Function GetKey(ByVal id As Long, ByVal column As String) As String
    Return Report.User!UserID & id & column
End Function

Are we done? Well, there is a nasty gotcha that you need to be aware of. If the report takes parameters, you will be surprised to find out that the custom code returns the cached values from the first report run, and changing the parameters (report data) doesn’t change the values in the cells that call the GetValue function. To make things even more confusing, testing the report in the BIDS Report Designer or Report Builder 2.0 works just fine. You will get the above issue only after you deploy and run the server report, such as when you request the report in Report Manager or SharePoint.

What’s going on? As it turns out, the hashtable object survives report requests triggered by parameter changes. Consequently, the code that checks whether the hashtable key exists finds the key when the report is re-posted and does not add the new value.

If _hashtable.ContainsKey(key) = False Then
    _hashtable.Add(key, value)
End If

The solution is to clear the user-specific hashtable items each time the report is run. This takes place in the OnInit method, a special method that gets executed before the report is processed. You can use this method to perform initialization tasks or, as in this case, to clear state held in global variables.

Are we there yet? Almost. As the reader samsonfr pointed out in a blog comment, we need to make this code thread-safe because chances are that multiple users may run the report at the same time, so concurrent threads may write to and read from the static hashtable variable simultaneously. As the Hashtable documentation explains: “Hashtable is thread safe for use by multiple reader threads and a single writing thread. It is thread safe for multi-thread use when only one of the threads perform write (update) operations, which allows for lock-free reads provided that the writers are serialized to the Hashtable. To support multiple writers all operations on the Hashtable must be done through the wrapper returned by the Synchronized method, provided that there are no threads reading the Hashtable object. Enumerating through a collection is intrinsically not a thread safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.”

Although the code adds and removes user-specific values only (the collection key uses User!UserID), Robert Bruckner from the SSRS dev team clarified:

“Everytime you have code like this in a multi-threaded environment, there is potential for a race-condition:

if (!_hashtable.ContainsKey("abc")) _hashtable.Add(…)

In this case you have to take a lock to ensure ContainsKey and Add are run as an atomic piece of code.”
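In VB terms, Robert’s point boils down to the following sketch, which makes the check-and-add atomic by holding a lock around both calls:

SyncLock _hashtable.SyncRoot
    If Not _hashtable.ContainsKey(key) Then
        _hashtable.Add(key, value)
    End If
End SyncLock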

Keeping this in mind, my proposal for making the code thread-safe follows (concurrency experts, please comment if you find anything substantially wrong with my attempt to make the hashtable access thread-safe).

Friend Shared _hashtable As New System.Collections.Hashtable()

' Synchronized wrapper that serializes writes to the shared hashtable.
Dim _sht As System.Collections.Hashtable = System.Collections.Hashtable.Synchronized(_hashtable)

Function GetKey(ByVal id As Long, ByVal column As String) As String
    Return Report.User!UserID & id & column
End Function

Function AddValue(ByVal id As Long, ByVal column As String, ByVal value As Object) As Object
    ' Scenario-specific filter: only the summary-level rows (ids -1, -2, -3) need to be cached.
    If id = -1 Or id = -2 Or id = -3 Then
        Dim key As String = GetKey(id, column)
        If _sht.ContainsKey(key) = False Then
            _sht.Add(key, value)
        End If
    End If
    Return value
End Function

Function GetValue(ByVal id As Long, ByVal column As String) As Object
    Dim key As String = GetKey(id, column)
    If _sht.ContainsKey(key) Then
        Return _sht(key)
    End If
    Return Nothing
End Function

Protected Overrides Sub OnInit()
    ' Lock the collection while enumerating and removing the current user's keys.
    SyncLock _hashtable.SyncRoot
        Dim keys As String() = New String(_hashtable.Keys.Count - 1) {}
        _hashtable.Keys.CopyTo(keys, 0)
        For Each key As String In keys
            If key.StartsWith(Report.User!UserID) Then
                _hashtable.Remove(key)
            End If
        Next key
    End SyncLock
End Sub

Suppressing Auto-generation of MDX Parameter Datasets

There is an unfortunate issue with the MDX Query Designer in the SSRS 2008 Report Designer and Report Builder 2.0 where changing the main dataset overwrites the parameter datasets. This is rather annoying because you often need to make manual changes to the parameter datasets. However, when you change the main dataset, the MDX Query Designer wipes out the manual changes and auto-generates the parameter datasets from scratch. According to the feedback I got from Microsoft, the issue will be fixed in SQL Server 2008 R2.

Meanwhile, there is a simple workaround which requires manually changing the dataset definition in the RDL. Basically, you need to add a SuppressAutoUpdate designer switch to the parameter dataset query, as follows:

<Query>
    <DataSourceName>Viking</DataSourceName>
    <CommandText> … </CommandText>
    <rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate>
    <rd:Hidden>false</rd:Hidden>
</Query>

TechEd 2009 North America BIN304 Slides and Code Uploaded

I’ve uploaded the slides and code from my Reporting Services 2008 Tips, Tricks, Now and Beyond breakout presentation, delivered on May 13, 2009, at TechEd 2009 North America.

TechEd 2009 BI Power Hour Demos

The TechEd 2009 BI Power Hour demos, which I blogged about before, are posted on the Microsoft BI Blog. Robert Bruckner has also posted the SSRS demo from the TechEd USA 2008 Power Hour, which he renamed to Sea Battle.

Transmissions from TechEd USA 2009 (Day 4)

I got the final evaluation results from my session. Out of some 15 Business Intelligence sessions, mine was the fourth most attended, with 169 people in the audience. Based on the 35 evaluations submitted, it was ranked the third most useful session, with an average satisfaction score of 3.46 on a scale from 1 to 4. For someone who presents only occasionally, I’m personally happy with the results. Thanks to all who attended and liked the session!

I took it easy today. In the morning, I attended Scott Ruble’s Microsoft Office Excel 2007 Charting and Advanced Visualizations session. This was a purely Excel-based session with no BI included. It demonstrated different ways to present information effectively in Excel, such as with conditional formatting, bar charts, sparklines, etc. Next, I was in the Learning Center until lunch.

In the afternoon, I decided to do some sightseeing and take a tour since I’ve never been to LA. I saw Marina del Rey, Santa Monica, Venice Beach, Beverly Hills, the Sunset Strip, Hollywood (and the famous sign, of course), Mann’s Chinese Theatre, and the Farmers Market. Coming from Atlanta and knowing the Atlanta traffic, I have to admit that the LA traffic is no better. In Atlanta, the traffic is bad during peak hours; in LA, it seems it’s bad all the time. Movies and premieres make the situation even worse. There was a huge movie premiere at 7 PM at the Chinese Theatre, with celebrities arriving in limos, which jammed the entire area. There were at least two movies being shot in different parts of the city. But the rest of the tour was fun. LA is one of the few cities in the world where almost every building has a famous story behind it.

This concludes my TechEd USA 2009 chronicles. Tomorrow, I’ll have time only for breakfast and packing my luggage. I’m catching an early flight since it takes four hours to fly to Atlanta. With the three-hour time difference, I’ll hopefully be in Atlanta by 9 PM and at home by midnight.

Transmissions from TechEd USA 2009 (Day 3)

Today was a long day. I started by attending Richard Tkachuk’s A First Look at Large-Scale Data Warehousing in Microsoft SQL Server Code Name “Madison” session. Those of you familiar with Analysis Services will probably recognize the presenter’s name, since Richard came from the Analysis Services team and maintains the www.sqlserveranalysisservices.com website. He recently moved to the Madison team. Madison is a new product based on technology from DATAllegro, which Microsoft acquired some time ago. As the session name suggests, it’s all about large-scale databases, such as those exceeding 1 terabyte of data. This is an enormous amount of data that not many companies will ever amass. I’ve been fortunate (or unfortunate) never to have had to deal with such data volumes. If you do, you may want to take a look at Madison. It’s designed to maximize sequential querying of data by employing a shared-nothing architecture where each processor core is given dedicated resources, such as a table partition. A controller node orchestrates the query execution. For example, if a query spans several tables, the controller node parses the query to understand where the data is located. Then, it forwards the query to each computing node that handles the required resources. The computing nodes are clustered in a SQL Server 2008 failover cluster running on Windows Server 2008. The tool provides a management dashboard where the administrator can see the utilization of each computing node.

Next, I attended the Fifth Annual Power Hour session. As its name suggests, TechEd has been running this session for the past five years. The session format was originally introduced by Bill Baker, who is no longer with Microsoft. If you have ever attended one of these sessions, you know the format: product managers from all the BI teams (Office, SharePoint, PerformancePoint, and SQL Server) show bizarre demos and throw t-shirts and toys at everything that moves (OK, sits). The Office team showed an Excel Services demo where an Excel spreadsheet ranked popular comics characters. Not to be outdone, the PerformancePoint team showed a pixel-based image of the Mona Lisa. I’m not sure what PerformancePoint capabilities this demonstrated, since I don’t know PerformancePoint that well, but it looked great.

The Reporting Services team showed a cool demo where the WinForms ReportViewer control rendered a clickable map (the map control will debut in SQL Server 2008 R2) that re-assigns the number of Microsoft sales employees across the US states. For me, the coolest part of this demo was that there was no visible refresh when the map image was clicked, although there was probably round-tripping between the control and the server. Thierry D’Hers later clued me in that there is some kind of buffering going on, which I have to learn more about. This map control looks cool! Once I get my hands on it, with some tweaking maybe I’ll be able to configure it as a heat map that is not geospatial.

Finally, Donald Farmer showed another Gemini demo, which helped me learn more about Gemini. I learned that more than 20 million rows were compressed into a 200 MB Excel file. However, the level of compression really depends on the data loaded in Excel; specifically, it depends on the redundant values in each column. I also learned that the in-memory model constructed in Excel is implemented as an in-process DLL whose code was derived from the Analysis Services code base. The speed of the in-memory model is phenomenal: 20 million rows sorted within a second on Donald’s notebook (not even a laptop, mind you). At this point Microsoft hasn’t decided yet how Gemini will be licensed and priced.

As usual, after lunch I decided to hang around the BI learning center and help with questions. Then, it was show time for my presentation! I don’t know why, but every TechEd I get one of those rooms that I feel intimidated just looking at. How come Microsoft presenters who demo cooler stuff than mine, such as features in the next version, get smaller rooms while I get the monstrous ones? It must be intentional; I have to ask the TechEd organizers. The room I got was next to the keynote hall and could easily accommodate 500-600 people, if not more. Two years ago, I actually set a record with 500+ people attending my session, which was scheduled right after the keynote.

This year, the attendance was more modest. I don’t have the final count yet, but I think about 150+ folks attended my session, so there was plenty of room to scale up. I think the presentation went very well, and the preliminary evaluation reports confirm this. I demoed report authoring, management, and delivery tips sprinkled with real-life examples. We had a good time and I think everyone enjoyed the show.

It’s always good to know that your work is done. I look forward to enjoying the rest of TechEd and LA.

Transmissions from TechEd USA 2009 (Day 2)

I started the day by attending Donald Farmer and Daniel Yu’s session “Creating the Right Cubes for Microsoft Excel and Excel Services”, hoping I’d get a sneak preview of Excel 2010. Alas, it was all about refining the cube definition with display folders, perspectives, hierarchies, etc. so it appears more user-friendly in Excel. Later, I learned that Office 2010 (or whatever it will be called) is under strict NDA, which explains the lack of demos. The most interesting thing about that session was that I finally understood why the SSAS team decided to scale down the cube wizard in SSAS 2008 to generate basic dimensions only. The reason was performance. You see, the BIDS 2005 cube wizard would oftentimes suggest non-optimal dimension hierarchies, and the modeler wouldn’t revise the design, leading to bad performance.

Next, I attended Thierry D’Hers’ Top Ten Reasons for Using Microsoft SQL Server 2008 Reporting Services session. It’s always good to watch RS-related sessions. Thierry listed community support as one of the reasons for organizations to consider moving to SSRS. I agree with this, and the great community support applies to all Microsoft technologies. Speaking about BI only, a few years ago there wasn’t a single book about Cognos, for example. Granted, the last time I looked on Amazon there was one book. In comparison, Microsoft has a vibrant community of book authors, MVPs, trainers, etc. Almost every publisher has a book about SSRS 2008. This is of course good for the community, though not so good for authors, as the competition is tough :)

Thierry listed Report Builder 2.0 as the #1 reason to move to SSRS 2008, followed by data visualization and tablix. Based on my real-life projects, I’d personally have listed them in the reverse order, with tablix being #1. This session officially announced that the map control will make it into the Kilimanjaro release (now officially called SQL Server 2008 R2). Later on, Thierry was kind enough to show me a demo of the map pre-release bits. One of the data modes uses the SQL Server 2008 geospatial data types, which would let you map any region in the world. Thierry showed a cool report of the worldwide sales of SQL Server, where each country had a different color gradient based on its sales volume.

After lunch, I hung around the BI area in the learning center to answer questions and rub shoulders with Microsoft employees and peers. I was surprised to learn that there are only two Reporting Services-related sessions for the entire TechEd, and one of them is mine! All of a sudden, I felt 2″ taller. Later on, feeling adventurous enough to learn something completely new, I attended a SQL Server 2008 Failover Clustering session, only to realize how much I don’t know about the topic since I’ve never used it. BTW, there are great advancements in SQL Server 2008 failover clustering, such as the ability to upgrade or patch a cluster node without stopping it.

Transmissions from TechEd USA 2009 (Day 1)

Day 1 of TechEd 2009 is almost over, with the exception of the Community Influencers Party tonight. I heard that this year they expect 7,000 attendees. This is a huge scale-down from previous years; for instance, we had 16,000 attendees at TechEd USA 2007. The economy is hitting everything hard.

I thought the keynote was kind of lame. Judging by it, Microsoft has only three products: Windows 7 (officially announced to ship around the holidays, although Microsoft didn’t say which holidays), Windows Server 2008 (the buzz is now the forthcoming R2 release), and Exchange Server 2010. Unlike previous TechEds, there wasn’t a single announcement about other products. SQL Server KJ, Office 2010, Azure, dev tools? Nope, apparently not worth mentioning. Sure, Mark Russinovich, whom I respect very much, did some cool Windows 7 demos, but they were not enough to pique my interest. I understand that the OS and Exchange Server are the bedrock of every business and that, after the sad Vista saga, Microsoft has to show the world it will do things right with Windows 7, but the BI soul in me was thirsty for more.

After lunch, I hung around the BI area of the Learning Center, where I answered questions and met with other peers, including Nick Barclay (MVP), whom I had wanted to meet in person for a while. Then, I attended the excellent Donald Farmer and Kamal Hathi session Microsoft Project Code Name “Gemini”: Self-Service Analysis and the Future of BI, and I had the chance to see Gemini, which I blogged briefly about before without knowing too much, in action for the first time and to gain more in-depth knowledge.

Gemini is an end-user-oriented Excel add-in that will let the user acquire data from a variety of data sources, including SSRS reports (SSRS KJ will expose reports as data feeds) and SharePoint lists, and load them into an Excel spreadsheet. The tool crunches data very fast even on a modest computer (the demo showed a notebook computer working with millions of rows) thanks to its ability to compress column-level data. This works because a dataset column typically contains redundant values.

Once data is loaded in Excel, the tool will attempt to automatically determine the relationships between datasets (loaded in separate spreadsheets) to create a hidden dimensional model consisting of in-memory fact and dimension tables. The user will be able to manually specify the dataset relationships by telling the tool which column to use to join the datasets (very much like joining relational tables). Moreover, the user will be able to define calculated columns using Excel-style formulas. Finally, since the add-in builds an in-memory cube behind the scenes, the user will be able to slice and dice data in a pivot table report. So, no Analysis Services is needed if all the user wants is to manipulate data on the desktop.

Where things get more interesting is deploying the models on the server. To do so, the end user deploys the Excel spreadsheet to the MOSS Report Library. Note that MOSS is required for server-side deployment. When other users request the spreadsheet, an Analysis Services redirector will recognize that this is a Gemini model and service the requests from a server cube. At this point it is not clear exactly how the server cube will be built and whether it can be managed in SSMS. Once the cube returns data, Excel Services will kick in to return the data in HTML. A Reporting Services client can also connect to the server cube by its URL. This is no different than connecting to a regular cube, as Reporting Services will launch the familiar MDX query designer.

So, where is IT in the new Gemini world? IT will use a cool MOSS dashboard to understand who has deployed which models and how the models are used, such as when the datasets were refreshed, which models are the most popular, what resources these models take on the server, etc.

What’s my personal take on Gemini? It’s not up to me to decide how useful it is, since it’s a business-oriented tool, like Report Builder 2.0; business users will have the final word. Based on my personal experience, though, the data analytics problems that I need to solve with traditional Analysis Services cubes surpass the Gemini capabilities by far. So, don’t throw your MDX knowledge out the door yet. In my line of work, I can see Gemini being useful as a cube prototyping tool, especially in the early stages of requirements gathering, where data can be typed into Excel and I can demonstrate to users what a cube can do for them. Of course, Microsoft’s plans for Gemini are much more ambitious than that. In an ideal world, all business users would upgrade to Office 2010 and create cool Gemini models to give IT folks a long-deserved break ;-). Or so the fairy tale goes…

To wrap up the day, I attended the What’s New in Microsoft SQL Data Services presentation by Rick Negrin, only to find out that SQL Data Services is essentially SQL Server running in Microsoft data centers. SQL Data Services will support two application connectivity modes: a “code near” mode, where the application (typically a web application) is deployed to Azure, and a “code far” mode, where the application connects to SQL Server over the Internet using the TDS protocol. Microsoft’s role is to provide scalability and failover. Not all SQL Server features will be available in version 1; for example, CLR will not make the cut.

A long and tiring day. I am off to the party now.

SQL Server 2008 Business Intelligence Development and Maintenance Toolkit Available

Today was supposed to be the birthday of the SQL Server 2008 Business Intelligence Development and Maintenance Toolkit by Microsoft Press, but it’s not available with retailers, such as Amazon, yet. I guess it will take a couple more days for the book to find its way to resellers.

As with the 2005 version, I was privileged to work together with Erik Veerman and Dejan Sarka (all SQL Server MVPs) on the new revision. My part was the four Analysis Services chapters. Besides updating the book for SQL Server 2008, we re-worked the entire material to flow more logically and make this resource even more useful to help you prepare for the corresponding 70-448 exam.

I hope you’ll find the toolkit useful and that it helps you pass the exam and get certified!

Overwriting Parameter Defaults

Issue: An interesting question popped up today on the discussion list about overwriting the default of a cascading parameter. In this particular case, the report has two parameters: Frequency and HireDate. The HireDate parameter is dependent on the Frequency parameter. Specifically, when the user changes the Frequency parameter, custom code changes the default of the HireDate parameter. Sounds easy, right? Well, not so fast.

This requirement calls for overwriting the HireDate default. However, overwriting values that the user selected (either explicitly or implicitly by not changing the default) is usually considered a bad thing. There are uncommon cases, such as this one, where the report author can reasonably know that the user would expect their previously selected value to be overridden. Unfortunately, Reporting Services doesn’t currently have a feature that would allow the report author to distinguish the common case from the uncommon case. As a result, the current design handles the common case. (Note: in SQL Server 2000, SSRS defaulted to overwriting the user’s selection with the default whenever an upstream parameter value changed, and this resulted in a large volume of customer complaints.)

Solution: So, what’s the solution? One option you may want to explore, which the attached sample report demonstrates, relies on the fact that if the Valid Values list changes (as a result of the upstream parameter value changing) and no longer contains the user’s selection, SSRS is forced to re-evaluate the default. Careful use of the Valid Values list can in many cases simulate the desired behavior of always overriding the user’s selection back to the default. This is why I set the HireDate available values to the same expression as the default, so the available values always change along with the Frequency parameter.
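To make the idea more concrete, here is a minimal sketch of the approach. The GetHireDateDefault function and its logic are hypothetical placeholders; the attached sample report shows the actual implementation.

Function GetHireDateDefault(ByVal frequency As String) As Date
    ' Derive a frequency-dependent default (hypothetical logic)
    If frequency = "Monthly" Then
        Return DateSerial(Today.Year, Today.Month, 1)
    Else
        Return DateSerial(Today.Year, 1, 1)
    End If
End Function

Both the HireDate available values and the HireDate default value use the same expression, =Code.GetHireDateDefault(Parameters!Frequency.Value). Because the only valid value changes whenever Frequency changes, the user’s previous selection is no longer valid and SSRS re-applies the default.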

Attachment: parameter_default.zip (/CS/cfs-file.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/blog/parameter_5F00_default.zip)