Book Review – Microsoft SQL Server Reporting Services Recipes

This is a review of the latest addition to the long repertoire of Reporting Services books – Microsoft SQL Server Reporting Services Recipes (ISBN: 978-0-470-56311-3). When you learn a new technology, I suggest you pick up several books covering the product, because every author writes from his or her own experience. Thus, books covering the same technology are not competing but complementing each other. Some books target novice users, others are more advanced; some have a strong practical connotation while others are designed to be used more as a reference.

In my opinion, this book will benefit mainly readers who have worked with Reporting Services and already have some report authoring experience under their belt. If you fit this profile, you'll probably find yourself occasionally wanting a report sample that addresses a specific requirement, a tip for a better implementation, or a trick to get around a given limitation. Continuing this line of thought, another title for this book might have been "Reporting Services Tips and Tricks".

Two author names appear on the book cover – Paul Turley and Robert Bruckner – and both names should be familiar to you. Both authors have been heavily involved with Reporting Services since its inception. Paul is a SQL Server MVP and BI architect for Hitachi Consulting. Robert is a technical lead with the Reporting Services team, and when he speaks, I take note. Both authors have helped the community tremendously by answering questions on the Reporting Services discussion list, writing and blogging about Reporting Services, and speaking at industry events. Some of the book material has been contributed by other Hitachi employees and SQL Server experts. The foreword is written by Thierry D'hers – Group Program Manager with the Reporting Services team.

The good thing about this book is that it’s not tied to a particular release of Reporting Services. What I particularly liked is that in some cases the authors have provided examples that work with previous editions of Reporting Services, coupled with versions that benefit from the latest features found in 2008 or R2. The book is organized in the following parts:

  • Introduction (80 pages) – Provides the necessary foundation for report authoring.
  • Part 1: Columnar and Grouped Reports (50 pages) – This part covers features that every report author should know, including alternating row colors, dynamic groups, conditional column visibility, and resetting page numbers.
  • Part 2: BI Dashboards and Elements (30 pages) – This is where you learn to work with indicators, sparklines, charts, and other elements to author dashboard pages, which are becoming increasingly popular. You will also learn interesting tips to integrate your reports with Analysis Services.
  • Part 3: Chart and Gauge Reports (60 pages) – Think of this part as how to get the most out of the Reporting Services data visualization features. It walks you through the cornucopia of chart types that SSRS 2008 made possible, such as histograms, Pareto charts, bullet graphs, and gauges. It teaches you how to enhance these charts with custom color palettes and exception highlighting.
  • Part 4: Interactive Reporting (40 pages) – Reporting Services has supported interactive features since the beginning, such as drillthrough, toggled visibility, links, bookmarks, etc. This part takes the interactive features to the next level by showing you how to implement conditional linking, breadcrumbs, dynamic pivoting and document maps.
  • Part 5: Integrated Reporting Applications (35 pages) – This part shows you how to build report solutions that go beyond a single report, such as conditional subreports, changing the data behind the report, and embedding reports in .NET applications.
  • Part 6: Enhanced Report Content (100 pages) – This part covers advanced report authoring techniques, including mailing labels, barcodes, custom aggregation, dynamic page breaks, external images, checkbox list and mapping (a new feature of R2).
  • Part 7: Filtering and Parameterization (70 pages) – Report parameters are one area that takes a lot of criticism from the community, and they remain one of the weakest links of Reporting Services. The authors present several workarounds for common requirements and limitations, such as advanced filtering, handling multi-valued parameters, top ranking, and custom sorting.
  • Part 8: Custom and Dynamic Data Sources (30 pages) – This section shows advanced techniques for data retrieval, such as obtaining data from a web service, SharePoint lists (2008 and R2), and Dynamics AX.
  • Part 9: Games (15 pages) – Now that you've learned all these cool tricks, it's time to chill out. If Xbox is not cool anymore, Reporting Services is here for you to play with and delight your users with games such as hangman and sea battle. Halo is coming soon… just kidding.

In a nutshell, pick up this book. You’ll find yourself reaching for it on a regular basis.


Scope Assignments and Missing Members

The Analysis Services server parses the cube script on deploy and complains about missing dimension members. You can use the IsError() function to handle missing members, such as Iif(IsError([Account].[Accrual Basis].&[30/360]), null, [Account].[Accrual Basis].&[30/360]). However, sometimes you might want to bypass the default missing-member behavior. For example, an ISV deploying a cube to different customers might prefer to ignore missing members if there are many member references in the cube script. In this case, you can set the ScriptErrorHandlingMode cube-level property to IgnoreAll. The unfortunate side effect of doing so is that any scope assignment that references a missing member will be invalidated. For example, suppose you have the following scope assignment:

Scope
    (
        [Account].[Account].[Account].Members,
        {[Account].[Status].&[Open], [Account].[Status].&[Pending]}
    );
    this = … ;
End Scope;

This assignment changes the cube space for all accounts with Open and Pending status. But what happens if the [Account].[Status].&[Pending] member doesn't exist? The scope assignment simply doesn't work. Since a set is used to specify the account status in this case, you cannot use IsError(). One workaround is to use the Filter() function as follows:

Scope
    (
        Filter([Account].[Account].[Account].Members,
            [Account].[Account].Properties("Status") = "Open" OR
            [Account].[Account].Properties("Status") = "Pending")
    );
    this = … ;
End Scope;
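
As a side note, if you take the IgnoreAll route across many customer deployments, you don't have to flip the property manually in BIDS; it can also be set programmatically with AMO. The following is a minimal sketch only – the server, database, and cube names are placeholders, and error handling is omitted:

// Requires a reference to Microsoft.AnalysisServices (AMO)
using Microsoft.AnalysisServices;

class SetScriptErrorHandling
{
    static void Main()
    {
        Server server = new Server();
        server.Connect("Data Source=localhost");            // placeholder server

        // placeholder database and cube names
        Cube cube = server.Databases.GetByName("Adventure Works DW")
                          .Cubes.GetByName("Adventure Works");

        // ignore missing member references instead of failing the script
        cube.ScriptErrorHandlingMode = ScriptErrorHandlingMode.IgnoreAll;
        cube.Update();                                       // send the change to the server

        server.Disconnect();
    }
}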

Reporting From OData Services

Open Data Protocol (OData), previously known as ADO.NET Data Services, is quickly gaining popularity as a web protocol for querying and updating data. A few Microsoft products already expose their functionality as OData services, including SharePoint 2010, SQL Azure, and Reporting Services (see my blog about reports as data feeds). Therefore, chances are that you may need to report off OData services. As it stands, Reporting Services doesn't include native support for OData, but the XML Data Provider, which has been around since version 2005, may get the job done.

The OData report demonstrates how you can use the XML Data Provider to query OData services. It has two datasets. The Customers dataset retrieves data from the sample Northwind OData service. The CompanySales dataset queries the Adventure Works Company Sales report as a data feed.

Northwind OData Service

The Customers dataset shows a list of Northwind customers. I implemented the report as follows.

  1. Set up the Northwind data source that uses the XML Data Provider and whose connection string points to the Northwind Customers service.
    http://services.odata.org/Northwind/Northwind.svc/Customers

    As you've probably guessed, you need to set up a data source for each OData service you want to query. If you open the above link in Internet Explorer with the feed view turned off (Internet Options, Content tab, Settings, uncheck Turn On Feed Reading View), you will see the output of the service.

    <?xml version="1.0" encoding="utf-8" standalone="yes" ?>
    <feed xml:base="http://services.odata.org/Northwind/Northwind.svc/"
          xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
          xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
          xmlns="http://www.w3.org/2005/Atom">
      <title type="text">Customers</title>
      <id>http://services.odata.org/Northwind/Northwind.svc/Customers</id>
      <updated>2010-04-25T15:07:07Z</updated>
      <link rel="self" title="Customers" href="Customers" />
      <entry>
        <id>http://services.odata.org/Northwind/Northwind.svc/Customers('ALFKI')</id>
        <title type="text" />
        <updated>2010-04-25T15:07:07Z</updated>
        <author>
          <name />
        </author>
        <link rel="edit" title="Customer" href="Customers('ALFKI')" />
        <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Orders" type="application/atom+xml;type=feed" title="Orders" href="Customers('ALFKI')/Orders" />
        <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CustomerDemographics" type="application/atom+xml;type=feed" title="CustomerDemographics" href="Customers('ALFKI')/CustomerDemographics" />
        <category term="NorthwindModel.Customer" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
        <content type="application/xml">
          <m:properties>
            <d:CustomerID>ALFKI</d:CustomerID>
            <d:CompanyName>Alfreds Futterkiste</d:CompanyName>
            <d:ContactName>Maria Anders</d:ContactName>
            <d:ContactTitle>Sales Representative</d:ContactTitle>
            <d:Address>Obere Str. 57</d:Address>
            <d:City>Berlin</d:City>
            <d:Region m:null="true" />
            <d:PostalCode>12209</d:PostalCode>
            <d:Country>Germany</d:Country>
            <d:Phone>030-0074321</d:Phone>
            <d:Fax>030-0076545</d:Fax>
          </m:properties>
        </content>
      </entry>
      <!-- remaining entries omitted -->
    </feed>

  2. Set up a dataset query.

    <Query>
        <ElementPath IgnoreNamespaces="true">
            feed{}/entry{}/content{}/properties
        </ElementPath>
    </Query>

     

    The IgnoreNamespaces attribute instructs the provider to ignore the XML namespaces. The ElementPath text specifies a path to the element whose nodes you want to display. In this case, that element is properties, which is where the interesting OData values live. The empty curly braces tell the provider not to generate fields from the preceding nodes in the path.

CompanySales

Since, starting with R2, reports are available as data feeds, you can also implement a quick and easy way to display data from one report in another. In this blog, I provided more details about the query syntax. In the case of the Company Sales report, the data source connection string is as follows:

http://localhost/ReportServerR2?/Adventure Works Sample Reports/Company Sales 2008&ShowDescription=False&StartDate=01/01/2002 00:00:00&EndDate=12/31/2003 00:00:00&rs:ParameterLanguage=&rs:Command=Render&rs:Format=ATOM&rc:DataFeed=xAx0x2xCx0xMx0

The dataset query is the same because all OData services return data in the same format.

Moving forward, it would be nice if a future release included a native OData provider that handles parameters and supports methods, as the XML Data Provider does for SOAP web methods, so you don't have to set up a data source for each OData service.

Debugging SSIS Script Tasks on x64

A coworker of mine complained that Integration Services doesn't let you debug script tasks on x64. There is an easy workaround: disable the x64 runtime.

  1. Right-click the project node and click Properties.
  2. Switch to the Debugging tab and change the Run64BitRuntime property to False.


Hit F5 and your breakpoints will now be hit.

SQL Server 2008 R2 Released to Manufacturing

Robert Bruckner announced that SQL Server 2008 R2 was released to manufacturing today. Robert summarized the most important new features, and I have covered them in my blog as well. I'll update this blog once I find out when R2 will be available on MSDN.

UPDATE

According to this Microsoft blog, R2 will be available for MSDN subscribers on May 3rd.

Using XPerf to Test Analysis Services

I have to admit that I am a performance buff. In a previous blog, I mentioned that the Xperf utility can help you gain additional insight into Analysis Services performance. Akshai Mirchandani from the SSAS team was kind enough to share his performance testing experience and recommended this utility. It turns out he was right – it is a real gem! Just after writing that blog, an opportunity presented itself. A business user complained that drilling down a member in an Excel PivotTable report takes about 20 seconds on a virtual test server. This inspired me to take XPerf for a spin. In this case, the database was relatively small, but the user had requested two parent-child dimensions on the report with some 3,500 and 300 members respectively. As you probably know, parent-child dimensions are slow by nature. To make things worse, both dimensions had many-to-many relationships. SQL Profiler showed that almost the entire time was spent reading data from partitions. However, I was interested to find out how much time was spent in actual I/O vs. aggregating data, so I could make specific performance recommendations. The findings were interesting. Here are the steps I followed to test the performance using XPerf on the virtual server running Windows Server 2008:

  1. I downloaded the Microsoft Windows SDK for Windows 7. Although it's for Windows 7, it runs on Windows Server operating systems as well. When I ran the web install, I selected only the Win32 Development Tools option to reduce the size of the download.
  2. Once installed, I ran Install Windows Performance Tool Kit (x64) found in the Microsoft Windows SDK v7.0 -> Tools program group and accepted the default options.
  3. I ran the command prompt with elevated privileges (Run As Administrator) and started a trace with the default configuration.
    C:\Program Files\Debugging Tools for Windows (x64)>xperf -on DiagEasy
  4. In Excel, I drilled down to expand a member of one of the parent-child dimensions that was particularly slow.
  5. Once Excel was done executing the query, I stopped the trace and saved the results in a file mytrace.etl.
    C:\Program Files\Debugging Tools for Windows (x64)>xperf -d mytrace.etl
  6. I double-clicked the mytrace.etl file, which was saved in the C:\Program Files\Microsoft Windows Performance Toolkit folder, to open it in the Windows Performance Analyzer. This is where the fun starts!


  7. Since I was interested in disk utilization and CPU usage, I clicked the Flyout glyph and checked only these options, so that only two graphs were shown.
  8. On each graph, I expanded the Processes panel and left only msmdsrv.exe checked to filter the trace results to the Analysis Services server process only.

    At this point, I couldn't believe my eyes! As you can see, the server spent a fraction of a second reading from disk. The remaining time was spent number crunching to aggregate the results. To see the actual metrics, I selected the I/O peak, right-clicked, and clicked Summary Table. This showed that the server spent 29ms to read only 118,784 bytes from disk, which makes sense (as I said, the database is small). In summary, this particular case could benefit much more from additional CPU power, and potentially more CPUs, since the storage engine is highly parallel.


    As you can see, Xperf can help you gain important performance details that are not available in SQL Profiler. In this blog, I've only scratched the surface of what XPerf can do for you. One very interesting feature is stack walking, which gives you a breakdown of the time spent per function call, assuming you have the debug symbols loaded; a typical capture is sketched below. The stack walking feature is explained in more detail in the Windows Performance Toolkit – Xperf blog. Of course, it would be nice if SQL Profiler gave you more detailed tracing information so you don't have to rely on Xperf, e.g. Reading Data From Disk Begin/End, Data Cache Population Begin/End, Storage Engine Work Begin/End, Formula Engine Work Begin/End. So, vote here.
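
    For reference, a stack-walking capture looks something like the commands below, run from an elevated prompt. The Latency kernel group and the Profile stack-walk flag shown here are just a reasonable starting point rather than a prescription, and you still need to point the analyzer to the matching debug symbols before the call stacks resolve.

    C:\Program Files\Debugging Tools for Windows (x64)>xperf -on Latency -stackwalk Profile
    (reproduce the slow query in Excel, then stop the trace)
    C:\Program Files\Debugging Tools for Windows (x64)>xperf -d mytrace_stack.etl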

Speaking at Atlanta.MDF

I will be speaking at the Atlanta Microsoft Database Forum group (Atlanta.MDF) on May 10th in Atlanta, GA. The topic will be What's New in Reporting Services 2008 R2. Given that SQL Server 2008 R2 (aka SQL Server 2010) will be released in May, the timing is good. This is your chance to learn about and see the cool new features in action, ask tough questions, and literally put me on the spot. Save the date in your calendar.

Date: 5/10/2010

Time: 6:30 PM – 9:00 PM

Place: Microsoft, 1125 Sanctuary Pkwy., Suite 300, Alpharetta, GA 30004

Analysis Services I/O Time

A couple of years ago, I was working on a KPI dashboard project where we had to display a bunch of KPIs produced by counting (distinct count) numbers of customers, accounts, and so on. Even with SSAS 2008, we couldn't do much to optimize the queries to render the page in a few seconds, because we found that 50% of the execution time was spent in the storage engine. We flirted with the idea of using solid state disks (SSDs), but back then they were not that popular.

Recent feedback from other MVPs and members of the SSAS team suggests that, in general, SSDs are favorable for optimizing SSAS I/O performance. As you would imagine, the more time the Storage Engine (SE) spends reading data from disk, the more performance improvement you can expect by switching to faster disks. But then the question becomes how much time SSAS really spends reading data from disk. As it turns out, SQL Profiler doesn't give you the answer. That's because the Started/Finished reading data from the partition events also include the time spent aggregating data, and this time may be significant, especially in the case of parent-child or many-to-many dimensions. So, judging from SQL Profiler alone, you may think that most of the query time is spent in I/O, where faster disks may help, while it may turn out that you actually need more CPU power.

Since, as it stands, SQL Server Profiler doesn't break down the SE time, you need to rely on Windows tracing to find out how much time was spent in actual I/O. Event Tracing for Windows (ETW) can give you this information. For example, on Windows Server 2008 and Windows 7 you can use the Xperf tool from the Windows Performance Toolkit. The following resources should get you started with Xperf:

Windows Performance Toolkit – Xperf

Using Xperf to take a Trace

Two Minute Drill: Introduction to XPerf

As you will find out, the information emitted by XPerf can be overwhelming when all you want is the answer to a simple question. I posted feedback on Connect asking for a future release of SSAS to include more detailed tracing about the SE inner workings. This information is available to the SE, so it shouldn't be that difficult to break down the Progress Report Begin/End events so customers know whether they should scale up in terms of CPU or I/O. Please vote for it if you find it useful.

http://connect.microsoft.com/SQLServer/feedback/details/550431/provide-more-detailed-strorage-engine-trace

Feeding Reports by Code

In a previous blog, I introduced one of the new SSRS 2008 R2 features: reports as data feeds. As I said, the R2 release includes a new Atom Data Feed renderer that produces an Atom service document (*.atomsvc), which defines a feed per data region. I also said that the main scenario for report feeds is to let the PowerPivot add-in for Excel (previously known as Gemini) consume report data. Recently, I watched Pablo Castro's interesting "ADO.NET Data Services: What's new with the RESTful data services framework" presentation, which inspired me to find a way to consume a report data feed programmatically outside PowerPivot. Since the Atom feed format is pre-defined, a custom application could benefit from this scenario to retrieve and manipulate report data. In this case, an Atom feed may be preferable to other formats, such as CSV, because data is exposed as properties instead of columns.

So, I embarked on a journey to write a small Silverlight app, RSFeedReader, that reads and displays a report feed. You will need Visual Studio 2010 to open the RSFeedReader source code. Before running the application, you may need to deploy the clientaccesspolicy.xml file included in the source to the IIS root, such as c:\inetpub\wwwroot or C:\inetpub\wwwroot\wss\VirtualDirectories\80 if you have SharePoint installed on port 80, to grant the application rights to connect to Reporting Services. If you don't, Silverlight will treat the application URL, which runs under the local ASP.NET server, and the Report Server URL as two different domains and will throw a security exception.

You can enter a report feed URL in the textbox and hit Fetch. The data feed results are shown in the grid. Similar to PowerPivot, each row represents a row in the report data region, but the properties are stacked vertically instead of horizontally. The first column shows the entry identifier, which identifies the row in the data region, and the second column shows the values in that row. The Generating Data Feeds from Reports BOL topic covers the technical details of report feeds pretty well. Basically, the feed URL must include a unique identifier, which you obtain from the Atom service document when you export the report to the Atom Data Feed format. The default feed URL points to the matrix region in the Company Sales report, although you can't tell this from the feed URL. Again, you need to get the DataFeed identifier for the region from the service document after exporting to Atom. If you request the report data feed in the browser, you will see the resulting XML. The most interesting information is included in the m:properties element.

Unlike SharePoint, Reporting Services doesn't provide a WCF endpoint, such as ListData.svc, so you can't generate a Visual Studio proxy (unless you do it manually). Instead, RSFeedReader uses WebClient to submit the URL request to Reporting Services and obtain the feed.

// Namespaces used: System, System.IO, System.Linq, System.Net,
// System.ServiceModel.Syndication, System.Windows, System.Xml

protected void LoadFeed(string uri)
{
    // request the report data feed asynchronously (Silverlight only allows async web calls)
    WebClient wc = new WebClient();
    wc.OpenReadCompleted += new OpenReadCompletedEventHandler(wc_OpenReadCompleted);
    Uri feedUri = new Uri(uri, UriKind.Absolute);
    wc.OpenReadAsync(feedUri);
}

private void wc_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error != null)
    {
        txtFeedLoc.Text = "Error loading report data feed. " + e.Error.Message;
        return;
    }

    using (Stream s = e.Result)
    {
        SyndicationFeed feed;

        using (XmlReader reader = XmlReader.Create(s))
        {
            // read the feed
            feed = SyndicationFeed.Load(reader);
            var posts = from item in feed.Items
                        select new FeedEntry(item.Id, (XmlSyndicationContent)item.Content);

            // show feeds in grid
            dGrid.ItemsSource = posts;
            dGrid.Visibility = Visibility.Visible;
        }
    }
}

 

Once the asynchronous call completes, RSFeedReader loops through the feed entry elements and populates a collection of FeedEntry objects, which I bind to the grid. The FeedEntry class parses the content element of each entry (where the properties are located) and exposes them as a ContentProperties object (not shown), which I bind to the nested grid in the LoadingRow event of the outer grid.
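
Since the FeedEntry and ContentProperties classes are not shown here, the following is a minimal, hypothetical sketch of how FeedEntry might crack open the entry content; the actual RSFeedReader implementation may differ, and the flattened name/value list below simply stands in for the ContentProperties object.

// Namespaces used: System.Collections.Generic, System.Linq,
// System.ServiceModel.Syndication, System.Xml, System.Xml.Linq
public class FeedEntry
{
    // OData properties live in the dataservices/metadata namespace (the m: prefix in the feed)
    private static readonly XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

    public FeedEntry(string id, XmlSyndicationContent content)
    {
        Id = id;
        using (XmlReader reader = content.GetReaderAtContent())
        {
            // load the <content> element and flatten the d:* property elements into name/value pairs
            XElement contentElement = XElement.Load(reader);
            Properties = contentElement
                .Element(m + "properties")
                .Elements()
                .Select(p => new KeyValuePair<string, string>(p.Name.LocalName, p.Value))
                .ToList();
        }
    }

    public string Id { get; private set; }
    public IList<KeyValuePair<string, string>> Properties { get; private set; }
}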

It probably won't be long before various custom applications start popping up that let end users subscribe to report feeds, for example to notify users when report content changes.
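
To illustrate the idea, here is a minimal sketch of how such an application might detect that a report feed has changed. It targets the full .NET Framework rather than Silverlight (so synchronous web access is allowed), and the feed URL, credential handling, and polling strategy are all assumptions rather than part of RSFeedReader.

// Namespaces used: System, System.ServiceModel.Syndication, System.Xml
// Hypothetical change-detection check; add credentials handling as required
// by your report server security before using an http feed URL.
private static bool HasReportContentChanged(string feedUrl, ref DateTimeOffset lastSeen)
{
    using (XmlReader reader = XmlReader.Create(feedUrl))
    {
        SyndicationFeed feed = SyndicationFeed.Load(reader);
        if (feed.LastUpdatedTime > lastSeen)
        {
            lastSeen = feed.LastUpdatedTime;   // remember the latest timestamp
            return true;                       // the feed has changed since the last poll
        }
        return false;
    }
}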

Remembering Report Parameters

Scenario: You want to let end users view a report in Report Manager or SharePoint and "remember" the report parameter selection. You find that you cannot just tell the users to add the report URL to the browser favorites because the parameters are not exposed on the URL. This is by design and cannot be changed. Further, you discover that for some reason end users prefer to play Solitaire instead of fiddling with the URL syntax to construct the report URL manually.

Solution: The Perimeter Persistence report demonstrates an implementation approach that Reporting Services 2008 makes possible. The user can click the Add Favorite hyperlink, which prompts the user to add a favorite. When the user clicks the Add button, the report URL, including the parameter selection, is added to the browser favorites.


Implementation: I followed these steps to implement the Add Favorite hyperlink that injects some JavaScript into the hosting page to get the job done:

1.  Add a textbox that uses the following expression:

= Iif(NOT Globals!RenderFormat.IsInteractive, Nothing,
    "<a href=""javascript:addFavorite('" & Globals!ReportServerUrl & "?" & Globals!ReportFolder
    & "/" & Globals!ReportName
    & "&ProductCategory=" & Parameters!ProductCategory.Value
    & "&ProductSubcategory=" & Join(Parameters!ProductSubcategory.Value, "&ProductSubcategory=")
    & "','" & Globals!ReportName & "')"">Add Favorite</a>")

The link calls a JavaScript function called addFavorite, which you will find in the script.js file, and passes it a valid report URL, such as:

http://localhost/ReportServerR2?/TechEd/Perimeter Persistence&ProductCategory=1&ProductSubcategory=2

Unfortunately, you cannot make the link "generic" because, as it stands, Reporting Services doesn't let you loop through the Parameters collection. If you use SQL Server 2008 R2, you can use the new Globals!RenderFormat.IsInteractive built-in field to check the requested export format and disable the link if the renderer doesn't support interactive features. Use the Join function if you have a multi-valued parameter.

2.  Select the entire expression in the textbox, right-click it, choose Placeholder Properties, and check the HTML – Interpret HTML Tags as Styles option to format the resulting text as a hyperlink. Isn't the SSRS 2008 rich formatting nice?

3.  Copy the content of script.js and append it to the end of the source of the following pages:

\Reporting Services\ReportManager\Pages\Report.aspx
\Reporting Services\ReportServer\Pages\ReportViewer.aspx

Why do you need to add the script to two pages? Because when viewing the report in Report Manager, the hosting page is Report.aspx, but when the user clicks the saved link, the URL points directly to the report server and the ReportViewer page is used. If you prefer the link to point back to Report Manager, change the expression to request the Report.aspx page. If you target SharePoint, the URL should point to the RSReportViewer.aspx page using the syntax explained in this blog by Prash Shirolkar.