When (null) is not NULL

Here is an interesting SSRS-SSAS integration gotcha that got me recently. Suppose that you have the following MDX query.
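

Something along these lines, where each set on rows includes the All member (the cube, dimension, and measure names below are made up for illustration):


SELECT { [Measures].[Sales Amount] } ON COLUMNS,
       [Customer].[Age Group].ALLMEMBERS *
       [Date].[Year].ALLMEMBERS *
       [Date].[Month].ALLMEMBERS ON ROWS
FROM [Sales]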



Needless to say, this query is most likely wrong to start with. It brings back combinations of members from three hierarchies even when they don’t exist with each other in the cube. The correct version should probably cross-join the attribute hierarchies, which is what the MDX Query Designer does behind the scenes when you place attribute hierarchies side by side. As it stands, this MDX query brings back 12 rows. However, much to your surprise, when you bind this dataset to a table region, you will see only three rows (the ones that don’t have (null)). So, what’s going on?


It turns out that the rows with (null) are server aggregates. The (null) in this case indicates that this is a subtotal row. Since such a row is not a detail row, the SSRS Analysis Services data extension drops it when you bind the dataset. If you add groupings to the table (group1: AgeGroup, group2: Year, group3: Month) and use the Aggregate() function accordingly in the group headers/footers, the table will show the appropriate aggregated measure values from the AS flattened rowset in the correct scopes.
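

For example, a textbox in the Year group header could use an expression along these lines to pick up the server-calculated subtotal (the Sales_Amount field name is an assumption):


=Aggregate(Fields!Sales_Amount.Value)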


What if you want to show all 12 dataset rows as detail rows on your report? The only way to do this is to switch the data source from “Analysis Services” to “OleDB” and select the MSOLAP.3 (OLE DB for OLAP 9.0) or MSOLAP.2 (OLE DB for OLAP 8.0) provider. However, once you switch to the OleDB provider, new features such as the graphical MDX query designer, query parameters for MDX queries, etc. are no longer available. You have to use the text-based generic query designer. You could design the query with the graphical designer as usual and then copy and paste the non-parameterized MDX query string to use it with the old OleDB provider.
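

For example, after switching the data source type to OleDB, the connection string could look like this (the server and database names are placeholders):


Provider=MSOLAP.3;Data Source=(local);Initial Catalog=Adventure Works DW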


One cautionary note: as I noted, most likely the original MDX query is wrong. While changing the data extension will give you all the rows, make sure that this is what you want because the report will now show duplicate rows that don’t exist within the cube space. This will probably skew your report results and confuse the end users.


The SSRS team is considering extending the RDL schema so that server aggregate rows (currently determined by whether a data extension implements the IDataReaderExtension interface) can be explicitly interpreted either as real server aggregates (the behavior of the AS 2005 AdoMd data extension) or as detail rows (the behavior of the AS 2000 OleDB provider).

Building Report-enabled Applications with the New ReportViewer Controls (Part 1 of 2)

DevX has just published Part 1 of my article Building Report-enabled Applications with the Windows Forms ReportViewer. Part 2 (ASP.NET ReportViewer) should be out by the end of February.

Reporting is an integral part of every complete application. The Report Viewer controls greatly reduce the development effort required to report-enable .NET applications. Part one of this two-part article shows you how you can leverage the Windows Forms ReportViewer to integrate your Windows Forms .NET applications with Reporting Services 2005.  
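

As a teaser, here is a minimal sketch of running a server report in the Windows Forms ReportViewer in remote mode (the report server URL and report path below are placeholders):


// Remote processing mode: the report executes on the Reporting Services server
reportViewer1.ProcessingMode = Microsoft.Reporting.WinForms.ProcessingMode.Remote;
reportViewer1.ServerReport.ReportServerUrl = new Uri("http://localhost/reportserver");
reportViewer1.ServerReport.ReportPath = "/AdventureWorks Sample Reports/Company Sales";
reportViewer1.RefreshReport();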

Logging Report Builder Queries

As you probably know, the RS 2005 Report Builder lets business users create ad-hoc reports. Behind the scenes, the Query Translator converts the report metadata to the native queries supported by the relational data source (SQL Server and UDM with RS 2005 only). If you want to see the underlying queries, you have two choices:


1. You can use SQL Server Profiler and examine its trace.


2. You can log the queries to the RS log files or watch them in real time by attaching a trace listener (e.g. the SysInternals DbgView). To enable this option, open the Report Server web.config file and change the Components element from:


<add name="Components" value="all,RunningJobs:3,SemanticQueryEngine:2,SemanticModelGenerator:2" />


to:


<add name="Components" value="all,RunningJobs:3,SemanticQueryEngine:3,SemanticModelGenerator:2" />


Alternatively, you can omit the SemanticQueryEngine setting altogether. The net effect of doing so is to escalate its trace switch to 4.


You may find this article useful in case you need to learn more about the Report Builder architecture.

TechEd 2006 Europe Announced

TechEd 2006 Europe will take place in Barcelona, Spain (7–10 November 2006). Barcelona is one of the cities I’ve always wanted to visit and I hope I will be able to make it. Summer could have been much better for a fiesta, of course …

Applied Microsoft Analysis Services 2005 – Bestseller on Amazon!

My book “Applied Microsoft Analysis Services 2005” is a bestseller on Amazon! It is currently ranked No. 94 for Computers and Internet (as of Jan 23). The book is now in its second printing (it’s been available on Amazon since 12/15/2005).


Thank you, thank you, thank you!

Implementing Smart Reports with the Microsoft Business Intelligence Platform

This article presents an excerpt from Teo’s book Applied Microsoft Analysis Services 2005. Learn how to author “smart” reports using Reporting Services (SSRS), Analysis Services (SSAS), and SQL Server CLR stored procedures. Get the reports demonstrated in this article by downloading the sample code.

Implementing Smart Reports Article Available

Microsoft has just published an excerpt from my book “Applied Microsoft Analysis Services 2005” on TechNet. Learn how to integrate SSRS and SSAS to build smart reports using data mining and CLR stored procedures.

When Security and Developers Collide

One of the most outstanding SQL Server 2005 features is CLR stored procedures. A CLR stored procedure can encapsulate complex .NET code, process data, and pipe the results back to your report. The advantage of using CLR stored procedures is that, from a reporting perspective, things are very simple. The report just needs to call the CLR stored procedure, as it would call a regular SQL Server stored procedure.
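

For example, the report dataset query can be as simple as this (the procedure and parameter names are hypothetical; they match the sample code later in this post):


EXEC csp_MyStoredProcedure @metadata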


One of my reporting requirements called for passing some XML metadata in the form of a serialized-to-XML object. The stored procedure had to deserialize the object before it could use it. In my case, the object used custom XML serialization/deserialization by implementing the IXmlSerializable interface. My first attempt to deserialize the object from the CLR stored procedure failed abysmally with the following exception:


Cannot load dynamically generated serialization assembly. In some hosting environments assembly load functionality is restricted, consider using pre-generated serializer. Please see inner exception for more information. ---> System.IO.FileLoadException: LoadFrom(), LoadFile(), Load(byte[]) and LoadModule() have been disabled by the host.


Initially, I couldn’t understand what was going on since I’d given my assembly UNSAFE rights, which means that it should be able to do anything. Luckily, googling on this exception brought me to Vineet’s post, which hinted that for security and reliability reasons SQL Server does not support loading assemblies dynamically. After some experimenting, I was able to fix the issue by following these steps:


1. Sgen the object assembly to produce a pre-generated XmlSerializers assembly.
2. Deploy both the object and the XmlSerializers assembly to SQL Server with UNSAFE access. Strangely, I also had to deploy the System.Drawing .NET assembly because SQL Server was asking for it.
3. In my CLR database project in VS.NET, add a reference to both the object assembly and the XmlSerializers assembly.
4. Inside the CLR stored procedure, instead of deserializing the object directly, I had to use the XML serializer from the XmlSerializers assembly.
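

Steps 1 and 2 boil down to commands along these lines (the assembly names and paths below are placeholders):


rem Step 1: pre-generate the XmlSerializers assembly
sgen.exe /assembly:MyObjects.dll

-- Step 2: deploy both assemblies with UNSAFE permission
CREATE ASSEMBLY MyObjects
FROM 'C:\bin\MyObjects.dll' WITH PERMISSION_SET = UNSAFE
CREATE ASSEMBLY [MyObjects.XmlSerializers]
FROM 'C:\bin\MyObjects.XmlSerializers.dll' WITH PERMISSION_SET = UNSAFE


And here is the stored procedure from step 4: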


    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void csp_MyStoredProcedure(SqlXml metadata)
    {
        // Deserialize the object using the pre-generated XmlSerializers serializer
        FooSerializer serializer = new FooSerializer();
        Foo f = (Foo)serializer.Deserialize(metadata.CreateReader());
        . . .
    }


So, you need to go through a few development, deployment, and management hoops just to deserialize an object in a CLR stored procedure. I am not sure that security is a good justification for this effort, though. Moreover, reflecting on the XmlSerializers assembly revealed that it is just a thin wrapper on top of the object assembly. One would say that you can never have too much security although, in this case, it certainly feels that way. Are we getting paranoid with security?

Reporting from XML documents in RS 2005

Recently, I’ve come across a question on the RS public newsgroup about how to use the new RS 2005 XML extension to report off relative nodes in an XML document. The attached sample demonstrates this scenario. For example, given this document:


<bookstore xmlns="http://example.books.com">
  <book genre="autobiography" publicationdate="1981" ISBN="1-861003-11-0">
    <title>The Autobiography of Benjamin Franklin</title>
    <author>
      <first-name>Benjamin</first-name>
      <last-name>Franklin</last-name>
    </author>
    <date>09-06-1956</date>
    <samplechapters>1 3 4</samplechapters>
    <price alternative="discount">5.99</price>
    <price>8.99</price>
  </book>
  <book genre="autobiography" publicationdate="1972" ISBN="0399105573">
    <title>The Moon's a Balloon</title>
    <author>
      <first-name>David</first-name>
      <last-name>Niven</last-name>
    </author>
    <date>09-06-1974</date>
    <samplechapters>4 5</samplechapters>
    <price alternative="discount">1.94</price>
    <price>2.57</price>
  </book>
</bookstore>


The following query fetches the book/title, book/author/first-name, and book/date elements, as well as the book/@publicationdate attribute.


bookstore/book {title, author{first-name}, @publicationdate, date}


In my opinion, it would have been nice if you were able to use XPath expressions to bring back the needed elements instead of this pseudo-syntax that relies on curly brackets. Another potential area of improvement is supporting XML schemas to cast the column types automatically according to the defined schema types. As it stands, the XML extension returns all columns as the string data type. However, you can explicitly cast a column to another type. For example, assuming that the publicationdate attribute contains a valid date:


bookstore/book {title, author{first-name}, @publicationdate(date), date}


will cast the publicationdate column to a date data type.
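

In case you are wondering where this query goes, you type it in the generic query designer, either as a bare element path as shown above or wrapped in a query element, which to my knowledge looks like this:


<Query>
  <ElementPath>bookstore/book {title, author{first-name}, @publicationdate(date), date}</ElementPath>
</Query>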

Excel-based Reporting Services, anyone?

Perhaps you’ve heard that Excel 12 will support publishing and executing Excel spreadsheets on the server through a server-based technology (currently dubbed Excel Services). My first reaction when I heard about this was “who on earth would ever want to do this?” However, after reading David Gainer’s excellent blog (will anyone compile these great notes into a book?), I got my “Eureka” moment and I have to report a mind shift. David is a Group Program Manager for Microsoft Excel. It is always great to see Microsoft engineers actively sharing first-hand knowledge so we can get the news straight from the horse’s mouth.


Here is what I think about Excel Services based on David’s notes (I still have to try out my hypothesis). I find the Excel Services model very similar to the Reporting Services architecture. The Excel user authors the spreadsheet in Excel. Then, she can publish the spreadsheet manually or programmatically to the server (through Web service interfaces). The Excel team took the easy way out and decided to leverage SharePoint as a server repository, which in this case fulfills the role of the RS report catalog. A published spreadsheet can be viewed in the browser. A special renderer is used on the server side to render the spreadsheet in HTML (IE 6+ only), just as an RS report can be rendered in HTML. Although the spreadsheet is rendered read-only, some interactivity is supported, e.g. the end user can change the PivotTable filter. Behind the scenes, when the user interacts with the spreadsheet, Excel calls down to a Web service to refresh the data (assuming the data is retrieved from a data source).


Similar to RS, the user can pass parameters to the published spreadsheet to control the results of the calculations. Finally, a spreadsheet (or a portion of it) can be secured using SharePoint Windows-based security. So, in a nutshell, we are going to get Excel-based Reporting Services.


Is this a good thing? I think so, at least for a couple of reasons:


1. You can integrate server-based spreadsheets into SharePoint-based dashboards.
2. More interestingly, you could publish PivotTable spreadsheets that source data from SSAS 2005 cubes and disseminate them across the enterprise.


The second scenario needs more clarification. The Excel author could manually or programmatically create a PivotTable report connected to an SSAS cube. Then, she could publish the spreadsheet to the server (think of it as a canned report). Once this is done, the end user could request the spreadsheet in the browser. The end user will be able to interact with the report or open it locally in Excel 12 to gain access to all PivotTable features (subject to security restrictions). Unfortunately, Excel Services will not support full interactivity. For example, users cannot add or remove PivotTable fields (what a bummer!). In addition, not all SSAS features will be supported; for example, actions and server drillthrough are not.


Despite its limitations, I believe Excel Services will enable new integration scenarios, especially in the area of financial reporting. This comes as no surprise considering that one of the most common requirements for financial reporting I hear nowadays is “make it work like Excel” 🙂