After retiring Office Web Components (OWC), which could hardly pass the web test since it was implemented as an ActiveX control, Microsoft left developers with no web-based browser for Analysis Services. True, for those who can afford the hefty price tag, Microsoft Office SharePoint Server (MOSS) supports server-side Excel spreadsheets that render to HTML. However, while Excel rules the Windows-based OLAP browser space, HTML-rendered Excel spreadsheets with limited filtering can hardly meet the demand for web-based interactive reporting. Then there is the option to author Reporting Services OLAP reports, but outside of their limited interactive features, their layout is pretty much fixed at design time.
What’s really needed is a Silverlight Analysis Services control that ships with Visual Studio to let developers embed an Excel-like Analysis Services browser into their ASP.NET applications. You need this control and I need it, but it’s not coming anytime soon. Meanwhile, third-party vendors rush in to fill the gap.
In a recent project, we used Dundas OLAP Chart for Analysis Services, which I mentioned in one of my previous blogs. Dundas has just released version 7 of this control and I really like what I see. It’s currently the best OLAP-based chart in the industry. The Dundas OLAP Chart for Analysis Services is a regular ASP.NET control with AJAX features for an improved user experience. With Silverlight establishing itself as the platform of choice for web developers, you may be looking for a Silverlight-based Analysis Services browser. This is where Intelligencia for Silverlight comes in.
I blogged about Intelligencia for Reporting Services before and pointed out where I think it surpasses the MDX Query Designer included in Reporting Services. What IT-Workplace has done this time is bring its product to the web, and it has wisely decided to use Silverlight. The moment I saw the new version, it grabbed my attention. Users familiar with Excel PivotTable reports will find the Intelligencia metadata pane (Field List) very similar. Users can create a report by dragging and dropping objects to the report pane on the right. Actually, I would love to see IT-Workplace add the Filter, Columns, Rows, and Values panes below it, just like Excel. I’d also welcome in-place filtering, just like Excel. You get the idea. For me and my users, the more Excel-like the browser is, the better. This lowers the learning curve and unifies the desktop and web experience.
But Intelligencia for Silverlight is more ambitious than just bringing Excel-like reporting to the web. The control has a scriptable interface and a filter control, which allow management dashboards to be created by linking grids and filters, as the first link on the web page demonstrates.
In summary, while still rough around the edges (it’s currently in a beta phase), I think Intelligencia for Silverlight has a lot of potential and is positioned to fill a wide gap left by Microsoft by letting web developers embed an Analysis Services OLAP browser in their applications. Powered by Silverlight, Intelligencia for Silverlight could bring Excel PivotTables to the web. Visit the Intelligencia for Silverlight web page, download and test the control, and provide feedback to improve it.
Relax…this blog is all about Analysis Services and not about polygamy or something worse.
SSAS 2005 went beyond the “classic” OLAP model by introducing flexible dimension-to-measure-group relationships, including many-to-many (M2M), referenced, and fact relationships. M2M relationships can solve some nagging business problems. For example, in a recent project where we had to implement a financial cube, there was a requirement that a G/L account can belong to multiple account groups. This is a classic M2M scenario that can be elegantly solved by introducing a M2M relationship. Adventurizing this, let’s take a look at the following schema.
Here, I changed the Adventure Works schema by adding DimGroup and DimGroupEmployee tables. The DimGroup dimension is a parent-child dimension that lets the end user slice data by territory. An employee can be associated with one or more territories (think of a sales manager who covers multiple territories). This M2M relationship (groups-territories) is resolved via the DimGroupEmployee table. In the cube, DimGroup is configured to join the Reseller Sales measure group via a Group Employee factless measure group that is bound to the DimGroupEmployee table. This lets us browse the reseller measures by group.
So, everything is cool, right? Yes, at least until I realized that the aggregated totals need to be adjusted for some account groups. To demonstrate this, suppose that you need a scope assignment that overwrites Georgia’s total, along the lines of the sketch below.
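The actual assignment isn’t reproduced here; this is just a minimal sketch of the idea, and the dimension, hierarchy, member, and measure names are assumptions based on the schema described above.

-- Hedged sketch: overwrite Georgia's total with a fixed value for the demo
SCOPE ([Group].[Groups].[Georgia], [Measures].[Reseller Sales Amount]);
    THIS = 1000000;
END SCOPE;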
Do you see the problem? The Group1 total didn’t budge. However, if you do the same on a parent-child dimension that joins the measure group via a regular relationship, the total would change. Microsoft declared this M2M behavior to be by design. When you come to think about it, it makes sense. To aggregate the measures, the server must go via the bridge table (DimGroupEmployee) to find the qualifying members (DimEmployee) that belong to the member on the one side of the relationship. So, when you make an assignment, you are essentially overwriting this behavior.
If a calculation has a slice on a M2M dimension, it applies to a cell only if the cell coordinate matches the M2M slice. When a cell is at a higher level for the M2M attribute, the calculation does not apply and will not be rolled up to the parent level. Intuitively, the M2M attribute is non-aggregatable. In this case, when calculating the grand total, the SSAS calculation engine will not sum up the subtotals from each group. However, if the assignment is made on the many side (DimEmployee in our case), then aggregating on DimGroup works as expected.
Now, of course, as a modeler you would hope for a magic switch to let you roll up the assigned values and align the M2M behavior with regular relationships, but for now we have to deal with the “by design” behavior. Microsoft suggested a workaround: introduce a scope assignment that forces the M2M members to aggregate, something like the following.
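This is only a sketch of the workaround, not Microsoft’s verbatim code; the dimension, attribute, and measure names are assumptions based on the schema described above.

-- Hedged sketch: force each group to aggregate over the qualifying members on the many side
SCOPE ([Group].[Groups].[Group].Members, [Measures].[Reseller Sales Amount]);
    THIS = Aggregate(Existing [Employee].[Employee].[Employee].Members,
                     [Measures].[Reseller Sales Amount]);
END SCOPE;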
This works, but it may introduce a performance penalty with larger dimensions. Instead, in our case, we decided not to use an assignment at all. We introduced a new helper measure to the measure group (FactResellerSales in this case) that performs the required calculations. Since the measure values are mapped multiple times to the many side, we use simple assignments to zero out the measure for the groups it doesn’t apply to, as the sketch below illustrates.
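A minimal sketch of such an assignment; the group key and the helper measure name are assumptions, not the actual cube objects.

-- Hedged sketch: blank out the helper measure for a group it doesn't apply to
SCOPE ([Group].[Groups].&[2], [Measures].[Helper Measure]);
    THIS = NULL;
END SCOPE;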
One year after I submitted this small article, SQL Server Magazine has finally published it. The article demonstrates how you can use table-valued parameters (TVPs), a new feature in SQL Server 2008, to pass multivalued report parameters to a stored procedure.
The code can be downloaded from the SQL Server Magazine site or from here.
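The downloadable sample is the authoritative version; the snippet below is just a minimal sketch of the TVP pattern with made-up object names. The idea is to declare a table type, accept it as a READONLY parameter, and join against it so the report can pass all of the selected parameter values in one call.

-- Hedged sketch of the TVP pattern; object names are made up, not the article's sample code
CREATE TYPE dbo.IntList AS TABLE (Value int NOT NULL);
GO
CREATE PROCEDURE dbo.GetSalesByTerritory
    @TerritoryIDs dbo.IntList READONLY  -- table-valued parameters must be passed as READONLY
AS
SELECT s.TerritoryID, SUM(s.SalesAmount) AS SalesAmount
FROM dbo.FactSales AS s
JOIN @TerritoryIDs AS t ON t.Value = s.TerritoryID
GROUP BY s.TerritoryID;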
Continuing my intrepid journey through the new Reporting Services R2 enhancements, in this blog I’ll demonstrate some of the cool map features. As you’ve probably heard, R2 brings a brand new map control that lets you visualize spatial data on your reports. Since mapping is one of the major enhancements in R2, there will be plenty of resources to cover it in detail. For example, Robert Bruckner has written a great blog to get you started with mapping. The SSRS R2 forum adds more resources.
But what if you don’t need to visualize geospatial data, such as restaurants in the Seattle area? You shouldn’t bother with the map, right? Not so fast. What’s interesting is that the map supports both spatial data types in SQL Server: geography and geometry. The latter lets you visualize everything that can be plotted on a planar coordinate system. That’s pretty powerful when you get to think about it. If you have the item coordinates, you can map pretty much anything. A few weeks ago, for example, I saw a Microsoft sample map report that plotted a floor layout. Today, I’ll show you a report that I whipped up over the weekend that demonstrates how to visualize a heat map.
The Heat Map Report
A heat map is a graphical representation of data where the values taken by a variable in a two-dimensional map are represented as colors. The most basic variation of a heat map is a tree map, which lets you visualize data trends presented as nested rectangles. In our case, the sample report shows the Adventure Works reseller sales. As shown in the screenshot, the larger the rectangle is, the more sales were contributed by that reseller. The same can be observed by analyzing the rectangle color, which ranges from green (most sales) to red (least sales). The greener the color is, the more sales that reseller has. I chose to use the sales as the color variable, although I could have used another measure, such as number of employees, profit, etc. You can download the source code from here.
Limitations
I’ll be quick to point out the following limitations of this reporting solution:
As it stands, producing the report requires a two-step approach. First, you need to prepare the dataset and save it in a SQL Server table. Second, you run the report from the saved data. Unfortunately, the ReportViewer control doesn’t support RDL R2, which means it doesn’t support maps, so you cannot bind the dataset and run the report in one shot. While we are waiting for Microsoft to update ReportViewer, you can use a server-side custom data extension that exposes the ADO.NET dataset with the spatial data as a report dataset. This approach will let you bind the ADO.NET dataset to a server report. Chapter 18 in my book source code includes such an extension.
The performance of the algorithm that I use to calculate the coordinates of the nested rectangles degrades after a few hundred rows in the dataset. If you need to map more results, you may need to revisit the algorithm to see if you can optimize it.
Understanding Rectangle Geometry
SQL Server describes polygons in the Well-Known Text (WKT) standard sponsored by the Open Geospatial Consortium. For instance, a rectangular polygon is described as five (surprise) points on the coordinate system, as shown below; the fifth point is the same as the first point, closing the ring.
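For example, here is an illustrative WKT string for a rectangle (the coordinates are made up for the example):

POLYGON ((0 0, 10 0, 10 5, 0 5, 0 0))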
You can convert WKT to the SQL Server geometry data type by using the STPolyFromText method, such as:
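Using the made-up rectangle from above (SRID 0, which is typical for planar data):

-- Convert the WKT string to a geometry instance and verify its area
DECLARE @shape geometry = geometry::STPolyFromText('POLYGON ((0 0, 10 0, 10 5, 0 5, 0 0))', 0);
SELECT @shape.ToString() AS Shape, @shape.STArea() AS Area;  -- Area = 50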
To calculate the nested rectangle coordinates, I used the excellent Squarified Treemaps algorithm by Jonathan Hodgson. Jonathan explains in detail how the algorithm works. My humble contribution added the following changes:
I decoupled the algorithm from his Silverlight solution into a C# console application.
Instead of loading the sample dataset from an XML file, I load it from the Analysis Services Adventure Works cube. Specifically, the MDX query returns the top fifty resellers ordered by Reseller Sales (see the sketch after this list).
I added Console.WriteLine statements to output INSERT statements, which you can execute to populate the Test table.
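The query itself isn’t reproduced in this post, but a hedged sketch of what it might look like against the Adventure Works cube follows; the measure and hierarchy names are assumptions.

-- Hedged sketch: the top 50 resellers ordered by reseller sales
SELECT [Measures].[Reseller Sales Amount] ON COLUMNS,
       TopCount([Reseller].[Reseller].[Reseller].Members, 50,
                [Measures].[Reseller Sales Amount]) ON ROWS
FROM [Adventure Works];

The Test table that the generated INSERT statements populate is defined as follows: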
CREATE TABLE [dbo].[Test](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NULL,
[Shape] [geometry] NOT NULL,
[Size] [decimal](18, 5) NULL,
[Area] AS ([Shape].[STArea]()),
CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Once you execute the INSERT statements to populate the Test table, you can use the SQL Server spatial data visualizer to view the results:
Authoring the Report
Once you have the data, authoring the report is easy.
Create a new report in Report Builder 3.0 and add a dataset that uses the following query: SELECT * FROM Test
Click the Map icon to start the Map wizard.
On the Choose a Source for Spatial Data step, choose the SQL Server Spatial Query option and click Next.
On the Choose the Dataset step, select the Dataset from step 1.
On the Choose Spatial Type step, the map wizard should identify the Shape column as a geometry data type. At this point, the pre-release Report Builder 3.0 shows an empty map because it doesn’t recognize the coordinate system as planar. Don’t worry, we’ll fix this later.
On the Choose Map Visualization step, select the Color Analytical Map option.
On the Choose the Analytical Dataset step, select the same dataset because it has a Size column (reseller sales) that we will use for the color visualization.
On the Choose Color Theme step, choose =Sum(Fields!Size.Value) as the field to visualize and Red-Green as the color rule, because the resellers with fewer sales will be coded red. Click Finish to exit the wizard.
In design mode, select the map and change the Viewport CoordinateSystem property to Planar.
Once you’ve changed the coordinate system, the polygons should magically appear. I’ve made a few more changes but you should be able to understand them by examining the map properties.
During my Reporting Services Tips and Tricks TechEd 2009 presentation, one of my most important tips was to encourage you to write more reports and less custom code. I showed a cool SharePoint-based dashboard that we built by assembling report views. Following this line of thought, consider report maps instead of writing custom code when you need to visualize spatial data of any kind.
Yet another relative dates implementation in an Analysis Services cube. So far, I’ve implemented two. But I’m now starting a project to implement an accounting (G/L) cube, and the traditional relative date approach, where each relative date is calculated for the current member in the Date dimension, doesn’t fly anymore. Heck, the business analyst even found the excellent “Translating Cognos PowerPlay Transformer Relative Date Functionality into SQL Server Analysis Services (SSAS)” whitepaper by David Greenberg and told me that’s exactly what they need. That’s what happens when you have smart users. PowerPivot, anyone? While the business users are not empowered yet, let me share how I implemented this approach.
What’s Needed?
The difference now is that we want to drill down from each relative date to the periods that the relative date spans. Let’s take a look at the screenshot below.
The cube sets the default member of the Date hierarchy to the accounting closing period, which is May 2008 in this case. The Current Date relative date shows the actual measure values for May 2008 only. The Current QTD shows the actual measure values for April and May 2008. The Current QTR shows the actual measure values for April, May, and June, etc. In other words, when the user slices by the Relative Dates and Date dimensions, the system shows the actual measure values for the periods that belong to that relative date. Consequently, the only way for the user to get the calculated relative date values, such as the Current YTD aggregate, is to force the Date dimension to the [All] member, but that’s fine with the business users.
Why We Disagree?
My implementation differs from the above-mentioned whitepaper in the following ways:
I use a regular Relative Dates dimension instead of the dimension calculated members that the SSAS BI Wizard and the whitepaper use. It looks to me like OLAP clients compete with each other to see who can butcher dimension calculated members the most. Excel, for example, doesn’t retrieve calculated members by default and doesn’t let you select individual calculated members. Other tools ignore them altogether. To avoid such pitfalls, I defined the Relative Dates dimension as a regular dimension.
The whitepaper defines extra columns in the DimDate table that flag the qualifying dates. My implementation doesn’t do this because it’s not needed.
The whitepaper produces the relative dates results by aggregating the measure across the qualifying date members. My approach uses simple scope assignments as you will see in a moment.
How This Got Implemented?
The implementation of this Relative Dates approach is remarkably simple.
Implement a SQL view, on which the Relative Dates dimension will be based, that hardcodes the Relative Dates members with SELECT…UNION statements, along the lines of the sketch below. Note that the first row, which will join to the Date dimension, doesn’t have a name because we will hide it in the next step.
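A hedged sketch of such a view; the member names and keys are assumptions based on the description above, not the actual project code.

-- Hedged sketch: hardcode the Relative Dates members; key 1 is the unnamed member hidden later
CREATE VIEW dbo.RelativeDates AS
SELECT 1 AS RelativeDateKey, '' AS RelativeDateName
UNION ALL SELECT 2, 'Current Date'
UNION ALL SELECT 3, 'Current QTD'
UNION ALL SELECT 4, 'Current QTR'
UNION ALL SELECT 5, 'Current YTD';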
Set up a Relative Dates dimension. Here I use a little trick to hide the first member because the user shouldn’t see it. Specifically, I set the AttributeHierarchyVisible property of the Relative Dates dimension key attribute to False. Then, I created a Relative Dates hierarchy and set the HideMemberIf property of the Relative Dates level to NoName.
I added a named calculation called RelativeDateKey to the DimDate table in the data source view with the expression value of 1 so I can relate the Relative Dates and Date dimensions.
Next, we relate the Relative Dates dimension to all associated measure groups with a referenced relationship via the Date dimension.
Time for some MDX programming. We use script assignments to populate the Relative Dates cells. By default, all cells intersecting with the Relative Dates members (except for the RelativeDateKey member) are empty. So, we just need to “carry over” the measure values from the qualifying members of the Date dimension. Here is what the first three assignments look like:
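The assignments below are a hedged reconstruction rather than the verbatim cube script; the hierarchy, level, and member names are assumptions. The pattern is what matters: scope on a relative-date member and the qualifying Date members, and carry over the value from the hidden first member ([Relative Dates].&[1]), which is the one that actually joins to the measure groups.

-- Current Date: the closing period only (the default member of the Date hierarchy)
SCOPE ([Relative Dates].[Relative Dates].&[2], [Date].[Calendar].DefaultMember);
    THIS = [Relative Dates].[Relative Dates].&[1];
END SCOPE;

-- Current QTD: the months from the start of the current quarter through the closing period
SCOPE ([Relative Dates].[Relative Dates].&[3],
       PeriodsToDate([Date].[Calendar].[Calendar Quarter], [Date].[Calendar].DefaultMember));
    THIS = [Relative Dates].[Relative Dates].&[1];
END SCOPE;

-- Current QTR: all months in the current quarter
SCOPE ([Relative Dates].[Relative Dates].&[4],
       Descendants([Date].[Calendar].DefaultMember.Parent, [Date].[Calendar].[Month]));
    THIS = [Relative Dates].[Relative Dates].&[1];
END SCOPE;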
Prior to these assignments, the cube script sets the default member of the Date hierarchy to the accounting closing period. Each assignment scopes on the qualifying members of the Date dimension and assigns the value of [Relative Dates].&[1], which, as we’ve seen, is the first member of the Relative Dates dimension.
Why Does This Work?
The script carries over the measure values to the Relative Date members that intersect with the members of the Date attribute hierarchy. From there on, the measure aggregation function kicks in to roll up the values, e.g. by month, year, and quarter.
There is still time to sign up for our Applied Reporting Services 2008 class with Teo Lachev. This three-day intensive event teaches you the knowledge and skills you need to master Reporting Services to its fullest. No travel, no hotel expenses, just 100% in-depth SSRS training delivered right to your desktop!
Applied Reporting Services 2008 Online Training Class with Teo Lachev
Date: August 31 – September 2, 2009
Time: Noon – 4 pm EDT; 9 am – 1 pm PDT
12 training hours total for only $799!
Attend this class for only $799 and get a free paper copy of the book Applied Microsoft SQL Server 2008 Reporting Services by Teo Lachev!
For more information or to register click here!
Microsoft released SQL Server 2008 R2 August Community Technology Preview which includes the Report Builder 3.0 R2 August CTP redistributable.
Reporting Services in SQL Server 2008 R2 can expose reports as Atom data feeds. This is accomplished with a new Atom Data Feed rendering extension, as shown in this screenshot.
This extension produces an Atom service document (an *.atomsvc file), which lists at least one data feed for each data region in the report. In fact, some regions, such as a tablix with column groups (a cross-tab report layout), may generate multiple feeds.
At this point, you probably imagine subscribing to reports and receiving feeds in Outlook when report data changes. Unfortunately, you won’t get very far doing so. As it turns out, Atom data feeds are different from the Atom feeds that most applications, such as IE and Outlook, understand directly. So even if you extract the URL out of the service document and display the .atom rendering of a report in IE, the browser won’t show useful information because it doesn’t understand that the data rows in the data feed are the important part.
So, how are report data feeds useful then? At this point, the design goal for reports as data feeds is to support Gemini, the new Excel 2010 add-in that lets end users build in-memory cubes. Gemini can acquire data from a variety of data sources, including data feeds and SSRS reports. Here is what it takes to subscribe to a report feed.
In the Gemini add-in, click the From Data Feed button.
On the Connect to Data Feed step of the Table Import Wizard, browse to the data feed file you exported from the report. In this case, I am pointing to the atomsvc file exported from the Company Sales 2008 report. On this step, you can set up advanced options, such as timeout and authentication, and test the feed.
On the Select Tables and Views step, select the regions on the report whose feeds you want to subscribe to. Since Company Sales 2008 has only one region, only one data feed is exposed. You can click the Preview button to preview the feed data. In the preview window, you can select which columns to import and set up a row filter (doesn’t seem to be working in the August CTP).
On the Import Summary step, preview the feed options and click Finish to import the data.
This imports the feed data into the Excel spreadsheet. From there, you can use Gemini to build the in-memory model and analyze the report data.
One new SSRS feature in SQL Server 2008 R2 that caught my attention is lookup functions. When reading the Reporting Services discussion list, I frequently come across questions asking if it is possible to look up a value from a dataset other than the one the region is bound to. Excluding getting the first or last value (the First and Last functions) or aggregate functions that work on the entire dataset, the only solution was to cache the dataset in the report’s custom code and use that code to perform the navigation.
R2 will change this by introducing the Lookup, LookupSet, and Multilookup functions. The lookup functions are best understood with an example. Take a look at this report (attached), which is based on the Sales Order Detail 2008 report that ships with the Adventure Works sample reports.
I added a couple of textboxes to the report. The order details section includes a Sales Person field that displays the sales person’s name. What’s interesting is that the sales person name is retrieved from the SalesOrder dataset that feeds the order header section. Just to address the question that I can almost hear you asking: no, R2 will not support joining datasets at the report level. Instead, I use the Lookup function, which performs an on-the-fly search, as follows:
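The textbox expression isn’t captured here; based on the argument walkthrough that follows, it would look like this sketch (the Lookup signature is real, and the field and dataset names come from the description below):

=Lookup(Fields!SalesOrderID.Value, Fields!SalesOrderID.Value, Fields!SalesPerson.Value, "SalesOrder")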
I pass the field of the SalesOrderDetail dataset (Fields!SalesOrderID.Value) on which the lookup will be based. I pass the field of the SalesOrder dataset that will be matched. As a third argument, I pass the field (Fields!SalesPerson.Value) of the SalesOrder dataset that I need. Finally, I pass the name of the dataset (“SalesOrder”) that will be searched. The Lookup function is executed for each row in the source dataset.
In the order header section, I display the third item total from the SalesOrderDetails dataset by using the LookupSet function, as follows:
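Again, the actual expression isn’t shown here; a hedged sketch follows. The LookupSet signature is real, but the line-total field name, the dataset name, and the zero-based index for the third item are assumptions for illustration (the custom function that picks a row is introduced next).

=Code.GetValueByRowNumber(LookupSet(Fields!SalesOrderID.Value, Fields!SalesOrderID.Value, Fields!LineTotal.Value, "SalesOrderDetail"), 2)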
You call LookupSet in the same way you call Lookup. The difference is that LookupSet returns a variant array of all the values that match. I decided to make the demo more interesting and let you specify the row index needed by adding a simple function to the report code.
' Returns the array item at position pos, or Nothing if the array is empty or pos is out of range
Function GetValueByRowNumber(ByVal items As Object(), ByVal pos As Integer) As Object
    If items Is Nothing Then
        Return Nothing
    End If
    If pos > items.Length - 1 Or pos < 0 Then
        Return Nothing
    End If
    Return items(pos)
End Function
The expression passes the variant array returned by LookupSet and the row index to GetValueByRowNumber, which returns the value at that position.
The Multilookup function, which I don’t demonstrate, accepts an array of source values to match and returns an array of the matched values that have a 1:1 relationship to the source values, such as:
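A hypothetical example (the field and dataset names are made up, and the keys are assumed to be stored as text): split a comma-delimited list of keys, look up each one, and join the results back into a string.

=Join(Multilookup(Split(Fields!TerritoryList.Value, ","), Fields!TerritoryID.Value, Fields!TerritoryName.Value, "Territory"), ", ")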
In this and future blogs I’ll cover the most important new features in the forthcoming release of Reporting Services, which will be included in SQL Server 2008 R2 (code-named Kilimanjaro, or KJ for short). Let’s start with the most obvious new feature: the all-new Report Manager UI.
SharePoint-like UI
The moment you open Report Manager, you will notice that significant work was done to make the UI more intuitive, a la SharePoint.
For example, similar to SharePoint, a drop-down context menu pops up when you hover over a report item, letting you quickly access relevant tasks.
Simplified UI
The management pages also received a facelift. Microsoft has reduced the number of buttons and changed the tab styling so you can be sure which tab is selected.
Although seemingly minor, I am sure you’ll love the new look and feel. For example, I like the fact that I don’t have to run a report or switch to Details mode just to get to the report properties.
Report Builder 3.0 ClickOnce
Another welcome enhancement is that opening a report in Report Builder launches Report Builder 3.0 (more on it in another post) by default. Report Builder 1.0 is still installed, but it’s no longer the default reporting client. You can see the writing on the wall: Report Builder 1.0’s days are numbered and you’d better embrace the future.
AJAX Report Viewer
Another major enhancement is the debut of an AJAX-enabled ReportViewer control (developers will get the Visual Studio counterpart in Beta 2 of Visual Studio 2010). The control eliminates visible postbacks when interactive actions, such as drilldown, are initiated. The page fades out but stays in place, and a spinner lets the user know that an interactive action is taking place.
All in all, end users and administrators will enjoy the new Report Manager. You won’t have to switch to SharePoint integration mode just to get a SharePoint-like UI.