
Web API CORS Adventures

I’ve been doing some work with ASP.NET Web API and I’m setting up a demo service in the cloud (more on this in a future post). The service could potentially be accessed by any user. For demo purposes, I wanted to show how jQuery script in a web page can invoke the service. Of course, this requires a cross-domain JavaScript call. If you have experience with web services programming, you might recall that a few years ago this scenario was notoriously difficult to implement because the browsers would simply drop the call. With the recent interest in cloud deployments, things have become much easier, although traps still await you.

I settled on ASP.NET Web API because of its invocation simplicity and growing popularity. If you are new to Web API, check out the Your First Web API tutorial. To execute safely, JavaScript cross-domain calls need to adhere to the Cross-Origin Resource Sharing (CORS) mechanism. To update your Web API service to support CORS, you need to update your ASP.NET MVC 4 project as described here. This involves installing the Microsoft ASP.NET Cross-Origin Support package (currently in a prerelease state) and its dependencies. You also need the latest versions of System.Web.Helpers, System.Web.Mvc, and other packages. Once all dependencies are updated, add the following line to the end of the Register method in WebApiConfig.cs:

config.EnableCors(new EnableCorsAttribute("*", "*", "*"));
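For context, here is a minimal sketch of how the updated Register method might look (the route registration is illustrative; your project's existing registrations stay as they are):

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Your existing route and formatter registrations stay here...
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        // Allow any origin, any header, and any method. For production,
        // consider restricting the origins to the sites you trust.
        config.EnableCors(new EnableCorsAttribute("*", "*", "*"));
    }
}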

After testing locally, I deployed the service to a virtual machine running on Azure (no surprises here). For the client, I changed the index.cshtml view to use jQuery to make the call via HTTP POST. I decided on POST because I didn’t want to deal with the complexity of JSONP and because the payload of the complex object I’m passing might exceed 1,024 bytes. The most important code section in the client-side script is:

var DTO = JSON.stringify(payload);

jQuery.support.cors = true; // tell jQuery the browser supports cross-domain calls

$.ajax({
    url: 'http://<server URL>', // calling the Web API controller
    cache: false,
    crossDomain: true,
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    data: DTO,
    dataType: 'json',
    success: function (payload) { /* ... */ }
}).fail(function (xhr, textStatus, err) { /* ... */ });

Now, this is where you might have some fun. As it turned out, Chrome executed the call successfully right off the bat. IE 10, on the other hand, barked with Access Denied. This predicament and the ensuing research sent me in the wrong direction and led me to believe that IE doesn’t support CORS and that I had to use a script, such as jQuery.XDomainRequest.js, as a workaround. With the script in place, the call would now go out to the server, but the server would return “415 Unsupported Media Type”. Many more hours were lost in research (Fiddler does wonders here). The reason was that the script delegates the call to the XDomainRequest object, which doesn’t support custom request headers. Consequently, the POST request doesn’t include the Content-Type: ‘application/json’ header, and the server drops the call because it can’t find a formatter to deserialize the payload.
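For reference, a matching controller action might look like this (a hypothetical sketch; the Payload type and controller name are made up for illustration):

using System.Web.Http;

// A hypothetical DTO mirroring the JSON payload the client sends
public class Payload
{
    public string Name { get; set; }
    public decimal Amount { get; set; }
}

public class PayloadController : ApiController
{
    // Web API selects a media-type formatter based on the request's
    // Content-Type header. Without "application/json", no formatter matches
    // and the service responds with 415 Unsupported Media Type.
    public Payload Post(Payload payload)
    {
        // process the payload...
        return payload;
    }
}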

As it turns out, you don’t need workaround scripts at all. IE drops the call with Access Denied because its default security settings disallow cross-domain calls. To change this:

  1. Add the service URL to the Trusted Sites zone in IE. This is not strictly required, but it’s a good idea anyway.
  2. Open IE Internet Options and select the Security tab. Then, select Trusted Sites and click the Custom Level button.
  3. In the Security Settings dialog box, scroll down to the Miscellaneous section and enable “Access data sources across domains”. Restart your computer.


Apparently, Microsoft learned its lesson from all the security exploits and decided to shut the cross-domain door. IMO, a better option, one that would have prevented many hours of debugging and tracing, would have been to detect that JavaScript attempts CORS (jQuery.support.cors = true) and explain the steps for changing the default settings or, better yet, to implement a CORS preflight as the other browsers do (Chrome submits an OPTIONS request to ask the server whether the operation is allowed before the actual POST).

UPDATE 9/10/2013
When you debug or test using the web server built into Visual Studio, the browser opens the page at localhost:<port number>. However, the browser (I tested this with IE and Chrome) does not consider the port to be a part of the security identifier (origin) used for same-origin policy enforcement, and the call will fail.

Chasing Parameters

Scenario: You use the Visual Studio ASP.NET ReportViewer and you can’t figure out how to get the parameter values when the page posts back. I couldn’t find a straightforward answer on the discussion list, so I thought my findings might come in useful.

Solution: Depending on your scenario, you can choose one of the following three approaches:

1. You can get the current parameter values after the ReportViewer.PreRender method completes. Use another event that fires after PreRender. Based on my testing, the only events I found to work are ReportViewer.Unload and Page.Unload, e.g.:

protected void reportViewer_Unload(object sender, EventArgs e) {
    // By the time Unload fires, PreRender has completed and the
    // parameter values reflect the current postback
    ReportParameterInfoCollection parameters = reportViewer.ServerReport.GetParameters();
}

2. Subclass the control and override OnPreRender, calling the base method and then obtaining the parameter values, e.g.:
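A minimal sketch of this approach (the subclass name is made up; assuming the WebForms viewer):

using System;
using Microsoft.Reporting.WebForms;

public class ParameterAwareReportViewer : ReportViewer
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e); // let the viewer process the postback first

        // The current parameter values are now available
        ReportParameterInfoCollection parameters = this.ServerReport.GetParameters();
    }
}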

3. If you are using Visual Studio 2010, the updated ReportViewer exposes a SubmittingParameterValues event for exactly this purpose.

Trying to Communicate

Visual Studio 2008 embraces the exciting new world of Windows Communication Foundation (WCF) for communicating with services. However, pitfalls await the unwary. I’ve recently tackled invoking the Reporting Services Web service with WCF and I want to share my findings.

  1. The Visual Studio Add Web Reference menu has been renamed to Add Service Reference to denote that WCF can communicate with much more than Web services, probably including my Zune device. Although the dialog has changed somewhat, you will find your way around to generate the proxy.
  2. What’s more surprising is that the auto-generated proxy methods now have somewhat different signatures.

For example, SQL Server Books Online shows the following signature for the Reporting Services GetExecutionOptions API.

public ExecutionSettingEnum GetExecutionOptions(string Report, out ScheduleDefinitionOrReference Item);

Yet, WCF generates the following signature:

public ServerInfoHeader GetExecutionOptions(string Report, out ExecutionSettingEnum executionOption, out ScheduleDefinitionOrReference Item);

So, the original return value becomes an out parameter, while ServerInfoHeader becomes the return value. I am not sure how WCF figures this out. Does it mean that the documentation should now show both the 2.0 and WCF signatures?
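In practice, calling the WCF-generated proxy therefore looks something like this (the report path is made up for illustration):

ReportingService2005SoapClient rs = new ReportingService2005SoapClient();

ExecutionSettingEnum executionOption;
ScheduleDefinitionOrReference item;

// The original return value now comes back through an out parameter
ServerInfoHeader serverInfo = rs.GetExecutionOptions("/My Report", out executionOption, out item);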

  3. The second surprise wave hit me when I was trying to figure out a way to pass my credentials to the Web service. This, of course, will probably be one of the first things you need to do to invoke an intranet service.

In the good ol’ 2.0 days, impersonating the user took a single line of code:

rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

How do we do this in the shiny new WCF world? Strangely, the Visual Studio help says little about it. I came across some bizarre examples of declaring HTTP transports that made my head spin. By a sheer stroke of luck, I managed to figure out the right changes in the application config file (yes, now we have declarative settings):

<security mode="TransportCredentialOnly">
  <transport clientCredentialType="Ntlm" proxyCredentialType="None" realm="" />
  <message clientCredentialType="UserName" algorithmSuite="Default" />
</security>

Wait! We also need to tell WCF that it’s OK to impersonate the user.

ReportingService2005SoapClient rs = new ReportingService2005SoapClient();

// Allow WCF to pass the caller's Windows identity to the service
rs.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;

At this point, I felt as if I had upgraded my house only to find that I had to enter through the chimney. Upgrading to a new technology shouldn’t complicate things unnecessarily. I promptly switched back to the 2.0 style of programming. Luckily, the old Add Web Reference button is still available from the advanced settings of the Add Service Reference dialog.
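For completeness, the 2.0-style code I reverted to looks roughly like this (assuming a Web Reference proxy named ReportingService2005 and a made-up report path):

ReportingService2005 rs = new ReportingService2005();
rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

ScheduleDefinitionOrReference item;
ExecutionSettingEnum option = rs.GetExecutionOptions("/My Report", out item);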

I guess they were right. You can’t teach an old dog new tricks…

Happy holidays!

Introducing LINQ

If you follow the Microsoft .NET roadmap, you have probably heard about the forthcoming Language Integrated Query (LINQ) in .NET 3.5. LINQ adds query capabilities directly to the .NET languages and will be supported by both VB.NET and C#. This means that you will be able to use standard query operators directly from within your code!
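For example, a trivial query over an in-memory array might look like this (a sketch based on the query syntax as previewed at the time):

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] numbers = { 5, 10, 8, 3, 6, 12 };

        // A query expression, checked by the compiler rather than
        // hidden inside a string
        var lowNumbers = from n in numbers
                         where n < 8
                         orderby n
                         select n;

        foreach (int n in lowNumbers)
            Console.WriteLine(n); // 3, 5, 6
    }
}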

To help you get started with LINQ, Marco Russo and Paolo Pialorsi have written Introducing Microsoft LINQ, published by Microsoft Press. The book is expected to ship in mid-May. Meanwhile, the authors have set up a public forum (http://introducinglinq.com/) and are eagerly awaiting your LINQ-related questions.

Oh, yes, I’ve made a tiny contribution to the book by reviewing a few chapters. I found the book to be a great introduction to LINQ. I particularly liked the code examples.

Transcend T-SQL Limitations with SQL Server 2005 CLR Objects

One of the coolest SQL Server 2005 features is .NET CLR objects. When used wisely, CLR integration can solve many nagging problems with T-SQL. For example, you cannot pass columns from an outer SQL statement to a table-valued function (TVF) even when it returns a single row. Or, for some obscure reason, you cannot use dynamic execution (the EXEC statement) inside a scalar-valued function, yet you may need to use the same scalar-valued function with an arbitrary column.

In comparison, the sky is the limit for what a CLR stored procedure or a CLR UDF can do. Here is an extract from a real-life CLR UDF written in C# that returns the YTD aggregated value for a given measure:

using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

[Microsoft.SqlServer.Server.SqlFunction(DataAccess = DataAccessKind.Read, SystemDataAccess = SystemDataAccessKind.Read)]
public static SqlDecimal YTD(int companyID, DateTime financialPeriod, string measure) {
    // The context connection runs the query in the caller's session
    using (SqlConnection conn = new SqlConnection("context connection=true")) {
        conn.Open();

        // The measure name is injected into the SELECT list; the filter
        // values are passed as parameters
        SqlCommand cmd = new SqlCommand(String.Format("SELECT SUM({0}) " +
            "FROM <some table here> (NOLOCK) " +
            "INNER JOIN <some other table> (NOLOCK) " + "… " +
            "WHERE FinancialPeriod = @FinancialPeriod", measure), conn);

        cmd.Parameters.Add(new SqlParameter("@CompanyID", companyID));
        cmd.Parameters.Add(new SqlParameter("@FinancialPeriod", financialPeriod));

        return ToSqlDecimal(cmd.ExecuteScalar());
    }
}

private static SqlDecimal ToSqlDecimal(object value) {
    return value is System.DBNull ? SqlDecimal.Null : SqlDecimal.Parse(value.ToString());
}

Here, the function takes a company identifier, a financial period, and the name of the measure to be aggregated as input arguments. The ADO.NET SqlCommand object takes care of executing the query (boilerplate ADO.NET code). Note the DataAccess = DataAccessKind.Read and SystemDataAccess = SystemDataAccessKind.Read properties of the attribute that decorates the function. If you omit them, you will be greeted with the following exception at runtime:

This statement has attempted to access data whose access is restricted by the assembly.

Once deployed, the function can be called like a regular T-SQL UDF, e.g.:

SELECT <FULLY QUALIFIED CLASS NAME>.YTD(1, '7/1/2006', 'Sales')

assuming you have a Sales decimal column in your table or view.

Who knows, perhaps one day we will be able to ditch T-SQL altogether in favor of the .NET languages. I know I will be the first one to jump.

You’ve been Deadlocked

If you’ve been using VS.NET 2005 for a while, chances are that your debugging session has crashed spectacularly just when you thought you were so close to finding that elusive critical bug. This situation may have manifested itself with the following exception:


ContextSwitchDeadlock was detected
Message: The CLR has been unable to transition from COM context <some nasty hex number> to COM context <another nasty hex number> for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.


Usually, you can’t recover from this exception, and the only way to be able to debug again is to restart the debugger (that is, until it crashes again). The bad news is that this is a bug in the ContextSwitchDeadlock managed debugging assistant (MDA) which the VS.NET team couldn’t fix in time. The good news is that you can prevent this MDA from rearing its ugly head ever again by going to the Exceptions dialog (it doesn’t appear by default in the Debug menu, but you can add it by customizing the Debug menu) and disabling the ContextSwitchDeadlock MDA found under the Managed Debugging Assistants category.

When the Host is not so Perfect

Here is something that has recently bitten me really badly. Customer requirements called for implementing an in-house report designer to create report definitions. Inspired by the VS.NET 2005 Report Designer, we decided to implement part of the tool as a WYSIWYG designer using the design-time infrastructure (IDesignerHost) in .NET 2.0. If you don’t know what I am talking about, read Dinesh Chandnani’s excellent Perfect Host article to learn more about the .NET designer host support.


To spice up the user experience, we decided to use the Infragistics Windows Forms suite. At runtime, the end user could drag Infragistics UltraTextBox and UltraImage controls and drop them on the design canvas. Everything worked just fine during development (a.k.a. on my machine). However, once the application was deployed to QA, the WYSIWYG designer failed abysmally. After some digging, we realized that the Infragistics controls were performing the same license check they do when dropped on a Windows Form in VS.NET 2005. Since the controls fail to find a design-time license, they throw an exception as soon as the user attempts to site a control, and there wasn’t any workaround. We had no choice but to yank out the Infragistics controls and replace them with the plain-vanilla Windows Forms controls, TextBox and PictureBox.
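For reference, the siting step in question looks roughly like this (a minimal sketch, assuming a design surface set up along the lines of the Perfect Host article):

using System.ComponentModel;
using System.ComponentModel.Design;
using System.Windows.Forms;

// Create a design surface with a Form as the root component
DesignSurface designSurface = new DesignSurface(typeof(Form));
IDesignerHost host = (IDesignerHost)designSurface.GetService(typeof(IDesignerHost));

// CreateComponent instantiates the control and sites it on the design
// surface; licensed controls run their design-time license check here,
// which is where the third-party controls threw for us
IComponent textBox = host.CreateComponent(typeof(TextBox), "textBox1");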


The moral of this story is to avoid third-party controls when implementing .NET custom designers, lest you run into licensing gotchas.