
A little hidden gem in the Silverlight 4 release is the ability to modify the Authorization header in network calls. For most, the sheer ability to leverage network credentials in the networking stack will be enough. But there are times when you may be working with an API that requires something other than basic authentication yet still uses the Authorization HTTP header.

The Details

Basically you just set the header value.  How’s that for details :-). 

Seriously though, here’s a snippet of code:

WebClient c = new WebClient();
c.Headers[HttpRequestHeader.Authorization] = "Auth header from same domain-browser stack";
c.DownloadStringCompleted += ((s, args) =>
    {
        if (args.Error != null)
        {
            response.Text = args.Error.Message;
            return;
        }
        response.Text = args.Result;
    });
c.DownloadStringAsync(new Uri("http://localhost:4469/handler.ashx"));

As you can see, the code is rather simple. Prior to Silverlight 4 you’d receive an exception indicating that setting this header isn’t possible…but now it is. If you are using HttpWebRequest instead, it is just as simple:

HttpWebRequest req = (HttpWebRequest)WebRequest.CreateHttp("http://localhost:4469/handler.ashx");
req.Headers[HttpRequestHeader.Authorization] = "Auth header from same domain using HWR";
req.BeginGetResponse((ar) =>
    {
        HttpWebRequest rq = ar.AsyncState as HttpWebRequest;
        HttpWebResponse resp = rq.EndGetResponse(ar) as HttpWebResponse;

        StreamReader rdr = new StreamReader(resp.GetResponseStream());
        string foo = rdr.ReadToEnd();
        Dispatcher.BeginInvoke(() =>
            {
                response.Text = foo;
            });
        rdr.Close();

    }, req);

That’s it.

The Support Matrix

This feature does have some restrictions for security reasons, and the differences basically come down to cross-domain calls. Here’s the feature support matrix in the simplest terms:

Network Stack Used    Domain Type     Authorization Header Allowed
Browser (default)     same domain     Yes
ClientHttp            same domain     Yes
Browser (default)     cross-domain    Yes, with policy
ClientHttp            cross-domain    Yes, with policy
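
The ClientHttp rows above assume you have opted into the client HTTP stack; a minimal sketch of doing that (not from the sample download, and typically done once at application startup) might look like this:

// Opt the http:// scheme into the client HTTP stack (Silverlight 3+).
// WebClient/HttpWebRequest instances created afterward for that scheme use ClientHttp.
bool registered = WebRequest.RegisterPrefix("http://",
    System.Net.Browser.WebRequestCreator.ClientHttp);

// Subsequent calls now follow the ClientHttp rows in the matrix above.
WebClient client = new WebClient();
client.Headers[HttpRequestHeader.Authorization] = "your auth header value";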

As you can see, a cross-domain call of this type (i.e., setting an Authorization header on a request to a 3rd-party site) would require that a valid clientaccesspolicy.xml be in place. Here’s an example of a pretty liberal one:

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
    <cross-domain-access>
        <policy>
            <allow-from http-request-headers="Content-Type,Authorization">
                <domain uri="*"/>
            </allow-from>
            <grant-to>
                <resource include-subpaths="true" path="/"/>
            </grant-to>
        </policy>
    </cross-domain-access>
</access-policy>

I should note that by ‘pretty liberal’ I mean the above makes all your resources available to all Silverlight clients. But pay attention to the http-request-headers section. Notice the addition of the Authorization header (Content-Type is always allowed by default). By adding it, you gain the ability to write the Authorization header on cross-domain calls. Without it you’d see a security exception (a sketch of checking for that is shown after the download link below). And remember, the policy file exists on the destination endpoint, not in your app. To demonstrate this, here’s my quick sample application output:

Auth header sample app output

You can download the code for this sample tester application here: Authheaders.zip
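
As an aside on that security exception: the failure surfaces on the completed event args rather than at the point you set the header. Here’s a small sketch (not part of the sample download) of checking for it:

c.DownloadStringCompleted += (s, args) =>
{
    if (args.Error is System.Security.SecurityException)
    {
        // Typically a missing/invalid clientaccesspolicy.xml, or a policy that
        // doesn't list Authorization in http-request-headers.
        response.Text = "Cross-domain policy did not allow this request.";
        return;
    }
    if (args.Error != null)
    {
        response.Text = args.Error.Message;
        return;
    }
    response.Text = args.Result;
};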

Summary

Hopefully this is good news to some developers. Now with Silverlight 4 we have network credentials support and the ability to use the Authorization header when needed for other purposes. It’s a little hidden gem that frankly could have been called out better in the docs.

Hope this helps!


After posting my sample implementation of accessing Amazon Simple Storage Service (S3) via Silverlight, I reflected quickly and also chatted with some AWS engineers.

Cross-domain Policy

One thing that you should never do is just deploy a global clientaccesspolicy.xml file blindly. Often in samples, we (I) do this. I need to be better about this guidance, to be honest, so I’ll start here. As an example, for the S3 cross-domain policy file, we really should add some additional attributes to make it more secure. Since we know it is a SOAP service, we can ratchet down the allowed requests a little bit by adding the http-request-headers restriction like this:

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="SOAPAction,Content-Type">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource include-subpaths="true" path="/"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Additionally (and ideally) we’d be hosting our application from a known domain. In this instance, let’s say I was going to host my application on timheuer.com in the root domain. I would add that domain to the allow-from list and complete my security like this:

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="SOAPAction,Content-Type">
        <domain uri="http://timheuer.com"/>
      </allow-from>
      <grant-to>
        <resource include-subpaths="true" path="/"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Of course, if I had a cool application and others wanted to embed it, I could add more domains to that allow list as well, as shown in the sketch below. But restricting it makes sense if you want to provide secure access to your APIs (as a service provider) and to yourself (when doing things like this sample).
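
For example, the allow-from section might then list each embedding site explicitly (the second domain below is purely illustrative):

<allow-from http-request-headers="SOAPAction,Content-Type">
  <domain uri="http://timheuer.com"/>
  <domain uri="http://partner.example.com"/>
</allow-from>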

More security with SSL

As I mentioned in the initial sample, I changed the binding configuration to use a security mode of “None” instead of “Transport.” I actually did this because I use the built-in web server from Visual Studio for most of my development and it doesn’t support HTTPS connectivity. To demonstrate my sample with S3 I had to ensure the schemes matched, because in Silverlight 2 right now, to access a secure service the XAP itself has to be served from a secure location. The contexts must match.

I’ve come to learn that with a plain bucket name (except ones containing “.” characters) you can use the SSL cert from Amazon S3, as it is a wildcard certificate. So your endpoint (assuming a bucket name of timheuer-aws) could be https://timheuer-aws.s3.amazonaws.com/soap and it would work.

Using SSL of course means that currently you will have to serve your application from an SSL endpoint as well to avoid cross-scheme violations.
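
Putting that together, the ServiceReferences.clientconfig for an SSL endpoint would look roughly like this (the binding name is mine for illustration; the bucket endpoint matches the example above, and compare this with the mode="None" configuration from the original sample):

<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <!-- Transport security so the https:// scheme of the endpoint matches -->
        <binding name="SecureAWS" maxBufferSize="65536" maxReceivedMessageSize="65536">
          <security mode="Transport" />
        </binding>
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="https://timheuer-aws.s3.amazonaws.com/soap"
                binding="basicHttpBinding" bindingConfiguration="SecureAWS"
                contract="Amz.AWS.AmazonS3" name="AmazonS3" />
    </client>
  </system.serviceModel>
</configuration>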

I hope this helps clear some things up and provide you with a more secure and recommended way of accessing Amazon S3 services with Silverlight!


I’ve had a thought lately and I’m curious if I’m thinking crazy. You see, probably the most-asked questions I get are around working with services within Silverlight. Although I’ve got some helpful (at least I think they’re helpful) posts on the subject:

it still seems there are people not quite able to get over some stumbling blocks. Believe me, I get it. New stuff can be confusing when it doesn’t work as expected. So here’s my thought. Today I helped a colleague who kept emailing back and forth about something. I told him “let’s get on a Live Meeting and fix it together” – and we did – and he was happy.

As I was doing that I thought…why not do that for everyone? Of course, the obvious answer is that it couldn’t possibly work in all situations. I can see one question: “Hey, I’ve got my JBOSS service using Fitzer serialization on HP/UX servers with an RPC remote call exposed via standard SOAP – why doesn’t it work?” Some things just aren’t solvable in this debug-session format I’m envisioning. But things like “I keep getting that friggin 404!” might be.

So what do you think? I’m thinking it would be a combination conference call and Live Meeting (as that is the technology I have available to me—so participants would have to install the client software). There would be some limits (i.e., number of people) and you’d have to be willing to share your desktop – what’s the point of live debugging if you aren’t going to show, right?

Another colleague (Chad) suggested that I gather a list of questions and just do a webcast answering them and working through some situations.

Leave a comment here on your thoughts.  Am I crazy?  Would it be too many cooks in the kitchen and not work?  And if you’ve done something similar, what has worked in the past?


I ran into an interesting situation last week…the desire to access some of my Amazon S3 services from within a Silverlight application.

Amazon Simple Storage Service (S3) is a pay service provided by Amazon for object storage ‘in the cloud.’ Although there is no UI tool provided by Amazon to navigate your account in S3, SOAP and REST APIs are available for developers to integrate S3 information into their applications or other uses. You can view more information about Amazon S3 on their site.

What is S3?

Since S3 is a pretty flexible service, it can be used for many different things, including storing “objects” like serialized representations of a type. A lot of applications these days use cloud storage to store their application objects like this. For me, I use it as a file server in the cloud. Mostly it is there to host the images on this site as well as downloads and such. Because there is no user interface, I collaborated with Aaron to make the S3 Browser for Live Writer so that I can access my files when I need them while posting content.

Accessing content from S3 also has different meanings, even with regard to Silverlight. For example, if you wanted to simply set an Image source to a hosted image on S3, you could easily do that using the URL provided by S3 for the object, as shown below. Since Silverlight allows media assets to be sourced from anywhere, this is not a problem.
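
For example, a hosted image (hypothetical URL) can simply be referenced in XAML:

<Image Source="http://timheuer.s3.amazonaws.com/foo.png" />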

The Problem

The problem comes when you want to download content in other ways, such as a file stored there or a serialized object, or access their services in a Silverlight application. Why is this a problem? Because S3 does not expose any cross-domain policy files in its implementation. RIA platforms like Silverlight require a policy file to exist on the providing service in order to make cross-domain calls from the platform.

You can read more about Silverlight’s cross-domain policy information here:

 

So how can we accomplish this?

The S3 Service APIs

Amazon exposes two service APIs: SOAP and REST. Because their REST API requires providing an Authorization header, and Authorization is a restricted header we cannot modify in Silverlight at this time, we are unable to use it. So we can use their SOAP service. This is fine for Silverlight because Amazon provides a WSDL for us to generate a proxy with. The defined endpoint in the WSDL for the service is https://s3.amazonaws.com/soap. This is important, so remember it. Let’s move on.

Buckets to the rescue!

S3 uses a concept they call buckets to store information in containers. I’m not going to go into a lot of detail explaining this concept, so if you want to learn more, read their documentation. Basically a bucket is globally unique to the service (not to your account)…so there can only exist one “timheuer” bucket across the service; name them accordingly :-). All data you push to S3 must be in a bucket. When you create a bucket, you can also access content in that bucket using a domain shortcut system. For example, when you create a bucket called “timheuer” and put a file in there called foo.txt, it has the URI of http://timheuer.s3.amazonaws.com/foo.txt. Notice the alias that is happening here. We can now use this method to solve some of our issues. How? Well, the “/soap” key will be available at any bucket endpoint!

Because of this aliasing, we can use this mechanism to ‘trick’ the endpoint of our service to respond to the policy file request…so let’s review how we’ll do this.

Step 1: Create the bucket

There are different ways you could do this. You could simply create a bucket called “foo” and use that, or you can completely alias a domain name. I’m choosing to completely alias a domain name. Here’s how I did it. First I created a bucket called timheueraws.timheuer.com – yes, that full name. That’s a valid bucket name, and you’ll see why I used the full one in a moment.

Step 2: Alias the domain

If you have control over your DNS, this is easy; if you don’t, you may want to use the simple bucket aliasing. I went into my DNS (I use dnsmadeeasy.com btw, and it rocks, you should use it too) and added a CNAME record to my domain (timheuer.com):

CNAME    timheueraws    timheueraws.timheuer.com.s3.amazonaws.com.    86400

What does this mean? Any request to timheueraws.timheuer.com will essentially be served by timheueraws.timheuer.com.s3.amazonaws.com. The last parameter is the TTL (time-to-live), in seconds.

UPDATE: For security reasons you should actually stick with using only a bucket name and not a CNAME’d bucket. This will enable you to use the SSL certificate from Amazon and make secure calls. For example, a bucket named "foo" could use https://foo.s3.amazonaws.com/soap as the endpoint. This is highly advisable.

Step 3: Create the clientaccesspolicy.xml file

Create the policy file and upload it to the bucket you created in step 1. Be sure to set the access control list to allow ‘Everyone’ read permissions on it, or you’ll have a problem even getting to it.
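
For reference, a policy file along the lines of what the follow-up post recommends (allow the SOAP calls from any domain; tighten the allow-from list as appropriate) looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="SOAPAction,Content-Type">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource include-subpaths="true" path="/"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>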

Creating the Silverlight application

Create a new Silverlight application using Visual Studio 2008.  You can get all the tools you need by visiting the Getting Started section of the Silverlight community site.  Once created let’s point out the key aspects of the application.

Create the Amazon service reference

In your Silverlight application, add a service reference. The easiest way to do this is by right-clicking on the Silverlight project in VS2008 and choosing Add Service Reference. Then in the address area, specify the Amazon S3 WSDL location:

This will create the necessary proxy class code for us as well as a ServiceReferences.clientconfig file.

Write your Amazon code

Now for our simple purposes, let’s just list out all the buckets for our account. There is an API method called “ListAllMyBuckets” that we’ll use. Amazon requires a Signature element with every API call – it is essentially the authentication scheme. The Signature is a hash of the request plus your Amazon secret key (something you should never share). This can be confusing to some, so after perusing various code libraries in the Amazon doc areas, I came up with a simplified S3Helper to do this Signature generation for us.

public class S3Helper
{
    private const string AWS_ISO_FORMAT = "yyyy-MM-ddTHH:mm:ss.fffZ";
    private const string AWS_ACTION = "AmazonS3";

    public static DateTime GetDatestamp()
    {
        DateTime dteCurrentDateTime;
        DateTime dteFriendlyDateTime;
        dteCurrentDateTime = DateTime.Now;
        dteFriendlyDateTime = new DateTime(dteCurrentDateTime.Year,
            dteCurrentDateTime.Month, dteCurrentDateTime.Day,
            dteCurrentDateTime.Hour, dteCurrentDateTime.Minute,
            dteCurrentDateTime.Second,
            dteCurrentDateTime.Millisecond, DateTimeKind.Local);
        return dteFriendlyDateTime;
    }

    public static string GetIsoTimestamp(DateTime timeStamp)
    {
        string sISOtimeStamp;
        sISOtimeStamp = timeStamp.ToUniversalTime().ToString(AWS_ISO_FORMAT, System.Globalization.CultureInfo.InvariantCulture);
        return sISOtimeStamp;
    }

    public static string GenerateSignature(string secret, string S3Operation, DateTime timeStamp)
    {
        Encoding ae = new UTF8Encoding();
        HMACSHA1 signature = new HMACSHA1(ae.GetBytes(secret));
        string rawSignature = AWS_ACTION + S3Operation + GetIsoTimestamp(timeStamp);
        string encodedSignature = Convert.ToBase64String(signature.ComputeHash(ae.GetBytes(rawSignature.ToCharArray())));

        return encodedSignature;
    }
}

This will abstract that goop for us.

Let’s assume we have a ListBox in our Page.xaml file that we’re going to populate with our bucket names.  Mine looks like this:

<ListBox x:Name="BucketList">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Name}" />
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

So let’s just add some code to our Loaded event handler to call ListAllMyBuckets. Remember, everything in Silverlight with regard to services is asynchronous, so we’re actually going to call ListAllMyBucketsAsync from our generated code. We’ll need a completed event handler where we will put our binding code. Here’s my complete code for both of these:

void Page_Loaded(object sender, RoutedEventArgs e)
{
    DateTime timeStamp = S3Helper.GetDatestamp();

    s3 = new AWS.AmazonS3Client();

    s3.ListAllMyBucketsCompleted += new EventHandler<AWS.ListAllMyBucketsCompletedEventArgs>(s3_ListAllMyBucketsCompleted);
    s3.ListAllMyBucketsAsync(AWS_AWS_ID, timeStamp, S3Helper.GenerateSignature(AWS_SECRET_KEY, "ListAllMyBuckets", timeStamp));
}

void s3_ListAllMyBucketsCompleted(object sender, AWS.ListAllMyBucketsCompletedEventArgs e)
{
    if (e.Error == null)
    {
        AWS.ListAllMyBucketsResult res = e.Result;
        AWS.ListAllMyBucketsEntry[] buckets = res.Buckets;
        BucketList.ItemsSource = buckets;
    }
}

The AWS_AWS_ID and AWS_SECRET_KEY are constants in my application that represent my access key and secret for my S3 account.  You’ll notice that in this snippet above the “s3” object isn’t typed – that is because I have a global that defines it (you can see it all in the code download).
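
For clarity, the declarations being referenced would look roughly like this (placeholder values; keep the secret out of anything you ship):

// Hypothetical declarations matching the snippet above.
private const string AWS_AWS_ID = "YOUR-ACCESS-KEY-ID";
private const string AWS_SECRET_KEY = "YOUR-SECRET-ACCESS-KEY";
private AWS.AmazonS3Client s3;   // the 'global' referred to above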

So now if we run this application, we should expect to see our bucket list populated in our ListBox, right?  Wrong.  We get a few exceptions.  First, we get an error because we can’t find the cross-domain policy file.  Ah yes, remember we’re still using the Amazon SOAP endpoint.  We need to change that.

Changing the Amazon endpoint

You may notice that the SOAP endpoint for S3 is an https:// scheme. We won’t be able to use that with this method because of our aliasing and the fact that the SSL certificate wouldn’t match our alias. So we need to change our endpoint. There are two ways we can do this.

We could do this in code by creating a new BasicHttpBinding and EndpointAddress and passing them into the constructor of AWS.AmazonS3Client (a sketch of that appears after the config below). But that would be putting our configuration in code. Remember that when we added the service reference we were provided with a ServiceReferences.clientconfig file. Open that up and check it out. It provides all the configuration information for the endpoint. Now we could just change a few things. I decided to create a new <binding> node for my use rather than alter the others. I called it “CustomAWS” and copied it from the existing one that was there. Now, because the default endpoint for S3 is a secure transport and we cannot use that, we have to change the <security> node to mode=”None” so that we can use our custom endpoint URI.

The second thing we do is in the <client> node: change the address attribute and the bindingConfiguration attribute (to match the new config we just created). Mine now looks like this in its entirety:

<configuration>
    <system.serviceModel>
        <bindings>
            <basicHttpBinding>
                <binding name="AmazonS3SoapBinding" maxBufferSize="65536" maxReceivedMessageSize="65536">
                    <security mode="Transport" />
                </binding>
                <binding name="AmazonS3SoapBinding1" maxBufferSize="65536" maxReceivedMessageSize="65536">
                    <security mode="None" />
                </binding>
                <binding name="CustomAWS" maxBufferSize="65536" maxReceivedMessageSize="65536">
                    <security mode="None" />
                </binding>
            </basicHttpBinding>
        </bindings>
        <client>
            <endpoint address="http://timheueraws.timheuer.com/soap" binding="basicHttpBinding"
                bindingConfiguration="CustomAWS" contract="Amz.AWS.AmazonS3"
                name="AmazonS3" />
        </client>
    </system.serviceModel>
</configuration>
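
For completeness, the in-code alternative mentioned earlier would be a sketch along these lines (the generated proxy exposes a constructor taking a binding and an endpoint address):

// Configure the endpoint in code instead of ServiceReferences.clientconfig.
// Security mode None because the aliased endpoint is plain http.
BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
binding.MaxBufferSize = 65536;
binding.MaxReceivedMessageSize = 65536;
EndpointAddress address = new EndpointAddress("http://timheueraws.timheuer.com/soap");
s3 = new AWS.AmazonS3Client(binding, address);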

Now when we run the application it will work, and if we sniff the traffic we’ll see that the first request is to our clientaccesspolicy.xml file, which enables us to continue with the /soap requests:

Now we can see a list of our buckets, and after wiring up some other code, when we click on a bucket we’ll see a list of all the objects, bound to a DataGrid (details blurred for privacy):

Summary

Sweet, we’re done!  We’ve now been able to provide our own clientaccesspolicy.xml file in a place where it didn’t exist before and be able to call the service.  We can now use other methods perhaps to create a new bucket, put objects in buckets, etc.

So in order to do this, we’ve:

    • Created an alias to a bucket in our S3 account
    • Uploaded a clientaccesspolicy.xml file to that bucket
    • Changed the endpoint configuration in our service reference
    • Called the services!

I’ve included all the files for the above solution in the download file.  You’ll have to provide your own access key/secret of course as well as specify the endpoint address in the ServiceReferences.clientconfig file.

Hope this helps!  Please read Part 2 of this post.


I’ve been getting a few notes on issues relating to people trying Silverlight beta 2 and WCF or other services.  The most common issue I’m seeing reported is “my exception is showing a 404-not found error message, but the service is there and works!”

Okay, there could be several things happening here, but let’s tackle the “make sure it is plugged in” type situations.  I don’t mean to make light of the error, because at first I, too, was banging my head against a wall.  Sometimes it helps to have a second set of eyes or a deeper understanding of the issue…or both.

First, the situation.  Most of the time you’ll see this exception when your Silverlight application is accessing a service not hosted on the same application domain.  This is considered cross-domain access and requires the service host to enable an opt-in policy file so rich client platforms are allowed to access the service.  In Silverlight we call that the clientaccesspolicy.xml file.  You can learn all about cross-domain policy files by viewing this video on the Silverlight community site (a great resource).  In beta 2, there was a subtle change to the policy file that is required.  I wrote about that here as well (and note the code download for the video has the updated policy file).

Ok, so under what conditions might you get the “(404) Not found” error message when accessing services?

No policy file at all

Silverlight will first check for clientaccesspolicy.xml and then fall back and see if a supported crossdomain.xml file exists. If neither exists at all, 404 baby. Also remember Silverlight is looking for this file in the root of the domain hosting the service. So if you have a file but it is in your app root…this could be the issue (a quick way to sanity-check this is sketched below).
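
A quick sanity check (done outside of Silverlight, so no policy lookup gets in the way) is to request the file directly from the root of the service’s domain, either in a browser or with a couple of lines of plain .NET (the domain below is hypothetical):

// Plain .NET console check: is the policy file actually served from the domain root?
using (WebClient client = new WebClient())
{
    string policy = client.DownloadString("http://yourservicedomain.com/clientaccesspolicy.xml");
    Console.WriteLine(policy);   // a WebException here means it really isn't there
}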

Incorrect, mal-formed, or just plain wrong policy file

Silverlight will check for a clientaccesspolicy.xml file, and if it finds one but it has an incorrect format or is mal-formed, it will treat it as invalid and then look for crossdomain.xml…and if that’s not found, boom: 404. This is what most people are running into when starting to use beta 2 with their policy files. The missing http-request-headers attribute renders the file mal-formed.
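
For reference, a beta 2-valid policy file includes that http-request-headers attribute, along the lines of the examples shown elsewhere in these posts (this one is deliberately liberal; tighten it for production):

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource include-subpaths="true" path="/"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>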

HTML response

Most sites have custom error pages for “page not found.” For example, when you visit google.com/timheuer you’ll get a less-than-helpful message, but a custom one nonetheless; as another example, at microsoft.com/timheuer you’ll get another custom response with a sitemap. Both of these are essentially custom error messages that return valid HTML, but not a valid policy file. In these instances, Silverlight sees the response, but sees it as invalid/mal-formed and treats it like it didn’t exist: 404.

These are the most common instances where a 404 would be generated and making you bang your head against the closest semi-hard surface.  How can you figure out what is going on?  Well first, make sure you do your best to ensure you meet all the requirements.  But also use some development helper tools.

Web Development Helper

For me, in service/remote/AJAX development there is a single indispensable tool that I can’t live without. That is NikhilK’s Web Development Helper. This tool is a plugin for Internet Explorer (yes, I know there are similar ones for Firefox, etc. – but I LOVE this implementation and IE is fine by me) that provides in-browser HTTP traffic sniffing. No need for any funky port configuration or changing proxy server settings, etc. Just enable it and it works. I highly recommend you use this tool or something similar (Fiddler is another good one, although it usually requires some additional config steps when working with Visual Studio’s web development server).

Seriously, a tool like this will save you so much time in troubleshooting your service interactions with Silverlight, Flash, AJAX, whatever – it will help you immediately figure out where to start looking rather than grabbing your climbing gear and spelunking in unknown caverns.

So why a ‘404’ – what gives?

I’ve also heard people say “you need to make that exception more descriptive, 404 is not accurate.” I’m on the fence on this one. As a .NET developer I can see where the concerns are coming from in wanting the most descriptive exception possible. But one must realize what is happening under the hood. The policy file is being requested as a GET request, so basically an HttpWebRequest object is doing the work here. Because of this, we return HTTP-specific errors. There isn’t one for “Silverlight policy file found, but not correct” in the HTTP spec right now. So because of this, we use a RESTful approach in providing a standard HTTP response. In our case “404 Not Found” seems to be a valid response – indicating “the request for a valid policy file resulted in a valid policy file not being found.” We make no distinction between partially valid or finding a specific typo, etc. – we simply indicate that a valid policy wasn’t found. One could argue 406, 409, or 417 might be other responses, but I’m not sure that would make anyone feel any better – we’re still going to use an HTTP response code.

Hope this helps!