| Comments

You’ve written your service.  You’ve written your Silverlight application.  You added a Service Reference to your application and got the client proxy code.  Your app ‘works on your machine’ and you push it out. 

FAIL.

NotFound.

Crap.  You forgot that your service reference had your local URI endpoint in there and when you moved it to staging and/or production it failed.  You start cursing Microsoft and the Silverlight team and add to the threads in the forums or perhaps initiate a new wishlist item for the team and throw it out on Twitter and encourage votes.

It seems this is still a common frustration and people are trying to solve it in different ways.  I’m going to throw out what is my preferred mechanism and add some additional tips and tricks here that hopefully some are using.

The Setup

Here’s the setup.  You have a Silverlight application and a web service.  To keep it simple I started with File…New Silverlight Application – kept the web project there. I added a Silverlight-enabled WCF Service to my web project with the following code:

[OperationContract]
public string SayHello()
{
    // Add your operation implementation here
    return "Hello World [DEVELOPMENT]";
}

I then added 2 more empty ASP.NET Web Application projects to my solution, added a single Silverlight-enabled WCF Service to each one of them with the identical code, changing only the return string to PRODUCTION or STAGING to differentiate the response.  I called one project ProductionService and the other StagingService to simulate a production and staging environment.  I then added (because it wouldn’t work otherwise in my test setup) a clientaccesspolicy.xml file to the production/staging service projects.

NOTE: You may not have to do this clientaccesspolicy.xml setup.  It was only necessary because I deployed the Silverlight app to only one of the web projects.  See the Silverlight App in Same Web Project as Service section below for why.

I’ve added some rudimentary XAML to the app to basically show the current default service that will be used (with the option to change it) and the response code:

Sample application snapshot

Now to explain what I’ve done/recommend.

The ServiceReferences.clientconfig ‘magic’

When you add a service reference via the Add Service References option in Visual Studio, you get a new file in your Silverlight application called ServiceReferences.clientconfig.  This contains the binding and endpoint configurations for the service you just referenced.  Here’s where you can change some things up.

By default it adds the configuration for the literal endpoint you just referenced (think absolute URI here…most of the time in development this may be localhost).  The file isn’t fixed though and you can add your other configurations there as well.  Here’s my initial updated config file with the addition of the production and staging URI endpoints.

<configuration>
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding name="CustomBinding_HelloWorldService">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="StagingServiceBinding">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="ProductionServiceBinding">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="http://localhost:40473/HelloWorldService.svc"
                binding="customBinding" bindingConfiguration="CustomBinding_HelloWorldService"
                contract="HelloServices.HelloWorldService" name="CustomBinding_HelloWorldService" />
            <endpoint address="http://localhost:40848/HelloWorldService.svc"
                binding="customBinding" bindingConfiguration="StagingServiceBinding"
                contract="HelloServices.HelloWorldService" name="StagingServiceBinding" />
            <endpoint address="http://localhost:40849/HelloWorldService.svc"
                binding="customBinding" bindingConfiguration="ProductionServiceBinding"
                contract="HelloServices.HelloWorldService" name="ProductionServiceBinding" />
        </client>
    </system.serviceModel>
</configuration>

Notice the endpoint names: CustomBinding_HelloWorldService, StagingServiceBinding, ProductionServiceBinding.  The first was created for me by VS – hence the awesomely unhelpful name :-).  By default, if you added this code in your Silverlight application:

HelloWorldServiceClient client = new HelloWorldServiceClient();

Then it will use the default endpoint it creates (which would be the only one unless you add custom ones like I did above).

Initializing the Service with different endpoints

Now that you know the client config file can have multiple endpoint configurations, how would you use them?  Simple.  If you take a look at the proxy code that gets generated for you when you Add Service Reference (this is in the Reference.cs file when you use the ‘show all files’ option in VS), you’ll notice that the constructor for HelloWorldServiceClient is overloaded to accept an endpoint configuration name, or optionally explicit binding/endpoint address information.  It’s the former that will make this easier for you.

Take our ServiceReferences.clientconfig modification above.  Let’s say we wanted to work with the staging service; we could now simply instantiate with:

HelloWorldServiceClient client = new HelloWorldServiceClient("StagingServiceBinding");

In fact, I might argue that you should never use the default constructor.  Being more explicit leads to code that is easier to read and track, in my opinion.  You don’t expect to be on this project for the rest of your life, do you?  I didn’t think so…make it easier on the next developer who has to come behind you and fix your bugs (er, enhance the code to add functionality) and be explicit.

Being Dynamic about your Endpoint Initialization

Obviously you don’t want to hard-code various endpoint names in your instantiation of the service proxies.  In fact, you may be struggling because you may push your code out via automated build servers and you don’t want to have to build the XAP, then change something, blah blah.  This is where conditional compilation can help.  For instance, here’s how I have the code in this project:

string _endpointName = "RelativeBinding";

#if PRODUCTION
    _endpointName = "ProductionServiceBinding";
#endif

#if STAGING
    _endpointName = "StagingServiceBinding";
#endif

HelloWorldServiceClient client = new HelloWorldServiceClient(_endpointName);

Now I just have to make sure that my compile tasks define those conditional symbols.  This is relatively simple: you can use Visual Studio’s Configuration Manager to create new build configurations (adding the symbols in the project’s Build settings, which sets the DefineConstants property in the project file), or you can customize an MSBuild task to append those constants.  There are plenty of resources to help you here.  For my project I simply created new configuration profiles in Visual Studio and have been manually switching between them to test.

Silverlight App in Same Web Project as Service

But wait, there’s more!

But wait, there’s more!  If your service is being served up in the same web application/site as your Silverlight application, then Silverlight 4 gives you an even simpler option.  Just to clarify, what I mean by this is that your app – let’s say http://foo.com/clientbin/myapp.xap – is part of the same web application (http://foo.com) that serves up the service you are calling (http://foo.com/helloworldservice.svc). 

Good news for you if you are using Silverlight 4: you can now use relative path information for service references!  Let’s say your XAP is in /ClientBin/MyApp.xap and your service is in the same root location as the ClientBin folder at /HelloWorldService.svc.  This means that “../HelloWorldService.svc” will work!  I’d still recommend being explicit in your code so that it helps those who come behind you.  In my client config file I added a “RelativeBinding” configuration:

<endpoint address="../HelloWorldService.svc"
    binding="customBinding" bindingConfiguration="RelativeBinding"
    contract="HelloServices.HelloWorldService" name="RelativeBinding" />

and then in my code I can instantiate it like I’ve done above using the named configuration.  But notice in the configuration I used ../HelloWorldService.svc as the URI for the service endpoint.  Assuming my staging and production follow the same paths, I don’t have to worry about conditional compilation, etc., and can push these out to the various environments.
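Putting it together, the instantiation looks just like the earlier named-endpoint examples, using the RelativeBinding name from the config above:

HelloWorldServiceClient client = new HelloWorldServiceClient("RelativeBinding");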

Code Specific Endpoints to hide your configurations

In theory, if your solution can use the relative binding scenario described above, there is not too much to worry about as far as revealing too much.  However, if you are using the other method of adding multiple configurations to your ServiceReferences.clientconfig file, be aware that this file is compiled into the XAP as a resource and could expose your staging/dev URIs to others.  That may be an acceptable by-product of making the process easier, but you should know that tools like Silverlight Spy and Reflector enable someone to look at your resource files.

You could bypass the client config named endpoints mechanism and use conditional compilation with explicit code-created endpoints.  Using the conditional statements shown above, only the code that meets the condition is compiled into IL, so decompilation doesn’t show the other #if options.  In Reflector or other tools you’d only see what was output.  Given this you could be absolutely explicit and do something like this:

BasicHttpBinding binding = new BasicHttpBinding();
// default to the development endpoint (from the config shown earlier)
EndpointAddress endpoint = new EndpointAddress("http://localhost:40473/HelloWorldService.svc");

#if PRODUCTION
    endpoint = new EndpointAddress("http://myproductionserver.com/HelloWorldService.svc");
#endif

#if STAGING
    endpoint = new EndpointAddress("http://mySTAGING.com/HelloWorldService.svc");
#endif

HelloWorldServiceClient client = new HelloWorldServiceClient(binding, endpoint);

With this conditional compilation method you’re doing a bit more ‘hard-coding’ rather than keeping configuration out of code, but it might be more to your liking.  Sure, this can still be decompiled, but it would only reveal the endpoint you will already be using (which anyone could see anyway just by watching HTTP traffic).

Summary

Managing your endpoints doesn’t have to be difficult.  Hopefully these simple approaches give you an idea of the options available.  Can we do a better job of making this easier to manage?  I think so.  And I know we’re thinking about it as well.

You can download my complete code sample for this application here: ManagingServiceEndpoints.zip.

Hope this helps!  (oh and hat-tip to Shawn who wrote about this for SL2…I’ve just expanded on the same idea going deeper and providing some updates for Silverlight 4)

| Comments

UPDATE: Please read the updated information on RIA Services deployment and troubleshooting on MSDN.

So you’ve been playing around with Silverlight and WCF RIA Services (the artist formerly known as .NET RIA Services) and you are ready to deploy.  You’ve been living in your happy Visual Studio environment, perhaps even relying on the built-in web server (a.k.a. Cassini) to serve up your pages/XAP to test.  All has been well, you’ve done your testing and you are ready to publish to your server.  You compile one last time and then right-click in Visual Studio on the web project and click Publish.  You push to your IIS endpoint or via FTP and the files deploy.  Sweet!  Now you go visit your site.  And it doesn’t work.  WTF?

I’ve been getting some emails on RIA Services deployment gotchas and thought I’d take a stab at explaining some of the deployment nuances. 

First it should be said that nothing helps more than having your dev environment match your eventual production environment as closely as possible.  If you are using IIS6 to host your final application, then ideally that is also your development/test environment.  I know this isn’t always possible for everyone, but if it is, make the effort and save yourself some time in the long run.

What is described below are some things you might run into.  Not everyone will…some will not hit any of these.  But hopefully if you do, this will be some insight.

Deploying the RIA libraries – to bin or not to bin

The first error you may run into is an assembly loading error in your ASP.NET application.  Perhaps it says that it cannot locate or load the System.Web.Ria assembly?  And here you thought the Publish command was going to deploy those for you, didn’t you (note: so did I).  Well, it doesn’t.  You can do two things here.

First, you can “bin deploy” if you want.  That term means that you would deploy any non-core framework assemblies in your web application’s /bin directory, making them locally available to the web application.  If you want to go this route, you can.  You have to manually go into your references in your web application and change the Copy Local property on some assemblies:

Change Copy Local Property image

The assemblies you would want to do this on (depending on what you have referenced) would be:

  • System.Web.Ria
  • System.Web.DomainServices.* (there are 4 of them, depending on what you are using)
  • Microsoft.RiaServices.Tools UPDATE: this assembly only required for design-time experiences

Once you do that, on your next compile, these assemblies would be copied to your bin directory and then the subsequent Publish action would also push those to your server.

The second option you have is to install the RIA Services server libraries on the server in the Global Assembly Cache (GAC).  You may have tried this already and run the RiaServices.msi installer on your server and received the warning that you are missing Visual Studio and all sorts of tools.

And then you walked away and went the bin-deploy route.  Well open up a command prompt and run this instead:

msiexec /i RiaServices.msi SERVER=TRUE

And the server assemblies for RIA Services will be installed into the GAC for all to enjoy.  The advantage here is that it is easier to service one set of assemblies than a handful of /bin-deployed applications scattered all over the place.

HTTP Scheme violation and IIS host-headers

Now you run your application and you get this exception:

This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.
Parameter name: item

Now you’re starting to wish your development environment mirrored your deployment environment aren’t you? :-)  This lovely error message will leave you wondering what is going on for a while if you didn’t know what it meant.  I mean, it’s completely descriptive isn’t it?  Of course not.

If you are getting this, you are likely running Windows 2003 server (IIS6) and are using host-headers in IIS. 

NOTE: Host headers in IIS allow you to leverage a single IP address, but have separate web sites that respond to different hostname requests.  This information is usually provided in the IIS management console and is stored in the IIS metabase.

If this isn’t you, or you aren’t controlling your server, I’m guessing you are in a shared hosting environment.

NOTE: Full trust is required for RIA Services.  UPDATE: Partial trust is supported for .NET4/VS2010, full trust requirement is only for .NET 3.5/VS2008.

Either way, what you are seeing is a limitation of Windows Communication Foundation (WCF) under .NET 3.x.  There are a few things you can do here.

If you are running .NET 3.0 (well, you likely aren’t running RIA Services then are you) – but here’s some information on creating your own ServiceHostFactory…which isn’t really an option here.

If you are running .NET 3.5 (more likely), and can get to your web.config setting, you can add this setting:

<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
        <baseAddressPrefixFilters>
            <add prefix="http://some.url.here.that.matches.a.host.header"/>
        </baseAddressPrefixFilters>
    </serviceHostingEnvironment>
</system.serviceModel>

Note that the prefix you use must match the base URI where your DomainService will live, and it must exist as a mapped host-header for the site.  More information is available here.

If you are running .NET 4, you may not run into this issue, but there may be an optional opt-in configuration you will have to do when .NET 4 releases.

UPDATE: HttpModule for DomainService

Perhaps one thing that I assumed was that you’d be pushing the web app completely.  But what if you already have a web.config and you aren’t pushing that over?  Well, pay attention to the web.config of a RIA Services created project.  You’ll see an HttpModule set up (this one is from VS2010, but yours will be similar, just with different version numbers):

<httpModules>
    <add name="DomainServiceModule" type="System.Web.Ria.Services.DomainServiceHttpModule, 
        System.Web.Ria, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
</httpModules>

If you don’t have this, then you might see some weirdness.  You see, for the default deployment, the service is handled through this module.  You may have noticed that there are no physical .SVC files in your web app.  If you look with Fiddler at your Silverlight/RIA Services application in action you’ll see something like /ClientBin/Your-Namespace-Here-Method.svc/binary.  This is actually interpreted by the module to map the request.

If you wanted to generate a physical file yourself, you can do that and the request would be processed there versus through the virtual SVC file generated.  You can read about this in Saurabh’s post.

Multiple Authentication Schemes

Okay, now you run it again and you get a message about the binding not supporting multiple authentication schemes.  The message may look like this:

IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used.

This can be a result of the <authentication> node in your ASP.NET application being set to Windows while your site is set to Anonymous in IIS.  For most, simply changing the <authentication> node to mode=”Forms” will remove this error and allow you to continue.  For others, if your IIS configuration is set to use both Integrated Authentication and Anonymous, you’ll want to uncheck one of them in the Directory Security settings for the site in the IIS management console.

Install the Hotfix (XP, Vista, Windows 2003)

As noted on the RIA Services information page, if you are not running Windows 7 or Windows 2008 R2, you need to install a Hotfix.  Some people haven’t seen this note, so be sure if you fall in that category that you grab the appropriate hotfix for your architecture and run it.  UPDATE: This hotfix is only needed if you are using VS2008.

Essential Tools

So how can you troubleshoot all these things?  Some wondered where I was able to get the error messages when their response errors in Silverlight were just showing NotFound.  I’ve said this before with regard to debugging services, especially cross-domain stuff: if you aren’t using an HTTP diagnostic tool you are hurting your productivity in debugging.  I use Fiddler.  I used to use Web Development Helper a lot more, but have run into some problems with it registering in IE, and Fiddler has finally gotten rid of all the annoyances that bothered me.  Some others have used the Charles proxy, which I’ve heard is really great, but it requires Java if you don’t have it.  Any one of these tools can provide invaluable debugging information to help triage your issue.  Sometimes the HTTP response code isn’t the full story and the response body will help tremendously.

NOTE: If you are using Fiddler for http://localhost debugging, you may have seen some challenges.  In the URL, change to http://127.0.0.1./site – noting the trailing “.” after the IP address.  Example: http://127.0.0.1.:12345/MyApp – this will trigger Fiddler to monitor those requests as well.

For WCF binary encoding messages, be sure to download WCF Binary-encoded message inspector if you are using Fiddler…it’s awesome (hat tip to Dan Wahlin for the tip).

Summary

I suspect anyone running into these issues above is likely using Silverlight 3/VS2008 and deploying to an IIS6 instance.  Truly this is where the issues might manifest themselves.  When WCF RIA Services comes out of beta/ctp status and releases next year, the development story will be that of Silverlight 4 and .NET 4 on the server.  As noted above, these WCF issues (with host-headers) are solved with .NET 4 on the server, so this post will be useless when the bits release.

I hope that in the interim this post gives some better troubleshooting tips, mostly for the shared hosting scenario.  I personally ran into a few of these myself on my own dedicated server that uses host-headers (but is still full trust), so I thought others might benefit from the steps I went through to get my RIA Services application deployed.

Hope this helps.

| Comments

When working with data and Silverlight, the question often comes up of why, when a service call fails, Silverlight returns an HTTP 404 status code.  In fact I’ve written about troubleshooting those types of issues in the past and the tools you can use to help investigate them.

Still, people mostly ask “if there is an exception, why is Silverlight telling me ‘not found’ instead of sending me the exception?”  Eugene Osovetsky from the connected systems team aims to answer those questions in a recent post with a little more detail than has been provided in the past, as well as offering some suggestions.  From his post, this is one of the main reasons, which I’ve echoed in discussions, webcasts, and forums in the past as well:

“Unfortunately, web browsers have a limitation with regards to status codes: When a browser plugin (such as Silverlight) makes an HTTP request, and the response status code is not 200, the browser hides the actual status code and the message body from the plugin. All Silverlight knows is that "something went wrong", but it has no way of discovering any details.”

Take a look at Eugene’s post for some other helpful suggestions.

| Comments

I ran into an interesting situation last week…the desire to access some of my Amazon S3 services from within a Silverlight application.

Amazon Simple Storage Service (S3) is a pay service provided by Amazon for object storage ‘in the cloud.’  Although there is no UI tool provided by Amazon to navigate your account in S3, SOAP and REST APIs are available for developers to integrate S3 information into their applications or other uses.  You can view more information about Amazon S3 on their site.

What is S3?

Since S3 is a pretty flexible service, it can be used for many different things, including storing “objects” like serialized representations of a type.  A lot of applications these days use cloud storage to store their application objects like this.  For me, I use it as a file server in the cloud.  Mostly it is there to host the images on this site as well as downloads and such.  Because there is no user interface, I collaborated with Aaron to make the S3 Browser for Live Writer so that I could access my files when I need them in posting content.

Accessing content from S3 also has different meanings, even with regard to Silverlight.  For example, if you wanted to simply set an Image source to a hosted image on S3, you could easily do that using the URL provided by S3 to the object.  Since Silverlight allows media assets to be sourced from anywhere, this is not a problem.
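For instance, a minimal sketch of that scenario (the element name and the S3 object URL here are placeholders, not from the sample project):

// Sketch: media assets can be pointed at an S3 object URL directly --
// no cross-domain policy file is needed for this scenario.
// "HeaderImage" is a hypothetical <Image x:Name="HeaderImage" /> in the XAML;
// BitmapImage lives in System.Windows.Media.Imaging.
HeaderImage.Source = new BitmapImage(new Uri("http://timheuer.s3.amazonaws.com/foo.png", UriKind.Absolute));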

The Problem

The problem comes when you want to download content in other ways – say a file stored there or a serialized object – or access their services from a Silverlight application.  Why is this a problem?  Because S3 does not expose any cross-domain policy files in its implementation.  RIA platforms like Silverlight require a policy file from the providing service to exist in order to make cross-domain calls from the platform. 

You can read more about Silverlight’s cross-domain policy requirements here.

So how can we accomplish this?

The S3 Service APIs

Amazon exposes two service APIs: SOAP and REST.  Because their REST service requires an Authorization header, and Authorization is a restricted header we cannot modify in Silverlight at this time, we are unable to use it.  So we can use their SOAP service.  This is fine for Silverlight because Amazon provides a WSDL for us to generate a proxy with.  The defined endpoint in the WSDL for the service is https://s3.amazonaws.com/soap.  This is important, so remember it.  Let’s move on.

Buckets to the rescue!

S3 uses a concept called buckets to store information in containers.  I’m not going to go into a lot of detail explaining this concept, so if you want to learn more, read their documentation.  Basically a bucket name is globally unique to the service (not just to your account)…there can only ever be one “timheuer” bucket across the whole service, so name yours accordingly :-).  All data you push to S3 must be in a bucket.  When you create a bucket, you can also access content in that bucket using a domain shortcut system.  For example, when you create a bucket called “timheuer” and put a file in there called foo.txt, it has the URI of http://timheuer.s3.amazonaws.com/foo.txt.  Notice the aliasing that is happening here.  We can now use this method to solve some of our issues.  How?  Well, the “/soap” key will be available at any bucket endpoint!

Because of this aliasing, we can use this mechanism to ‘trick’ the endpoint of our service to respond to the policy file request…so let’s review how we’ll do this.

Step 1: Create the bucket

There are different ways you could do this.  You could simply create a bucket called “foo” and use that, or you can completely alias a domain name.  I’m choosing to completely alias a domain name.  Here’s how I did it.  First I created a bucket called timheueraws.timheuer.com – yes, that full name.  That’s a valid bucket name, and you’ll see why I used the full one in a moment.

Step 2: Alias the domain

If you have control over your DNS, this is easy; if you don’t, you may want to use the simple bucket aliasing.  I went into my DNS (I use dnsmadeeasy.com btw, and it rocks, you should use it too) and added a CNAME record to my domain (timheuer.com):

CNAME    timheueraws    timheueraws.timheuer.com.s3.amazonaws.com.    86400

What does this mean?  Well any request to timheueraws.timheuer.com will essentially be made at timheueraws.timheuer.com.s3.amazonaws.com.  The last parameter is the TTL (time-to-live).

UPDATE: For security reasons you should actually stick with using only a bucket name and not a CNAME'd bucket.  This will enable you to use the SSL certificate from Amazon and make secure calls.  For example, a bucket named "foo" could use https://foo.s3.amazonaws.com/soap as the endpoint.  This is highly advisable.

Step 3: Create the clientaccesspolicy.xml file

Create the policy file and upload it to the bucket you created in step 1.  Be sure to set the access control list to allow ‘Everyone’ read permissions on it or you’ll have a problem even getting to it.

Creating the Silverlight application

Create a new Silverlight application using Visual Studio 2008.  You can get all the tools you need by visiting the Getting Started section of the Silverlight community site.  Once it’s created, let’s point out the key aspects of the application.

Create the Amazon service reference

In your Silverlight application, add a service reference.  The easiest way to do this is by right-clicking on the Silverlight project in VS2008 and choosing Add Service Reference.  Then in the address area, specify the Amazon S3 WSDL location:

This will create the necessary proxy class code for us as well as a ServiceReferences.clientconfig file.

Write your Amazon code

Now for these simple purposes, let’s just list out all the buckets for our account.  There is an API method called “ListAllMyBuckets” that we’ll use.  Amazon requires a Signature element with every API call – it is essentially the authentication scheme.  The Signature is a hash of the request plus your Amazon secret key (something you should never share).  This can be confusing to some, so after perusing various code libraries in the Amazon doc areas, I came up with a simplified S3Helper to do this Signature generation for us:

using System;
using System.Security.Cryptography;
using System.Text;

public class S3Helper
{
    private const string AWS_ISO_FORMAT = "yyyy-MM-ddTHH:mm:ss.fffZ";
    private const string AWS_ACTION = "AmazonS3";

    public static DateTime GetDatestamp()
    {
        DateTime dteCurrentDateTime;
        DateTime dteFriendlyDateTime;
        dteCurrentDateTime = DateTime.Now;
        dteFriendlyDateTime = new DateTime(dteCurrentDateTime.Year,
            dteCurrentDateTime.Month, dteCurrentDateTime.Day,
            dteCurrentDateTime.Hour, dteCurrentDateTime.Minute,
            dteCurrentDateTime.Second,
            dteCurrentDateTime.Millisecond, DateTimeKind.Local);
        return dteFriendlyDateTime;
    }

    public static string GetIsoTimestamp(DateTime timeStamp)
    {
        string sISOtimeStamp;
        sISOtimeStamp = timeStamp.ToUniversalTime().ToString(AWS_ISO_FORMAT, System.Globalization.CultureInfo.InvariantCulture);
        return sISOtimeStamp;
    }

    public static string GenerateSignature(string secret, string S3Operation, DateTime timeStamp)
    {
        Encoding ae = new UTF8Encoding();
        HMACSHA1 signature = new HMACSHA1(ae.GetBytes(secret));
        string rawSignature = AWS_ACTION + S3Operation + GetIsoTimestamp(timeStamp);
        string encodedSignature = Convert.ToBase64String(signature.ComputeHash(ae.GetBytes(rawSignature.ToCharArray())));

        return encodedSignature;
    }
}

This will abstract that goop for us.

Let’s assume we have a ListBox in our Page.xaml file that we’re going to populate with our bucket names.  Mine looks like this:

<ListBox x:Name="BucketList">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Name}" />
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

So let’s just add a simple method to our Loaded event handler calling our ListAllMyBuckets method.  Remember, everything in Silverlight with regard to services is asynchronous, so we’re actually going to call ListAllMyBucketsAsync from our generated code.  We’ll need a completed event handler where we will put our binding code.  Here’s my complete code for both of these:

void Page_Loaded(object sender, RoutedEventArgs e)
{
    DateTime timeStamp = S3Helper.GetDatestamp();

    s3 = new AWS.AmazonS3Client();

    s3.ListAllMyBucketsCompleted += new EventHandler<AWS.ListAllMyBucketsCompletedEventArgs>(s3_ListAllMyBucketsCompleted);
    s3.ListAllMyBucketsAsync(AWS_AWS_ID, timeStamp, S3Helper.GenerateSignature(AWS_SECRET_KEY, "ListAllMyBuckets", timeStamp));
}

void s3_ListAllMyBucketsCompleted(object sender, AWS.ListAllMyBucketsCompletedEventArgs e)
{
    if (e.Error == null)
    {
        AWS.ListAllMyBucketsResult res = e.Result;
        AWS.ListAllMyBucketsEntry[] buckets = res.Buckets;
        BucketList.ItemsSource = buckets;
    }
}

The AWS_AWS_ID and AWS_SECRET_KEY are constants in my application that represent my access key and secret for my S3 account.  You’ll notice that in this snippet above the “s3” object isn’t typed – that is because I have a global that defines it (you can see it all in the code download).
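If it helps, the page-level declarations the snippet assumes would look roughly like this (the values are placeholders – use your own keys, and remember that resources compiled into a XAP can be inspected, so treat the secret accordingly):

// Rough sketch of the members the snippet above assumes
// (placeholder values -- substitute your own AWS access key and secret):
private AWS.AmazonS3Client s3;
private const string AWS_AWS_ID = "YOUR-AWS-ACCESS-KEY-ID";
private const string AWS_SECRET_KEY = "YOUR-AWS-SECRET-KEY";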

So now if we run this application, we should expect to see our bucket list populated in our ListBox, right?  Wrong.  We get a few exceptions.  First, we get an error because we can’t find the cross-domain policy file.  Ah yes, remember we’re still using the Amazon SOAP endpoint.  We need to change that.

Changing the Amazon endpoint

You may notice that the SOAP endpoint for S3 is an https:// scheme.  We won’t be able to use that using this method because of our aliasing and the fact that the SSL certificate wouldn’t match our alias.  So we need to change our endpoint.  There are two ways we can do this.

We could change this in code by creating a new BasicHttpBinding and EndpointAddress and passing them into the constructor of new AWS.AmazonS3Client().  But that would be putting our configuration in code (a sketch of that approach appears after the config below).  Remember that when we added the service reference we were provided with a ServiceReferences.clientconfig file.  Open that up and check it out.  It provides all the configuration information for the endpoint.  Now we could just change a few things.  I decided to create a new <binding> node for my use rather than alter the others.  I called it “CustomAWS” and copied it from the existing one that was there.  Because the default endpoint for S3 is a secure transport and we cannot use that, we have to change the <security> node to mode=”None” so that we can use our custom endpoint URI.

The second thing we do is, in the <client> node, change the address attribute and the bindingConfiguration attribute (to match the new config we just created).  Mine now looks like this in its entirety:

<configuration>
    <system.serviceModel>
        <bindings>
            <basicHttpBinding>
                <binding name="AmazonS3SoapBinding" maxBufferSize="65536" maxReceivedMessageSize="65536">
                    <security mode="Transport" />
                </binding>
                <binding name="AmazonS3SoapBinding1" maxBufferSize="65536" maxReceivedMessageSize="65536">
                    <security mode="None" />
                </binding>
                <binding name="CustomAWS" maxBufferSize="65536" maxReceivedMessageSize="65536">
                    <security mode="None" />
                </binding>
            </basicHttpBinding>
        </bindings>
        <client>
            <endpoint address="http://timheueraws.timheuer.com/soap" binding="basicHttpBinding"
                bindingConfiguration="CustomAWS" contract="Amz.AWS.AmazonS3"
                name="AmazonS3" />
        </client>
    </system.serviceModel>
</configuration>
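For completeness, here is a rough sketch of the configuration-in-code alternative mentioned earlier (using the aliased bucket endpoint; the address would be your own bucket’s):

// Sketch only: supply the binding and address in code instead of editing
// ServiceReferences.clientconfig. Security is None because the aliased
// endpoint cannot use Amazon's SSL certificate.
BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
binding.MaxBufferSize = 65536;
binding.MaxReceivedMessageSize = 65536;
EndpointAddress address = new EndpointAddress("http://timheueraws.timheuer.com/soap");
AWS.AmazonS3Client client = new AWS.AmazonS3Client(binding, address);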

Now when we run the application it will work and if we sniff the traffic we’ll see that the first request is to our clientaccesspolicy.xml file that enables us to continue with the /soap requests:

Now we can see a list of our buckets, and after wiring up some other code, when we click on a bucket we’ll see a list of all its objects bound to a DataGrid (details blurred for privacy):

Summary

Sweet, we’re done!  We’ve now been able to provide our own clientaccesspolicy.xml file in a place where it didn’t exist before and are able to call the service.  We can now use other methods perhaps to create a new bucket, put objects in buckets, etc.

So in order to do this, we’ve:

    • Created an alias to a bucket in our S3 account
    • Uploaded a clientaccesspolicy.xml file to that bucket
    • Changed the endpoint configuration in our service reference
    • Called the services!

I’ve included all the files for the above solution in the download file.  You’ll have to provide your own access key/secret of course as well as specify the endpoint address in the ServiceReferences.clientconfig file.

Hope this helps!  Please read Part 2 of this post.

| Comments

I’ve been getting a few notes on issues relating to people trying Silverlight beta 2 and WCF or other services.  The most common issue I’m seeing reported is “my exception is showing a 404-not found error message, but the service is there and works!”

Okay, there could be several things happening here, but let’s tackle the “make sure it is plugged in” type situations.  I don’t mean to make light of the error, because at first I, too, was banging my head against a wall.  Sometimes it helps to have a second set of eyes or a deeper understanding of the issue…or both.

First, the situation.  Most of the time you’ll see this exception when your Silverlight application is accessing a service not hosted on the same application domain.  This is considered cross-domain access and requires the service host to enable an opt-in policy file so rich client platforms are allowed to access the service.  In Silverlight we call that the clientaccesspolicy.xml file.  You can learn all about cross-domain policy files by viewing this video on the Silverlight community site (a great resource).  In beta 2, there was a subtle change to the policy file that is required.  I wrote about that here as well (and note the code download for the video has the updated policy file).

Ok, so under what conditions might you get the “(404) Not found” error message when accessing services?

No policy file at all

Silverlight will first check for clientaccesspolicy.xml and then fall back to see if a supported crossdomain.xml file exists.  If neither exists at all: 404, baby.  Also remember Silverlight is looking for this file in the root of the requesting domain.  So if you have a file but it is in your application root instead…this could be your issue right there.

Incorrect, mal-formed, just plain wrong policy file

Silverlight will check for a clientaccesspolicy.xml file, and if it finds one but it has an incorrect format or is mal-formed, it will treat it as invalid and then look for crossdomain.xml…and if that isn’t found, boom: 404.  This is what most are running into when starting to use beta 2 with their policy files.  The missing http-request-headers attribute renders the file mal-formed.

HTML response

Most sites have custom error messages for page not found.  For example, when you visit google.com/timheuer you’ll get a less-than-helpful (but custom nonetheless) message, and at microsoft.com/timheuer you’ll get another custom response with a sitemap.  Both of these are essentially custom error pages that return valid HTML, but not a valid policy file.  In these instances, Silverlight sees the response, but sees it as invalid/mal-formed and treats it as if it didn’t exist: 404.

These are the most common instances where a 404 would be generated, making you bang your head against the closest semi-hard surface.  How can you figure out what is going on?  Well first, make sure you do your best to ensure you meet all the requirements.  But also use some development helper tools.
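One quick sanity check (a sketch, not part of the original write-up – run it from a plain console app, or just paste the URL into a browser) is to request the policy file straight from the domain root and look at what actually comes back; the hostname here is a placeholder:

using System;
using System.Net;

class PolicyFileCheck
{
    static void Main()
    {
        using (WebClient web = new WebClient())
        {
            // Request the policy file from the ROOT of the service's domain.
            // If this throws a 404, or prints an HTML error page instead of the
            // policy XML, that explains the NotFound you see in Silverlight.
            string response = web.DownloadString("http://services.example.com/clientaccesspolicy.xml");
            Console.WriteLine(response);
        }
    }
}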

Web Development Helper

For me, in service/remote/AJAX development there is a single indispensable tool that I can’t live without: NikhilK’s Web Development Helper.  This tool is a plugin for Internet Explorer (yes, I know there are similar ones for Firefox, etc. – but I LOVE this implementation and IE is fine by me) that provides in-browser HTTP traffic sniffing.  No need for any funky port configuration or changing proxy server settings, etc.  Just enable it and it works.  I highly recommend you use this tool or something similar (Fiddler is another good one, although it usually requires some additional config steps when working with Visual Studio’s web development server).

Seriously, a tool like this will save you so much time in troubleshooting your service interactions with Silverlight, Flash, AJAX, whatever – it will help you immediately figure out where to start looking rather than grabbing your climbing gear and spelunking in unknown caverns.

So why a ‘404’ – what gives?

I’ve also heard people say “you need to make that exception more descriptive, 404 is not accurate.”  I’m on the fence on this one.  As a .NET developer I can see where the concerns are coming from in wanting the most descriptive exception possible.  But one must realize what is happening under the hood.  The policy file is being requested as a GET request, so basically an HttpWebRequest object is our actor here.  Because of this, we return HTTP-specific errors.  There isn’t one for “Silverlight policy file found, but not correct” in the HTTP spec right now.  So because of this, we use a RESTful approach in providing a standard HTTP response.  In our case “404-not found” seems to be a valid response – indicating “The request for a valid policy file resulted in a valid policy file not being found.”  We make no distinction between partially valid or finding a specific typo, etc. – we simply indicate that a valid policy wasn’t found.  One could argue 406, 409, or 417 might be other responses, but I’m not sure that would make anyone feel any better – we’re still going to use an HTTP response code.

Hope this helps!