| Comments

Here’s how it started…

Lisa (my wife) [shouting from office into the kitchen]: Tim, what’s this Amazon charge for $193?
Me [thinking what I may have purchased and not remembered]: Um, don’t know…let me look.

I then logged into my Amazon account to see what order I may have forgotten.  Surely I didn’t order $200 worth of MP3s…that’s ridiculous.  Sure enough, nothing was there.  Immediately I’m thinking fraud.  I start freaking out, getting mad, figuring out my revenge scheme on the scammer, etc.

Then it hit me: Amazon Web Services account.

The Culprit

Sure enough, I logged in and my January 2010 bill was $193 and change.  Yikes.  Well, the roughly $30 charge I’d been averaging could slide under the family CFO’s radar for a while…but this $193 charge…the chief auditor herself caught that one.

So I panicked.  I needed to figure out where/what the spike was.  I logged into the Amazon Web Services management console (I only use the S3/CloudFront storage in their services right now) to see what was going on.  I see ‘Usage Reports’ and click.  I’m met with a bunch of essentially useless data.  No offense to Amazon, but the usage reports weren’t helpful at all.  First, they gave me a Resource ID, which I thought would represent the URI I was looking for.  Nope, Resource ID == Bucket.  And they didn’t even put the bucket name in the report!

For some perspective, here’s essentially what I’m used to – the details of my December 2009 billing statement:

December 2009 S3 CloudFront Billing

Anyhow, after some hunting it was obvious that I wasn’t going to figure out what bucket objects/unique URIs were causing my spike.  This was primarily because I didn’t have logging turned on at all on my buckets.  I had in the past but really didn’t think I needed it so I turned it off.

I was wrong – go now and enable logging.
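If you’re curious what “enable logging” actually does under the hood: S3 exposes a logging sub-resource on each bucket, and turning it on is just a PUT of a small XML document to the bucket’s ?logging address via the REST API.  Here’s a rough sketch (bucket names are placeholders; tools like the AWS console or CloudBerry – more on that below – will do this for you):

    <?xml version="1.0" encoding="utf-8"?>
    <!-- PUT to http://your-bucket.s3.amazonaws.com/?logging -->
    <BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
      <LoggingEnabled>
        <!-- the target bucket must grant write access to the S3 log delivery group -->
        <TargetBucket>your-log-bucket</TargetBucket>
        <TargetPrefix>access-log-</TargetPrefix>
      </LoggingEnabled>
    </BucketLoggingStatus>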

While I was searching for a solution to understand my traffic, I was also curious about where my traffic was going.  Like I said, I’d been averaging (actually *peaking*) at about a $30 charge for the S3 hosting.

NOTE: I use S3 for all my image/screenshot/sample code file hosting.  I’ve invested in S3 for a long time and built my blogging workflow around it, including building tools like S3 Browser for Windows Live Writer.

What was interesting was that most of my CloudFront usage was coming from Hong Kong.  Compare the December 2009 billing above to this January 2010 billing:

January 2010 Billing Statement

Yeah, that was my reaction too.  I went from roughly 40GB of transfer bandwidth to over 960GB in one month.  I suspected I knew what happened, but needed to confirm before I changed things. 

Implementing Logging for Statistics

The problem was that I didn’t have logging enabled, so I was pretty much stuck.  I needed to get some data from the logs before being sure.  I quickly found S3Stat, which appears to be the de facto reporting tool for Amazon S3 log files.  I signed up for the free trial and generated a new access key to give them.

NOTE: They have a ‘manual’ option, which means a lot more work.  I simply generated a NEW S3 access key for this specific purpose.  That way I didn’t have to give them the golden key I’ve been using in other places, and I can shut this one off at any time without disrupting my other workflows.

24 hours later, I had some reports.  Wicked cool reports.  Here’s a list of what I’m currently looking at:

  • Total hits, total files, total kbytes
  • Hits/files per hour/day
  • Hourly stats
  • Top 30 URIs
  • Top URIs by kbytes used
  • Top referrers (find out who’s using your bits without you knowing)
  • User agents

Here’s a quick snapshot of one:

S3Stat sample report image

Wow…honestly…THIS is what I was expecting when I saw “usage” data reports.  S3Stat is awesome and you should use it now.  Yes, I’m buttering them up…but they have a great tool here for $5/month if you are a heavy Amazon S3/CloudFront user.  Frankly, Amazon should just buy them and integrate this into their management console.  You can see other examples of their report outputs on their site at http://www.s3stat.com.

What I also found out is that the tool I use for my desktop management of S3/CloudFront (outside of my blogging workflow and S3 Browser) has S3Stat integration built in!  I use CloudBerry’s S3 Explorer Pro for managing my S3 content.  It’s awesome and you should look at it.  When I look at the logging features in CloudBerry I see this:

CloudBerry S3Stat dialog

And after enabling the logging, within CloudBerry I can view the log data within the tool:

CloudBerry view logging

Summary

Wow, this is incredibly helpful and insightful data.  I now know who is using my cloud storage data, how, and when, and I can slice the data in various ways.  S3Stat showed me incredible value within less than 24 hours of enabling it.  I can now confirm the culprit of the burst of usage and plan accordingly.

Now, to be clear, I’m not complaining about the cost of cloud storage.  That has been clear to me from the beginning.  Nothing is hidden, and I’m not an idiot for not understanding it.  What I did not account for was the popularity of some files…which also happened to be the largest ones.  I personally never thought I’d see a 920GB spike in one month of usage…but now I know…and have to alter some plans.

Hopefully this is helpful for those who are just exploring cloud storage solutions/services.  Make sure you have instrumentation and logging capabilities turned on so you can identify and tune your usage.  For me, S3Stat and CloudBerry are winners for my personal use.  If you are an Amazon S3 customer, I recommend looking at S3Stat and turning on logging immediately!


| Comments

One of the great things I like about some of our platform products is that they are building in extensibility more and more.  Take Windows Live Writer as an example.  It’s no secret on this blog that I’ve got a geek affair with that tool.  I use it daily and have customized it (via plugins) and my blogging platform (Subtext) to make it an even better web authoring experience for me.

Writing plugins for Writer has been a lot of fun and a great way to get the functionality I want/need into a workflow without having a different utility to work in.  Another one of these tools has been Expression Encoder 2, which I’ve been using a bit more lately.  Expression Encoder is a tool that enables the encoding of audio/video assets into VC-1 and H.264/AAC formats.  It’s a really simple tool to use and also comes with several Silverlight player templates that you can choose as part of your output.  In one click you can have your HD home movie encoded and a rich playback experience generated for you as well.  I’ve written several times about Encoder, templates, etc. before and you can see some of them here:

With no shortage of information on how to do it, I got home last night and began cranking one out.  I’ve been using Amazon’s S3 web services for a while and have really grown to like them a lot.  One of the Live Writer extensions I spoke of earlier is an S3 plugin for Live Writer that Aaron Lerch helped out with as well!  I thought I should extend Encoder so that I’d have a one-click publishing point to my S3 account instead of having to use S3Fox all the time (which is an awesome tool, btw).

So after getting home from a user group I started on it, figuring out the nuances and just coding something together.  A few hours later I came up with what I’m calling the 1.0 beta of my plugin.

It’s not a fancy UI, but it doesn’t need to be; it serves a purpose: enabling publishing of Encoder output directly to an Amazon S3 bucket in one click.  That’s it.  Encoding just media?  No problem.  Adding a template?  Not a problem either.  You simply need to enter your Amazon S3 account information and a bucket.  If the bucket isn’t there, it will attempt to create it.  You can also list your current buckets if you forgot them.

There are likely a few problems and some fit-n-finish needed.  I am positive the error handling needs to be refined, and it could probably benefit from more efficient thread handling.  The cool thing about the Encoder publishing plugins is that they are WPF user controls, so it gave me a chance to work with more XAML.

At any rate, even in the current form (which isn’t perfect but seems to be working for the specific need I built it for -- “works on my machine” warranty applies here) I wanted to share it out for any others to use and hopefully give feedback and contribute.  It’s available as an Ms-PL licensed project with source code and you can get it on CodePlex: Amazon S3 Encoder Publishing Plugin.  I hope you like it and can give feedback.  If you find issues, please log them in the Issue Tracker for the project so they are trackable.

| Comments

Amazon just released into public beta their EC2 feature enabling Windows instances.  I’m a fan of Amazon’s services and the route they’ve been taking.  I use S3 a lot, even if only as file storage for now.  I’ve written a plugin for Live Writer so that S3 is basically my repository for everything non-text on this site and others.  Of course, if S3 goes down (like it did hard a while back) I’m screwed.  Maybe something like Reserve Chute will help me in the future.

I’ve not messed with the EC2 side of their offering only because I didn’t have a need, quite frankly.  I’ve played around with the awesomeness that is Jumpbox and how easily you can get a Jumpbox appliance running on EC2.  But that’s been just messing around.  After all, I’m a Windows guy in my comfort zone, so that’s where I prefer to stay.

Today I fired up my EC2 account, got Elasticfox installed, and started to check out the Windows on Amazon stuff.  I expect a lot of people will use Elasticfox as it is easy to use.  One thing I think Amazon needs to do is provide more intuitive image naming for their manifests.  Here’s a screenshot of how “geeky” they are:

You could decipher them of course, but they should also provide “friendly” names for ease of finding them.  One thing I did notice is that when I tried to run a 64-bit instance (which I assume is the one named x86_64 – odd that the name combines x86 and 64) I get a weird error message that I can’t see the rest of:

Oh well, I moved on with the x86 configuration with SQL Server Express (2005 version). 

NOTE: Right now they only have Windows Server 2003 R2 and SQL Server 2005 versions, no 2008 versions of either.

I struggled a bit with getting the right sequence going, so here’s what I did in hopes of helping others.  Twitter came in handy, so thanks to those who pointed me in the right direction.  I did all of these FIRST before starting an instance.

First, having an instance running is fine, but generally you actually want to configure stuff on it!  Windows users/developers know and love Remote Desktop Connection…which is available on Windows Server.  Obviously you’d initially want to use that to configure your box in EC2.  Step 1, however, is to ensure that your security group allows the port traffic on TCP/3389.  Elasticfox makes this easy: go to the Security Groups tab and either modify the default group or set up a specific one for your instance:
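If you’d rather script that step than click through Elasticfox, Amazon’s EC2 API command-line tools can open the port too.  Something along these lines should do it (a sketch – the group name and source range are up to you):

    ec2-authorize default -P tcp -p 3389 -s 0.0.0.0/0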

Once that is done, I created a KeyPair (again, using Elasticfox).  You simply create the KeyPair using the KeyPair tab and give it a name.  It will prompt you to save the .pem certificate file somewhere on your machine.  Ensure you keep that handy – you will need it.
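(The command-line equivalent, if you’re curious, is roughly the following – the keypair name is arbitrary, and you save the private key block the command prints into a .pem file:)

    ec2-add-keypair my-keypair
    # copy the -----BEGIN RSA PRIVATE KEY----- block from the output
    # into a file like my-keypair.pem and keep it somewhere safe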

Once that was done I located the image I wanted to fire up and chose to create a new instance.  One important step is to ensure you select the KeyPair correctly before the instance starts spinning up:

After that, I waited until Elasticfox told me that my instance was running.  Even though it indicated “running,” I couldn’t connect to it right away.  My test showed that I could attempt to connect about 5-7 minutes after it switched into “running” mode.

The first time you either attempt to connect or attempt to get the Administrator password, you’ll be prompted for that .pem file for certificate information…just browse to it and choose it.

When I attempted to connect using Elasticfox it gave me some weird errors and failed every time…even though it gave me the Admin password and everything.  So I figured something was amiss and chose instead to fire up Remote Desktop Connection first and paste the Public DNS address into there, rather than have Elasticfox try to do some weird connection mojo for me.  Alas, it worked!  I was prompted by RDC about a certificate not matching (the certificate had the instance ID and not the public DNS name), but after accepting that, I was in.  I entered the Administrator password, and quickly changed it to something I could remember (they auto-generate a rather cryptic one for you).

Wouldn’t you know it, it’s a Windows box :-).  I configured IIS, added a hello world page, and was able to browse to it over the public Internet.  Fantastic.  Your instance name gives you an indication of the IP address, so if you wanted to CNAME something you could do that.  As an example, if your public DNS address is ec2-75-123-456-78.compute-1.amazonaws.com then your IP address is 75.123.456.78.  There is also the Elastic IP service, which I don’t claim to know anything about, but you can associate those with instances as well.
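If you do CNAME something, the record would look much like the one I show in my S3 aliasing post – for example (using the made-up address above; ‘myserver’ is a placeholder, and keep in mind the public DNS name changes if you relaunch the instance, which is part of what Elastic IPs are for):

    CNAME    myserver    ec2-75-123-456-78.compute-1.amazonaws.com.    86400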

That was it, I was up and running.  To recap:

    1. Get Elasticfox – a Firefox plugin.
    2. Configure your Amazon Web Services credentials in the plugin
    3. Go to the security group tab and enable the TCP/3389 port traffic either in the default security group or your own custom security group that you’ll associate with your instance.
    4. Generate a KeyPair for your machine
    5. Use the Elasticfox filter and type in ‘windows’ to see the different options
    6. Choose your option and create a new instance, choosing the KeyPair and Security Group of your preference
    7. Wait about 5-7 minutes AFTER it says it is running
    8. Using Elasticfox, get the administrator password (right-click on the instance for that option)
    9. Using Elasticfox, get the Public DNS Address (right-click on the instance for that option)
    10. Launch Remote Desktop Connection and paste the Public DNS address
    11. Review the certificate warning and accept if all looks good
    12. Login with the administrator password, then change it so you remember what it is
    13. Configure away!

I think this makes EC2 more intriguing to me because I can spin up a fully-fledged dedicated (virtual) box right away (and if needed, scale out to more instances).  The thing that I found curious was the pricing model.  If you create what they call a “small” instance, using SQL Express and no authentication rights (i.e., the ability to create Windows users), then you’re looking at about $90/month before any bandwidth and disk usage fees apply – that is compute time only.  Also, if you add SQL Server Standard to that, the price jumps significantly (by my calculation to about $600/month).
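For reference, that $90 figure is just the hourly meter running all month – assuming the roughly $0.125/hour rate for a small Windows instance:

    $0.125/hour × 24 hours × ~30 days ≈ $90/month (compute only)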

NOTE: My EC2 instance registered itself as a dual-core AMD, 2.7 GHz, 2GB RAM.

Compare that to a place like ServerBeach, where I can get a dedicated machine for around $130-150/month (with 2 TB bandwidth and 160GB storage) and add SQL Standard for $275 (I’m not sure if that is monthly or one-time)…which seems lower than my fees with EC2 before bandwidth.  Now granted, with a dedicated box I couldn’t spin up more instances if I needed to (or not without significant cost and inflexibility), but for the “one server” guy scenario, the pricing has me a bit perplexed.  For the person who wants to move from shared hosting to dedicated/virtual, this might be an option if no database, SQL Express, or another open source database works for you.

Regardless, it is interesting.  It is useful to have these cloud computing services around so you can spin up servers almost instantly, play around with scenarios (like perhaps some Silverlight cross-domain or SSL scenarios), and shut them down when done.  As a developer, very handy.  I’m also looking forward to seeing if any rumors are true about next week at PDC.

| Comments

After posting my sample implementation of accessing Amazon Simple Storage Service (S3) via Silverlight, I reflected quickly and also chatted with some AWS engineers.

Cross-domain Policy

One thing that you should never do is deploy a global clientaccesspolicy.xml file blindly.  Oftentimes in samples, we (I) do this.  I need to be better about this guidance, to be honest, so I’ll start here.  As an example, for the S3 cross-domain policy file we really should add some additional attributes to make it more secure.  Since we know it is a SOAP service, we can ratchet down the requests a little bit by adding the http-request-headers restriction like this:

    <?xml version="1.0" encoding="utf-8" ?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <allow-from http-request-headers="SOAPAction,Content-Type">
            <domain uri="*"/>
          </allow-from>
          <grant-to>
            <resource include-subpaths="true" path="/"/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>

Additionally (and ideally) we’d be hosting our application from a known domain.  In this instance let’s say I was going to host my application on timheuer.com in the root domain.  I would add that domain to the allow-from list and complete my security like this:

    <?xml version="1.0" encoding="utf-8" ?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <allow-from http-request-headers="SOAPAction,Content-Type">
            <domain uri="http://timheuer.com"/>
          </allow-from>
          <grant-to>
            <resource include-subpaths="true" path="/"/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>

Of course if I had a cool application and others wanted to embed it, I could add more domains to that allow list as well.  But restricting it makes sense if you want to provide secure access to your APIs (as a service provider) and to yourself (when doing things like this sample).
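For example, allowing a couple of specific sites just means listing multiple domain elements (the second domain here is obviously a placeholder):

    <allow-from http-request-headers="SOAPAction,Content-Type">
      <domain uri="http://timheuer.com"/>
      <domain uri="http://www.example.com"/>
    </allow-from>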

More security with SSL

As I mentioned in the initial sample, I changed the binding configuration, modifying the binding to use a security mode of “None” instead of “Transport.”  I actually did this because I use the built-in web server from Visual Studio for most of my development and it doesn’t support HTTPS connectivity.  To demonstrate my sample with S3 I had to ensure the schemes matched, because in Silverlight 2 right now, to access a secure service the XAP itself has to be served from a secure location.  The contexts must match.

I’ve come to learn that even with a bucket alias (except ones with “.” characters in the name) you can use the SSL cert from Amazon S3, as it is a wildcard certificate.  So your endpoint (assuming a bucket name of timheuer-aws) could be https://timheuer-aws.s3.amazonaws.com/soap and it would work.
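A sketch of what the ServiceReferences.clientconfig from the original sample would look like switched back to Transport against that wildcard-cert endpoint (using the timheuer-aws example bucket from above):

    <binding name="CustomAWS" maxBufferSize="65536" maxReceivedMessageSize="65536">
      <security mode="Transport" />
    </binding>
    <!-- ...and in the <client> node... -->
    <endpoint address="https://timheuer-aws.s3.amazonaws.com/soap"
              binding="basicHttpBinding" bindingConfiguration="CustomAWS"
              contract="Amz.AWS.AmazonS3" name="AmazonS3" />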

Using SSL of course means that currently you will have to serve your application from an SSL endpoint as well to avoid cross-scheme violations.

I hope this helps clear some things up and provide you with a more secure and recommended way of accessing Amazon S3 services with Silverlight!

| Comments

I ran into an interesting situation last week…the desire to access some of my Amazon S3 services from within a Silverlight application.

Amazon Simple Storage Service (S3) is a pay service provided by Amazon for object storage ‘in the cloud.’  Although there is no UI tool provided by Amazon to navigate your account in S3, SOAP and REST APIs are available for developers to integrate S3 information into their applications or other uses.  You can view more information about Amazon S3 on their site.

What is S3?

Since S3 is a pretty flexible service, it can be used for many different things, including storing “objects” like serialized representations of a type.  A lot of applications these days are using cloud storage to store their application objects like this.  For me, it is a file server in the cloud.  Mostly it is there to host my images on this site as well as downloads and such.  Because there is no user interface, I collaborated with Aaron to make the S3 Browser for Live Writer so that I could access my files when I need them while posting content.

Accessing content from S3 also has different meanings, even with regard to Silverlight.  For example, if you wanted to simply set an Image source to a hosted image on S3, you could easily do that using the URL provided by S3 for the object.  Since Silverlight allows media assets to be sourced from anywhere, this is not a problem.
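Something as simple as this works fine (the bucket/file names here are stand-ins, following the “timheuer” bucket example later in this post):

    <Image Source="http://timheuer.s3.amazonaws.com/foo.png" />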

The Problem

The problem comes when you want to download content in other ways – such as a file stored there or a serialized object – or access their services from a Silverlight application.  Why is this a problem?  Because S3 does not expose a cross-domain policy file in their implementation.  RIA platforms like Silverlight require a policy file from the providing service to exist in order to make cross-domain calls from the platform.

You can read more about Silverlight’s cross-domain policy information here:

 

So how can we accomplish this?

The S3 Service APIs

Amazon exposes two service APIs: SOAP and REST.  Because their REST service requires an Authorization header – a restricted header that Silverlight cannot modify at this time – we are unable to use it.  So we can use their SOAP service.  This is fine for Silverlight because Amazon provides a WSDL for us to generate a proxy with.  The defined endpoint in the WSDL for the service is https://s3.amazonaws.com/soap.  This is important, so remember it.  Let’s move on.

Buckets to the rescue!

S3 uses a concept they call buckets to store information in containers.  I’m not going to go into a lot of detail explaining this concept, so if you want to learn more, read their documentation.  Basically a bucket is globally unique to the service (not to your account)…there can only be one “timheuer” bucket across the entire service, so name yours accordingly :-).  All data you push to S3 must be in a bucket.  When you create a bucket, you can also access content in that bucket using a domain shortcut system.  For example, when you create a bucket called “timheuer” and put a file in there called foo.txt, it has the URI http://timheuer.s3.amazonaws.com/foo.txt.  Notice the aliasing that is happening here.  We can now use this method to solve some of our issues.  How?  Well, the “/soap” key will be available at any bucket endpoint!

Because of this aliasing, we can use this mechanism to ‘trick’ the service into answering the policy file request at our own endpoint…so let’s review how we’ll do this.

Step 1: Create the bucket

There are different ways you could do this.  You could simply create a bucket called “foo” and use that, or you can completely alias a domain name.  I’m choosing to completely alias a domain name.  Here’s how I did it.  First I created a bucket called timheueraws.timheuer.com – yes, that full name.  That’s a valid bucket name, and you’ll see why I used the full one in a moment.

Step 2: Alias the domain

If you have control over your DNS, this is easy; if you don’t, you may want to use simple bucket aliasing.  I went into my DNS (I use dnsmadeeasy.com btw, and it rocks, you should use it too) and added a CNAME record to my domain (timheuer.com):

    CNAME    timheueraws    timheueraws.timheuer.com.s3.amazonaws.com.    86400

What does this mean?  Any request to timheueraws.timheuer.com will essentially be served by timheueraws.timheuer.com.s3.amazonaws.com.  The last parameter is the TTL (time-to-live) in seconds.

UPDATE: For security reasons you should actually stick with using only a bucket name and not a CNAME'd bucket.  This will enable you to use the SSL certificate from Amazon and make secure calls.  For example, a bucket named "foo" could use https://foo.s3.amazonaws.com/soap as the endpoint.  This is highly advisable.

Step 3: Create the clientaccesspolicy.xml file

Create the policy file and upload it to the bucket you created in step 1.  Be sure to set the access control list to allow ‘Everyone’ read permissions on it, or you’ll have a problem even getting to it.
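As a starting point, the policy file looks essentially like this (it’s the same file discussed in the follow-up post, where I also show how to restrict the allowed domains further):

    <?xml version="1.0" encoding="utf-8" ?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <allow-from http-request-headers="SOAPAction,Content-Type">
            <domain uri="*"/>
          </allow-from>
          <grant-to>
            <resource include-subpaths="true" path="/"/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>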

Creating the Silverlight application

Create a new Silverlight application using Visual Studio 2008.  You can get all the tools you need by visiting the Getting Started section of the Silverlight community site.  Once created let’s point out the key aspects of the application.

Create the Amazon service reference

In your Silverlight application, add a service reference.  The easiest way to do this is by right-clicking on the Silverlight project in VS2008 and choosing Add Service Reference.  Then in the address area, specify the Amazon S3 WSDL location:

This will create the necessary proxy class code for us as well as a ServiceReferences.clientconfig file.

Write your Amazon code

Now for simple purposes, let’s just list out all the buckets for our account.  There is an API method called “ListAllMyBuckets” that we’ll use.  Amazon requires a Signature element with every API call – it is essentially the authentication scheme.  The Signature is a hash of the request plus your Amazon secret key (something you should never share).  This can be confusing to some, so after perusing various code libraries in the Amazon doc areas, I came up with a simplified S3Helper to do this Signature generation for us.

    public class S3Helper
    {
        private const string AWS_ISO_FORMAT = "yyyy-MM-ddTHH:mm:ss.fffZ";
        private const string AWS_ACTION = "AmazonS3";

        // Grab the current (local) time; it gets converted to UTC when the
        // timestamp is formatted for the request.
        public static DateTime GetDatestamp()
        {
            return DateTime.Now;
        }

        // Format the timestamp in the ISO 8601 form Amazon expects.
        public static string GetIsoTimestamp(DateTime timeStamp)
        {
            return timeStamp.ToUniversalTime().ToString(AWS_ISO_FORMAT,
                System.Globalization.CultureInfo.InvariantCulture);
        }

        // The Signature is an HMAC-SHA1 hash of "AmazonS3" + operation name +
        // timestamp, keyed with your secret key and base64-encoded.
        public static string GenerateSignature(string secret, string S3Operation, DateTime timeStamp)
        {
            Encoding ae = new UTF8Encoding();
            HMACSHA1 signature = new HMACSHA1(ae.GetBytes(secret));
            string rawSignature = AWS_ACTION + S3Operation + GetIsoTimestamp(timeStamp);
            return Convert.ToBase64String(signature.ComputeHash(ae.GetBytes(rawSignature.ToCharArray())));
        }
    }

This will abstract that goop for us.

Let’s assume we have a ListBox in our Page.xaml file that we’re going to populate with our bucket names.  Mine looks like this:

    <ListBox x:Name="BucketList">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Name}" />
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>

So let’s just add a simple method to our Loaded event handler calling our ListAllMyBuckets method.  Remember, everything in Silverlight with regard to services is asynchronous, so we’re actually going to call ListAllMyBucketsAsync from our generated code.  We’ll need a completed event handler where we will put our binding code.  Here’s my complete code for both of these:

    void Page_Loaded(object sender, RoutedEventArgs e)
    {
        DateTime timeStamp = S3Helper.GetDatestamp();

        s3 = new AWS.AmazonS3Client();

        // Everything is async in Silverlight: wire up the completed handler,
        // then kick off the request with our access key and computed signature.
        s3.ListAllMyBucketsCompleted += new EventHandler<AWS.ListAllMyBucketsCompletedEventArgs>(s3_ListAllMyBucketsCompleted);
        s3.ListAllMyBucketsAsync(AWS_AWS_ID, timeStamp, S3Helper.GenerateSignature(AWS_SECRET_KEY, "ListAllMyBuckets", timeStamp));
    }

    void s3_ListAllMyBucketsCompleted(object sender, AWS.ListAllMyBucketsCompletedEventArgs e)
    {
        if (e.Error == null)
        {
            // Bind the returned bucket entries to the ListBox.
            AWS.ListAllMyBucketsResult res = e.Result;
            AWS.ListAllMyBucketsEntry[] buckets = res.Buckets;
            BucketList.ItemsSource = buckets;
        }
    }

The AWS_AWS_ID and AWS_SECRET_KEY are constants in my application that represent the access key and secret for my S3 account.  You’ll notice that in the snippet above the “s3” object isn’t declared – that is because I have a global that defines it (you can see it all in the code download).
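For completeness, those bits are just declarations along these lines (values swapped out, obviously – and remember that anything compiled into a XAP can be extracted, so treat that secret accordingly):

    // Illustrative declarations – substitute your own credentials.
    private const string AWS_AWS_ID = "YOUR-ACCESS-KEY-ID";
    private const string AWS_SECRET_KEY = "YOUR-SECRET-KEY";
    private AWS.AmazonS3Client s3;   // the global used in the snippet above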

So now if we run this application, we should expect to see our bucket list populated in our ListBox, right?  Wrong.  We get a few exceptions.  First, we get an error because Silverlight can’t find the cross-domain policy file.  Ah yes, remember we’re still using the default Amazon SOAP endpoint.  We need to change that.

Changing the Amazon endpoint

You may notice that the SOAP endpoint for S3 uses an https:// scheme.  We won’t be able to use that with this method because of our aliasing – the SSL certificate wouldn’t match our alias.  So we need to change our endpoint.  There are two ways we can do this.

We could change this in code by creating a new BasicHttpBinding and EndpointAddress and passing them into the constructor of new AWS.AmazonS3Client() – but that would be putting our configuration in code.  Remember that when we added the service reference we were provided with a ServiceReferences.clientconfig file.  Open that up and check it out.  It provides all the configuration information for the endpoint.  Now we could just change a few things.  I decided to create a new <binding> node for my use rather than alter the others.  I called it “CustomAWS” and copied it from the existing one that was there.  Because the default endpoint for S3 is a secure transport and we cannot use that, we have to change the <security> node to mode=”None” so that we can use our custom endpoint URI.

The second thing we do is, in the <client> node, change the address attribute and the bindingConfiguration attribute (to match the new config we just created).  Mine now looks like this in its entirety:

    <configuration>
        <system.serviceModel>
            <bindings>
                <basicHttpBinding>
                    <binding name="AmazonS3SoapBinding" maxBufferSize="65536" maxReceivedMessageSize="65536">
                        <security mode="Transport" />
                    </binding>
                    <binding name="AmazonS3SoapBinding1" maxBufferSize="65536" maxReceivedMessageSize="65536">
                        <security mode="None" />
                    </binding>
                    <binding name="CustomAWS" maxBufferSize="65536" maxReceivedMessageSize="65536">
                        <security mode="None" />
                    </binding>
                </basicHttpBinding>
            </bindings>
            <client>
                <endpoint address="http://timheueraws.timheuer.com/soap" binding="basicHttpBinding"
                    bindingConfiguration="CustomAWS" contract="Amz.AWS.AmazonS3"
                    name="AmazonS3" />
            </client>
        </system.serviceModel>
    </configuration>
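(For reference, the code-only alternative mentioned above would look roughly like this – same endpoint, just built by hand instead of read from config:)

    // Sketch of configuring the endpoint in code instead of in
    // ServiceReferences.clientconfig (not the approach I used above).
    BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
    EndpointAddress address = new EndpointAddress("http://timheueraws.timheuer.com/soap");
    AWS.AmazonS3Client s3 = new AWS.AmazonS3Client(binding, address);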

Now when we run the application it will work, and if we sniff the traffic we’ll see that the first request is to our clientaccesspolicy.xml file, which enables us to continue with the /soap requests:

Now we can see a list of our buckets, and after wiring up some other code, clicking on a bucket shows a list of all its objects, bound to a DataGrid (details blurred for privacy):

Summary

Sweet, we’re done!  We’ve now been able to provide our own clientaccesspolicy.xml file in a place where it didn’t exist before and be able to call the service.  We can now use other methods perhaps to create a new bucket, put objects in buckets, etc.

So in order to do this, we’ve:

    • Created an alias to a bucket in our S3 account
    • Uploaded a clientaccesspolicy.xml file to that bucket
    • Changed the endpoint configuration in our service reference
    • Called the services!

I’ve included all the files for the above solution in the download file.  You’ll have to provide your own access key/secret of course as well as specify the endpoint address in the ServiceReferences.clientconfig file.

Hope this helps!  Please read Part 2 of this post.