
I’ve been spending a lot of time looking at the GitHub Actions experience for .NET developers.  Right now I still use Azure Pipelines for my project, Alexa.NET, to build, test, and deploy to NuGet.  The tools and processes for CI/CD have improved so much over the years that I’ve become a huge advocate for using them for your build/deploy steps…YES, even as a single developer or a small team.  It helps preserve your code’s ability to live on, lets others build it accurately, and gives you peace of mind that your code works as intended.  I’m just such a huge fan now.  That said, I still think there is a place for ‘right-click publish’ activities in inner-loop development.  In fact, I use it regularly for a few internal apps I’ve written.  For simple solutions that method works well, but I certainly can’t right-click-publish a full solution to a Kubernetes environment.  I’m currently researching new tooling to help with ‘publish to CI/CD’ from Visual Studio (I would love your opinions here), so I’ve been spending a lot more time in GitHub Actions.  I decided to look at publishing a Blazor app to Azure Storage as a static site…here’s what I did.

Setting up the Storage endpoint

The first thing you need is an Azure Storage account.  Don’t have an Azure account?  No worries, you can get a free Azure account easily, which includes up to 5GB of Azure Blob Storage free for the first 12 months.  Worried about pricing afterwards?  Check out the storage pricing calculator and I’m sure you’ll see that even at 1TB it is a cost-effective means of storage.  At any rate, you need a storage account, and here is the configuration you need.

First, you may already have one.  As a developer, do you create your own infrastructure resources, or are they provisioned for you by infra/devops roles in your company (leave comments)?  Earlier this year at Build we enabled static website hosting in Azure Storage.  You first create a Storage resource, ensuring you choose v2 (the default), as that is the version that enables this feature.  After you create your resource, scroll down on the left and you’ll see the ‘Static website’ section.  Here’s what the configuration looks like; let me explain a few areas here:

Screenshot of the Static website configuration
All of this configuration is under the Static website area.  First, you obviously need to enable it…that’s just toggling the enabled/disabled capability.  Enabling this gets you two things: two endpoints (basically the URIs to the website) and a specific blob container named $web where your static content needs to live.  The endpoints map to this blob container by default without having to add a container name to the root URI.  Remember the resource group you’ve given to your storage instance here; you will need it later.

NOTE: You can later add CDN/custom domain to these endpoints, but I’m not covering those here.

The second thing you need is to set a default document and error page.  The default document for your SPA is your root entry point, and for most frameworks I’ve seen this is indeed index.html.  For Blazor WebAssembly (WASM) apps, this is also the default if you are using the template.  So set the default document to ‘index.html’ and move on.  The error document path is an interesting one…you need to set this for SPA apps because right now the static website capability of Azure Storage does not account for custom routing rules.  What this means is that storage will throw an HTTP 404 error when you go to something like /somepage, which doesn’t actually exist as a file but which your SPA framework knows how to handle.  Until custom routing works on Azure Storage, your error document becomes your route entry point.  So set the error document path to index.html as well for Blazor WASM.

NOTE: Yes this isn’t ideal for routing.  On top of that it still does show an HTTP 404 actual network message even though your route is being handled.  Azure Storage team has heard this request…working on advocating for y’all.
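If you prefer scripting to clicking through the portal, the same settings can be applied with the Azure CLI; a minimal sketch, assuming a storage account named mystorageaccount (replace with yours):

# enable static website hosting with index.html as both default and error document
az storage blob service-properties update \
    --account-name mystorageaccount \
    --static-website \
    --index-document index.html \
    --404-document index.html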

That’s it.  Now you have a storage endpoint with a blob container that you can begin putting your content in and browse to using your endpoint URI provided from the portal.  For a simple tool to navigate your storage, I’ve been using Azure Storage Explorer and it is intuitive to me and works well to quickly navigate your storage account and containers (and supports multi-account!). 

Setting up your Azure Service Principal credentials

The next thing you will need is a service principal credential.  This will be used to authenticate with your Azure account so that DevOps tools can work on your behalf in your account.  It’s a simple process if you have a standard account.  I say this only because I know there are some environments where you don’t have access to create service principals yourself and may need someone to create one on your behalf, or there may be credentials you can already use.  Either way, here is the process I used.

I used the Azure CLI, so if you don’t have that installed go ahead and grab it and install it.  It should then be on your PATH, and using your favorite terminal you can start executing commands.  To start, log in to the CLI using `az login` – this will launch a browser for you to authenticate with your account and then issue a token in your environment so that you stay authenticated for the remainder of your session.  After logging in successfully, running `az account show` will show which subscription you are using; you’ll need the subscription ID later, so grab it and put it on your scratch notepad for later command usage.

NOTE: If you have more than one subscription and have not set a default subscription you should set that using the `az account set` command.
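Putting that together, the whole login dance is just a few commands (the --query/--output flags are only a convenience to print the subscription ID by itself):

# authenticate (launches a browser)
az login

# show the current subscription; grab the ID for later
az account show --query id --output tsv

# only needed if you have multiple subscriptions
az account set --subscription <subscription-id>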

Now you can use the CLI to create a new service principal.  To do that issue this command:

az ad sp create-for-rbac --name "myApp" --role contributor \
                            --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
                            --sdk-auth

Note that in the second line you need to replace {subscription-id} with your actual subscription ID (the GUID) and {resource-group} with the resource group name where your storage account is located.  In the first line, “myApp” can be anything, but I recommend making it meaningful, as this is basically the account name of the principal.  The output of this command is your service principal: the full JSON output.  Save it off somewhere for now, as we’ll need it later to configure GitHub Actions properly.
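For reference, the --sdk-auth output is shaped roughly like this (GUIDs redacted, and several additional endpoint URL properties trimmed for brevity):

{
  "clientId": "<GUID>",
  "clientSecret": "<GUID>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/"
}

Great, now to move on to the app!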

Create your Blazor WASM app

I assume that since you are reading this far you aren’t new to Blazor and probably already have the tools.  As of this writing, Blazor WASM is still in preview, so you have to install the templates separately for them to show up in `dotnet new` and in Visual Studio’s File…New Project.  To do that, from a terminal run:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.1.0-preview4.19579.2
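If you’d rather stay in the terminal, you can also create the project from the CLI once the templates are installed; the template short name is blazorwasm (the project name here just mirrors the one my workflow references later):

dotnet new blazorwasm -o BlazorApp27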

Then you will be able to create a new project.  I’m showing Visual Studio here and this is the WASM template:

Screenshot of Blazor new project dialog

In the dialog you will see Blazor WebAssembly App, and that’s what you will use.  You have a choice to make it ASP.NET Core hosted using the checkbox; if you were going to do other things in ASP.NET, maybe that’s what you want.  For the purposes of this article we just want the WASM app and a place to host it that isn’t a web server with other content…just static content hosted in storage…so we aren’t checking that box.  The result will be a Blazor WASM app with no host.  Now let’s add that to GitHub.  If you are using Visual Studio 16.4+ you’ll be able to take advantage of an improved flow for pushing to GitHub.  Once you have your project, click ‘Add to Source Control’ in the lower right, choose Git, and you’ll see the panel to choose GitHub and create/push a repo right away.  You don’t have to go to the GitHub site first and clone later…it’s all one step:

Animation of Visual Studio GitHub flow

Great!  Now we have our WASM project and we’ve created and pushed the current bits to a new GitHub repository.  Now to create the workflow.

Setup the GitHub Action Workflow

Now we’ve got an Azure Storage blob container, a service principal, a Blazor WASM project, and a GitHub repository…all set to configure the CI/CD flow.  First let’s put that service principal as a secret in our repository.  In the settings of your repository, navigate to the Secrets section and add a secret named AZURE_CREDENTIALS.  The content of this is the full content of your service principal (the JSON blob) that we generated earlier:

Screenshot of GitHub Secrets configuration

This saves the secret for us to use in the workflow and reference as a variable.  You can add more secrets here if you’d like; adding your storage account name as well is probably a good idea.  Secrets are isolated to the original repository, so forks never get the secrets.  Now that we have these, let’s create the workflow file.

Today, Visual Studio isn’t too helpful in authoring the YAML files (I would love your feedback here too!), but a GitHub Action workflow is just a YAML file in a specific location in your repository: .github/workflows/azure-storage-deploy.yaml.  The file name can be anything, but putting it in that folder structure is what is required.  You can start in the GitHub repo itself using the Actions tab, where the online editor gives some level of completion assistance to help you navigate the YAML editing.  Go to the Actions tab in your repository and create a new workflow.  You’ll be offered a starter workflow based on what GitHub thinks your project is.  As of this writing it thinks Blazor apps are Jekyll workflows, so you’ll need to expand the options and either find the .NET Core one or just start from a blank workflow yourself.

Screenshot of GitHub Actions config

For my workflow I want to build, publish, and deploy my app.  I’ve separated it into build and deploy jobs.  You can read all about the various aspects of GitHub Actions in their docs with regard to jobs and other syntax, as I won’t try to expound on that in this article.  Here is my full YAML for the entire workflow, with some key areas called out below:

name: .NET Core Build and Deploy

on: [push]

env:
  AZURE_RESOURCE_GROUP: blazor-deployment-samples
  BLOB_STORAGE_ACCOUNT_NAME: timheuerblazorwasm

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.100

    - name: Build with dotnet
      run: dotnet build --configuration Release
    
    - name: Publish with dotnet
      run: dotnet publish --configuration Release 
    
    - name: Publish artifacts
      uses: actions/upload-artifact@v1
      with:
        name: webapp
        path: bin/Release/netstandard2.1/publish/BlazorApp27/dist

  deploy:
    needs: build
    name: Deploy
    runs-on: ubuntu-latest
    steps:

    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v1
      with:
        name: webapp

    # Authentication
    - name: Authenticate with Azure
      uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    # Deploy to storage using CLI
    - name: Deploy to storage using CLI
      uses: azure/CLI@v1
      with:
        azcliversion: latest
        inlineScript: | 
          # show azure account being used
          az account show
          # az storage account upload
          az storage blob upload-batch --account-name ${{ env.BLOB_STORAGE_ACCOUNT_NAME }} -s webapp -d \$web
          # az set content type of wasm file until this is fixed by default from Azure Storage
          az storage blob update --account-name ${{ env.BLOB_STORAGE_ACCOUNT_NAME }} -c \$web -n _framework/wasm/mono.wasm --content-type application/wasm

So there are a few things going on here; let’s talk about them.

  • The env block defines ‘local’ environment variables I set up.  The storage account name is NOT the blob container name but the actual storage account name.  This ideally should probably be a Secret as mentioned above.  Environment variables set here can be referenced by placeholders later.
  • The build job is where the ‘build’ portion happens.  We check out the code, acquire the SDK, and run the build and publish commands.  The ‘Publish artifacts’ step puts the publish output into a named artifact location for later retrieval by other jobs.  This is good practice.
  • The ‘Download artifacts’ step is where the ‘deploy’ job of our CD begins; we retrieve the artifacts we previously pushed under the name ‘webapp’ for the later steps to use in deployment.
  • The ‘Authenticate with Azure’ step is where we first authenticate to Azure using our service principal retrieved from the Secrets.  The ‘secrets’ object is not something you have to define; it is part of the workflow, so you just reference the property you want to retrieve.
  • The ‘Deploy to storage using CLI’ step starts the deployment to Azure using CLI commands with our artifact ‘webapp’ as the source.  This is the CLI command for uploading a batch to storage as described in the docs for `az storage blob upload-batch`.
  • The final `az storage blob update` command is an additional step we need for .wasm files.  I believe this to be a bug, because there is logic in the CLI to correctly map the content type but for some reason it is not working…so for now you need to set the content type for .wasm to `application/wasm` or the Blazor app will not work.

This is made possible through a series of actions (checkout, setup-dotnet, azure/login, azure/cli), all bringing functionality we can draw on and configure.  There are a bunch of Azure GitHub Actions we just released for specific tasks like deploying to App Service and such.  Those don’t require CLI commands but rather just parameters to configure.  Because there is no Storage-specific Action as of now, we can use the general CLI action to script what we want.  It is an enabler in lieu of a more strongly-typed action.  Now that we have this workflow YAML file complete, we can commit and push it to the repository.  In doing so we now have a CI/CD workflow that will trigger on any push (because that’s how we configured it).  We can see this action happening in my sample repo, and since we separated it into two jobs, they show separately:

Screenshot of action deployment log

Summary

So now we have it complete end-to-end.  Any subsequent check-in will trigger the workflow and deploy the bits to my storage account, and I can now use my Azure Storage account as the host for my static website built on Blazor WASM.  The full YAML sample flow is available on my repo for this, and you can examine it in more detail.

I would love to know how y’all are coming along using GitHub Actions with your .NET projects.  Please comment below! 


I’ve been doing a lot of playing around with GitHub Actions lately.  GitHub has had access to repo activity via webhook capabilities for a while.  Actions basically gives similar capabilities in a DevOps flow on the repo itself, where the code for your ‘hook’ is an asset in the repo…using YAML configuration.  Recently an idea came up in one of our teams to provide better proactive notification of certain types of Issues on our repos.  In GitHub, you can monitor activity in a few ways as a consumer: watching the repo and subscribing to a conversation.  Watching a repo gets you a lot of noise from all the notifications.  Subscribing to an Issue is something you can only do after the issue exists, so you get no notification when it is initially created.  What I wanted was simple: notify me when an Issue is added to a repo that has been labeled as a breaking change.  So with that goal I set off to create this.

Creating the Action

Creating the action was simple.  I followed the great javascript-action template.  I recommend following the instructions in the template rather than the actual documentation, as it is simpler to follow and more concise.  The cool thing about the template is you can click ‘Use this template’ and get a new repo for your action quickly:

Screenshot of the ‘Use this template’ button on the javascript-action template repo

I was able to configure my action quickly.  My goal was to accomplish the following things:

  • Look at an Issue
  • If the issue has a specific label (or one of multiple labels), grab the content of the issue
  • Convert the contents from markdown to HTML and send an email to a set of folks

Actions are JavaScript apps, and I was able to use two libraries to help me achieve this quickly: remarkable (to convert Markdown) and SendGrid (to email).  Aside from those, you can use the GitHub Actions toolkit packages to get access to the ‘context’ of what the Action is…well, acting upon.  Having this context, I can examine the payload and the specific Issue within it.  It looks something like this (the key part is getting the context and the issue from its payload at the end):

var core = require('@actions/core');
var github = require('@actions/github');
var sendgrid = require('@sendgrid/mail');
var moment = require('moment');
var Remarkable = require('remarkable').Remarkable;
var shouldNotify = false;

// most @actions toolkit packages have async methods
async function run() {
  try { 
    // set SendGrid API Key
    sendgrid.setApiKey(process.env.SENDGRID_API_KEY);

    // get all the input variables
    var fromEmail = core.getInput('fromMailAddress');
    var toEmail = core.getInput('toMailAddress');
    var subject = core.getInput('subject');
    var verbose = core.getInput('verbose');
    var labelsToMonitor = core.getInput('labelsToMonitor').split(",");
    var subjectPrefix = core.getInput('subjectPrefix');

    // check to make sure we match any of the labels first
    var context = github.context;
    var issue = context.payload.issue;
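    // ...the rest of run() checks issue.labels against labelsToMonitor,
    // converts the body with Remarkable, and sends the mail via SendGrid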

This context gives me all I need to inspect the Issue contents, labels, etc.  From that I can decide whether I need to perform the notification, convert the content, and send the email.  Simple and done.
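For completeness, publishing an action also means describing its inputs and entry point in an action.yml metadata file.  Here’s a sketch of what that could look like based on the inputs read in the code above; the descriptions and the runs section (node12 with the dist/index.js bundle the javascript-action template produces) are my assumptions, not a copy of the real file:

name: 'issue-notifier'
description: 'Emails a set of folks when a monitored label shows up on an issue'
inputs:
  fromMailAddress:
    description: 'Address the mail is sent from'
    required: true
  toMailAddress:
    description: 'Address the mail is sent to'
    required: true
  subject:
    description: 'Subject line of the mail'
  subjectPrefix:
    description: 'Prefix applied to the subject line'
  verbose:
    description: 'Enables verbose logging'
  labelsToMonitor:
    description: 'Comma-separated list of label names to match'
    required: true
runs:
  using: 'node12'
  main: 'dist/index.js'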

Consuming the Action

Since the Action is now defined, something needs to consume it.  This comes in the form of a GitHub workflow: a YAML file that defines when to run and what to do.  Specifically, you define a trigger.  Triggers can be things like when a push happens, when a PR is opened, or, in my case, when Issue activity happens.  So now on my repo I can consume the action and decide when it should operate.  As an example, here is how I’m consuming it by putting a YAML file in the .github/workflows folder of my repo:

name: "bc-notification"
on: 
  issues:
    types: [edited, labeled]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - uses: timheuer/issue-notifier@v1
      env:
        SENDGRID_API_KEY: ${{ secrets.SENDGRID_API }}
      with:
        fromMailAddress: '${{ secrets.BC_NOTIFY }}'
        toMailAddress: '${{ secrets.BC_NOTIFY }}'
        subject: 'BC:'
        subjectPrefix: 'BC:'
        labelsToMonitor: "Breaking change"

Looking at this workflow, you can see that I’m triggering on Issues, and then only when they are edited or labeled.  Later, the workflow uses my new action (now published as a tagged version) called issue-notifier.  Done.  Now whenever an Issue is labeled as a breaking change in this repo, an email is sent proactively to a set of partners, without them having to know there may be something they want to subscribe to in the repo.  Here is an example of seeing it triggered:

Animation of the workflow triggering on a labeled Issue

and the result notification in my inbox:

Screenshot of the resulting notification email

Dev experience for Actions

I’ve had a good experience working with GitHub Actions and learning the various ways of automating a few things beyond just builds in my repos.  My #1 wish for the ‘inner loop’ experience of creating Actions is the debugging experience.  You have to actually push the workflow and trigger it to ‘test’ it, which leads to a slow inner-loop development flow.  It would be nice to have some local runner capability to streamline this process and not muddy the repo with a bunch of check-ins fixing dumb things as you iterate.

Anyhow, if you want to use this action I created, feel free: https://github.com/marketplace/actions/github-issue-notifier


I’m an avid user of Twitter (join me on Twitter @timheuer) for things social, family, and technical.  I use it to keep in touch with friends, learn things from technical sources, get news, and otherwise interact with names I’ve never met in person.  My use of Twitter has changed much over the years, and I’ve found myself using the web site more and more while on the desktop where a full browser is available.  On mobile I use native clients, though presently not one of the ‘official’ Twitter apps (I find them less full-featured than the 3rd-party ones, which also perform better for me).  In my use of the web site I’ve noticed more and more links that say “view summary” in my feed.

Twitter Summary Card sample

Sample Twitter Summary Card from dev.twitter.com

When expanded, the Twitter post (which is limited to 140 characters itself) basically provides more information about the post.  The most frequent I’ve seen is what Twitter calls a Summary Card, which reads metadata from the URL posted in the Tweet, effectively extending the 140-character limitation a bit (summary descriptions are themselves limited to 200 characters).  I post most of my blog posts to Twitter and thought it would be a great little enhancement to provide this summary data in the feed view (it isn’t expanded by default, so users must explicitly click ‘view summary’, and thus there is no real risk of making your Twitter feed noisier without choosing to read more).

Learning about Twitter Cards

I recently learned more about these after a quick exchange with @JeffSand (former director at Channel 9, now at Twitter) about another implementation (more on that later) and thought I’d get into doing the Summary Card for my own personal use.  Twitter has some pretty decent documentation on this topic on their Dev center.  There are a few types of Cards available for content authors to leverage and the great thing is that you, the content author, really just have to add metadata to your content and that’s it.  Twitter feeds/apps do the rest by interpreting and displaying that data in a Card view in the feed.

Summary Cards seem the most popular to me, but that could just be my own usage.  The others I could see adding value are the App info/install and deep-linking ones, though these presently seem to add the most value in the mobile space rather than on the desktop.

  • App Cards – provide a link to mobile apps and product page information.  This is currently limited to iOS and Google Play store listings it seems.  There is also some documentation on App Install/Deep-linking techniques.
  • Photo Cards – provide a good representation of an image in-line.  You see this for images from Twitter, Flickr, SkyDrive, etc.  Sadly, Instagram chose to pull this support for their usage last year (yay customers…grrr).
  • Player Card – seems great for podcast publishers!  You see YouTube usage of this most of the time as well.
  • Product Cards – I’ve not seen usage of this in my interactions but I would imagine we’ll see more of this in sponsored posts/ads as they become more prevalent

After reading about these, I set out to add this metadata to my own content on my site.

Modifying my blog engine

At present I use Subtext as my blogging engine of choice.  This is due to choices made long ago to use .Text and then migrate to Subtext when that project transitioned.  Subtext has served me very well over the years but is showing its age recently, as I’ve been wanting to modernize some things and leverage the vast ecosystem of WordPress-style themes and plugins.  Still, it is Open Source, written in ASP.NET, and so I can modify things as I need.

I first went to the source of Subtext and thought I’d just upgrade.  The version I run is the latest stable/release build of Subtext and hasn’t really evolved since that version released a few years back.  The version on GitHub now is moving toward ASP.NET 4, which is good, and I thought I’d just move to that.  I had an immensely painful time trying to do the upgrade and ultimately decided against it (a blog post on my ASP.NET migration woes perhaps to come), as there wasn’t much return on my time investment at present.

The first thing you have to do is set up your Twitter Card.  Twitter’s docs have a cool Cards validator tool (which only works in WebKit browsers for some reason) to give you a display of your card so you can tweak settings.  Once you do this and have your metadata set up, you need to submit your URI to Twitter for validation.  This step is critical or your cards won’t show up.  This step is also not entirely clear in the process, and you may think that just setting the META tags is sufficient…it is not.  Use the second tab in the validator tool to “Validate & Apply” for your card view.  I did a basic Summary Card for my entire site to get through this process so I could test/validate my own logic for the per-post metadata later.

I wanted the data from each of my posts to be a unique Summary Card, as I’ve found that the most valuable kind in my feed.  Card data is implemented as META tags in the page head, so your mileage may vary on how you need to implement this.
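For reference, a Summary Card boils down to a handful of META tags like these; the values below are illustrative only:

<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@timheuer" />
<meta name="twitter:creator" content="@timheuer" />
<meta name="twitter:title" content="Title of the blog post" />
<meta name="twitter:description" content="First ~200 characters of the post body" />
<meta name="twitter:image" content="http://example.com/path/to/image.png" />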

If you are a WordPress user, there is an amazing ecosystem of plugin developers who have already done this for you and you just install a plugin.  It doesn’t seem that Twitter provides a directory of recommended plugins for popular CMS systems, so I can’t speak to which ones are reliable or not.  Twitter Cards for WordPress seems to have a lot of downloads so I’d check that one out first if you are a WordPress user.

Unfortunately the way Subtext is architected doesn’t easily let me inject dynamic META tags into my site (there is a tool that lets me easily add static ones without altering the page itself).  A few properties for my cards would indeed be static (creator, site, domain), so I just used the Subtext admin area to add those.  The title and description I wanted to be content-specific.  I didn’t modify the Subtext source, but rather created my own ASP.NET 3.5 control that does this.  I put the code on my GitHub repository for you to view…you’ll see it is a quick few lines of code.  My control basically gets the context of the request, pulls out the title/body, and sets those META tags.  Now whenever I post a URL to Twitter, my Summary Card option will be there and provide some helpful information to the reader (hopefully).

Viewing the results on different platforms

Looking at the results, I’m happy with my quick hack.  Outside of the web site, it seems the cards only show in official Twitter apps.  Here are some examples based on my site content.  The red box is only there to illustrate which content is actually the card:

Twitter Summary Card web view

View on twitter.com when expanding ‘view summary’

Twitter Summary Card iOS

View on Twitter for iOS when viewing full Tweet detail

Twitter Summary Card on Windows

View on Twitter for Windows

Simple metadata additions providing some more content beyond the 140 characters Twitter allows in the post itself.  I chose to use my face in the image because my blog doesn’t associate a “poster” image per-post.  If yours does, you should use that if available and change the twitter:image value as needed.

Summary

This didn’t take me long to implement and adds some additional value to the content I post on Twitter (I hope).  Again, this is optional for the user: they have to click “view summary” or view the Tweet in details view to see it, so there is no general feed noise here.  I have found that Twitter appears to cache the results per URL, so there may be an issue if you change the title/description on a post.  My initial tests wouldn’t update the previous Cards, which had just the blog summary data…that’s unfortunate, but now I know.

When I talked with @JeffSand, he mentioned it would be nice to have App Cards for Windows Phone/Windows Store apps (ugh, we really have to do something about that name).  I think this is a great idea!  Unfortunately, it looks like Twitter needs to do some work to understand Windows apps, as it currently only understands the iOS and Google Play app stores.  I’ve started a conversation with Jeff’s team as of this writing, and hopefully we can work out what’s required and deliver some goodness here.

If you are a content publisher (hint: if you are a blogger, you are), this is a subtle and helpful addition to your content in my opinion.  If you are a podcaster, consider using the Player Card as well.  It was really easy to understand and implement, and I like the results.

Hope this helps!


As someone who uses a few iOS devices around the house, I’ve become fond of visiting sites and seeing a little banner letting me know that a native app is available for the web app I’m using.  This concept was introduced in iOS 6 and is called “Smart App Banners” in the developer documentation.  You may have seen them:

Smart App Banner example

I wanted to provide the same affordance for Windows Store apps and recalled there are already two ways to integrate the web app browsing experience with the native app in the Windows Store: connecting your site to your app with the msApplication-ID and msApplication-PackageFamilyName META tags, and the pinned-site metadata such as msapplication-TileImage.

In iOS, this is implemented a bit natively in mobile Safari, not through any additional JavaScript files/functions.  As with any random idea I have, I always set out to search whether it has been done before (often it already has).  I couldn’t find something like this specifically for Windows Store apps; however, I did find an implementation for pre-iOS6 browsers as well as Android.  It was from Arnold Daniels and hosted on GitHub, so what’s a guy to do?  Fork it!

And that’s what I did: https://github.com/timheuer/jquery.smartbanner

With a few more conditional checks, if you put this in your web app/site you’ll now get it automatically linked to your Windows Store app if you have the correct META tags.  For the Windows Store implementation I’ve made some assumptions about the META tags used.  The following would get you a good experience:

<html>
  <head>
    <title>Hulu Plus</title>
    <meta name="author" content="Hulu LLC"/>
    <meta name="msApplication-ID" content="App" />
    <meta name="msApplication-PackageFamilyName" content="HuluLLC.HuluPlus_fphbd361v8tya" />
    <link rel="stylesheet" href="jquery.smartbanner.css" type="text/css" media="screen"/>
    <meta name="msapplication-TileImage" content="http://wscont1.apps.microsoft.com/winstore/1x/c8ac8bab-d412-4981-8f31-2d163815afe4/Icon.6809.png" />
  </head>
  <body>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8/jquery.min.js"></script>
    <script src="jquery.smartbanner.js"></script>
    <script type="text/javascript">
      $(function () { $.smartbanner() })
    </script>
  </body>
</html>

You need the combination of linking your app using the msApplication-ID and msApplication-PackageFamilyName values and the pinned-site high-fidelity msApplication-TileImage value to deliver the combined experience.  If you have all of these, the banner “just works” for you.
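Under the covers, that means everything the banner needs is discoverable from the page itself; a hedged sketch of that lookup (not the actual plugin code) using the Windows 8 Store launch protocol:

// read the package family name and tile image from the page's META tags
var pfn = $('meta[name="msApplication-PackageFamilyName"]').attr('content');
var tile = $('meta[name="msapplication-TileImage"]').attr('content');
// build the Store launch URI for the 'View in Store' button
var storeUrl = 'ms-windows-store:PDP?PFN=' + pfn;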

NOTE: Because the Microsoft documentation mixes msApplication and msapplication (note the casing of the “A”), you should use the capitalization shown above (which is the more consistent one in the documentation) to ensure it all works smoothly.

Now, some may use a different pinned-site TileImage than what they use for the app tile; this can also be overridden in the options:

$(function () { $.smartbanner({ icon:'http://some/url/to/tile/image.png' }) })

When done, the web app/site will display a dismissible banner (using Hulu as an example here) on a machine running Windows 8:

Hulu smart app banner example

So with one reference, 2 files, and an initialization you’ll give your app a bit more promotional flair for those viewing in browsers other than the ‘modern’ IE view.  As you can see, the screenshot above is from the desktop version, which doesn’t take advantage of the other msApplication-* attributes used in desktop mode.  The one thing it doesn’t do that the iOS system-level banner does is detect whether the app is installed and change “view” to “open”; there isn’t a mechanism to easily detect from a web app using JavaScript whether the app is installed.

NOTE: While the ‘View in Store’ button works in IE, Safari, and Firefox, it does not work in Chrome.  I’m not sure what the timeframe would be for Chrome to support the store launch protocol.

Anyhow, I thought this might be a useful addition for app developers whose web apps have a Windows Store app counterpart (and perhaps already have iOS and Android counterparts as well).  I’ve submitted a pull request to the project I forked from, but you can get it from my GitHub repository for jquery.smartbanner.

Hope this helps!


In the late nights when I have time to work on Callisto (my Windows 8 XAML toolkit), I often am making changes that I really wish I could either pair with someone on or, at a minimum, solicit some feedback on.  Primarily single-person open source projects are lonely :-).

Last night was no different.  I had a user of Callisto having a problem with a specific code path.  While my testing of the fix was fine, I didn’t want to rush it without him understanding the simple fix.  I had no good way of showing him a combined diff to comment on at the time.  My problem was, I thought, simple: how do I quickly share a diff from a GitHub project without requiring the other person to have a fork?

I set out to the best place to have an argument: Twitter.  I asked the question and got a lot of different responses, almost all of them pointing out that the typical git flow in this matter is to use pull requests for code reviews and sharing potential updates.  This really frustrated me, as I thought it was unnatural for the goal I was trying to accomplish.  Lots of folks told me this was the way to do it and that I was thinking about it wrong.  To them I say: this is broken.  I didn’t want to pollute pull requests with a mix of legitimate pull requests and ‘in-flight’ code changes that shouldn’t even be considered for merge yet.  I simply wanted a quick, lightweight way to share a diff.

Paul Betts (as he has always awesomely helped with my GitHub questions) pointed me to the thing I was ultimately looking for:

git diff > current.diff

This gave me the output that I needed, and Gist understands color coding for this data, so I just needed to post it there.  The good thing is that GitHub has an API that is relatively simple to understand.  Since I’ve been using posh-git in my shell lately, I set out to create a CmdLet for my flow.  I wanted one command that would grab the diff, post it to a Gist, and give me the URL.  I had never touched a line of PowerShell programming before and set out to start.
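Conceptually, what I wanted is only a few lines; a minimal sketch of the idea (anonymous Gist, no error handling, and definitely not the PsGist code):

# capture the current diff as one string
$diff = git diff | Out-String

# shape the Gist API payload
$body = @{
  description = "quick diff for review"
  public      = $true
  files       = @{ "current.diff" = @{ content = $diff } }
} | ConvertTo-Json -Depth 4

# post it and print the resulting URL
$gist = Invoke-RestMethod -Uri "https://api.github.com/gists" -Method Post -Body $body
$gist.html_url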

Honestly, for me it was painful to learn PowerShell in this simple task.  The PowerShell “IDE” is slow to help with any really good flow.  I wanted some VS experience to bootstrap me, and I knew I didn’t want a full-on assembly approach; a PowerShell script should do me just fine.  I scoured many sites online and started on my script.  After a few hours of searching/trying I was almost there but still not successful.  I ended the night posting back to Twitter with a Gist of my attempt.

It was late; I went to bed.  As most things happen on the interwebs, I awoke to some pointers to where someone had already done most of the work!  Despite my searching, I had never found the thing that would take me 90% of the way there.  It was PsGist.  Boom!  This was exactly what I needed.  I just needed to modify the input from a literal file to the output of the diff.  I spent an unnecessary hour fighting an unauthorized error from the GitHub API, due to an unexpected value in the string captured after entering credentials, but after that was solved I was done.  I decided to merge this back to PsGist and submit a pull request for the new command.  A few minutes later, and after a few questions and a quick modification, it was merged.  I now have a simple/fast flow to quickly say “hey, here’s what I’m working on, what do you think?” to anyone, without any branching that isn’t already in my flow.

New-DiffGist -Description "SettingsFlyout fix" -Public

After the module is imported, the above command will perform a diff, immediately post it to Gist, and give you the URL (optionally launching the browser if you use the -Launch param).  I know it may sound trivial, but it helps my flow and is the simple thing I was looking for.  Thanks Ryan for the PsGist starting point (and for merging my new CmdLet)!

Hope this helps someone!