| Comments

Everyone!  As a part of my responsibilities on the Visual Studio team for .NET tools, I try to spend time using our products in various ways, learning what pitfalls customers may face and how to solve them.  I work a lot with Azure services and how to deploy apps to them, and I’m a fan of GitHub Actions, so I thought I’d share some of my latest experiments.  This post outlines the various ways, as of this writing, that you can host Blazor WebAssembly (Wasm) applications.  We actually have some great documentation on this topic in the Standalone Deployment section of our docs, but I wanted to take it a bit further and demonstrate deploying those options to Azure with GitHub Actions using the Azure GitHub Actions.

Let’s get started!

If you don’t know what Blazor Wasm is, you should read a bit about What is Blazor on our docs.  Blazor Wasm enables you to write your web application front-end using C# with .NET running in the browser.  This is different from previous models that enabled you to write C# in the browser, like Silverlight, where a separate plug-in was required.  With modern web standards and browsers, WebAssembly has emerged as a standard that enables compilation of high-level languages for web deployment via browsers.  Blazor enables you to use C# and create your web app from front-end to back-end using a single language and .NET.  It’s great.  When you create a Blazor Wasm project and publish the output, you are essentially creating a static site with assets that can be deployed to various places, as there is no hard server requirement (other than being able to serve the content and MIME types).  Let’s explore these options…

ASP.NET Core-hosted

For sure the simplest way to host Blazor Wasm is to use an ASP.NET Core web app to serve it.  ASP.NET Core is cross-platform and can run pretty much anywhere.  If you are using C# for all your development, this is likely the scenario you’d be using anyway, and you can deploy your web app, which would contain your Blazor Wasm assets as well, to the same location (Linux, Windows, containers).  When creating a Blazor Wasm site you can choose this option in Visual Studio by selecting these options:

Blazor Wasm creation in Visual Studio

or using the dotnet CLI with this command:

dotnet new blazorwasm --hosted -o YourProjectName

Both of these create a solution with a Blazor Wasm client app, an ASP.NET Core server app, and an optional shared library project for sharing code between the two (like models and things like that).  This is an awesome option, and your deployment would follow the same method of deploying the ASP.NET Core app you’d already be using.  I won’t focus on that here as it isn’t particularly unique.  One advantage of this method is that ASP.NET Core already has middleware to properly serve the pre-compressed Brotli/gzip formats of your Blazor Wasm assets from the server, reducing the payload across the wire.  You’ll see more of this in the options below, but ASP.NET Core does this automatically for you.  You can deploy your app to Azure App Service or really anywhere else easily.

Benefits:

  • You’re deploying a ‘solution’ of your full app in one place, using the same tech to host the front/back end code
  • ASP.NET Core enables a set of middleware for you for Blazor routing and compression

Be Aware:

  • Basically billing.  Know that you would most likely host in an App Service plan rather than a serverless (consumption) model.  It’s not a negative, just an awareness.

Azure Storage

If you just have the Blazor Wasm site and are calling in to a set of web APIs, serverless functions, or whatever, and you want to host only the Wasm app, then using Storage is an option.  I actually already wrote about this previously in the blog post Deploy a Blazor Wasm Site to Azure Storage Using GitHub Actions, so I won’t repeat it here…go over there and read the details.

Example GitHub Action Deployment to Azure Storage: azure-deploy-storage.yml
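
If you just want the shape of that workflow without clicking through, here’s a minimal sketch of the publish-and-upload steps, assuming a .NET Core 3.1 Blazor Wasm project.  The project file, storage account name, and the secret holding the account key are placeholders you’d swap for your own; the linked post and sample repo have the full working version:

name: Deploy Blazor Wasm to Azure Storage

on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.x

    # Publish the Wasm client; the output path below assumes a .NET Core 3.1 Blazor Wasm project
    - name: Publish app
      run: dotnet publish MyBlazorApp.csproj -c Release

    # Push the static assets to the $web container used by static website hosting
    - name: Upload to blob storage
      run: az storage blob upload-batch --account-name mystorageaccount --account-key ${{ secrets.STORAGE_ACCOUNT_KEY }} -s bin/Release/netstandard2.1/publish/wwwroot -d '$web'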

Benefits:

  • Consumption-based billing for storage.  You aren’t paying for ‘on all the time’ compute
  • Managed in blob storage (many different tools to view the content)

Be Aware:

  • Routing: errors will need to be routed to index.html as well, and even though they will be ‘successful’ routes, they will still return an HTTP 404 response code.  This could be mitigated by adding Azure CDN in front of your storage and using more granular rewrite rules (but that is also an additional service)
  • Pre-compressed assets won’t be served as there is no middleware/server to automatically detect and serve these files.  Your app will be larger than it could be if serving the compressed brotli/gzip assets.

Azure App Service (Windows)

You can directly publish your Blazor Wasm client app to Azure App Service for Windows.  When you publish a Blazor Wasm app, we provide a little web.config in the published output (unless you supply your own), and this contains some rewrite information for routing to index.html.  Since App Service for Windows uses IIS, when you publish this output the web.config is used and will help with your app routing.  You can publish from Visual Studio using this method as well:

Visual Studio publish dialog

or easily from GitHub Actions using the Azure Actions.  Without the ASP.NET Core host you will want to give IIS better hinting about the pre-compressed files as well.  This is documented in our Brotli and Gzip documentation section and a sample web.config is also provided in this sample repo.  This web.config in the root of your project (not in wwwroot) will be used during publish instead of the pre-configured one we would provide if there were none.

Example GitHub Action Deployment to Azure App Service for Windows: azure-app-svc-windows-deploy.yml
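
For reference, the heart of that workflow is just publishing the client and handing the output folder to the webapps-deploy action with a publish profile.  A rough sketch of the relevant steps is below; the project file, app name, and the secret containing the downloaded publish profile are placeholders, and the linked workflow in the sample repo is the authoritative version:

    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.x

    # The publish output contains the wwwroot folder plus the web.config described above
    - name: Publish app
      run: dotnet publish MyBlazorApp.csproj -c Release -o published

    - name: Deploy to Azure App Service (Windows)
      uses: azure/webapps-deploy@v2
      with:
        app-name: my-blazor-app
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: published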

Benefits:

  • Easy deployment and default routing configuration provided in published output
  • Managed PaaS
  • Publish easily from Actions or Visual Studio

Be Aware:

  • Really just understanding your billing choices for the App Service

Azure App Service (Linux w/Containers)

If you like containers, you can put your Blazor Wasm app in a container and deploy it anywhere containers are supported, including Azure App Service containers!  This enables you to encapsulate a little bit more in your own container image and also control the configuration of the server a bit more.  For Linux, you’d be able to specify the specific OS image you want to host your app and supply the configuration of that server.  This is nice because we need to do a bit of that for some routing rules for the Wasm app.  Here is an example of a Dockerfile that can be used to host a Blazor Wasm app:

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app

COPY . ./
WORKDIR /app/
RUN dotnet publish -c Release

FROM nginx:1.18.0 AS build
WORKDIR /src
RUN apt-get update && apt-get install -y git wget build-essential libssl-dev libpcre3-dev zlib1g-dev
RUN CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') \
    git clone https://github.com/google/ngx_brotli.git && \
    cd ngx_brotli && git submodule update --init && cd .. && \
    wget -nv http://nginx.org/download/nginx-1.18.0.tar.gz -O - | tar -xz && \
    cd nginx-1.18.0 && \
    ./configure --with-compat $CONFARGS --add-dynamic-module=../ngx_brotli

WORKDIR nginx-1.18.0
RUN make modules

FROM nginx:1.18.0 as final

COPY --from=build /src/nginx-1.18.0/objs/ngx_http_brotli_filter_module.so /usr/lib/nginx/modules/
COPY --from=build /src/nginx-1.18.0/objs/ngx_http_brotli_static_module.so /usr/lib/nginx/modules/

WORKDIR /var/www/web
COPY --from=build-env /app/bin/Release/netstandard2.1/publish/wwwroot .
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80 443

In this configuration we’re using an SDK image to first build/publish our Blazor Wasm app, then using the nginx:1.18.0 image as our base and building the ngx_brotli compression modules we want to use (lines 8-19 and 23-24).  We want to supply some configuration to the nginx server, so we provide an nginx.conf file that looks like this:

load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;
events { }
http {
    include mime.types;
    types {
        application/wasm wasm;
    }
    server {
        listen 80;
        index index.html;

        location / {
            root /var/www/web;
            try_files $uri $uri/ /index.html =404;
        }

        brotli_static on;
        brotli_types text/plain text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/vnd.microsoft.icon image/bmp image/svg+xml application/octet-stream application/wasm;
        gzip on;
        gzip_types      text/plain application/xml application/x-msdownload application/json application/wasm application/octet-stream;
        gzip_proxied    no-cache no-store private expired auth;
        gzip_min_length 1000;
        
    }
}

Now when we deploy, the Docker image is built, pushed to Azure Container Registry, and then deployed to App Service for us.  In the above example, the first two lines load the modules we built in the Docker image previously.

Example GitHub Action Deployment to Azure App Service using Linux Container: azure-app-svc-linux-container.yml
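
The workflow for the container route is mostly registry plumbing: build the image from the Dockerfile above, push it to a registry, then point App Service at it.  A trimmed-down sketch follows; the registry name, app name, and credential secrets are placeholders, so see the linked workflow for the complete version:

    steps:
    - uses: actions/checkout@v2

    # Log in to Azure and to the container registry App Service will pull from
    - name: Azure Login
      uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    - name: ACR Login
      uses: azure/docker-login@v1
      with:
        login-server: myregistry.azurecr.io
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}

    # Build the image using the Dockerfile shown above and push it to the registry
    - name: Build and push image
      run: |
        docker build -t myregistry.azurecr.io/blazorwasm:${{ github.sha }} .
        docker push myregistry.azurecr.io/blazorwasm:${{ github.sha }}

    - name: Deploy container to App Service
      uses: azure/webapps-deploy@v2
      with:
        app-name: my-blazor-container-app
        images: myregistry.azurecr.io/blazorwasm:${{ github.sha }}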

Benefits:

  • Containers are highly customizable, allowing you some portability and flexibility
  • Easy deployment from Actions and Visual Studio (you can use the same publish mechanism in VS)

Be Aware

  • Additional service here of using Azure Container Registry (or another registry to pull from)
  • Understanding your billing plan for App Service
  • Might need more configuration awareness to take advantage of pre-compressed assets (by default nginx requires an additional module for brotli and you’d have to rebuild it into nginx)
    • NOTE: The example repo has a sample configuration which adds brotli compression support for nginx

Azure App Service (Linux)

Similar to App Service for Windows, you could also just use App Service for Linux to deploy your Wasm app.  However, there is a big known workaround you have to apply right now to enable this method.  Primarily this is because there is no default configuration or ability to use the web.config like you can on Windows.  Because of this, if you use the Visual Studio publish mechanism it will appear as if the publish fails.  Once completed, when you navigate to your app you’d get a screen that looks like the default “Welcome to App Service” page shown when no content is there.  This is a bit of a false positive :-).  Your content/app DOES get published using this mechanism, but since we push the publish folder, the App Service Linux configuration doesn’t have the right rewrite defaults to navigate to index.html.  Because of this, I’d recommend that if Linux is your desired host, you use containers to achieve this.  However, you CAN do this using GitHub Actions if you manipulate the content you push.

Example GitHub Action Deployment to Azure App Service Linux: azure-app-svc-linux-deploy.yml
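
As a sketch of that manipulation: publish the client, then hand the webapps-deploy action the wwwroot content itself (rather than the publish folder Visual Studio would push) so index.html ends up at the site root.  The project file, app name, and publish profile secret are placeholders; the linked workflow has the complete version:

    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.x

    - name: Publish app
      run: dotnet publish MyBlazorApp.csproj -c Release -o published

    # Deploy the wwwroot content itself so index.html sits at the site root
    - name: Deploy to Azure App Service (Linux)
      uses: azure/webapps-deploy@v2
      with:
        app-name: my-blazor-linux-app
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: published/wwwroot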

Benefits:

  • Managed PaaS

Be Aware:

  • Publishing from Visual Studio isn’t ideal
  • No pre-compressed assets will be served
  • Understand your billing plan for App Service

Summary

Just like you have options with SPA frameworks or other static sites, you have similar options for a Blazor Wasm client as well.  The unique aspect of pre-compressed assets requires some additional configuration you should be aware of if you aren’t using the ASP.NET Core-hosted solution, but with a small bit of effort you can get it working fine.

All of the samples I have listed here are provided in this repository: timheuer/blazor-deploy-samples, and I would love to see any issues you may find.  I hope this helps summarize the documentation we have on configuring options in Azure to support Blazor Wasm.  What other tips might you have?

Stay tuned for more!

| Comments

One of the things that I like about Azure DevOps Pipelines is the ability to make minor changes to your code/branch without full CI builds happening.  This is helpful when you are updating docs or a README or things like that which don’t materially change the build output.  In Pipelines you have built-in functionality to put certain comments in the commit message that prevent the CI build from triggering.  The various ones that are supported are identified in the ‘Skipping CI for individual commits’ documentation.

Today that functionality isn’t built-in to GitHub Actions, but you can add it as a base part of your workflows with the help of being able to get to the context of the commit before a workflow starts!  Here is an example of my workflow where I look for it:

name: .NET Core Build and Deploy

on:
  push:
    branches:
      - master

jobs:
  build:
    if: github.event_name == 'push' && contains(toJson(github.event.commits), '***NO_CI***') == false && contains(toJson(github.event.commits), '[ci skip]') == false && contains(toJson(github.event.commits), '[skip ci]') == false
    name: Build Package 
    runs-on: ubuntu-latest

You can see at Line 10 that I’m looking at the commit message text for ***NO_CI***, [ci skip], or [skip ci].  If any of these are present, then the job does not run.  It’s as simple as that!  Here is an example of my last commit where I was just updating the repo to include the build badge:

Screenshot of a commit message on GitHub

And you can see in the workflows that it was not run:

Screenshot of workflow status on GitHub

A helpful little tip to add to your workflows to give you that flexibility!  Hope this helps!

| Comments

I’ve been continuing to do research on GitHub Actions for .NET developers and came across a comment that someone made (paraphrasing): I wish I could use it for .NET Framework apps but it is just for .NET Core.

NOT TRUE! And I want to help fix that perception.  There are some bumps in the road, but allow me to explain some simple (yes I realize they are simple) steps to get it working.

NOTE: I’ve been on this research because I’m looking to get better ‘publish’ experiences in Visual Studio for your apps, but I also want to help you get into best practices for CI/CD and DevOps.  Basically I’m on a mission for right-click, publish to CI to improve for you :-)

So in this post I’ll walk through an ASP.NET Framework (MVC) app and have it build/publish artifacts using GitHub Actions.  Let’s get started…

The simple app

I am starting from File…New Project and selecting the ASP.NET Web Application (.NET Framework):

Screenshot of template selection

So it’s basic vanilla and I’m not changing anything.  The content of the app is not important for this post, just that we have a full .NET Framework (I chose v4.8) app to use.  From here in Visual Studio you can build the app, run, debug, etc.  Everything you need is in Visual Studio, of course.  If you wanted to use a terminal to build this app, you’d likely (and it’s recommended) be using MSBuild and not the dotnet CLI.  The command might look something like this:

msbuild SimpleFrameworkApp.sln /p:Configuration=Release

I’m specifying to build the solution and use a release profile.  We’ll come back to this, now let’s move on.

Publish profile

Now for our example, I want to publish this app using some pre-compiled options.  At the end of the publish task I’ll have a folder that I’d be able to deploy to a web server.  To make this simple, I’m using the Publish capabilities in Visual Studio to create a publish profile.  You get there from right-click > Publish (don’t worry, we’re not publishing to production, just creating a folder profile).

Publish profile screenshot

The end result is that it will create a pubxml file in the Properties folder of your solution:

Publish profile in solution explorer

So we have our app and our publish (to a folder) profile.  Moving on to the next step!

Publish to the repo and create initial GitHub Actions workflow

From Visual Studio we can add this to GitHub directly.  In the lower right of Visual Studio you’ll see the ability to ‘Add to Source Control’ and select Git:

Add to source control tray button

which will bring up the UI to create/push a new repository to GitHub directly from Visual Studio:

Publish to GitHub from VS

Now we have our project in GitHub and we can go to our repository and create the initial workflow.

NOTE: This is an area where, if you have comments, please share them below.  In the workflow (pun intended) right now, you leave Visual Studio and go to GitHub to create a new workflow file, then have to pull/sync, etc.  You don’t *have* to do this, but usually this is the typical workflow to find templates of workflow files for your app.  Got feedback on what Visual Studio might do here?  Share below!

Now that you have the publish profile created and your solution in GitHub, you’ll need to manually add the pubxml file to source control (by default it is excluded by the .gitignore file).  So right-click that file in Solution Explorer and add it to your source control.  Now, on your repository in GitHub, go to the Actions tab and set up a new workflow:

Setting up new workflow

The reason for this (choosing to set up a new workflow) is that you won’t see a template detected for .NET Framework.  And for whatever reason, GitHub thinks this is a JavaScript repository.  Anyhow, we’re effectively starting from blank.  Create the workflow and you’ll get a very blank default:

name: CI

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Run a one-line script
      run: echo Hello, world!
    - name: Run a multi-line script
      run: |
        echo Add other actions to build,
        echo test, and deploy your project.

And it will not be helpful, so we’ll be wiping it out.  I’ve named my workflow build.yml as I’m only focusing on build right now. 

Defining the .NET Framework build steps

For this post I’m going to put all the steps here rather than build-up and explain each one so you can see the entirety.  Here’s the final script for me:

name: Build Web App

on: [push]

jobs:
  build:

    runs-on: windows-latest

    steps:
    - uses: actions/checkout@v2
      name: Checkout Code
    
    - name: Setup MSBuild Path
      uses: warrenbuckley/Setup-MSBuild@v1
      
    - name: Setup NuGet
      uses: NuGet/setup-nuget@v1
    
    - name: Restore NuGet Packages
      run: nuget restore SimpleFrameworkApp.sln

    - name: Build and Publish Web App
      run: msbuild SimpleFrameworkApp.sln /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=FolderProfile

    - name: Upload Artifact
      uses: actions/upload-artifact@v1
      with:
        name: published_webapp
        path: bin\Release\Publish

Let’s start explaining them.

Ensuring the right runner

In a previous post I described what a ‘runner’ is: What is a GitHub Action Runner?  In that post I pointed to the documentation of runners including what is installed on them.  Now for .NET Framework apps we need to use Windows because .NET Framework only works on Windows :-).  Our action needs to specify using Windows and we are using the windows-latest runner image as we are on Line 8.  I won’t spend time talking about self-hosted runners here, but regardless even your self-hosted runner needs to support .NET Framework.  As a part of the windows-latest runner image, you can see what is already provided on the image.  Currently windows-latest is defined as Windows Server 2019 and the documentation shows what is provided on the hardware resource.  This includes already having a version of Visual Studio 2019 installed…which means MSBuild is already there!

Setting up MSBuild

Even though Visual Studio is on the runner, MSBuild is not presently in the default PATH environment (as of the date of this writing)…so you have options.  The documentation provides the path to where Visual Studio is installed and you can determine the right location to MSBuild from there and specify the path fully.  However, I think there should be easier ways to do this and the community agrees!  In the marketplace there is an Action you can use to setup the PATH to have the MSBuild toolset in your path and you can see this being used on Line 14/15.  The action here basically does a ‘vswhere’ and sets up the ability to later just call MSBuild directly.  This only does MSBuild and not other VS tools that are added to PATH as a part of the ‘Visual Studio Command Prompt’ that most people use.  But using this one we have here, we can now build our Framework app with less path ugliness.

Building and publishing the app

With our MSBuild setup in place, we can start building.  The first thing we need to do is restore any NuGet packages.  In Line 20,21 is where we use the NuGet CLI to restore the solution’s packages that are needed.

NOTE: For some reason, using msbuild -t:Restore was not working at the time of this writing the way I expected it to…

Once we have the packages restored, we can proceed to build.  In Line 24 is our full command to build the solution.  We are specifying some parameters:

  • Configuration – simple, we are building release bits
  • DeployOnBuild – this helps us trigger the publish step
  • PublishProfile – this uses the publish profile we specify to execute that step and all the other options we have set in that configuration.  We just have to specify the name, not the path

After the completion of this step (we didn’t set any different output folders) we will have a bunch of files in the default publish folder (which would be bin\<config>\Publish).

Publish the artifacts

Once we have the final published bits, we can upload them as the artifact for this build pipeline.  Starting at Line 26 we use another action to upload our content (binaries, files) to this completed workflow as an artifact named ‘published_webapp’.  This is associated with the run and zipped up, and you can download these assets or later use them to publish to your servers, cloud infrastructure, etc.

Summary

So if you thought you couldn’t use GitHub Actions for your .NET Framework apps, now you know you can, with some extra steps that may not have been obvious…because they aren’t.  In the end you have a final build:

Picture of a final build log

What I’ve shared here I put in a sample repo: timheuer/SimpleFrameworkApp where you can see the workflow (in .github/workflows/build.yml) and the logs.  I hope this helps; please share the experiences you’d like to see in Visual Studio to better help you with GitHub Actions.

| Comments

So what exactly is a runner and how do I know what’s in it?  When you use GitHub Actions and specify:

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest

What exactly does ‘ubuntu-latest’ mean?  Well, a runner is defined as ‘a virtual machine hosted by GitHub with the GitHub Actions runner application installed.’  Clear? LOL, basically it is a machine that has a target operating system (OS) as well as a set of software and/or tools you may desire for completing your job.   GitHub provides a set of these pre-configured runners that you use when you specify the runs-on label with any one of: windows-latest, ubuntu-latest (or ubuntu-18.04 or ubuntu-16.04), or macos-latest.  As of this writing the matrix is documented here along with the specs of the virtual environment: Supported runners and hardware resources.

What is on a GitHub-hosted runner?

I personally think it is good practice to never assume the tool you want is on the environment you didn’t create and you should always acquire the SDK, tools, etc. you need.  That’s just me and possibly being overly cautious especially when a definition of a hosted runner provides the tools you need.  But it makes your workflow very explicit, perhaps portable to other runners, etc.  Again, I just think it is good practice. 

Runner log

But you may want to know what exactly you can use on a GitHub-hosted runner when you specify it.  Luckily GitHub publishes this in the documentation Software installed on GitHub-hosted runners.  For example as a .NET developer you might be interested to know that the windows-latest runner has:

  • Chocolatey
  • Powershell Core
  • Visual Studio 2019 Enterprise (as of this writing 16.4)
  • WinAppDriver
  • .NET Core SDK 3.1.100 (and others)

This is helpful to know because you could use choco install commands to get a new tool for the workflow you are trying to accomplish.  What if you don’t see a tool/SDK that you think should be a part of the base image?  You can request to add/update a tool on a virtual environment in their repo!  Better yet, submit a PR if you can.
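
For example, here’s a hedged sketch of what pulling an extra tool onto a Windows runner with Chocolatey might look like in a workflow (the package here is just an illustration):

jobs:
  build:
    runs-on: windows-latest

    steps:
    - uses: actions/checkout@v2

    # Chocolatey is already on the windows-latest image, so a step can install extra tooling
    - name: Install extra tooling with Chocolatey
      run: choco install nuget.commandline --no-progress -y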

How much will it cost me to use GitHub-hosted runners?

Well, if you are a public repository, it’s free.  If you are not a public repository, your account gets a certain number of minutes per month for free before billing kicks in.  It’s pretty generous and you can read all the details here: About billing for GitHub Actions.  In your account settings under the Billing section you can see your usage.  They don’t even bother to show your usage for public repositories because it’s free.  I have one private repo that I’ve used 7 minutes on this month.  My bill is $0 so far.  The cool thing is you can set up spending limits there as well.

Can I run my own runner?

Yes! Similar to Azure Pipelines, you can create and host your own self-hosted runner.  The GitHub team did an amazing job with the steps here and it seriously couldn’t be simpler.  Details about self-hosted runners (either on your local machine, your own cloud environment, etc.) can be found in the About self-hosted runners documentation.  Keep in mind that now the billing is on you, and you should understand the security here as well, because PRs and such may end up using these agents; the documentation talks all about this.  But if you need to do this, the steps are dead simple and the page in your repo pretty much makes it foolproof for most cases:

Screenshot of self-hosted runner config

It’s good to know what is on the environment you are using for your CI/CD and also cool to know you can bring your own and still use the same workflow.  I’ve experimented with both and frankly like the GitHub-hosted model the best for my projects.  They don’t have unique requirements and since they are all public repositories, no cost to me.  Best of all that I don’t have to now manage an environment!

| Comments

Reflecting back on this blog, I realized it’s been 16 years of life here.  The content hasn’t always been consistently focused on tech; over the early years it was more a random outlet of my thoughts (with an apparent lack of concern over punctuation and capitalization).  Some months had more volume, and as my career (and perhaps passions) changed, some had lower volume.

Month comparison of blog post quantity

I’ve enjoyed getting back in to posting more recently and finding more time (and again, perhaps the passion) to do so.  I think also, with the various newer outlets of social media, my personal passions are posted elsewhere now, like Instagram (if you want to follow my escapades on the bike mostly).  Recently, in the summer of 2019, I switched job roles at Microsoft back into program management with the .NET team.  I’m focusing on a few different things, but having spent so much time in UI frameworks on the client side for so long, I missed some waves of changes in ASP.NET and needed to re-learn.  I spent the first month of my new role doing this and exploring the end-to-end experiences.  Instead of building a to-do app, I wanted a real scenario to work with, so I set off to migrate my blog…it was time anyway, as just that week I had received warnings on my server about some errors.  I could avoid this no longer.  It wasn’t an easy path, but here was my journey.

Existing website frameworks

My blog started in 20-Aug-2003 and was built using Community Server (from Telligent) initially.  That quickly forked into a product called .TEXT from Scott Watermasysk and I moved to using that as it was solely content management and not forums or other things I didn’t need.  I stayed on .TEXT for as long as it lived until again another fork happened.  This time the .NET ecosystem around Open Source was improving and this fork was one of those projects.  Phil Haack, along with others, created SubText which was initially a pretty direct fork, but quickly evolved.  I wasn’t much interested in the code at this point, but wanted to follow ‘current’ frameworks so I moved to SubText.  All along this path it was easy because these migrations were similar, using the same frameworks and similar (if not same) data structures.  The SubText site was an ASP.NET 2.0 site using SQL Server as the data store.  Over time this moved to .NET 3.5 but not much more after that (for me at least). 

Over time I made a few adjustments to the SubText environment for me, never really concerning myself about the source, but just patching crap in random binaries that I’d inject into the web.config.  My last one was in 2013 to support Twitter cards and it was a painful reminder at the time that this site was fragile.   By this time as well the SubText project itself was fragile and not really being maintained as ASP.NET had moved to newer things like MVC and such.  The writing was on the wall for me but I ignored it.

Hosting environment

In addition to the platform/framework used, I was using an interesting hosting setup.  Well, not abnormal really considering at the time in 2003 there was no ‘cloud’ as we know it today.  I had a dedicated box (1U server) hosted at my own data center (I was managing, among other things, a data center rack at the time).  This was running Windows Server 2000 and whatever goop that came with.  Additionally this was SQL Server Express [insert some old version here].  I had moved on to another job and after a period of time I needed to move that server.  I was using the server for more than just my site, running about 10 other WordPress blogs for my community, my wife’s business, and various other things.  WordPress sites were constantly being attacked/hacked due to vulnerabilities in WordPress and leading to my server being filled with massive video porn files and me not knowing until my site was down then I had to login remotely and clean crap up.  Loads of fun.  I eventually moved that server to a co-located environment at GoDaddy still maintaining a dedicated 1U server for me.  It was nice having direct access to the server to do whatever I wanted, but I was quickly not needing that type of hands-on configuration anymore…but still dealing with the management.

During each of these moves I was just moving folders around.  I had no builds, no original source code reliably, etc.  “Fixing” things was me writing new code and finding interesting ways to redirect some SubText functionality as I didn’t have the source nor was interested in digging up tooling to get the source to work.  I never upgraded from Windows 2000 server and was well beyond support for things I was doing.  When I wanted to upgrade the OS at GoDaddy, I was faced with “Sure we’ll set up a new server for you and you migrate your apps” approach.  So again I was going to have to re-configure everything.  Another nightmare waiting and I just put it off.

To the Cloud!

My first step was moving data to the cloud.  I wrote about this when I did this task back in 2012.  I quickly learned that if my site wasn’t on Azure as well and with the traffic I was still getting, the egress costs were not going to be attractive for me.  A few years ago I went about moving just my blog app to Azure App Service as well.  Not having anything to build, this was going to be a fun ‘deployment’ where I needed to copy a lot of things manually to my App Service environment.  I felt dirty just FTP-ing in to the environment and continually trying things until it worked.  But it eventually did.  I had my .NET Framework SubText app running on App Service and using Azure SQL.  The cool thing about Azure SQL is the monitoring and diagnostics it provides.  I immediately was met with a few recommendations and configuration changes I should make and/or it automatically made on my behalf.  That was awesome.  I did have one stored procedure from SubText that was causing all kinds of performance havoc and contributing to me hitting capacity with my chosen SQL plan requiring me to bump up to the next plan and more costs.  Neither of which I wanted to do.  And due to what I mentioned prior about not having SubText buildable I couldn’t reliably make a change to the stored proc without really changing the code that called it.  Just another dent in the plan.  I needed a real migration plan.

Migrate to ASP.NET Core

I mentioned that in my new role I needed to spend time re-learning ASP.NET, and this was a perfect opportunity.  I decided to dedicate the time and ‘migrate’ to ASP.NET Core.  Why the air quotes?  Because realistically I couldn’t migrate anything but data.  I did not have reliably building source for SubText, and besides, it was WebForms and I didn’t know what I might be getting myself into.  I needed a new plan, which meant a new framework, and I went looking.  Immediately I was met with recommendations that I should go static sites, that Jekyll and GitHub Pages are the new hotness, and why would I want anything else.  I don’t know; for me, I still wanted some flexibility in the way I worked and I wasn’t seeing that I’d be able to get what I want out of a static site approach.  I wanted to move to ASP.NET Core solutions and found a few frameworks that looked attractive.  Most were in varying states and others felt just too verbose for my needs.  I landed on a recommendation to look at miniblogcore.  This was the smallest, simplest, most understandable solution to my needs that I found.  No frills, just render posts with some dynamicism.

I did not even attempt to migrate any of my existing ASP.NET WebForms code or styling, as modern platforms were using Bootstrap and other things to build the site, so that was where I needed to start.  I spent a good amount of time working on the simple styling structure for a few things and learning MVC in the process to componentize some of the areas.  I added a few pieces of customization on the MiniBlog source: search routing (using Google site search), a timeline/calendar view thanks to Telerik controls, category browsing (although mine is horrendous due to waaay too many categories used over the years), Disqus for commenting, SyntaxHighlighter for code formatting support, an image provider for my embedded images during authoring, and a few other random things.  I wrote a little MVC controller to do the data migration once from SQL to the XML file-based storage that MiniBlog uses.  Migrating the data to the new structure was a lot simpler than I thought it would be.  Luckily my old blog had ‘slug’ support and this new one had it as well, so the URI mapping worked fine, but now I had to ensure the old routing would work.  I had to play around with some RegEx skills to accomplish this, but in the end I found a pattern that would match and implemented that in my routing, using proper redirect response codes:

// This is for redirecting potential existing URLs from the old Miniblog URL format
// old subtext non-slugged & slugged
// https://timheuer.com/blog/archive/2003/08/19/145.aspx
// https://timheuer.com/blog/archive/2015/04/21/join-windows-engineers-at-free-build-events-around-the-world-xaml.aspx
[Route("/post/{slug}")]
[Route("/blog/archive/{year:regex(\\d{{4}})}/{month:regex(\\d{{2}})}/{day:regex(\\d{{2}})}/{slug}.aspx")]
[HttpGet]
public IActionResult Redirects(string slug)
{
    // if the post was a non-named one we need to append some text to it otherwise it will think it is a page
    // if (slug doesn't contain letters) { redirect to $"post-{slug}" }
    var newSlug = slug;
    var isMatch = Regex.IsMatch(slug, "^[0-9]*$");
    if (isMatch) newSlug = $"post-{slug}";

    return LocalRedirectPermanent($"/blog/{newSlug}");
}

That ended up being remarkably simpler than I thought it would be as well.  This alone was causing me stress to maintain the URIs that had existed over time and using ASP.NET routing with RegEx I got what I needed quickly.

Moving to Azure App Service once this was all done was simple.  When I first moved ASP.NET Core 3.0 wasn’t yet available so I had to deploy as a self-contained app.  This isn’t difficult though and in some cases may be more explicitly what you want to do.  I wrote how to Deploy .NET apps as self-contained so you can follow the steps.  This basically is a ‘bring the framework with you’ approach when the runtime might not be there.  Azure App Service now has .NET Core 3.1 available though so I no longer have to do that, but good to know I can test future versions of .NET by using this mechanism.

Summary

So what did I learn?  Well, not having source for your apps you care about hurts.  I didn’t even get a chance to actually attempt to truly migrate SubText to ASP.NET Core because I had let my implementation rot for so long.  I have become such a huge believer in DevOps now it’s unreal.  I won’t even do a simple project without it.  The confidence you gain when your projects have continuity through automation is amazing.  My new blog app is fully run on DevOps and deploys using that as well…I just commit changes and they are deployed when I approve them.  I learned that even though this was ‘just a blog’ it was a fairly involved app with separation of user controls and things.  It didn’t need to be so complex, but it was, and I’m glad MiniBlog is not so complex.  The performance of my content site and the costs are much more manageable now, and my stress is reduced knowing that should anything happen I’m in a better place for restoring a good state.  My biggest TODO task, I think, is re-thinking the XML-based data store though.  This actually is the one thing causing me some DevOps pain, because the ‘data’ is content within the web app, and when using slot-staging deployment that doesn’t work well.  Azure has a way to use Azure Storage as a mounted point to serve content from in your App Service though, and I’ve started to try that with some mixed results so far.  Using this approach separates my app from my data and allows for more meaningful deployment flows and data backup.  I’ve also explored using an Azure Storage provider for my data layer, but the way the initial cache is built in MiniBlog right now makes this not a great story due to startup latency when you have 2,000 posts to retrieve from a blob container.  I’m still playing around with ideas here, so if you have some I’d love to hear them (dasBlog users would hit similar concerns).

I’m happy with where I landed and hope this keeps me on a path for a while.  I’ve got a simple design, responsive design, easy-to-maintain source code, all the features I want (for now), no broken links (I think), works with my editing flow (Open Live Writer), and less stress worrying about a server.  I’ve already updated to ASP.NET Core 3.1 and it was a simple config change to do that now that my setup is so streamlined. 

What are your migration stories?