
Last night I got a tweet asking how one could halt a CI workflow in GitHub Actions on a condition.  This particular condition was if the code coverage tests failed a certain coverage threshold.  I’m not a regular user of code coverage tools like Coverlet, but I went Googling for some answers and oddly did not find the obvious answer that was pointed out to me this morning.  Regardless, the journey to discover an alternate means was interesting to me, so I’ll share what I did.  It feels super hacky, but it works, and it’s a similar method I used for passing some version information in other workflows.

First, the simple solution if you are using Coverlet and want to fail a build (and thus a CI workflow) is to use the MSBuild integration option, where you can simply use:

dotnet test /p:CollectCoverage=true /p:Threshold=80

I honestly felt embarrassed that I didn’t find this simple option, but oh well, it is there and is definitely the simplest approach if you can use it.  When used in an Actions workflow, if the threshold isn’t met this will fail that step and you are done.
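Dropped into a workflow, that step might look like this minimal sketch (the project path here is hypothetical):

- name: Test with coverage threshold
  run: dotnet test MyApp.Tests/MyApp.Tests.csproj /p:CollectCoverage=true /p:Threshold=80

If coverage comes in under 80%, the MSBuild task fails the command, the step exits nonzero, and the run stops right there.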

Picture of failed GitHub Actions step

Creating your condition to inspect

But let’s say you need to fail for a different reason, or, as in this example, you can’t use the MSBuild integration and instead are just using the VSTest integration with a collector.  We’ll use this code coverage scenario as an example, but the key here is focusing on how to fail a step.  Your condition may be anything, but I suspect it is usually based on some previous step’s output or value.  If you are relying on previous steps’ values, be sure you understand the power of using outputs.  This is, I think, the best way to ‘set state’ for certain things in steps.  A step can do some work and set an output value either in the Action itself, or in the workflow YAML using a shell command calling the ::set-output method.  Let’s look at an example…first the initial step (again using our code coverage scenario):

- name: Test
  run: dotnet test XUnit.Coverlet.Collector/XUnit.Coverlet.Collector.csproj --collect:"XPlat Code Coverage"

This will produce an XML output ‘report’ that contains the values we want to extract.  Namely, it’s in this snippet:

<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="0.85999999999" branch-rate="1" version="1.9" timestamp="1619804172" lines-covered="15" lines-valid="15" branches-covered="8" branches-valid="8">
  <sources>
    <source>D:\</source>
  </sources>
  <packages>
    <package name="Numbers" line-rate="1" branch-rate="1" complexity="8">
      <classes>

I want the line-rate value (line 2) in this XML to be my condition…so I’m going to create a new Actions step to extract the value by parsing the XML using a PowerShell cmdlet.  Once I have that I will set the value as the output of this step for later use:

- name: Get Line Rate from output
  id: get_line_rate
  shell: pwsh  
  run: |
    $covreport = get-childitem -Filter coverage.cobertura.xml -Recurse | Sort-Object -Descending -Property LastWriteTime -Top 1
    Write-Output $covreport.FullName
    [xml]$covxml = Get-Content -Path $covreport.FullName
    $lineRate = $covxml.coverage.'line-rate'
    Write-Output "::set-output name=lineRate::$lineRate"

As you can see on lines 2 and 9, I have set a specific ID for my step and then used the set-output method to write a value to an output of the step named ‘lineRate’ that can be used later.  So now let’s use it!

Evaluating your condition and failing the step manually

Now that we have our condition, we want to fail the run if it evaluates a certain way…in our case, if the code coverage line rate doesn’t meet our threshold.  To do this we’re going to use a specific GitHub Action called actions/github-script, which allows you to run some of the GitHub API and toolkit directly in a script.  This is great as it gives us the core library, which has a set of methods for success and failure!  Let’s take a look at how we combine the condition with the setting:

- name: Check coverage tolerance
  if: ${{ steps.get_line_rate.outputs.lineRate < 0.9 }}
  uses: actions/github-script@v3
  with:
    script: |
        core.setFailed('Coverage test below tolerance')

Okay, so we did a few things here.  First, we are defining this step as executing the core.setFailed() method…that’s specifically what this step will do, that’s it…it will fail the run with the message we put in there.  *BUT* we have put a condition on the step itself using the if check.  On line 6 we are executing the setFailed function with our custom message that will show in the runner log.  On line 2 we have set the condition for whether this step runs at all.  Notice we are using the ID of a previous step (get_line_rate) and its output parameter (lineRate) and then doing a quick math check (step outputs are strings, but the expression engine coerces them to numbers for the comparison).  If this condition is met, then this step will run.  If the condition is NOT met, this step will not run, but it also doesn’t fail, and the run can continue.  Observe that if the condition is met, our step fails and the run fails:

Failed run based on condition

If the condition is NOT met the step is ignored, the run continues:

Condition not met

Boom, that’s it! 

Summary

This was just one scenario, but the key here is that if you need to manually control a fail condition or otherwise evaluate conditions, the actions/github-script Action is a simple way to control your run based on a condition.  It’s quick and effective for scenarios where your steps may not have natural success/fail exit codes that would otherwise fail your CI run.  What do you think?  Is there a better/easier way that I missed when you don’t have clear exit codes?

Hope this helps someone!


One of the biggest things that I’ve wanted (and have heard others want) when adopting GitHub Actions is some type of approval flow.  Until now (roughly the time of this writing) that wasn’t easily possible in Actions.  The way Azure Pipelines does it is so nice and simple to understand in my opinion, and a lot of the attempts by others stitching various Actions together made it tough to adopt.  Well, announced at GitHub Universe, required reviewers are now in beta for Actions customers!!!  Yes!!!  I spent some time setting up a flow with an ASP.NET 5 web app and Azure as my deployment target to check it out.  I wanted to share my write-up in hopes it might help others get started quickly as well.  First I’ll acknowledge that this is the simplest getting started you can have, and your workflows may be more complex.  If you’d like a primer and some other updates on Actions, be sure to check out Chris Patterson’s session from Universe: Continuous delivery with GitHub Actions.  With that, let’s get started!

Setting things up

First we’ll need a few things to get started.  These are things I’m not going to walk through here, but I will briefly explain what each is and why it is needed for my example.

  • An Azure account – I’m using this sample with Azure as my deployment because that’s where I do most of my work.  You can get a free Azure account as well and do exactly this without any obligation.
  • Set up an Azure App Service resource – I’m using App Service Linux and just created it using basically all the defaults.  This is just a sample, so those are fine for me.  I also created these using the portal to have everything set up in advance.
  • I added one Application Setting to my App Service called APPSERVICE_ENVIRONMENT so I could just extract a string noting which environment I was in and display it on the home page.
  • In your App Service create a Deployment Slot named “staging” and choose to clone the main service settings (to get the previous app setting I noted).  I then changed the app setting value for this deployment slot.  (If you prefer the CLI over the portal, see the sketch just after this list.)
  • Download the publish profiles for your production and staging instances individually and save them somewhere for now, as we’ll refer back to them in the next step.
  • I created an ASP.NET 5 Web App using the default template from Visual Studio 2019.  I made some code changes in the Index.cshtml to pull from app settings, but otherwise it is unchanged.
  • I used the new Git features in Visual Studio to quickly get my app to a repository in my GitHub account and enabled Actions on that repo.
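As an aside, if you’d rather script the slot creation than click through the portal, an Azure CLI sketch like this should do it (the app and resource group names are placeholders):

az webapp deployment slot create --name <app-name> --resource-group <resource-group> --slot staging --configuration-source <app-name>

The --configuration-source flag clones the main app’s settings into the new slot, the same as checking the clone option in the portal.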

That’s it!  With those basics set up I can get started with the next steps of building out the workflow.  I should note that the steps I’m outlining here are free for GitHub public repositories.  For private repositories you need to be a GitHub Enterprise Server customer.  Since my sample is public I’m ready to go!

Environments

The first concept is Environments.  These are basically separate, segmented definitions within your repo that you can associate secrets and protection rules with.  This is the key to the approval workflow, as one of the protection rules is required reviewers (aka approvers).  The first thing we’ll do is set up two environments: staging and production.  Go to your repository settings and you’ll see a new section called Environments in the navigation.

Screenshot of environment config

To create an environment, click the New Environment button and give it a name.  I created one called production and one called staging.  In each of these you can configure things independently, like secrets and reviewers.  Because I’m a team of one person my reviewer will be me, but you could set up others, like maybe a build engineer for staging deployment approval and a QA team for production deployment.  Either way, click the Required reviewers checkbox, add at least yourself, and save the protection rule.

NOTE: This area may expand to further protection rules, but for now it is reviewers or a wait delay.  GitHub indicates more may come in the future.

Now we’ll add some secrets.  With Environments, you can have independent secrets for each environment.  Maybe you want different deployment variables, etc. for each environment; this is where you could do that.  For us, this is specifically what we’ll use the different publish profiles for.  Remember those profiles you downloaded earlier?  Now you’ll need them.  In the staging environment create a new secret named AZURE_PUBLISH_PROFILE and paste in the contents of your staging publish profile.  Then go to your production environment settings and do the same, using the same secret name and the production publish profile you downloaded earlier.  This allows our workflow to use environment-specific secret values when they are called, but still use the same secret name…meaning we don’t need AZURE_PUBLISH_PROFILE_STAGING naming, as we’ll be marking the environment in the workflow and it will pick up secrets from that environment only (or from the repo if not found there – you effectively have a hierarchy of secrets).

Okay we’re done setting up the Environment in the repo…off to set up the workflow!

Setting up the workflow

To get started quickly I used my own template so I could `dotnet new workflow` in my repo root using the CLI.  This gives me a strawman to work with.  Let’s build out the basics; we’re going to have 3 jobs: build, deploy to staging, and deploy to production.  The full workflow is in my repo for this post, but I’ll be extracting snippets to focus on and show the relevant pieces here.

Build

For build I’m using my standard implementation of restore/build/publish/upload artifacts which looks like this (with some environment-specific keys):

jobs:
  build:
    name: Build
    if: github.event_name == 'push' && contains(toJson(github.event.commits), '***NO_CI***') == false && contains(toJson(github.event.commits), '[ci skip]') == false && contains(toJson(github.event.commits), '[skip ci]') == false
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET Core SDK ${{ env.DOTNET_CORE_VERSION }}
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ env.DOTNET_CORE_VERSION }}
    - name: Restore packages
      run: dotnet restore "${{ env.PROJECT_PATH }}"
    - name: Build app
      run: dotnet build "${{ env.PROJECT_PATH }}" --configuration ${{ env.CONFIGURATION }} --no-restore
    - name: Test app
      run: dotnet test "${{ env.PROJECT_PATH }}" --no-build
    - name: Publish app for deploy
      run: dotnet publish "${{ env.PROJECT_PATH }}" --configuration ${{ env.CONFIGURATION }} --no-build --output "${{ env.AZURE_WEBAPP_PACKAGE_PATH }}"
    - name: Publish Artifacts
      uses: actions/upload-artifact@v2
      with:
        name: webapp
        path: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}

Notice this job is ‘build’ and ends with uploading the artifacts to the run.  That’s it; the core functionality is to build/test the app and store the final artifacts.

Deploy to staging

The next job we want is to deploy those bits to the staging environment, which will be the staging slot in the Azure App Service we set up before.  Here’s the workflow job definition snippet:

  staging:
    needs: build
    name: Deploy to staging
    environment:
        name: staging
        url: ${{ steps.deploy_staging.outputs.webapp-url }}
    runs-on: ubuntu-latest
    steps:
    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v2
      with:
        name: webapp

    # Deploy to App Service Linux
    - name: Deploy to Azure WebApp
      uses: azure/webapps-deploy@v2
      id: deploy_staging
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
        slot-name: staging

In this job we download the previously published artifacts to be used as our app to deploy.  Observe a few other things here:

  • I’ve declared that this job ‘needs’ the ‘build’ job to start.  This ensures a sequential workflow.  If the build job fails, this doesn’t start.
  • I’ve declared this job an ‘environment’ and marked it using staging which maps to the Environment name we set up on the repo settings.
  • In the publish phase I specified the slot-name value mapping to the Azure App Service slot name we created on our resource in the portal.
  • Specified the AZURE_PUBLISH_PROFILE secret, which resolves from the environment (or falls back to the repo if not defined there)

You’ll also notice the ‘url’ setting on the environment.  This is a cool little delighter that you should use.  One of the outputs of the Azure web app deploy action is the URL to where it was deployed.  I can extract that from the step and put it in this variable.  The GitHub Actions summary will then show this final URL in the visual map of the workflow.  It is a small delighter, but you’ll see it’s useful a bit later.  Notice I don’t put any approver information in here.  By declaring this job in the ‘staging’ environment, it will follow the protection rules we previously set up.  So in fact, this job won’t run unless (1) the build completes successfully and (2) the protection rules for the environment are satisfied.

Deploy to production

Similar to staging, we have a final job to deploy to production.  Here’s the definition snippet:

  deploy:
    needs: staging
    environment:
      name: production
      url: ${{ steps.deploy_production.outputs.webapp-url }}
    name: Deploy to production
    runs-on: ubuntu-latest
    steps:
    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v2
      with:
        name: webapp

    # Deploy to App Service Linux
    - name: Deploy to Azure WebApp
      id: deploy_production
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}

This is almost identical to staging except we changed:

  • Needs ‘staging’ to complete before this runs
  • Changed the environment to production to follow those protection rules
  • Removed the slot-name for deployment (default is production)
  • Changed the URL output value to the value from this job

Notice that we use the same AZURE_PUBLISH_PROFILE secret name here.  Because we declare environments, we get the environment-specific secret in each job’s scope.  It’s helpful to have a common name that just maps to different environments, rather than many little ones – at least in my opinion.

That’s it, we now have our full workflow to build –> deploy to staging with approval –> deploy to production with approval.  Let’s see it in action!

Trigger the workflow

Once we have this workflow, we can commit/push the workflow file and it should trigger a run itself.  Otherwise you can make a different code change/commit/push to trigger it as well.  We get a few things here when the run happens.

First we get a nicer visualization of the summary of the job:

Screenshot of summary view

When the protection rules are hit, a few things happen.  Namely, the run stops and waits, and the reviewers are notified.  The notification happens through the standard GitHub notification means.  I have email notifications on, so I got an email like this:

Picture of email notification

I can then click through and approve the workflow step and add comments:

Screenshot of approval step

Once that step is approved, the job runs.  On the environment job it provides a nice little progress indicator of the steps:

Picture of progress indicator

Remember that URL setting we had?  Once the job finishes, you’ll see it surface in that nice summary view to quickly click through and test your staging environment:

Picture of the URL shown in summary view in step

Once we are satisfied with the staging environment we can then approve the next workflow and the same steps happen and we are deployed to production!

Screenshot of final approval flow

And we’re done!

Summary

The concept of approvals in Actions workflows has been a top request I’ve heard, and I’m glad it is finally here!  I’m in the process of adding it as an extra protection to all my public repo projects; whether it be a web app deployment or a NuGet package publish, it is a helpful protection to put in place in your Actions.  It’s rather simple to set up, and if you have a relatively simple workflow it is equally simple to modify it to incorporate approvals.  More complex workflows might require a bit more thought, but are still simple to augment.  I’ve posted my full sample in the repo timheuer/actions-approval-sample, where you can see the complete workflow file.  This was fun to walk through and I hope this write-up helps you get started as well!


Today I was working in one of our internal GitHub repositories that apparently used to be used for our tooling issue tracking.  I have no idea of the history, but a quick look at the 68 issues, with the latest dating back to 2017, told me that yeah, nobody is looking at these anymore.  After a quick email ack from my dev lead that I could bulk clear these out, I immediately went to the repo issues list and was about to do this:

Screenshot of mark all as closed

Then I realized that all that was going to do was close them without any reasoning at all.  I know that closing sends a notification to people on the issue, and that wasn’t the right thing to do.  I quickly looked around, did some googling, and didn’t find anything in the GitHub docs that would allow me to “bulk resolve and add a message” outside of adding a commit with a bunch of “close #XXX” statements.  That was unrealistic.  I threw it out on Twitter in hopes maybe someone had a tool already.  The other debate in my head was writing some code to iterate through them and close each with a message.  This felt heavy for my needs; I’d need to get tokens, blah blah.  I’m lazy.

Then I thought to myself, Self, I’m pretty sure you should be able to use the ‘labeled’ trigger in GitHub Actions to automate this!  Thinking this way made me realize that I could use a trigger to still bulk close them, but the action would be able to add a message to each one.  Again, quick thinking here led me toward writing more code than I wanted…but I was on the right track.  Some more searching with different terms (adding “actions”) and I discovered actions/stale to the rescue!  This is an action designed to run on a schedule, look at ‘stale’ items (with staleness defined by you), and label them and/or close them after certain intervals.  The design is something like “run every day, look for things that are X days old, label them stale, then warn that if action isn’t taken in Y days they will be closed” – perfect for my need, except I wanted to close NOW!  No problem.  The sample used a schedule trigger with a CRON format for the schedule.  Off to crontab.guru to help me figure out the thing I can never remember.  What’s worse, regex or cron?  Who knows?
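For reference, the scheduled flavor from the sample is just a cron trigger like this (the exact time is arbitrary):

on:
  schedule:
    - cron: '30 1 * * *'

That’s great for ongoing hygiene, but it wasn’t going to scratch the “close everything right now” itch.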

And then it dawned on me!  My favorite GitHub Actions tip is to add workflow_dispatch as one of the triggers to workflows.  This allows you to manually trigger a workflow from your repo:

Screenshot of manual workflow trigger
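In the workflow file, it’s just one more entry in the on: block:

on:
  workflow_dispatch: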

I use this ALL the time so I don’t have to fake a commit or something just to trigger a run on certain projects.  This was the perfect thing I needed.  The combination of workflow_dispatch and this stale action would enable me to complete this quickly.  I added the following workflow to our repo:

name: "Close stale issues"
on:
  workflow_dispatch:
    branches:
    - master
    
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/stale@v3
      with:
        repo-token: ${{ secrets.GITHUB_TOKEN }}
        days-before-stale: 30
        days-before-close: 0
        stale-issue-message: 'This issue is being closed as stale'
        close-issue-message: 'This repo has been made internal and no longer tracking product issues. Closing all open stale issues.'

I just had to set a few parameters for the stale/close messages (required), set days-before-stale so anything older than 30 days qualified, and set days-before-close to 0 so the close would happen NOW.  Then I triggered the workflow manually.  Boom!  The workflow ran and 2 minutes later all 68 issues were marked closed with a message that serves as the reason, so users won’t be too alarmed by some random bulk closure.

Screenshot of GitHub message
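As a bonus, if you prefer the terminal, newer versions of the GitHub CLI can kick off a workflow_dispatch workflow too (the name here matches the workflow definition above):

gh workflow run "Close stale issues"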

I’m glad I remembered that GitHub Actions aren’t just for CI/CD uses and can be used to quickly automate much more.  In fact I’m writing this blog post maybe to help others, but certainly to serve as a bookmark to myself when I forget about this again.

Hope this helps!


What the heck is a code analyzer?  Well, if you are a Visual Studio user you have probably seen the lightbulbs and wrenches from time to time.  To put it simply in my own terms, code analyzers keep an eye on your code and find errors, suggest different ways of doing things, help you know what you aren’t using, etc.  Usually coupled with a code fix, an analyzer alerts you to an opportunity and a code fix can be applied to remedy it.  Here’s an example:

Screenshot of a code file with a code fix

These are helpful in your coding workflow (or intended to be!) to make you more productive, teach you some things along the way, or enforce certain development approaches.  This is an area of Visual Studio and .NET that part of my team works on, and I wanted to learn more beyond what they are generally.  Admittedly, despite being on the team I hadn’t had the first-hand experience of creating a code analyzer before, so I thought, why not give it a try.  I fired up Visual Studio and got started without any help from the team (I’ll note the one step where I was totally stumped and needed a teammate’s help later).  I figured I’d write up my experience in case it helps anyone, or just serves as a bookmark for me later when I totally forget all this stuff and have to do it again.  I know I’ve made some mistakes and I don’t think it’s fully complete, but it is ‘good enough’, so I’ll carry you along on the journey here with me!

Defining the analyzer

First off, we have to decide what we want to do.  There are a lot of analyzers already in the platform and other places, so you may not need a custom one.  For me, a specific scenario came up at work where I thought, hmm, I wonder if I could build this into the dev workflow, and that’s what I’m going to do.  The scenario is that we want to make sure our products don’t contain terms that have been deemed inappropriate for various reasons.  These could be overtly recognized profanity, or terms with accessibility, diversity, or cultural considerations, etc.  Either way, we are starting from a place where someone has defined a list of these per policy.  So here are my requirements:

  • Provide an analyzer that starts from a known ‘database’ of terms in a structured format
  • Warn/error on code symbols and comments in the code
  • One analyzer code base that can provide different results for different severities
  • Provide a code fix that removes the word and fits within the other VS refactor/renaming mechanisms
  • Have a predictable build that produces the bits for anyone to easily consume

Pretty simple I thought, so let’s get started!

Getting started with Code Analyzer development

I did what any person would do and searched.  I ended up on a blog post from a teammate of mine who is the PM in this area, Mika, titled “How to write a Roslyn Analyzer” – sounds like exactly what I was looking for!  Yay team!  Mika’s post helped me understand the basics and get the tools squared away.  I knew that I had the VS Extensibility SDK workload installed already but wasn’t seeing the templates, so the post helped me realize that the Roslyn SDK is optional and I needed to go back and install that.  Once I did, I was able to start with File…New Project, search for analyzer, and choose the C# template:

New project dialog from Visual Studio

This gave me a great starting point with 5 projects:

  • The analyzer library project
  • The code fix library project
  • A NuGet package project
  • A unit test project
  • A Visual Studio extension project (VSIX)

Visual Studio opened up the two key code files I’d be working with: the analyzer and the code fix provider.  These are the two things I will focus on in this post.  First, I recommend going to each of the projects and updating any/all NuGet packages that have an offered update.

Analyzer library

Let’s look at the key aspects of the analyzer class we want to implement.  Here is the full template initially provided:

public class SimpleAnalyzerAnalyzer : DiagnosticAnalyzer
{
    public const string DiagnosticId = "SimpleAnalyzer";
    private static readonly LocalizableString Title = new LocalizableResourceString(nameof(Resources.AnalyzerTitle), Resources.ResourceManager, typeof(Resources));
    private static readonly LocalizableString MessageFormat = new LocalizableResourceString(nameof(Resources.AnalyzerMessageFormat), Resources.ResourceManager, typeof(Resources));
    private static readonly LocalizableString Description = new LocalizableResourceString(nameof(Resources.AnalyzerDescription), Resources.ResourceManager, typeof(Resources));
    private const string Category = "Naming";

    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(DiagnosticId, Title, MessageFormat, Category, DiagnosticSeverity.Warning, isEnabledByDefault: true, description: Description);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics { get { return ImmutableArray.Create(Rule); } }

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        context.RegisterSymbolAction(AnalyzeSymbol, SymbolKind.NamedType);
    }

    private static void AnalyzeSymbol(SymbolAnalysisContext context)
    {
        var namedTypeSymbol = (INamedTypeSymbol)context.Symbol;

        // Find just those named type symbols with names containing lowercase letters.
        if (namedTypeSymbol.Name.ToCharArray().Any(char.IsLower))
        {
            // For all such symbols, produce a diagnostic.
            var diagnostic = Diagnostic.Create(Rule, namedTypeSymbol.Locations[0], namedTypeSymbol.Name);

            context.ReportDiagnostic(diagnostic);
        }
    }
}

A few key things to note here.  The DiagnosticId is what is reported in the errors and output.  You’ve probably seen a few of these that look like “CSC001” or similar.  This is basically your identifier.  The other key area is the Rule.  Each analyzer basically creates a DiagnosticDescriptor that it will produce and report to the diagnostic engine.  As you can see here and in the lines below it, you define it with a certain set of values and then indicate to the analyzer what SupportedDiagnostics this analyzer supports.  From this combination you can see that you can have multiple rules, each with some unique characteristics.

Custom rules

Remember we said we wanted different severities, and that is one of the differences in the descriptor, so we’ll need to change that.  I wanted 3 rule types that would have different diagnostic IDs and severities.  I’ve modified my code as follows (and removed the DiagnosticId constant):

private const string HtmlHelpUri = "https://github.com/timheuer/SimpleAnalyzer";

private static readonly DiagnosticDescriptor WarningRule = new DiagnosticDescriptor("TERM001", Title, MessageFormat, Category, DiagnosticSeverity.Warning, isEnabledByDefault: true, description: Description, helpLinkUri: HtmlHelpUri);
private static readonly DiagnosticDescriptor ErrorRule = new DiagnosticDescriptor("TERM002", Title, MessageFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true, description: Description, helpLinkUri: HtmlHelpUri);
private static readonly DiagnosticDescriptor InfoRule = new DiagnosticDescriptor("TERM003", Title, MessageFormat, Category, DiagnosticSeverity.Info, isEnabledByDefault: true, description: Description, helpLinkUri: HtmlHelpUri);

public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics { get { return ImmutableArray.Create(WarningRule, ErrorRule, InfoRule); } }

I’ve created 3 specific DiagnosticDescriptors and ‘registered’ them as supported for my analyzer.  I also added the help link, which will show up in the UI in Visual Studio; if you don’t supply one you’ll get a URL that won’t be terribly helpful to your consumers.  Notice each rule has a unique diagnostic ID and severity.  Now that we’ve got these sorted, it’s time to move on to some of our logic.

Initialize and register

We have to decide when we want the analyzer to run and what it is analyzing.  I learned that once you have the Roslyn SDK installed, you have available to you an awesome tool called the Syntax Visualizer (View…Other Windows…Syntax Visualizer).  It lets you see the view that Roslyn sees, or what some of the old schoolers might consider the CodeDOM.

Syntax Visualizer screenshot

With it open, when you click anywhere in your code the tree updates and tells you what you are looking at.  In this case my cursor was on Initialize() and I can see this is considered a MethodDeclarationSyntax type and kind.  I can navigate the tree on the left, and it helps me discover what other code symbols I may be looking for to decide what my analyzer needs to care about.  This was very helpful for understanding the code tree that Roslyn understands.  From this I was able to determine what I needed to care about.  Now I needed to start putting things together.

The first thing I wanted to do is register the compilation start action (remember, I have intentions of loading the data from somewhere, so I want this available sooner).  Within that I then have the analyzer context and can ‘register’ the actions I want to participate in.  For this sample’s purposes I’m going to use RegisterSymbolAction, because I just want specific symbols (as opposed to comments or full method body declarations).  I have to specify a callback to use when a symbol is analyzed and which symbols I care about.  In simplest form, here is what the contents of my Initialize() method now look like:

public override void Initialize(AnalysisContext context)
{
    context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
    context.EnableConcurrentExecution();

    context.RegisterCompilationStartAction((ctx) =>
    {
        // TODO: load the terms dictionary

        ctx.RegisterSymbolAction((symbolContext) =>
        {
            // do the work
        }, SymbolKind.NamedType, SymbolKind.Method, SymbolKind.Property, SymbolKind.Field,
                SymbolKind.Event, SymbolKind.Namespace, SymbolKind.Parameter);
    });
}

You can see that I’ve called RegisterCompilationStartAction from the full AnalysisContext and then called RegisterSymbolAction from within that, providing a set of specific symbols I care about.

NOTE: Not all symbols are available to analyzers.  I found that SymbolKind.Local is one that is not…and there was an analyzer warning that told me so!

Note that since I’m using a lambda approach here, I removed the template’s AnalyzeSymbol function from the class.  Okay, let’s move on to the next step and load our dictionary.

Seeding the analyzer with data

I mentioned that we’ll have a dictionary of terms already.  This is a JSON file with a specific format that looks like this:

[
  {
    "TermID": "1",
    "Severity": "1",
    "Term": "verybad",
    "TermClass": "Profanity",
    "Context": "When used pejoratively",
    "ActionRecommendation": "Remove",
    "Why": "No profanity is tolerated in code"
  }
]

So the first thing I want to do is create a class that makes my life easier to work with this so I created Term.cs in my analyzer project.  The class basically is there to deserialize the file into strong types and looks like this:

using System.Text.Json.Serialization;

namespace SimpleAnalyzer
{
    class Term
    {
        [JsonPropertyName("TermID")]
        public string Id { get; set; }

        [JsonPropertyName("Term")]
        public string Name { get; set; }

        [JsonPropertyName("Severity")]
        public string Severity { get; set; }

        [JsonPropertyName("TermClass")]
        public string Class { get; set; }

        [JsonPropertyName("Context")]
        public string Context { get; set; }

        [JsonPropertyName("ActionRecommendation")]
        public string Recommendation { get; set; }

        [JsonPropertyName("Why")]
        public string Why { get; set; }
    }
}

You’ll notice that I’m using JSON and System.Text.Json, so I’ve had to add that package to my analyzer project.  More on the mechanics of that much later.  I wanted to use this to make my life easier when working with the terms database.
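Adding the package is an ordinary reference in the analyzer project’s csproj; the version below is simply what was current for me at the time:

<ItemGroup>
  <PackageReference Include="System.Text.Json" Version="5.0.2" />
</ItemGroup>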

NOTE: Using 3rd party libraries (and from the analyzer’s point of view, System.Text.Json is one) requires more work, and there could be issues depending on what you are doing.  Remember that analyzers run in the context of Visual Studio (or other tools) and there may be conflicts with other libraries.  It’s nuanced, so tread lightly.

Now that we have our class, let’s go back and load the dictionary file into our analyzer. 

NOTE: Typically Analyzers and Source Generators use a concept called AdditionalFiles to load information.  This relies on the consuming project having the file, though, which is different from my scenario.  Working at the lower level in the stack with Roslyn, the compilers need to manage a bit more of the lifetime of files, and so there is this different method of working with them.  You can read more about AdditionalFiles on the Roslyn repo: Using Additional Files (dotnet/roslyn).  This is generally the recommended way to work with files.
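For comparison, here’s roughly what the AdditionalFiles route would look like – a sketch, not what this sample uses.  The consuming project declares the file in its csproj:

<ItemGroup>
  <AdditionalFiles Include="terms.json" />
</ItemGroup>

…and the analyzer reads it through the compiler-managed collection instead of touching the disk itself:

// inside RegisterCompilationStartAction; requires System.Linq and System.IO usings
var termsFile = ctx.Options.AdditionalFiles
    .FirstOrDefault(f => Path.GetFileName(f.Path).Equals("terms.json", StringComparison.OrdinalIgnoreCase));
if (termsFile != null)
{
    var json = termsFile.GetText(ctx.CancellationToken)?.ToString() ?? "[]";
    terms = JsonSerializer.Deserialize<List<Term>>(json);
}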

For us, we are going to ship the dictionary of terms *with* our analyzer, so we need to do a few things.  First, we need to make sure the JSON file is in the analyzer output and also in the package output.  This requires marking the terms.json file in the Analyzer project as Content with copy-to-output enabled.  Second, in the Package project we need to add the following to the csproj file in the _AddAnalyzersToOutput target:

<TfmSpecificPackageFile Include="$(OutputPath)\terms.json" PackagePath="analyzers/dotnet/cs" />

And then in the VSIX project we need to do something similar, so the file is included in the extension:

<Content Include="$(OutputPath)\terms.json">
    <IncludeInVSIX>true</IncludeInVSIX>
</Content>

With both of these in place now we can get access to our term file in our Initialize method and we’ll add a helper function to ensure we get the right location.  The resulting modified Initialize portion looks like this:

context.RegisterCompilationStartAction((ctx) =>
{
    if (terms is null)
    {
        var currentDirectory = GetFolderTypeWasLoadedFrom<SimpleAnalyzerAnalyzer>();
        terms = JsonSerializer.Deserialize<List<Term>>(File.ReadAllBytes(Path.Combine(currentDirectory, "terms.json")));
    }
    // other code removed for brevity in blog post
});

The helper function here is a simple one liner:

private static string GetFolderTypeWasLoadedFrom<T>() => new FileInfo(new Uri(typeof(T).Assembly.CodeBase).LocalPath).Directory.FullName;

This now gives us a List<Term> to work with.  These lines of code required us to add some using statements to the class, which luckily an analyzer/code fix helped us do!  You can see how helpful analyzers/code fixes are in everyday usage; we take them for granted!  Now we have our data and our registered action, so let’s do some analyzing.

Analyze the code

We basically want each symbol to be checked against our dictionary, and if there is a match, report a diagnostic to the user.  Given that we are using RegisterSymbolAction, the context provides us with the Symbol, whose name we can examine.  We look at that against our dictionary of terms, see if there is a match, and then create a diagnostic in line with the severity of that match.  Here’s how we start:

ctx.RegisterSymbolAction((symbolContext) =>
{
    var symbol = symbolContext.Symbol;

    foreach (var term in terms)
    {
        if (ContainsUnsafeWords(symbol.Name, term.Name))
        {
            var diag = Diagnostic.Create(GetRule(term, symbol.Name), symbol.Locations[0], term.Name, symbol.Name, term.Severity, term.Class);
            symbolContext.ReportDiagnostic(diag);
            break;
        }
    }
}, SymbolKind.NamedType, SymbolKind.Method, SymbolKind.Property, SymbolKind.Field,
    SymbolKind.Event, SymbolKind.Namespace, SymbolKind.Parameter);

In this we are looking in our terms dictionary and doing a comparison.  We created a simple function for the comparison that looks like this:

private bool ContainsUnsafeWords(string symbol, string term)
{
    return term.Length < 4 ?
        symbol.Equals(term, StringComparison.InvariantCultureIgnoreCase) :
        symbol.IndexOf(term, StringComparison.InvariantCultureIgnoreCase) >= 0;
}

The length check means very short terms (under 4 characters) must match the symbol name exactly, while longer terms can match anywhere inside it; this keeps short terms from flagging innocent identifiers that merely contain them.  Then we have a function called GetRule that ensures we have the right DiagnosticDescriptor for this violation (based on severity).  That function looks like this:

private DiagnosticDescriptor GetRule(Term term, string identifier)
{
    var warningLevel = DiagnosticSeverity.Info;
    var diagId = "TERM001";
    var description = $"Recommendation: {term.Recommendation}{System.Environment.NewLine}Context: {term.Context}{System.Environment.NewLine}Reason: {term.Why}{System.Environment.NewLine}Term ID: {term.Id}";
    switch (term.Severity)
    {
        case "1":
        case "2":
            warningLevel = DiagnosticSeverity.Error;
            diagId = "TERM002";
            break;
        case "3":
            warningLevel = DiagnosticSeverity.Warning;
            break;
        default:
            warningLevel = DiagnosticSeverity.Info;
            diagId = "TERM003";
            break;
    }

    return new DiagnosticDescriptor(diagId, Title, MessageFormat, term.Class, warningLevel, isEnabledByDefault: true, description: description, helpLinkUri: HtmlHelpUri, term.Name);
}

In this GetRule function you’ll notice a few things.  First, we are doing this because we want to set the diagnostic ID and the severity differently based on the term dictionary data.  Remember, earlier we created different rule definitions (DiagnosticDescriptors), and we need to ensure what we return here matches one of them.  This allows us to have one analyzer that is a bit more dynamic.  We are also passing a final argument (term.Name) in the ctor for the DiagnosticDescriptor.  This goes in the CustomTags parameter of the ctor.  We’ll use it later in the code fix, so it serves as a means to pass some context from the analyzer to the code fix (the term to replace).  The other part you’ll notice is that in the Diagnostic.Create call earlier we pass in some additional optional parameters.  These get passed to the MessageFormat string that you’ve defined in your analyzer.  We didn’t mention it in detail earlier, but it comes into play now.  The template gave us a Resources.resx file with three values:

Resource file contents

These resources are displayed in build output and the Visual Studio user interface.  MessageFormat is the one that lets you provide content into the formatter, and it’s what those extra arguments we passed to Diagnostic.Create flow into.  The result is a more user-friendly message with the right context.
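To illustrate (this is hypothetical; my actual resource string differs), a MessageFormat along these lines would consume the four arguments passed to Diagnostic.Create – term name, symbol name, severity, and class:

Term '{0}' found in symbol '{1}' (severity {2}, class {3})

Great, I think we have all our analyzer stuff working; let’s move on to the code fix!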

Code fix library

With just the analyzer – and we are totally okay having only that – we get warnings/squiggles presented to the user (or logged in output).  We can optionally provide a code fix to remedy the situation.  In our simple sample we’re going to do that and simply suggest removing the word.  Code fixes also provide the user the means to suppress certain rules.  This is why we wanted different diagnostic IDs earlier: you may want to suppress the SEV3 terms but not the others, and without that distinction in DiagnosticDescriptors you cannot do that.  Moving over to the SimpleAnalyzer.CodeFixes project, we’ll open the code fix provider and make some changes.  The default template provides a code fix that makes the symbol all uppercase…we don’t want that, but it provides a good framework for us to learn from and make simple changes.  The first thing we need to do is tell the code fix provider which diagnostic IDs are fixable by this provider.  We make a change in the override provided by the template to provide our diagnostic IDs:

public sealed override ImmutableArray<string> FixableDiagnosticIds
{
    get { return ImmutableArray.Create("TERM001","TERM002","TERM003"); }
}

Now look in the template for MakeUppercaseAsync and let’s make a few changes.  First, rename it to RemoveTermAsync.  Then change its signature to include an IEnumerable<string> parameter, so we can pass in those CustomTags we provided earlier from the analyzer, and pass the custom tags in at the call site.  Combined, those look like these changes now in the template:

public sealed override async Task RegisterCodeFixesAsync(CodeFixContext context)
{
    var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken).ConfigureAwait(false);

    // TODO: Replace the following code with your own analysis, generating a CodeAction for each fix to suggest
    var diagnostic = context.Diagnostics.First();
    var diagnosticSpan = diagnostic.Location.SourceSpan;

    // Find the type declaration identified by the diagnostic.
    var declaration = root.FindToken(diagnosticSpan.Start).Parent.AncestorsAndSelf().OfType<TypeDeclarationSyntax>().First();

    // Register a code action that will invoke the fix.
    context.RegisterCodeFix(
        CodeAction.Create(
            title: CodeFixResources.CodeFixTitle,
            createChangedSolution: c => RemoveTermAsync(context.Document, declaration, diagnostic.Descriptor.CustomTags, c),
            equivalenceKey: nameof(CodeFixResources.CodeFixTitle)),
        diagnostic);
}

private async Task<Solution> RemoveTermAsync(Document document, TypeDeclarationSyntax typeDecl, IEnumerable<string> tags, CancellationToken cancellationToken)
{
    // Compute the new name with the offending term removed.
    var identifierToken = typeDecl.Identifier;
    var newName = identifierToken.Text.Replace(tags.First(), string.Empty);

    // Get the symbol representing the type to be renamed.
    var semanticModel = await document.GetSemanticModelAsync(cancellationToken);
    var typeSymbol = semanticModel.GetDeclaredSymbol(typeDecl, cancellationToken);

    // Produce a new solution that has all references to that type renamed, including the declaration.
    var originalSolution = document.Project.Solution;
    var optionSet = originalSolution.Workspace.Options;
    var newSolution = await Renamer.RenameSymbolAsync(document.Project.Solution, typeSymbol, newName, optionSet, cancellationToken).ConfigureAwait(false);

    // Return the new solution with the type renamed.
    return newSolution;
}

With all these in place we now should be ready to try some things out.  Let’s debug.

Debugging

Before we debug, remember we are using some extra libraries?  In order to make this work, your analyzer needs to ship those alongside it.  This isn’t easy to figure out, and you need to specify it in your csproj files to add additional outputs to your Package and VSIX projects.  I’m not going to list them here, but you can look at this sample to see what I did.  Without this, the analyzer won’t start.  Please note, if you are not using any 3rd party libraries then this isn’t required.  In my case I added System.Text.Json, so this is required.

I found the easiest way to debug was to set the VSIX project as the startup project and just F5 it.  This launches another instance of Visual Studio and installs your analyzer as an extension.  When running analyzers as extensions, they do NOT affect the build.  So even though you may have analyzer errors, they don’t prevent the build from happening.  Installing analyzers as NuGet packages into the project would affect the build and generate build errors during CI, for example.  For now we’ll use the VSIX project to debug.  When it launches, create a new project or something to test with…I just use a console application.  Remember when I mentioned earlier that the consumer project has to provide the terms dictionary?  It’s in this project that you’ll want to drop a terms.json file in the format mentioned earlier.  This file also must be given the build action of “C# analyzer additional file” in the file properties.  Then let’s start writing code that includes method names that violate our rules.  When doing that, we should now see the analyzer kick in and show the issues:

Screenshot of analyzer errors and warnings

Nice!  It worked.  One of the nuances of the template and the code fix is that I need to register a code action for each of the declaration types I previously wanted the analyzer to work against (I think…still learning).  Without that, the proper fix will not actually show/work if it isn’t the right type.  The template defaults are for NamedType, so my sample using a method name won’t offer the fix, because it’s not the right declaration (again, I think…comment if you know).  I’ll have to enhance this more later, but the general workflow is working, and if a type name is bad you can see the full end-to-end working:

Building it all in CI

Now let’s make sure we can have reproducible builds of our NuGet and VSIX packages.  I’m using the quick template I created for scaffolding a simple GitHub Actions workflow from the CLI and modifying it a bit.  Because analyzers use VSIX, we need a Windows build agent that has Visual Studio on it, and thankfully GitHub Actions provides one.  Here’s my resulting final CI build definition:

name: "Build"

on:
  push:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
  workflow_dispatch:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
      
jobs:
  build:
    if: github.event_name == 'push' && contains(toJson(github.event.commits), '***NO_CI***') == false && contains(toJson(github.event.commits), '[ci skip]') == false && contains(toJson(github.event.commits), '[skip ci]') == false
    name: Build 
    runs-on: windows-latest
    env:
      DOTNET_CLI_TELEMETRY_OPTOUT: 1
      DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
      DOTNET_NOLOGO: true
      DOTNET_GENERATE_ASPNET_CERTIFICATE: false
      DOTNET_ADD_GLOBAL_TOOLS_TO_PATH: false
      DOTNET_MULTILEVEL_LOOKUP: 0
      PACKAGE_PROJECT: src\SimpleAnalyzer\SimpleAnalyzer.Package\
      VSIX_PROJECT: src\SimpleAnalyzer\SimpleAnalyzer.Vsix\

    steps:
    - uses: actions/checkout@v2
      
    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 5.0.x

    - name: Setup MSBuild
      uses: microsoft/setup-msbuild@v1

    - name: Setup NuGet
      uses: NuGet/setup-nuget@v1

    - name: Add GPR Source
      run: nuget sources Add -Name "GPR" -Source ${{ secrets.GPR_URI }} -UserName ${{ secrets.GPR_USERNAME }} -Password ${{ secrets.GITHUB_TOKEN }}

    - name: Build NuGet Package
      run: |
        msbuild /restore ${{ env.PACKAGE_PROJECT }} /p:Configuration=Release /p:PackageOutputPath=${{ github.workspace }}\artifacts

    - name: Build VSIX Package
      run: |
        msbuild /restore ${{ env.VSIX_PROJECT }} /p:Configuration=Release /p:OutDir=${{ github.workspace }}\artifacts

    - name: Push to GitHub Packages
      run: nuget push ${{ github.workspace }}\artifacts\*.nupkg -Source "GPR"

    # upload artifacts
    - name: Upload artifacts
      uses: actions/upload-artifact@v2
      with:
        name: release-packages
        path: |
            ${{ github.workspace }}\artifacts\**\*.vsix
            ${{ github.workspace }}\artifacts\**\*.nupkg

I’ve done some extra work to publish this in the GitHub Package Registry, but that’s just an optional step and can be removed (ideally you’d publish to the NuGet repository, and you can learn about that by reading my blog post on that topic).  Now I’ve got my CI set up, and every commit builds the packages that can be consumed!  You might be asking what the difference is between the NuGet and VSIX packages.  The simple explanation is that you’d want people using your analyzer via the NuGet package, because that is per-project and travels with the project, so everyone using the project gets the benefit of the analyzer.  The VSIX is per-machine and doesn’t affect builds, etc.  That may be what you want, but it wouldn’t be consistent across everyone consuming the project who actually wants the analyzer.
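Once the NuGet package is published, consuming it is just a normal package reference (the package ID here assumes the sample’s name):

dotnet add package SimpleAnalyzer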

Summary and resources

For me this was a fun exercise and distraction.  With a very much needed special assist from Jonathan Marolf on the team, I learned a bunch and got help on the 3rd party library thing mentioned earlier.  I’ve got a few TODO items to accomplish, as I didn’t fully realize my goals.  The code fix isn’t working exactly how I want and would have thought, so I’m still working my way through it.  This whole sample, by the way, is on GitHub at timheuer/SimpleAnalyzer for you to use and to point out my numerous mistakes in everything analyzer (and probably C# even)…please do!  In fact, after starting this and conversing with a few people on Twitter, a group in Australia created something very similar that is already published on NuGet.  Check out merill/InclusivenessAnalyzer, which aims to improve inclusivity in code.  The source, like mine, is up on GitHub, and they have it published in the NuGet Gallery so you can add it to your project today!

There are a few resources you should look at if you want this journey yourself:


I’ve become a huge fan of DevOps and am spending more time ensuring my own projects have good CI/CD automation using GitHub Actions.  The team I work on in Visual Studio for .NET develops the “right click publish” feature that has become a tag line for DevOps folks (okay, maybe not in the most flattering way!).  We know that a LOT of developers use the Publish workflow in Visual Studio for their .NET applications for various reasons.  In reaching out to a sampling of them to discuss CI/CD, we heard a lot of folks say they didn’t have the time to figure it out, it was too confusing, there was no simple way to get started, etc.  In this past release we aimed to improve that experience for those users of Publish, to help them very quickly get started with CI/CD for their apps deploying to Azure.  Our new feature enables you to generate a GitHub Actions workflow file using the Publish wizard to walk you through it.  In the end you have a good getting-started workflow.  I did a quick video on it to demonstrate how easy it is:

It really is that simple! 

Making it simple from the start

I have to admit though, as much as I have been doing this, the YAML still is not sticking in my memory enough to type from scratch (dang you IntelliSense for making me lazy!).  There are also times when I’m not using Azure as my deployment target but still want CI/CD to something like NuGet for my packages.  I still want that flexibility to get started quickly and ensure that as my project grows I’m not waiting until the last minute to add more to my workflow.  I just saw Damian comment on this recently as well:

I totally agree!  Recently I found myself repeatedly going to older repos to copy/paste from existing workflows I had.  Sure, I can do that because I’m good at copying/pasting, but it was just frustrating to switch context for even that little bit.  My searching may suck, but I also didn’t see a quick solution to this either (please point out in my comments below if I missed a better solution!!!).  So I created a quick `dotnet new` way of doing this for my projects from the CLI.

Screenshot of terminal window with commands

I created a simple item template that can be called using the `dotnet new` command from the CLI.  Calling this in simplest form:

dotnet new workflow

will create a .github\workflows\foo.yaml file relative to where you call it (where ‘foo’ is the name of your folder) with the default content of a workflow for .NET Core that restores/builds/tests your project (using a default SDK version and ‘main’ as the branch).  You can customize the output a bit more with a command like:

dotnet new workflow --sdk-version 3.1.403 -n build -b your_branch_name

This enables you to specify a specific SDK version, a specific name for the .yaml file, and the branch to monitor to trigger the workflow.  An example of the output is here:

name: "Build"

on:
  push:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
  workflow_dispatch:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
      
jobs:
  build:
    if: github.event_name == 'push' && contains(toJson(github.event.commits), '***NO_CI***') == false && contains(toJson(github.event.commits), '[ci skip]') == false && contains(toJson(github.event.commits), '[skip ci]') == false
    name: Build 
    runs-on: ubuntu-latest
    env:
      DOTNET_CLI_TELEMETRY_OPTOUT: 1
      DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
      DOTNET_NOLOGO: true
      DOTNET_GENERATE_ASPNET_CERTIFICATE: false
      DOTNET_ADD_GLOBAL_TOOLS_TO_PATH: false
      DOTNET_MULTILEVEL_LOOKUP: 0

    steps:
    - uses: actions/checkout@v2
      
    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.x

    - name: Restore
      run: dotnet restore

    - name: Build
      run: dotnet build --configuration Release --no-restore

    - name: Test
      run: dotnet test

You can see the areas that would be replaced by the input parameters on lines 6, 13, and 38 in this example (these are the defaults).  This isn’t meant to be your final workflow; as Damian suggests, it’s a good practice to start immediately from the “File…New Project” moment and build up the workflow as you go along, rather than wait until the end to cobble everything together.  For me, now I just need to add my specific NuGet deployment steps when I’m ready to do so.
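When that time comes, the added steps might look something like this sketch (NUGET_API_KEY is a secret you’d create yourself):

    - name: Pack
      run: dotnet pack --configuration Release --output ./artifacts

    - name: Push to NuGet
      run: dotnet nuget push ./artifacts/*.nupkg --api-key ${{ secrets.NUGET_API_KEY }} --source https://api.nuget.org/v3/index.json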

Installing and feature wishes

If you find this helpful feel free to install this template from NuGet using:

dotnet new --install TimHeuer.GitHubActions.Templates

You can find the package at TimHeuer.GitHubActions.Templates, which also has the link to the repo if you see awesome changes or horrible bugs.  This is a simple item template, so there are some limitations – things I wish it would do automatically.  Honestly, I started out making a global tool that would solve some of these, but it felt a bit overkill.  For example:

  • It adds the template relative to where you are executing.  Actions need to be in the root of your repo, so you need to execute this in the root of your repo locally.  Otherwise it is just going to add some folders in random places that won’t work.
  • It won’t auto-detect the SDK you are using.  Not horrible, but it would be nice to say “oh, you are a .NET 5 app, then this is the SDK you need”.

Both of these could be solved with more access to the project system via a global tool, but again, they are minor in my eyes.  Maybe I’ll get around to solving them, but selfishly I’m good for now!

I just wanted to share this little tool that has become helpful for me, hope it helps you a bit!