
Deploying .NET Core 3 apps as self-contained

Yay! .NET Core 3.0 is now available!  You are now migrating your apps and want to get them to your favorite cloud hosting solution.  But your host may not have the new runtime quite ready yet, and you are still eager to deploy to production.  No worries, we have a solution for you.

Our documentation explains the different deployment models we have for .NET Core apps; simplified, they are:

  • Framework-dependent (FDD): you are expecting the required framework to be where you are deploying…you are just deploying your code.
  • Self-contained (SCD) : you are packaging the required libraries and runtimes needed for your code to run, not expecting any shared runtime on your endpoint.
  • Framework-dependent executable (FDE): similar to FDD but packaged as an executable; it still expects a shared runtime to exist where you are deploying.

Ideally your favorite cloud host provider will have your desired runtime available to you in their PaaS offerings.  But we’ve already established you are eager and want to get your new app there sooner.  No worries, then SCD is for you.  So how do you do that?  Here’s some helpful hints to get you going.

Producing the bits for an SCD deployment is part of the ‘publish’ pipeline that gets the fully functional bits in all the right formats/places for you.  Here’s how you get it going in a few different environments.  First, though, all of these require an understanding of runtime identifier values, or RIDs as we affectionately call them.  RIDs identify the target platform where the app will run.  You can see a list of all possible values in the RID catalog.  Knowing the right value for your target is the key first step.  For this post I’m deploying to Azure App Service for Linux and using linux-x64 as my RID in all the samples below.  This would vary based on what/where you are deploying.  I am also assuming in my samples that I’m executing these commands from the directory where my csproj file exists.  You can, of course, specify the path to your project files explicitly.

dotnet CLI

From the dotnet CLI you would first want to build and then publish using the RID.  Why build?  Well, it’s valuable to ensure you’re building against the RID that you will publish with.  Catch any build errors in advance, ya know?  So from the CLI:

dotnet build -r linux-x64
dotnet publish --self-contained -r linux-x64

The key here is the ‘-r’ and ‘--self-contained’ arguments.  I’m using ‘-r’ but you can also use ‘--runtime’ for the long form.  What this produces (unless you specified a different output argument) is a folder named for your RID with a publish folder within it.  So for me, in my project directory it would be \bin\release\netcoreapp3.0\linux-x64\publish, and everything in that \publish folder is what is needed for SCD.
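If you’d rather not remember the switches, you can also bake these settings into your csproj so a plain ‘dotnet publish’ picks them up.  A minimal sketch (assuming the standard .NET Core 3.0 SDK properties):

<!-- sketch: SCD settings baked into the project file -->
<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
  <SelfContained>true</SelfContained>
</PropertyGroup>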

Using the CLI is the most basic thing you can do.  From there it’s on you to push the bits where they need to be, specify startup commands, etc.  Better yet, you can use some tools that build on top of this CLI goodness.

Azure DevOps 

Let’s first talk about DevOps.  You should be using this.  No, let’s be real clear – USE CI/CD!!! “But I’m just a solo developer!” So what, you should still use it.  I’m a believer now: it is so simple to set up, you then just worry about committing your code, and your DevOps flow/pipelines take care of build/test/publish/release for you…it’s awesome.  With that out of the way, let’s show you how to do it in Azure DevOps.

Honestly, if you haven’t gotten the hang of CI/CD you really should spend the time in one day and get your project configured.  Whether it is Azure DevOps, GitHub Actions, AppVeyor, Jenkins, whatever, you will be better off.  Azure DevOps is basically free and you’ll be happier once you’ve got it done.  You owe it to yourself.  But don’t worry, I explain the other ways below too.

In your pipeline you would add the .NET Core task and configure it for publish (assuming you also have the build/test).  This task basically calls the CLI so you put the same arguments as we just went through.  Here is my task in part one:

Screenshot of Azure DevOps pipeline showing publish arguments

Observe the ‘Zip Published Projects’ – I enabled this because this is what will be used for Azure App Service deploy next.

SPECIAL NOTE: When using the ‘Use .NET Core’ task in Azure DevOps, I recommend using the major.minor.x pattern to specify the version, so in this case 3.0.x.  This will ensure you get the latest patch updates for your specified major.minor version.  For self-contained this is smart since you are bringing the platform runtime with you and not getting auto-updates from the platform.
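In YAML that task looks something like this (a sketch assuming the UseDotNet task; double-check the exact inputs against the task reference):

# sketch: pin the SDK to the latest 3.0 patch before the build/publish steps
- task: UseDotNet@2
  displayName: 'Use .NET Core sdk 3.0.x'
  inputs:
    packageType: sdk
    version: 3.0.x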

Then you want to deploy it, and for me to Azure App Service, I add the Azure App Service Deploy task and configure it using my published bits from the previous step.  Notice I’m still specifying the Runtime Stack value (even though at the time of this writing I need to use SCD) and specifying the startup command as the folder path to my app.

Screenshot of Azure DevOps pipeline showing deployment

And that’s it…now when I commit a change, these pipelines run and build/publish/deploy my app to my cloud provider without me having to worry if they have the runtime available yet or not.  You should be able to do this with Azure, AWS, wherever, and still use Azure DevOps to manage your workflow (or GitHub Actions).

Oh you’re a YAML person?  Sorry about that…here’s basically what it looks like using Azure DevOps tasks:

steps:
# task for publish CLI command
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory) --self-contained -r linux-x64'
    zipAfterPublish: True

#task for deploy to Azure App Service
- task: AzureRmWebAppDeployment@4
  displayName: 'Azure App Service Deploy: tacticview'
  inputs:
    azureSubscription: 'Azure-Microsoft'
    appType: webAppLinux
    WebAppName: tacticview
    deployToSlotOrASE: true
    ResourceGroupName: 'timheuer-linuxappsvc'
    SlotName: staging
    RuntimeStack: 'DOTNETCORE|3.0'
    StartupCommand: /home/site/wwwroot/TacticView

So there you have it for Azure DevOps!

GitHub Actions

For GitHub Actions, it is very similar to Azure DevOps using the CLI.  Here’s the relevant snippet for those steps (minus the deploy part):

name: ASP.NET Core CI

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest
    
    steps:
    - uses: actions/checkout@v1
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 2.2.108
    - name: Build with dotnet
      run: dotnet build --configuration Release
    - name: Publish with dotnet
      run: dotnet publish --configuration Release -r linux-x64 --self-contained

You would then want to add the deploy pieces in GitHub Actions and you can follow along with that from Abel’s blog post.
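If you want a rough idea of what that deploy step could look like, here is a sketch assuming the azure/webapps-deploy action with a publish profile stored as a secret (the app name, secret name, and version here are illustrative; follow Abel’s post for the real thing):

    - name: Deploy to Azure Web App
      uses: azure/webapps-deploy@v2
      with:
        app-name: tacticview                                   # assumption: your App Service name
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}  # assumption: secret you created
        package: bin/Release/netcoreapp3.0/linux-x64/publish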

Visual Studio

I STRONGLY recommend you adopt a full CI/CD workflow for all your projects.  This has gotten so simple for most cases that it would be a shame not to do that.  But we do also provide means in Visual Studio tools to publish as well.  In your ASP.NET Core application if you right-click and choose Publish, you’ll see the options.  You first create a publish profile which walks you through where you want to publish.  I won’t show that part because it can be different depending on your selection.  But once complete you’ll be presented with this view:

Screenshot of Visual Studio showing deployment mode option

In that view you click on the area highlighted that is labeled Deployment Mode and will be presented with the dialog to change:

Screenshot of Visual Studio showing deployment mode and target runtime options

Once those two things are changed, when you click Publish it will use the self-contained version of your app and push to whichever endpoint you chose.  And that’s it for Visual Studio.
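Under the hood, those two choices end up in the publish profile (.pubxml) that Visual Studio created under Properties\PublishProfiles.  A trimmed sketch of the relevant properties (your generated profile will contain more than this):

<Project>
  <PropertyGroup>
    <SelfContained>true</SelfContained>
    <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
  </PropertyGroup>
</Project>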

VS Code

What if you use Visual Studio Code (VS Code)?  With the latest release of the Azure App Service extension, .NET projects got better support for deploying to Azure App Service.  The default flow for this is simpler and doesn’t assume you want to deploy using SCD, so you have to do a bit more setup.  When you have a C# project in VS Code you should get prompted (and you should accept) the option to add assets to your project.

Screenshot of Visual Studio Code dialog requesting to add assets

This adds a .vscode folder with a tasks.json file in it that looks like this:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "command": "dotnet",
            "type": "process",
            "args": [
                "build",
                "${workspaceFolder}/testwebdeploy.csproj",
                "/property:GenerateFullPaths=true",
                "/consoleloggerparameters:NoSummary"
            ],
            "problemMatcher": "$msCompile"
        },
        {
            "label": "publish",
            "command": "dotnet",
            "type": "process",
            "args": [
                "publish",
                "${workspaceFolder}/testwebdeploy.csproj",
                "/property:GenerateFullPaths=true",
                "/consoleloggerparameters:NoSummary"
            ],
            "problemMatcher": "$msCompile"
        },
        {
            "label": "watch",
            "command": "dotnet",
            "type": "process",
            "args": [
                "watch",
                "run",
                "${workspaceFolder}/testwebdeploy.csproj",
                "/property:GenerateFullPaths=true",
                "/consoleloggerparameters:NoSummary"
            ],
            "problemMatcher": "$msCompile"
        }
    ]
}

Note the publish task above, which uses the CLI command we’ve already been talking about here.  That is the one you’d have to change to add arguments.  For brevity I’m only showing the modified publish task here with what you would need:

{
    "label": "publish",
    "command": "dotnet",
    "type": "process",
    "args": [
        "publish",
        "${workspaceFolder}",
        "--configuration",
        "Release",
        "/property:GenerateFullPaths=true",
        "/consoleloggerparameters:NoSummary",
        "--runtime",
        "linux-x64",
        "--output",
        "bin/release/netcoreapp3.0/publish",
        "--self-contained",
    ],
    "problemMatcher": "$msCompile",
    "dependsOn": "clean"
}

But that’s not it.  Remember when we talked about how the CLI publishes to a /netcoreapp3.0/<RID>/publish folder?  You’ll need to know that for the Azure App Service deploy.  Notice I added an explicit output argument here.  Why?  Well laziness for one.  The App Service Extension when you deploy will also add a settings.json file in your .vscode folder that looks like this:

{
    "appService.preDeployTask": "publish",
    "appService.deploySubpath": "bin/Release/netcoreapp3.0/publish"
}

Notice the appService.deploySubpath value?  Well, I just didn’t want to change it to include the linux-x64 folder.  Either way, you could have changed the path here or in tasks.json, but the bottom line is they need to match so the extension knows what you want to publish!  With these two complete, when you choose to publish using the extension you’ll be publishing a self-contained app to your provisioned Azure App Service instance.
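If you had kept the CLI’s default output location instead of adding the --output argument, the settings file is the one you’d change.  A sketch of what that settings.json would look like (path assumes the default .NET Core 3.0 publish location):

{
    "appService.preDeployTask": "publish",
    "appService.deploySubpath": "bin/Release/netcoreapp3.0/linux-x64/publish"
}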

I hope this helps get an idea of the different ways you can publish and how to use the SCD option.  Realistically you don’t need to use this in cloud environments which support your framework version.  In fact, as soon as they do support your version, go back in to your DevOps flow and remove the SCD argument and kick off a new release…no need to change code in your project or use your dev tools…another benefit of using DevOps – you can even do it from your phone!

Thanks for reading this far!

Navigating the new Windows Terminal capabilities and code

I don’t like C++.  There I said it…got it out of the way.  It’s not a fair statement, as I know many people who do like it and think I’m crazy for not writing code in C++.  As someone who didn’t come from a traditional computer science background, I just never ‘grew up’ on C or C++ as fundamentals.

NOTE: You can hear more about my journey (and others) to tech on the awesome CodeNewbie podcast.  Here’s my episode right here: From police recruit to developer.

Go ahead, insert ‘you’re not a real programmer’ comments below.  I’ve established enough thick skin over the years to hear that feedback.  Anyhow, I digress.  While I’m not a fan of C++ it really isn’t about the language but the iteration dev loop.  I’ve often found it slower than I’d like, and the mix/match of toolsets, settings, etc. has left a lot to be desired for me.  HOWEVER, all this said, looking at C++ code is an excellent way to learn and to appreciate the language and the code that others write even more.  No better source of learning development exists than code.

Last week at the Microsoft Build conference a fun new project was unleashed to the developer community…a new Windows Terminal.  AND it is Open Source! 

I’ve never seen as much excitement about a terminal app in this ecosystem as I have the past 10 days.  So much fun.  First, here’s the team that brings you the Windows Terminal project:

Great people, passionate about making Windows a great development environment no matter what you are developing.  This is also the team bringing you WSL and the new capabilities there!  So you want to try out the terminal…your first question is Where can I get it?

Building Windows Terminal

Well, for now it is source only.  This will likely change over the course of the next few months, but for now if you really want to try it out, you need to build it yourself.  The repository on GitHub has all the bits and instructions for you to do so.  I jumped on this right away like others but ran into a few bumps because I was using Visual Studio 2019.  Luckily this has been solved by this PR to help make the scavenger hunt for dependencies less painful. 

SIDE NOTE: If you have an open source or team-based project, do yourself a favor and add a .vsconfig file now.  Details: Configure Visual Studio across your organization with .vsconfig

Basically the steps are:

If you use Visual Studio 2019 (recommended by me) opening the OpenConsole.sln file should prompt you with a screen that will give you a note to ‘install’ the missing dependencies in the Solution Explorer:

If you click Install it will run you through the missing components you may not have enabled for completing the build.  Finish that process.  Once done, you will have all the bits and can go back to the OpenConsole.sln project and choose ‘Build Solution’ and start the process.  Lots of stuff building for the first time so it will take a bit.  Now what?

Deploying/Running Windows Terminal

Building is step 1, now you want to run it.  Windows Terminal is a Universal Windows Platform (UWP) application and must be deployed first.  The easiest way to do this is from Visual Studio, but which project?!  The Terminal\CascadiaPackage project is the one you want: just right-click on it and choose Deploy.  This will deploy it to your machine (assuming you enabled developer mode).  And it will now be in your start menu:

Click/tap/whatever on that and you will launch Terminal for the first time!

Wait, where are the tabs I saw?

Ah, now that you have it running, let’s take a tour.  The settings right now are a JSON file that you can access via the settings menu.  How do you get the settings menu?  Hit CTRL + T to bring up the tab view and then you should have a drop-down menu in the upper right area:

If you click Settings it will open up the profile.json file in the default editor on your machine configured for editing JSON files.  For me this was Visual Studio, but you can use Code or other editor as well.  This is editing your own profile for your terminal environment.  By default this JSON is void of white-space so you may want to format it to make it more readable.  In Visual Studio that is CTRL + K, CTRL + D and it will prettify it for you.  You’ll then see some initial settings in the top of the file:

{
  "defaultProfile": "{a933a071-2a32-42c9-b03a-550845793252}",
  "initialRows": 30,
  "initialCols": 120,
  "alwaysShowTabs": true,
  "showTerminalTitleInTitlebar": true,
  "experimental_showTabsInTitlebar": false,

If you change alwaysShowTabs (line 5 here) to true like I did, then you will always start with tabs even if only one console host is running.  When you save the profile.json file it will be re-formatted with no whitespace, FYI (there is a watcher on the file in the code), but the settings take effect immediately.

Navigating the profile.json options

The main meat of the profile file is in the profiles themselves.  These drive what shells/consoles you can launch and their configuration.  For example, here is a snippet of what mine looks like right now:

{
  "defaultProfile": "{a933a071-2a32-42c9-b03a-550845793252}",
  "initialRows": 30,
  "initialCols": 120,
  "alwaysShowTabs": true,
  "showTerminalTitleInTitlebar": true,
  "experimental_showTabsInTitlebar": false,
  "profiles": [
    {
      "startingDirectory": "c:\\users\\timheuer\\documents\\github",
      "guid": "{b056b6a8-89ba-4868-86c6-2ea078cd4fdd}",
      "name": "cmd",
      "colorscheme": "UbuntuLegit",
      "historySize": 9001,
      "snapOnInput": true,
      "cursorColor": "#FFFFFF",
      "cursorHeight": 25,
      "cursorShape": "vintage",
      "commandline": "cmd.exe",
      "fontFace": "Cascadia Code",
      "fontSize": 12,
      "acrylicOpacity": 0.75,
      "useAcrylic": true,
      "closeOnExit": true,
      "padding": "0, 0, 0, 0",
      "icon": "ms-appdata:///roaming/cmd-icon.png"
    },
    {
      "startingDirectory": "c:\\users\\timheuer\\documents\\github",
      "guid": "{a933a071-2a32-42c9-b03a-550845793252}",
      "name": "PowerShell",
      "background": "#0C0C0C",
      "colorscheme": "UbuntuLegit",
      "historySize": 9001,
      "snapOnInput": true,
      "cursorColor": "#FFFFFF",
      "cursorHeight": 25,
      "cursorShape": "vintage",
      "commandline": "powershell.exe",
      "fontFace": "Meslo LG M for Powerline",
      "fontSize": 12,
      "acrylicOpacity": 0.75,
      "useAcrylic": true,
      "closeOnExit": true,
      "padding": "0, 0, 0, 0",
      "icon": "ms-appdata:///roaming/powershell_64.png"
    },

While I don’t have all options enabled a little spelunking the source code helps you know what other options exist:

  • name: the name of the particular profile.  Right now this is what shows in the drop-down menu shown previously.  There is an issue logged to perhaps make this the name of the tab as well.
  • guid: the unique identifier of this profile.  Incidentally, if you didn’t know and want to create your own profiles and need a GUID, Visual Studio has a “Create GUID” tool available from the Tools menu…choose registry format and then copy/paste.
  • colorscheme: this maps to the color scheme for this area…which is an array of colors that maps to the various foreground/background/text/highlight/etc. settings that are also configurable.  You can see in your profile file that Campbell and a few others (Solarized Dark/Light) are pre-configured.  You can use iterm2colors values to create a new scheme.
  • foreground: the foreground, duh (overwriting colortable/scheme)
  • background: the background, duh (overwriting colortable/scheme)
  • colortable: an in-line version of the color array that would be within a scheme
  • historySize: I honestly haven’t looked at this one yet in the code to know
  • snapOnInput: I honestly haven’t looked at this one either
  • cursorColor: color of the cursor style you chose
  • cursorShape: different options to show the cursor: vintage (thick underscore), bar (vertical bar), underscore (thin underscore), filledBox, emptyBox
  • cursorHeight: height for the cursor
  • commandline: the command to run for the profile (full path or something that will be found in PATH)
  • fontFace: the font to use for this profile.  You may see in mine that I am using ‘Meslo LG M for Powerline’ to get the cool customizations of the prompt.  Find more on how to do that here and it works in Windows Terminal
    • Note: the team will also be open sourcing a new font that will be used for the terminal, but is not yet available as of this post
  • fontSize: size of the font
  • acrylicOpacity: the opacity of the window if you choose to use the acrylic (transparency) feature.  value from 0-1
  • useAcrylic: true/false if you want to use the transparency
  • scrollbarState: hidden/visible are the options
  • closeOnExit: true/false if you want the tab to close when you initiate the exit command for your host
  • padding: this affects some of the output but honestly not working well right now, recommend leaving at 0,0,0,0
  • startingDirectory: the startup directory for this profile.  I have mine configured with my github directory where all my code is instead of $home (which is default)
  • icon: an icon that will show in the menu and the tab.  These live in your RoamingState directory (C:\Users\<yourusername>\AppData\Local\Packages\WindowsTerminalDev_8wekyb3d8bbwe\RoamingState) and the format for the value is “ms-appdata:///roaming/<yourfilename>”

For now these are the available options that you can futz around with to customize what you need to change.  I’ve messed around with these enough and got them to where I want them to be for me right now.

Navigating the source and contributing

As I started this post about how I didn’t like C++, it is a good place to learn a lot.  The Windows Terminal source is a maze of someone else’s code and right now without a clear map.  So the best way is to really just dig in and follow a particular flow.  I find this is the best way to navigate any project source code: find one feature and try to find and follow it in the code.  For me, I was messing with the settings file so much and I hated using the menu with my mouse I wanted a keyboard shortcut.  I knew that CTRL + T launched a new tab, so hey I could implement a keyboard shortcut to launch the settings file quickly.  I first logged that feature as an issue on the project: Add keybinding for access to Settings and then went to work.  I knew that the new tab shortcut existed so I took my own advice: follow the code!  After a few tries and navigating all the headers and various code files I submitted a PR to the project for my own issue: PR#684.  There were 6 files changed to add this key binding and if you look at the change I proposed, there was already a function I could call (same one the menu uses) and I really just needed to map the key binding and have it call the existing function.  Simple, but I learned a bit of how the current code is structured in the repository.

So my recommendation would be to find something that you are seeing as a piece of existing functionality, or something you want to add yourself, and start searching for where you think it might be.  You’ll quickly find out some of the structure around TerminalSettings, TerminalCore, TerminalControl, etc. and play around with making some changes.  Keep in mind this is active development and things will change around quite frequently.  The team even indicates as much:

Note: The Command-Line Team is actively working out of this repository and will be periodically re-structuring the code to make it easier to comprehend, navigate, build, test, and contribute to, so DO expect significant changes to code layout on a regular basis.

I have already been bitten a bit by a merge, but no big deal just fix up a few things and move along again!

What’s Next?

The team has a good aspiration of what they want to accomplish and is just getting started.  Quality will improve, bugs will get fixed, consumption of the terminal will be better, etc.  Read the README file in the repository and engage with the team on Twitter (their contact info is in the README).

Hope this helps!

How I solved Jessica’s NYC Parking Problem with Twitter, Twilio and NO CODE

I’ve been reading a lot of great content on dev.to lately (seriously you should go check it out and follow some tags…great community there) and came across this great headline “How I Solved My NYC Parking Problem With Python, the Search Tweets API and Twilio” by Jessica Garson, a Developer Advocate at Twitter.  Jessica was trying to solve a problem that NYC folks have when trying to determine when to move their street-side parked cars each night due to ‘alternate  side regulations’ which determine when certain sides of roads can be used or the regulations won’t be enforced due to holidays, events, whatever.

New York City parking sign

She delved into using Python, the Twitter Search APIs and Twilio to tie these all together.  Fun problem to solve, and it results in a text to Jessica telling her whether she needs to worry about moving her car or not.  Immediately after reading it I thought of the game show Name That Tune.

Picture of Name That Tune game show

The premise of the game show is that contestants battle it out to see who can identify a song based on the fewest number of notes.  So of course, it is 100% applicable to code, right?  Anyhow I thought to myself Self, I can solve this problem with less effort and with NO code!

Immediately I went to work using Azure Logic Apps.  Azure Logic Apps are a means to…well, let’s let the marketing people tell us what they are:

Azure Logic Apps simplifies how you build automated scalable workflows that integrate apps and data across cloud services and on-premises systems. 

Lots of buzzwords there, but basically it’s a workflow/orchestration platform that allows you to use ‘connectors’ between various inputs and outputs.  Logic Apps can be developed using code, but most are easily developed with no code using the graphical connector tools.  If you’ve ever used/heard of Microsoft Office Flow, it is powered by Azure Logic Apps underneath!  So immediately after reading the article I went to work.  I knew that Azure already had pre-built connectors for Twitter and Twilio and that I should be able to do this easily.  Connectors are pre-defined pieces of logic that help get access to events, data, and actions across other apps and platforms, in our case Twitter and Twilio!

Jessica’s problem was fairly simple: watch when the @NYCASP account tweets and, if it indicates rules are ‘suspended’, alert her with a text message.  Now this relies on the @NYCASP account being consistent in its tweets, and it turns out they are VERY consistent, so it’s easy to search for the simple terms as Jessica did.  So let’s do the same.

Screenshot of Twitter account for NYCASP

To get started I needed an Azure account which is free to get started and there are a ton of services free that you can use forever.  I also still needed a Twilio account like Jessica notes so you still need that to get your Twilio credentials…be sure you have that.  With both of these in hand, let’s log in to the Azure Portal….we won’t even need any tools other than a browser to complete this app!

In the portal you’ll create a new resource in Azure…search for Logic App and it will show up:

Screenshot of Azure portal

You’ll need to provide a name, choose a resource group plan and a geographic location where you want this to live.  If you’ve never created any Azure resources before, a resource group is a container for certain compute resources to leverage.  For this, I recommend creating an App Service resource group and using a free plan that comes with your free trial account.  Then you can use this resource group for your Logic App.  Once you have that created navigate to that resource and you will see a page that welcomes you with a tutorial video and some options for pre-configured templates.  Thankfully one of the starting options is “When a new tweet is posted” so let’s use that one to help us get started as it will default add the Twitter connector!

Screenshot of Azure Portal Logic App creation

This dumps us into the Logic App designer, a graphical interface that helps us do the connections.  You’ll immediately see the first ‘trigger’ which is our Twitter one and it wants you to sign in to be able to use the functionality (think of this as authorizing use of the API).  Once you sign in click continue and you’ll get the options.  Now basically we want to look for when a new tweet is posted by @NYCASP on a time interval.  Their account is pretty consistent so I chose every four hours and used the ‘from:@NYCASP’ query language as the search text.

Screenshot of Twitter connector

That’s it for Twitter.  No code successfully so far!  So now every 4 hours this will check for a new tweet from that account.  Now let’s do something with it!  In Jessica’s scenario we need to look at the tweet and act upon it only if a specific condition was met.  So let’s use that little “+” symbol on the designer and add a new action.  Search for 'Condition’ and you will see ‘Control’ come up as an option…select that, then you will see the Condition connector…choose that.  The condition connector gives us a simple decision tree: what is the condition, what do you want to do if true, what do you want to do if false:

Screenshot of Condition action connector

In the condition area where it says ‘Choose a value’, when you click into it, data from the previous trigger will be made available to you and you can see all the details it exposes!  We will scroll and look for Tweet text and select that.  Change the operator to ‘contains’ and type in ‘suspended’ as the value.  It should look like this:

Screenshot of Condition action connector

Now we know that the tweet is telling us something about suspending…but Jessica wants to know if she has to move her car for tomorrow.  Let’s add another condition to now check to see if it contains ‘tomorrow’ in the text.  We follow the same steps in creating a new condition trigger and connecting it. 

At this point we have a nested condition.  Could we have put them in the same one?  Probably, but I’m just following similarly Jessica’s flow. 

It should look like this:

Screenshot of Condition Action connector

Now we know if both of those are true we need to send a text message.  Click the ‘Add an action’ button in the True condition and search for Twilio and select the ‘Send a Text Message’ action.  This will add the provided Twilio connector which provides this functionality (and more).  Similar to Twitter, you will see it and, after selecting it, have to authenticate to your account to get the credentials.

Screenshot of Twilio search for connector

After doing that you now simply enter the details of your text message.  For Twilio the From number must be your account SMS number unless you have premium services that enable you to do something more.  If you try to get fancy without premium services from them, this will fail.  Don’t get fancy…we’re just moving cars across the street, remember?  Enter the text you want to send and the phone number you want to send it to…done!

Screenshot of Twilio connector

Now on the false conditions we do still have to tell the Logic App what to do.  In these cases, unless you want to do more, again add an action and choose System, then look for ‘Terminate’ – it’s a simple action that basically just stops the flow and can log a message.  I configured mine like this on BOTH false conditions:

Screenshot of Terminate action connector

That’s it, I’m done.  No code.  It took me longer to write this post than it did to actually build the Logic App the first time.  Now I waited.  And waited.  And waited.  I completed the Logic App right after reading Jessica’s article and wanted to ‘naturally’ test this with the real deal tweets.  But no parking regulations were suspended.  My logs looked like this:

Screenshot of Azure log files

UNTIL A FEW DAYS AGO!  My phone buzzed, I looked down and BOOM:

Screenshot of SMS text message stating 'NYC Alt Parking Suspended Tomorrow'

Animated GIF image of people happily dancing

I wonder if Jessica got a text message too!  I was so happy and I don’t even live in NYC or have to worry about moving my car to a different side of the street!  The logs of Logic Apps are pretty cool as well and also graphically follow your flow and show you input/output state along the way:

Screenshot of Logic App log

So with no code and only a few steps to authenticate to Twitter and Twilio, I was able to use only a browser and complete the same task.  Did I win Name That Tune?  Who cares…doing software isn’t a competition, it’s fun and we get to use the tools and technology we feel most productive with.  For me, this was just a gut reaction to see if it could be done as easily as I thought, and indeed it could.  So yeah, I won :-D.

Check out more about the pieces I used to put this together and get your custom logic apps working:

Hope this helps!

Making my NuGet Package more trusted with signing and doing it all in Azure DevOps Pipelines

Like most things in life, this all started with someone’s message on Twitter.

You can code-sign a NuGet package?  I don’t know why I didn’t know that, other than I had no idea I should care or had even given it a second thought.  But Phil’s article and subsequent discussion on Twitter made me realize I should take a look at the flow.  After all, more sage wisdom from Phil kicked me over the edge:

You’re right Phil, and how can you argue with that forehead…so let’s do this. 

What is NuGet? NuGet is a package management system that was primarily developed for .NET developers and has now become a de-facto package/release mechanism for that ecosystem.  What npm is to Node.js developers, NuGet is to .NET developers.  More info at https://www.nuget.org

I’ve got a little library for helping .NET developers be more productive with creating Alexa apps, Alexa.NET.  When I started this project I used to just have this on my local box and would build things using Visual Studio and manually upload the NuGet package.  Then people made fun of me.  And I ate my sorrows in boxes of Moon Pies.  Luckily I work with a bunch of talented folks who helped me see the light of DevOps and helped me establish a CI/CD pipeline using Azure DevOps.  Since then I’ve got my library building, a release approval flow, and automated packaging/publishing to the NuGet servers.  I simply check in code and a new version is released.  Perfect.  Now I just want to add code-signing to the package.  Naturally I did what every professional developer does (Google’d) and went to read the docs about code signing NuGet packages.  Luckily there is some pretty good documentation on Signing NuGet Packages.

The first thing you need is a code-signing certificate.  There are many providers of these at different prices, so pick your preferred provider.  I chose to use DigiCert for this one but have used other providers in the past.  The process for getting a code-signing cert is a bit more involved than getting an average SSL certificate, so be sure to follow the steps carefully.  Once you have that in place, export the DER and PFX versions as you will need both of these for this process.  Your provider should provide instructions on how to do this for you.

Next up was to modify my Azure DevOps Pipeline.  I do my NuGet activity in a Release pipeline after a successful build and an approval step to actually complete the deployment.  My simple release pipeline looks like this:

Screenshot of Azure DevOps NuGet Push definition

The signing is provided by the NuGet CLI and so I just needed to add another task to this flow right?  I added another one in and was going to just choose the ‘sign’ command as the configured option.  Well, the ‘sign’ command isn’t a selectable option in the task.  There is a request to make this one of the default options for the Azure Pipeline Task right now but it isn’t in there yet.  So for this we will use the ‘custom’ option which allows us to pass in any command and args.  The docs already told me the commands that I would need: a certificate file being the minimum I would have to have.  Hmm, how am I going to have a certificate file in my CD pipeline?!  As it turns out there is a secure file storage in Azure Pipelines I can use! This allows me to upload a file that I can later reference in a pipeline using an argument.  Remember that PFX file we exported?  In your DevOps project under Pipelines there is a ‘Library’ menu option.  Going there takes you to where you can upload files and I uploaded my PFX file there:

Screenshot of Azure DevOps Library

The next thing I need to do is also provide my password for that exported PFX file (you did export it with a password right!).  To do this I made use of variable groups in Azure DevOps, created a group called CertificateValues and added my name/value pair there, marking the value as a secret.  As a variable group I can ‘link’ this group to any build/release definition without explicitly having the variable in those definitions.  This is super handy to share across definitions.  You can now link to an Azure KeyVault for secrets (more to come on that in a part 2 blog post here).  I’ve got my code signing cert (PFX) and my certificate password stored securely.  With these two things now I’m ready to continue on my definition.

Now how will I get the file from the secure storage?!  As I read in the docs, there is a Download Secure File task that I can add to my pipeline.  The configuration asks me what file to use and then in the Reference Name of the Output Variables area I give it a name I can use, in this case ‘Certificate’:

Screenshot of Azure DevOps Download Secure File

This variable name allows me to use it later in my definitions as $(Certificate.secureFilePath) so I don’t have to fiddle around guessing where it downloaded to on the agent machine.  Now that we have that figured out, let’s move back to the signing task…remember that ‘custom’ one we talked about earlier.  In the custom task I specify in the Command and Arguments section the full command + arguments I need according to the docs.  My full definition looks like this:

sign $(System.ArtifactsDirectory)\$(Release.PrimaryArtifactSourceAlias)\drop\*.nupkg 
    -CertificatePath $(Certificate.secureFilePath) 
    -CertificatePassword $(CertificatePassword)  
    -Timestamper http://timestamp.digicert.com

To explain a bit I’m using some pre-defined variables System.ArtifactsDirectory and Release.PrimaryArtifactSourceAlias to help build the path to where the drop folder is on the agent machine.  The others are from the secure files (Certificate.secureFilePath) and variable group (CertificatePassword) previously defined.  These translate to real values in the build (the secret is masked in the logs as shown below) and complete the task. 

Screenshot of Azure DevOps release definition

Here was my log output from today in fact:

2019-04-04T19:10:44.4785575Z ##[debug]exec tool: C:\hostedtoolcache\windows\NuGet\4.6.4\x64\nuget.exe
2019-04-04T19:10:44.4785807Z ##[debug]arguments:
2019-04-04T19:10:44.4786015Z ##[debug]   sign
2019-04-04T19:10:44.4786248Z ##[debug]   D:\a\r1\a\_Alexa.NET-master\drop\*.nupkg
2019-04-04T19:10:44.4786476Z ##[debug]   -CertificatePath
2019-04-04T19:10:44.4786687Z ##[debug]   D:\a\_temp\timheuer-digicert.pfx
2019-04-04T19:10:44.4786916Z ##[debug]   -CertificatePassword
2019-04-04T19:10:44.4787190Z ##[debug]   ***
2019-04-04T19:10:44.4787449Z ##[debug]   -Timestamper
2019-04-04T19:10:44.4787968Z ##[debug]   http://timestamp.digicert.com
2019-04-04T19:10:44.4789380Z ##[debug]   -NonInteractive
2019-04-04T19:10:44.4789939Z [command]C:\hostedtoolcache\windows\NuGet\4.6.4\x64\nuget.exe sign D:\a\r1\a\_Alexa.NET-master\drop\*.nupkg -CertificatePath D:\a\_temp\timheuer-digicert.pfx -CertificatePassword *** -Timestamper http://timestamp.digicert.com -NonInteractive
2019-04-04T19:10:52.6357013Z 
2019-04-04T19:10:52.6357916Z 
2019-04-04T19:10:52.6358659Z Signing package(s) with certificate:
<snip to remove cert data>
2019-04-04T19:10:52.6360408Z Valid from: 4/4/2019 12:00:00 AM to 4/7/2020 12:00:00 PM
2019-04-04T19:10:52.6360664Z 
2019-04-04T19:10:52.6360936Z Timestamping package(s) with:
2019-04-04T19:10:52.6361268Z http://timestamp.digicert.com
2019-04-04T19:10:52.6361576Z Package(s) signed successfully.
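As a final sanity check you can also verify the signature locally with the NuGet CLI; something like this (the package path is just a placeholder):

nuget verify -Signatures <your-package>.nupkg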

Done!  Some simple added tasks and reading a few docs to get me having a signed NuGet package.  Now re-reading the docs on signed packages I have to upload my certificate to my NuGet profile to get it to be recognized.  This time I only need to provide the DER export.  Once provided and my package is published, I get a little badge next to the listing showing me that this is a signed package:

Screenshot of NuGet version history listing

This was a good exercise in helping me learn a few extra steps in Azure DevOps working with files and custom task variables.  Immediately as I was doing this, my friend Oren Novotny couldn’t help but chastise me for this approach. 

So stay tuned for a secondary approach using Azure KeyVault completely to complete this without having to upload a certificate file.

Microsoft announced Build 2019 and you won’t believe what they chose to do!

Last week, the Microsoft Build team announced the dates of the conference as well as announced that there is a Call for Speakers.  Yes, that’s right, the premier Microsoft-provided developer event is asking the public to be a part of the show!

I was pretty excited about this as one who has been involved in the Professional Developer’s Conference (PDC), the previous name for this conference, and Build for the past number of years.  I shared on Twitter to get excited too!

After that I got a few questions about the process.  I should be clear that I don’t own the process, have no influence over your submission, probably don’t even have a say in the selection of sessions, but generally want to see people try, get excited about the conference, and share their passion with others on a broader scale than usual.  A few of us on my team have also discussed the topic of ‘what’ people should submit.  I’ll try to change your mind and share my opinion and some thoughts from colleagues.

First and foremost: TELL A STORY!

I’ve been too guilty myself of being the one to share just all the ‘stuff’ my team did over time.  Sometimes these can be fun sharing all the hard work the team put in to releasing a product.  These ‘lap around my features’ sessions work well for presenters, but I’ve learned over time may not be the best for the audience.  When all I’m doing is telling you about a cool tech feature I’m doing a disservice to your time and the product capabilities usually.  I should be telling you how to solve a problem, how to innovate in your project/process, or how to make you more productive.  To do this I need to think of you, dear reader/attendee, and not me/my team/my product.  So to me, this is the best advice I can give you…put yourself in an attendee perspective and think about what you can share that will help them get to their ‘next,’ whatever that may be.

Some thoughts on how to do this…

  • How does your title read?  Is it “New network settings for Azure Virtual Machines” or could it be “Be smart and secure your cloud networks” – both probably end up showing a similar tech aspect, but the second focuses more on the problem, putting yourself in the audience shoes.  Think of titles as the problem question.  And hey, I often give advice as well to ‘think clickbait’ – some people hate that but it puts you in a mindset to entice the reader/user.  “Migrating your APIs to serverless infrastructure will save you time and money, I’ll prove it!"
  • Brief but specific abstract.  Your abstract shouldn’t be full of buzzwords and non-sentences.  If it reads like an analyst brochure, re-think it.  Start with leading off your title.  Someone is reading the abstract because the title grabbed them.  Get more practical in the first 1-2 sentences.  Keep it brief between there explaining what you’ll show, then end with what THEY will get out of it.  Some call this a ‘call to action’ but think about it in those terms like When you leave you’ll know how to save time deploying your containers to the cloud and have more time for code!
  • Think of the audience.  I said this already but sincerely think of them.  Most of the time I’ve seen conference sessions think of the speaker.  *I* want to show you something.  *I* know better than you so listen to me.  Flip that.  Put yourself in the other seat and tell them how you are going to help them.  If you tell me you’re going to help me in my job in a way that entices me in your language, you’re speaking to me…I’m relating more.  I may not know I need to figure out my container clusters in a specific node configuration…but I do know that I have a problem managing my apps at scale, for example.

These aspects speak both to the eventual session and to the Call for Speakers selection team.  The Build team (and other conferences) will receive lots of submissions.  Your title will be the first thing that needs to grab attention.  Then your first 2 sentences.  Telling a story through those helps you be unique and specific, helps someone learn, and keeps you from coming across as just wanting to ‘tell them stuff.’  Niall Merrigan has a good post about the process that he has been a part of that touches on some of these.  It’s a helpful read from a different perspective.  And different perspectives matter!  So another pro-tip…run your idea by a few people that might be your target audience.  That will help!

So with all that said, good luck and submit an idea.  We’ve already got a LOT of submissions.  I have no idea how many will be chosen from the public group, but force the team into the hard position of choosing the best and maybe expanding the number of them…submit well-thought-out sessions!  Best of luck to you and I look forward to seeing you at Build 2019!

I hope this helps!

Microsoft Ignite for .NET Developers

This was the first time in a long time (I think maybe 10+ years) that I didn’t go to TechEd…err, I mean Ignite :-).  Was I sad to miss seeing old friends and hearing about TwoWay binding woes?  Sure.  Did I miss Orlando in the summer…nope (I get it, it’s an easy shot, but yeah, no).  I watched from afar though and found some really great stuff for .NET developers across different spectrums.  Ignite is Microsoft’s opportunity to share what is happening in the tech now versus only focusing on futures.  I think as .NET developers we long for the ‘vNext’ of everything, but there are a lot of great things happening NOW!  And if you are a .NET developer who has been tip-toeing into the cloud development area, Ignite was a great place to start learning.

Here’s a list of things that I found for .NET Developers from Ignite. 

If you watch no other sessions, make sure to review these two higher-level ones:

Aside from these broader sessions, here’s a list that might be relevant to you, dear .NET developer:

Lots of great stuff and maybe I’m missing a few.  Be sure to check out all the stuff from Channel 9 as well where a broad set of topics were covered in-between sessions with experts that were there.  Great smaller nuggets of knowledge that you should check out…here they are: Channel 9 Live at Ignite.

Another huge thing announced at Ignite was Microsoft Learn!

logo for Microsoft Learn

Microsoft Learn is a new online way of learning a bunch of new technology.  Through guided learning paths you complete modules and earn points and achievements.  It’s been a bit of fun competing with some co-workers on who can achieve the most badges!  One of the great things also is that you don’t need much to get started.  For the Azure content you do all the tasks IN THE LESSON through the cloud shell environment and with a free sandbox (no sign-up, no credit card)!  Check it out and see how many achievements you can get this week.

Sorry I missed some of the ‘hallway’ sessions (which are the best), but I’ll be looking for you all at the next one!  What were your favorite sessions?

Hope this helps!

Using machine learning to tune your SQL database in Azure

I’m presently working on posting my insights from moving a recent app of mine from an on-premises (colocated) server to the Azure cloud.  My app is a pretty typical (and OLD) ASP.NET app with a SQL Server database backend.  There were some interesting things I learned in moving the web app portion to Azure App Service, but I’ll save that for a later post…this one is about Azure SQL Server.

My database was actually a SQLExpress database that has been humming along for a while.  It’s also an older schema and a typical relational database system.  The first step for me was to ensure I could move my data before I moved the site…I wanted a full move to a cloud platform.  There are a few ways of migrating databases to Azure as noted in this blog post Differentiating Microsoft’s Database Migration Tools and Services.  Recently one of our Cloud Developer Advocates, Scott Cate, demonstrated the newest full migration strategy, Data Migration Service (DMS), at the Azure Red Shirt Dev Tour.  Because this isn’t generally available yet I didn’t want to use it, and my database didn’t warrant the need for managed instance features anyway.  So I went with the Data Migration Assistant tool.

The first step was to get the tool and install it on my source server.  Because this is an on-prem server I just logged in remotely (RDP) as an admin.  You can choose to first run an assessment, but for me I went crazy and just wanted to migrate (don’t worry, that actually runs an assessment first as well):

After connecting to my SQL db instance I select the database I want to migrate.

NOTE: Use the “trust server certificate” checkbox when doing this migration from local db or you will see some failures in trying to connect to Azure.

After this I need to choose the destination and I can either select an already-created Azure SQL database or create one within my Azure subscription.  This link will launch instructions on how to create a new Azure SQL database on your subscription using the Azure Portal.  You will want to select your server size, etc. based on your needs.  There is some pricing guidance on the selections to help you understand your cost.  After this, return to the tool, enter the server you just created (or already had) and authenticate using your credentials for the server.  Then choose which database is the target for the migration:

Then the next step will show you the assessment and flag things that may need attention.  You need to examine these to assess whether they will be impactful to your app and either accept the script or not.  Once done you have 2 more steps: Deploy Schema and then (assuming that was successful) Migrate Data.  For me, this was rather quick and it was done.  I verified the data and was good to go!

Post-migration Tuning

After deploying the database and site I made sure that on the database I turned on the Automatic Tuning feature provided to me as a service for hosting in Azure:
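I used the portal toggle, but if you prefer scripting it, automatic tuning can also be enabled with T-SQL.  A sketch (the option names follow the Azure SQL ALTER DATABASE syntax; verify against the docs for your tier):

-- sketch: enable automatic tuning options on the current Azure SQL database
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);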

And then I went away.  After a few days I returned to see some automatic tuning had been done and analyzed.  Azure had analyzed my database under real conditions and made recommendations to actually alter the database to improve performance.  These are then automatically applied if Azure determines they will benefit my performance.  Here were the recommendations:

And notice the determination of impact for one of them:

You’ll see that Azure’s machine learning was smart enough to realize that one of the recommendations wasn’t going to improve performance (in fact it assessed it would actually regress a query) and decided not to apply that initial tuning recommendation.  Pretty awesome.  Taking a look at the performance profile of my database, you can tell very quickly when these recommendations were applied:

This is awesome.  I’ve got some tuning still to go, but thankfully Azure did all the hard work of helping me identify the performance bottlenecks of my database, suggest and analyze some automatic tuning it could do, but also still give me all the data I need to further analyze troublesome queries.

You can learn more about this feature in a recent Channel 9 video talking about this:

It really is an amazing feature and combined with the easy migration, I’m really excited about moving this app to Azure App Service + Azure SQL!

Hope this helps! 

Build 2017 UI Recap

Well that was fun!  It was really exciting to share with the world what our team has been working on in designing and developing over the past few years with regard to Windows UI platform advancements.  Build 2017 was a culmination of a lot of efforts across the company in various areas, but for UI it was the introduction of our evolution of design, the Fluent Design System.  This represents a wave of UI innovations over time, with Build 2017 showing the first views of Wave 1.  There was a lot of great buzz about Fluent, but for a great introduction be sure to check out my colleague Paul Gusmorino’s session introducing the design system:

Of course as developers sometimes we wince at the word ‘design’ because we don’t have the skills, maybe don’t understand it, or want to ensure we can achieve it with maximum ROI of our own developer time!  We agree!  In defining the Fluent Design System, we ensured that a lot of these new innovations are ‘default’ in the platform.  Starting now with the Fall Creator’s Update Insider SDKs you can start seeing some of these appear in the common controls.  When you use the common controls as-is, you will get the best of Fluent incorporated into your app.  James Clarke joined Paul later to explain and demonstrate this in practice showing how the new (and some existing) common controls take this design system into account and help you get it by default:

In addition to what we are doing *now* we also wanted to share what is on the horizon.  I was able to join Ashish Shetty at Build and talk about what is new in XAML and Composition platform areas for developers.  We shared more of the ‘default’ that is exhibited in the common controls but also explained some of the ‘possible’ in the platform that you can achieve with great improvements to our animation system.  We also shared the vision for the future in this space around semantic animations and vector shape micro-animations.  Check out our session on this area:

We had so much to talk about that I wasn’t able to show the simplicity of enabling the pull-to-refresh pattern in the new controls area.  Not wanting you to feel ripped off, I recorded a quick demo of a few of the things we weren’t able to demo.  Take a look here at my impromptu demo insert for you!

There is a lot of great new things coming in the Windows UI platform area for UWP:

  • NavigationView
  • ParallaxView
  • RefreshContainer
  • SwipeContainer
  • TreeView
  • ColorPicker
  • RatingsControl
  • Improved text APIs: CharacterReceived, CharacterCasing, IsTrimmed
  • Improved input APIs like PreviewInput
  • Implicit animations
  • Connected animations improvements for ListViewBase
  • Advanced color and HDR for Image
  • SVG support for Image
  • Keytips support for XAML
  • ContentDialog and MenuFlyout improvements
  • Context menu support everywhere
  • UI analysis and Edit-and-continue in Visual Studio
  • Narrator developer mode
  • and more!

It is so great to be a part of this latest release and continue to deliver value (hopefully) to you, our developer customer.  Please be sure to let us know how you are using these new improvements and the Fluent Design System.  Share your creations with us at @windowsui so we can share with others as well!

We also announced a vision for defining a common dialect for UI everywhere around XAML.  We call this XAML Standard and are drafting a v1 specification now.  We will want your input on this and have established an open process to encourage community collaboration.  Please join the conversation at http://aka.ms/xamlstandard.  This is at very early stages but with your help we will establish the right fundamentals first and evolve over time.  Getting the core right is critically important…you can’t unify on a set of control APIs if the foundation isn’t solid and makes sense.  In addition to this, .NET Standard 2.0 for UWP was announced as well and is a HUGE advancement for .NET developers writing apps for UWP.  Oh no big deal, just about 20K more APIs you have access to now.  Yowza.  Listen to Scott Hunter, Miguel and myself talk about these areas on Channel 9:

I'm excited to see the creativity unleashed by our developer community.  Thanks for letting me be a small part of it!

Implementing a type converter in UWP XAML

Verbose XAML, we all love it right?  What?!  You don't like writing massive amounts of angle brackets just to define certain properties?  I mean who doesn't love something like this:

<MapControl>
    <MapControl.Center>
        <Location>
            <Location.Latitude>47.669444</Location.Latitude>
            <Location.Longitude>-122.123889</Location.Longitude>
        </Location>
    </MapControl.Center>
</MapControl>

What’s not to love there?  Oh I suppose you prefer something like this?

<MapControl Center="47.669444,-122.123889" />

In the XAML dialect this is what we refer to as a 'type converter' or, more affectionately at times, 'string to thing', as the declarative markup is just a string representation of some structure.  In WPF and Silverlight this was implemented using the System.ComponentModel.TypeConverter model: you attribute your class with a pointer to an implementation of TypeConverter that overrides the common things you need, most of the time the ConvertFrom capability.
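For reference, the old pattern looked roughly like this (a minimal sketch against a simplified Location type; the LocationConverter class name here is just illustrative):

using System;
using System.ComponentModel;
using System.Globalization;

// WPF/Silverlight pattern: point the class at a TypeConverter implementation...
[TypeConverter(typeof(LocationConverter))]
public class Location
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

// ...and override ConvertFrom to turn the string into the thing.
public class LocationConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        if (value is string raw)
        {
            string[] coords = raw.Split(',');
            return new Location
            {
                Latitude = Convert.ToDouble(coords[0], CultureInfo.InvariantCulture),
                Longitude = Convert.ToDouble(coords[1], CultureInfo.InvariantCulture)
            };
        }

        return base.ConvertFrom(context, culture, value);
    }
}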

In UWP we currently can't rely on that exact same System.ComponentModel.TypeConverter implementation: it is not part of the API surface exposed to UWP apps at this time, and it is a .NET concept that wouldn't be available to other WinRT developers.  Looking at ways to achieve the same primary scenario, we can now turn to the Creators Update to deliver the functionality for us.  The markup compiler in the Creators Update now leverages the CreateFromString attribute in WinRT to generate the correct metadata to do the conversion.  The responsibility lies with the owner of the class (looking at you, ISVs, as you update) to add this metadata capability.

NOTE: To enable this capability, the consuming app must currently set its minimum target version to the Creators Update.
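If you're wondering where that lives, it's the minimum platform version in your UWP app's project file; a sketch of the relevant bit is below (I'm assuming 10.0.15063.0 as the Creators Update build here):

<PropertyGroup>
  <!-- Assumed Creators Update SDK build; the minimum version is what gates CreateFromString -->
  <TargetPlatformMinVersion>10.0.15063.0</TargetPlatformMinVersion>
</PropertyGroup>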

Let's use an example following the pseudo map control I used above.  Here is the class definition for my MyMap control:

using Windows.UI.Xaml.Controls;

namespace CustomControlWithType
{
    public class MyMap : Control
    {
        public MyMap()
        {
            this.DefaultStyleKey = typeof(MyMap);
        }

        public string MapTitle { get; set; }
        public Location CenterPoint { get; set; }
    }
}

Notice it has a Location type.  Here’s the definition of that type:

using System;

namespace CustomControlWithType
{
    public class Location
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
        public double Altitude { get; set; }
    }
}

Now without a type converter I can’t use the ‘string to thing’ concept in markup…I would have to use verbose markup.  Let’s change that and add an attribute to my Location class, and implement the conversion function:

using System;

namespace CustomControlWithType
{
    [Windows.Foundation.Metadata.CreateFromString(MethodName = "CustomControlWithType.Location.ConvertToLatLong")]
    public class Location
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
        public double Altitude { get; set; }

        public static Location ConvertToLatLong(string rawString)
        {
            string[] coords = rawString.Split(',');
            
            var position = new Location();
            position.Latitude = Convert.ToDouble(coords[0]);
            position.Longitude = Convert.ToDouble(coords[1]);

            if (coords.Length > 2)
            {
                position.Altitude = Convert.ToDouble(coords[2]);
            }

            return position;
        }
    }
}

As you can see, I added two things.  First I added an attribute to my class to let it know that I have a CreateFromString method and provided the fully qualified name to that method.  The second obvious thing is to implement that method.  It has to be a public static method, and you can see my simple example here.

Now when using the MyMap control I can specify the simpler markup:
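Something along these lines (the local xmlns prefix and the center values here are my illustrative assumptions):

<local:MyMap MapTitle="My Neighborhood" CenterPoint="47.669444,-122.123889" />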

And the result would be converted, and my control, which binds to those values in its template, would be able to see them just fine.

Yes, my control is quite lame but just meant to illustrate the point.  The control binds to the CenterPoint.Latitude|Longitude|Altitude properties of the type.

If you are in this scenario of providing APIs that are used in UI markup for UWP apps, try this out and see if it adds delighters for your customers.  I’ve uploaded the full sample of this code to my GitHub in type-converter-sample if you want to see it in full.  Hope this helps! 

Write your Amazon Alexa Skill using C# on AWS Lambda services

After a sick day a few weeks ago and writing my first Alexa Skill, I've been pretty engaged with understanding this voice UI world with Amazon Echo, Google Home and others.  It's fun to use and, as 'new tech', pretty fun to play around with.  Almost immediately after my skill was certified, I saw this come across my Twitter stream:

I had spent a few days getting up to speed on Node and the environment (I've been working in client technologies for a long while, remember) and using VS Code, which was fun.  But using C# would have been more efficient for me (or so I thought).  AWS Lambda just announced support for C# as an authoring environment for a Lambda function.  As it turns out, the C# Lambda support is pretty general, so there isn't yet the same tailored dev experience for creating a C# Lambda backing a skill as there presently is for Node.js development…at least right now.  I thought it would be fun to try and was eventually successful, so hopefully this post finds others trying as well.  Here's what I learned in the < 4 hours (I time-boxed myself for this exercise) spent trying to get it to work.  If there is something obvious I missed to make this simpler, please comment!

The Tools

You will first need a set of tools.  Here was my list:

With these I was ready to go.  The AWS Toolkit is the key here as it provides a lot of VS integration that will help make this a lot easier.

NOTE: You can do all of this technically with VS Code (even on a Mac) but I think the AWS Toolkit for VS makes this a lot simpler to initially understand the pieces and WAY simpler in the publishing step to the AWS Lambda service itself.  If there is a VS Code plugin model, that would be great, but I didn’t find one that did the same things here.

Armed with these tools, here is what I did…

Creating the Lambda project

First, create a new project in VS, using the AWS Lambda template:

This project name doesn’t need to map to your service/function names but it is one of the parameters you will set for the Lambda configuration, so while it doesn’t entirely matter, maybe naming it something that makes sense would help.  We’re just going to demonstrate a dumb Alexa skill for addition so I’m calling it NumberFunctions.

NOTE: This post isn’t covering the concepts of an Alexa skill, merely the ability to use C# to create your logic for the skill if you choose to use AWS Lambda services.  You can, of course, use your own web server, web service, or whatever hosted on whatever server you’d like and an Alexa skill can use that as well. 

Once we have that created you may see the VS project complain a bit.  Right click on the project and choose to restore NuGet packages and that should clear it up.

Create the function handler

The next step is to write the function handler for your skill.  The namespace and public function name matter as these are also inputs to the configuration, so be smart about them.  For me, I'm just using the default namespace, class and function name that the template provided.  The next step is to gather the input from the Alexa skill request.  Now a Lambda function can serve anything…it is NOT limited to serving Alexa responses, it can do a lot more.  But this post is focused on Alexa skills, so that is why I'm referring to this specific input.  Alexa requests will come in the form of a JSON payload with a specific format.  Right now if you accept the default function handler signature of (string, ILambdaContext), it will likely fail due to issues you can read about on GitHub.  So the best approach is to understand that the request will come in with three main JSON properties: request, version, and session.  Having an object that exposes those properties will help…especially one that knows how to automatically map the JSON payload to a C# object…after all, one of the main benefits of using C# is the more strongly-typed development you may be used to.
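To make that concrete, the envelope shape looks roughly like this if you were to hand-roll it with JSON.NET (just an illustrative sketch; the type and property names here are mine, not from any SDK):

using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

// Illustrative only: the three top-level properties of an Alexa request payload.
public class AlexaRequestEnvelope
{
    [JsonProperty("version")]
    public string Version { get; set; }

    [JsonProperty("session")]
    public JObject Session { get; set; }

    [JsonProperty("request")]
    public JObject Request { get; set; }
}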

Rather than create my own, I went on the hunt for some options.  There isn't an Alexa Skills SDK for .NET yet (perhaps that is coming) but there are two options I found.  The first seemed to require a bit more setup/understanding and I haven't dug deep into it yet, but it might be viable.  For me, I just wanted to basically deserialize/serialize the payload into known Alexa types.  For this I found an Open Source project called Slight.Alexa.  This was built for the full .NET Framework and won't work with the Lambda service until it is ported to .NET Core, so I forked it, moved the code to a shared project, and created a .NET Core version of the library.

NOTE: The port of the library was fairly straightforward save for a few project.json things (which will be going away), as well as finding some replacements for things that aren't in .NET Core, like System.ComponentModel.DataAnnotations.  Luckily there were replacements that made this simple.

With my fork in place I made a quick beta NuGet package of my .NET Core version so I could use it in my Lambda service (.NET Core projects can’t reference DLLs so they need to be in NuGet packages).  You can get my beta package of this library by adding a reference to it via your new Lambda project:

This now gives me a strongly-typed object model against the Alexa request/response payloads.  You'll also want to add a NuGet reference to the JSON.NET library (isn't every project using this now…shouldn't it be the default reference for any .NET project???!!!).  With these both in place you now have what you need to process the requests.  The requests for Alexa come in primarily as Launch, Intent and Session requests (again I'm over-simplifying here, but for our purposes these are the ones we will look at).  The launch request is when someone just launches your skill via the 'Alexa, open <skill name>' command.  We'll handle that and just tell the user what our simple skill does.  To do this, we change the function handler input from string to SkillRequest (and the return type to SkillResponse) from our newly-added Slight.Alexa.Core library:

public SkillResponse FunctionHandler(SkillRequest input, ILambdaContext context)

Because SkillRequest is an annotated type the library knows how to map the JSON payload to the object model from the library.  We can now work in C# against the object model rather than worry about any JSON path parsing.

Working with the Alexa request/response

Now that we have the SkillRequest object, we can examine the data to understand how our skill should respond.  We can do this by looking at the request type.  Alexa skills have a few request types that we’ll want to look at.  Specifically for us we want to handle the LaunchRequest and IntentRequest types.  So we can examine the type and let’s first handle the LaunchRequest:

Response response;
IOutputSpeech innerResponse = null;
var log = context.Logger;

if (input.GetRequestType() == typeof(Slight.Alexa.Framework.Models.Requests.RequestTypes.ILaunchRequest))
{
    // default launch request, let's just let them know what you can do
    log.LogLine($"Default LaunchRequest made");

    innerResponse = new PlainTextOutputSpeech();
    (innerResponse as PlainTextOutputSpeech).Text = "Welcome to number functions.  You can ask us to add numbers!";
}

You can see that I’m just looking at the type and if a LaunchRequest, then I’m starting to provide my response, which is going to be a simple plain-text speech response (with Alexa you can use SSML for speech synthesis, but we don’t need that right now).  If the request is an IntentRequest, then I first want to get out my parameters from the slots and then execute my intent function (which in this case is adding the parameters):

else if (input.GetRequestType() == typeof(Slight.Alexa.Framework.Models.Requests.RequestTypes.IIntentRequest))
{
    // intent request, process the intent
    log.LogLine($"Intent Requested {input.Request.Intent.Name}");

    // AddNumbersIntent
    // get the slots
    var n1 = Convert.ToDouble(input.Request.Intent.Slots["firstnum"].Value);
    var n2 = Convert.ToDouble(input.Request.Intent.Slots["secondnum"].Value);

    double result = n1 + n2;

    innerResponse = new PlainTextOutputSpeech();
    (innerResponse as PlainTextOutputSpeech).Text = $"The result is {result.ToString()}.";

}

With these in place I can now create my response object (to provide session management, etc.) and add my actual response payload, using JSON.NET to serialize it into the correct format.  Again, the Slight.Alexa library does this for us via the annotations it has on the object model.  Please note this sample code is not robust, handles zero errors, etc.…you know, the standard 'works on my machine' warranty applies here:

response = new Response();
response.ShouldEndSession = true;
response.OutputSpeech = innerResponse;
SkillResponse skillResponse = new SkillResponse();
skillResponse.Response = response;
skillResponse.Version = "1.0";

return skillResponse;

I’ve now completed my function, let’s upload it to AWS!

Publishing the Lambda Function

Using the AWS Toolkit for Visual Studio this process is dead simple.  You'll first have to make sure the toolkit is configured with your AWS account credentials, which is explained in the Specifying Credentials documentation.  Right click on your project and choose Publish to AWS Lambda:

You'll then be met with a dialog where you need to choose some options.  Luckily it should be pretty self-explanatory:

You'll want to make sure you choose a region that has the Alexa Skill trigger enabled.  I don't know how they determine this, but the US-Oregon region does NOT have it enabled, so I've been using US-Virginia and that works just fine for me.  The next screen will ask you to specify the role (I am using the basic execution role).  If you don't know what these are, re-review the Alexa Skills Kit documentation for Lambda to get started there.  These are basically IAM roles in AWS that you have to choose.  After that you click Upload and you're done.  The toolkit takes care of bundling all your stuff up into a zip, creating the function (if you didn't already have one; if you did, you can choose it from the drop-down to update the existing one) and uploading it for you.  You can do all this manually, but the toolkit really, really makes this simple.
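If you prefer scripting the deployment instead of using the toolkit, the equivalent raw AWS CLI call would be roughly along these lines (the function name, handler string, role ARN, and zip path are all assumptions based on my project above; substitute your own):

aws lambda create-function \
  --region us-east-1 \
  --function-name NumberFunctions \
  --runtime dotnetcore1.0 \
  --handler NumberFunctions::NumberFunctions.Function::FunctionHandler \
  --role arn:aws:iam::123456789012:role/lambda_basic_execution \
  --zip-file fileb://./publish/NumberFunctions.zip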

Testing the function

After you publish, the test window basically pops up for you:

This allows you to manually test your Lambda.  In the pre-configured request objects you can see a few Alexa request objects specified there.  None of them will be the exact one you need, but you can start with one and manually modify it to do a quick test.  If you look at my screenshot, you'll notice I modified it to specify our payload…you can see the payload I'm sending here:

{
  "session": {
    "new": false,
    "sessionId": "session1234",
    "attributes": {},
    "user": {
      "userId": null
    },
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.[unique-value-here]"
    }
  },
  "version": "1.0",
  "request": {
    "intent": {
      "slots": {
        "firstnum": {
          "name": "firstnum",
          "value": "3"
        }, "secondnum" : { "name": "secondnum", "value": "5" }
      },
      "name": "AddIntent"
    },
    "type": "IntentRequest",
    "requestId": "request5678"
  }
}

That is sending an IntentRequest with two parameters and you can see the response functioned correctly!  Yay!
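If you'd rather drive the same test from the command line, something like this should work too (save the JSON above to a local file first; the function name and file names are assumptions from my setup):

aws lambda invoke \
  --region us-east-1 \
  --function-name NumberFunctions \
  --payload file://test-request.json \
  response.json

cat response.json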

Of course the better way is to use Alexa to test it so you’ll need a skill to do that.  Again, this post isn’t about how to do that, but once you have the skill you will have a test console that you can point to your AWS Lambda function.  I’ve got a test skill and will point it to my Lambda instance:

UPDATE: Previously this wasn't working, but user @jpkbst in the Alexa Slack channel pointed out my issue.  All code above has been updated to reflect the working version.

Well, I had you reading this far at least.  As you can see, the port of the Slight.Alexa library didn't initially seem to quite work with the response object.  I couldn't pinpoint why the Alexa test console didn't think the response was valid, as the schema looked correct for the response object.  Could you have spotted the issue in the code?  As noted in the update above, it has since been fixed in the sample code.

Summary (thus far)

I set out to spend a minimal amount of time getting the C# Lambda service + Alexa skill working.  I've uploaded the full solution to a GitHub repository, timheuer/alexa-csharp-lambda-sample, for you to take a look at.  I'm hopeful that this is simple enough that we can start using C# more for Alexa skills.  I think we'll likely see some Alexa Skills SDKs for .NET popping up elsewhere as well.

Hope this helps!



DISCLAIMER:

The opinions/content expressed on this blog are provided "AS IS" with no warranties and are my own personal opinions/content (unless otherwise noted) and do not represent my employer's view in any way.