
Blazor in .NET 8: Server-side and Streaming Rendering

When it comes to modern web development, performance and user experience are at the forefront of every developer’s mind. With .NET 8 introducing various rendering modes to Blazor, developers will be armed with an array of choices. Among these, server-side rendering and streaming rendering stand out, primarily due to their efficacy in delivering optimized web experiences.

In this post, we’ll delve deeper into these two modes and explore their significance in the new Blazor ecosystem of .NET 8.

Server-side Rendering: A Classic Powerhouse

In part 1 of this series, I gave an overview of what server-side rendering (SSR) is, but now it’s time to wade in a little deeper and explore this render mode in more detail.

With .NET 8 comes a new template for Blazor applications simply called Blazor Web App, and by default all components use server-side rendering. This is worth mentioning as you can think of the various render modes as progressive enhancements to each other, with server-side rendering as the base level and auto mode the most advanced.

Server-side rendered page components in Blazor produce the same experience as Razor Pages or MVC apps. Each page is processed and rendered on the web server; once all operations to collect data and execute logic have completed, the resulting HTML is sent to the browser to be rendered.

render-modes-ssr

There is no interactivity in this mode, making applications very fast to load and render. This makes it an excellent choice for apps that deal with a lot of static data. I’m sure many line-of-business applications would fall into this category, as well as online shopping apps where rendering speed is key.

Another boon of this mode is its ability to allow excellent search engine optimisation (SEO). Applications just produce regular HTML, so there will be no issues with web crawlers indexing pages, as can be the case when using single page applications (SPAs).

But you might be wondering: why would I choose Blazor for this when I could already use Razor Pages or MVC? It’s a fair question, and here are a couple of reasons why.

First, Razor Pages and MVC don’t offer very good options for building reusable components. Blazor, on the other hand, has an excellent component model.

Second, when choosing either Razor Pages or MVC you are locked into a server-side rendering application. If you want to add any client-side interactivity, you either have to resort to JavaScript, or you could do it by adding in Blazor components. If you choose the latter, then why not just use Blazor for everything?
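To illustrate that component model, here’s a hypothetical reusable alert component (the names are my own, not something from the template):

```razor
@* Alert.razor - a hypothetical reusable component; names are illustrative *@
<div class="alert alert-@Level" role="alert">
    @ChildContent
</div>

@code {
    [Parameter] public string Level { get; set; } = "info";
    [Parameter] public RenderFragment? ChildContent { get; set; }
}
```

It can then be dropped into any page as `<Alert Level="warning">Check your input</Alert>`, something Razor Pages and MVC can only approximate with partials and tag helpers.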

We’ve covered a lot of theory so far, so let’s put those learnings into action by creating and running a Blazor app using the new SSR mode.

Configuring server-side rendering

As previously mentioned, there is a new template for Blazor applications in .NET 8. This template removes the Blazor WebAssembly and Blazor Server specifics and creates a standard starting point using SSR. To create a new Blazor app using this template, we can use the following command via the .NET CLI.

dotnet new blazor -n BlazorSSR

That’s it! We now have an application ready to go using server-side rendering. Remember, SSR is now the default mode for new Blazor applications and components. The other render modes are enhancements on top of it.

The key bits of configuration that make this work are contained in the Program.cs file.

using BlazorSSR;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorComponents(); // 👈 Adds services required to server-side render components

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.MapRazorComponents<App>(); // 👈 Discovers routable components and sets them up as endpoints
app.Run();

The AddRazorComponents method registers the services needed to render server-side components in the application. The other bit of configuration is the MapRazorComponents<T> middleware. This takes a root component that’s used to identify the assembly to scan for routable components. For each routable component found, an endpoint is set up. Yes, you read that correctly: each routable component is represented as an endpoint, and this middleware generates those endpoints on application startup. Under the hood, this uses minimal APIs and a new result type called RazorComponentResult. You can also set up your own endpoints to return the result of executing arbitrary Blazor components.

app.MapGet("/greeting", () => new RazorComponentResult<GreetingComponent>());

But that’s something we’ll delve deeper into in another post.
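For context, GreetingComponent in the snippet above could be any ordinary Blazor component. A minimal, hypothetical version might look like:

```razor
@* GreetingComponent.razor - a hypothetical component for the endpoint example above *@
<h1>Hello from an endpoint-rendered component!</h1>
<p>The time on the server is @DateTime.Now.ToShortTimeString().</p>
```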

The other notable change in the new template is that there is no longer an index.html or _Host.cshtml page. The page markup that would normally be found in those pages is now all contained in the App.razor component along with the Router component. Previously, Blazor Server apps, or Blazor WebAssembly apps with pre-rendering enabled, would need to be hosted in a Razor Page as we couldn’t route directly to a Blazor component. But with the new endpoint approach we just saw, that is no longer the case.

If you look at the App.razor component you will notice that there is a JS file referenced at the bottom.

<script src="_framework/blazor.web.js" suppress-error="BL9992"></script>

This is not needed for SSR. The template supports streaming rendering by default as well, which is why the script is referenced. If you only want to use SSR, it can be safely deleted.

Running the app

Let’s see what this looks like when we run the app. Before we do, we’re going to make one other quick adjustment. The Weather.razor component is configured to use streaming rendering by default. Right now, we’re just focusing on SSR, so we’re going to delete the reference to the streaming rendering attribute at the top of the page.

@attribute [StreamRendering(true)]

We’re now ready to run the application.

ssr-page1

At this point, things don’t look much different to a normal Blazor application. The key thing to spot is that there is no JavaScript being downloaded in this case. It’s just HTML and CSS files.

Let’s navigate to the Weather page and see what happens…

ssr-page2

When we navigated there was a full page refresh. As you can see from the dev tools, all of the assets have been downloaded again, along with the new page we’ve navigated to. There has been no client-side navigation or routing involved, just classic request and response.

By the way, if you’re curious about the load time and haven’t looked at the code, the Weather page has a built-in 1 second delay to simulate getting data from a database. Something we’ll leverage when looking at streaming rendering next.

Streaming Rendering: A Modern Marvel

As previously mentioned, render modes can be thought of as progressive enhancements over each other, the base mode being SSR. The next layer is streaming rendering.

Streaming rendering is a small enhancement over SSR. It allows pages that need to execute long-running requests to load quickly via an initial payload of HTML. That HTML can contain placeholders for the areas to be filled with the results of the long-running call. Once that call completes, the remaining HTML is streamed to the browser and seamlessly patched into the existing DOM on the client. The cool thing here is that this all happens within the same response; there are no additional calls involved.

render-modes-streaming-rendering

Streaming rendering has the same types of use cases as SSR. Really, any application that is happy to be server-rendered overall, but needs a little help to improve loading times, is a good candidate.

What about caching? Can’t we use that to solve the long running call issue on the server? Why do we need a new render mode?

Fair questions. The answer is that not all long-running calls can be cached. For example, a page might need to load live data from an external API, such as stocks or currency exchange rates. It’s also perfectly reasonable that live data needs to be loaded from the app’s database. In these cases, streaming rendering offers a much nicer experience for the user compared to waiting on a blank screen for a few seconds.

Configuring streaming rendering

In order to configure streaming rendering, we need the same basic configuration used for SSR, plus two additional things. The first is the inclusion of the Blazor Web JavaScript file in App.razor.

<script src="_framework/blazor.web.js" suppress-error="BL9992"></script>

The second is the addition of the streaming rendering attribute on any page that we want to be streaming rendered.

@attribute [StreamRendering(true)]

If you are keen to do as little typing as possible, the attribute defaults to true, meaning that its presence alone is enough to enable streaming rendering. So the following is also valid:

@attribute [StreamRendering]

As previously mentioned, these are both included by default in the new Blazor template with .NET 8. We just removed them when we were looking at SSR so we could see exactly how that mode behaved in isolation. So if you’re following along, you can just add those two bits back into your existing project and you’ll be good to go.

Running the app

When running a streaming rendered app, the initial load isn’t much different to that of an SSR one. The key difference is the inclusion of the blazor.web.js file.

streaming-page1

The differences start to show when navigating around. If we navigate to the Weather page as we did before, we’ll see a couple of interesting changes.

The first is that we now see some placeholder text when that page first renders.

streaming-page2a

This wasn’t something we saw before when using SSR. If we take a look at the code for the page we can see where this comes from.

@page "/weather"
@attribute [StreamRendering(true)]

<PageTitle>Weather</PageTitle>

<h1>Weather</h1>

<p>This component demonstrates showing data from the server.</p>

@if (forecasts == null)
{
    <p><em>Loading...</em></p> @* 👈 Placeholder while data is loaded *@
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>Date</th>
                <th>Temp. (C)</th>
                <th>Temp. (F)</th>
                <th>Summary</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var forecast in forecasts)
            {
                <tr>
                    <td>@forecast.Date.ToShortDateString()</td>
                    <td>@forecast.TemperatureC</td>
                    <td>@forecast.TemperatureF</td>
                    <td>@forecast.Summary</td>
                </tr>
            }
        </tbody>
    </table>
}

@code {
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    private WeatherForecast[]? forecasts;

    protected override async Task OnInitializedAsync()
    {
        // Simulate retrieving the data asynchronously.
        await Task.Delay(1000);

        var startDate = DateOnly.FromDateTime(DateTime.Now);
        forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = startDate.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        }).ToArray();
    }
}

If forecasts is null, the page renders the loading markup. We can see in the OnInitializedAsync method that forecasts gets populated after a simulated 1 second delay. Once this happens, the else branch of the if statement is rendered. The table containing the forecasts is generated and streamed to the client, where it’s patched into the existing DOM.

streaming-page2b

The second interesting change can only be seen when checking the dev tools. When navigating to the Weather page we didn’t do a full page refresh, it was actually a fetch request handled by Blazor.

streaming-devtools

Looking at the network tab, we can clearly see that the requests for the index page and the site assets (JS/CSS) are still present, even though Preserve log is disabled. We can also see that the request for the weather URL is of type fetch and was initiated by the blazor.web.js script. So what’s going on here?

This is Blazor’s enhanced page navigation in action. Once we include the blazor.web.js script, Blazor can intercept our page requests and apply the response to the existing DOM, keeping as much of what exists as possible. This mimics the routing experience of a SPA, resulting in a much smoother page load experience for the user, even though the page is being rendered on the server.

This enhanced navigation works with a purely SSR application as well. It’s not tied to streaming rendering in any way; you just need to include the blazor.web.js script.

Summary

In this post, we’ve explored two of the new rendering modes coming to Blazor in .NET 8: Server-side rendering, and Streaming rendering.

Server-side rendering, or SSR, is the new default for Blazor applications and components going forward. It uses the classic approach of processing all logic on the server and producing static HTML that is sent to the client. This render mode is excellent for applications that need no client-side logic and just need to render pages quickly and efficiently.

Streaming rendering can be thought of as an SSR+ mode. If your application has to make long-running calls when constructing a page, and those calls don’t work well with caching approaches, then streaming rendering is going to be a great option. It sends the initial HTML for the page down to the browser to be rendered just as quickly as an SSR page, but with placeholders rendered where the content from the long-running call should be. Once that call has completed, the remaining HTML is streamed down to the browser over the existing response, where it is seamlessly patched into the DOM.

We also saw a perk of including the blazor.web.js script: enhanced navigation. This allows Blazor to provide a SPA-style page navigation experience, even though pages are being rendered on the server.

What do you think of these two new render modes? Got any ideas about what sort of applications you’ll use them for? I’d love to know your thoughts; leave a comment below.

Blazor in .NET 8: Full stack Web UI

.NET 8 is bringing the biggest shake-up to Blazor since its debut. Remember the days when choosing a hosting model felt like being stuck between a rock and a hard place? Good news! Those days are almost over. With .NET 8, Blazor’s WebAssembly and Server models will come together in a harmonious union, accompanied by some other exciting surprises.

Welcome to the first in a series where we’re diving deep into the Blazor enhancements coming with .NET 8. Today, we’re exploring the sea change in how we’ll construct Blazor apps, the fading relevance of the hosting model dichotomy, and a sparkling new concept termed rendering modes.

A quick history of hosting models

There was a time, very early in Blazor’s experimental days, when hosting models weren’t a thing. Blazor was going to run on WebAssembly and execute entirely inside the client browser, and that was that. However, around July 2018 server-side Blazor was announced, briefly known as ASP.NET Core Razor Components, then ultimately renamed to Blazor Server.

This new way of running Blazor had the application hosted on the server with clients connecting over a SignalR connection. All interactions on the client flowed through this connection to be processed on the server with UI updates sent back to the client where they were applied to the DOM.

And so, hosting models were born.

Currently, we have 4 hosting models for Blazor:

  1. WebAssembly (web apps)
  2. Server (web apps)
  3. Hybrid (desktop & mobile apps)
  4. Mobile Blazor Bindings (experimental)

The great thing? Components from the top three models are usually interchangeable. A component running in a Blazor WebAssembly app can be lifted and run in a Blazor Hybrid application and vice versa (generally speaking). However, Mobile Blazor Bindings uses a different approach, with components inspired by Xamarin.Forms. This change of approach makes components written for the other hosting models incompatible with MBB.

The Hosting Model Dilemma

There’s a problem that has been a thorn in the side of Blazor developers since almost the beginning: which hosting model to pick. I’m specifically talking about building web applications here. Should it be WebAssembly or Server? What if I want to change further down the line? How much work would that be?

Of course, as developers, we’re notorious for staking our claim and championing our chosen approach. Remember the age-old tabs vs. spaces feud? Yeah, same energy when it comes to hosting models!

The reality is that neither hosting model is perfect. Each has its pros and cons. Let’s take a look.

Blazor Server

The server hosting model has some very compelling advantages:

  - A small initial download, so applications load very quickly
  - Components run with the full power of the server, with direct access to server-side resources such as databases
  - Application code stays on the server and is never sent to the client

However, it’s not all roses. There are trade-offs to picking this hosting model:

  - A constant connection to the server is required, so there is no offline support
  - Every interaction involves a network round trip, making latency noticeable on slow connections
  - The server has to hold state for every connected client, which makes scaling more demanding

As you can see, there are some great pros, but the cons are not insignificant. Now let’s take a look at Blazor WebAssembly.

Blazor WebAssembly

Here are some of the advantages WebAssembly offers:

  - Code executes entirely in the browser, so there is no per-interaction latency
  - Applications can keep working offline once loaded
  - Apps can be deployed as static files, with no .NET runtime required on the server

But what’s the cost?

  - A larger initial download, including the .NET runtime compiled to WebAssembly, means slower first loads
  - Code runs inside the browser sandbox and is restricted to what the browser allows
  - Application code is downloaded to the client, where it can be inspected

As you can see, both hosting models have some fantastic advantages, but neither is without its compromises.

Wouldn’t it be marvellous if we could cherry-pick the best of both worlds? Imagine harnessing Blazor Server’s lightning speed and combining it with Blazor WebAssembly’s resilience. And here’s the twist: .NET 8 seems to be promising exactly that!

Full Stack Web UI with Blazor

.NET 8 is ushering in a new era where you won’t be boxed into a single hosting model. Think flexibility, modularity, and personalization. You can tailor how each page or even individual component renders. It’s like having your cake and eating it too!

Instead of committing to a single hosting model for your entire application, you can now mix and match based on the needs of specific pages or even individual components. For instance, a static contact page might use server-side rendering for performance, while a real-time dashboard might use the WebAssembly mode to leverage client-side capabilities. It’s all about giving developers the tools to make the best architectural decisions for each scenario.

How is all this going to be achieved, I hear you ask? The answer is a new concept called render modes. Let’s explore each of them and see what they can do.

Server-side Rendering

Channelling the power of traditional web apps, this mode is reminiscent of how Razor Pages or MVC applications function. Server-side Rendering, or SSR, is when HTML is generated by the server in response to a request.

When using this mode, applications will load extremely fast as there is no work required on the client, and no large WebAssembly assets to download. The server is just sending HTML that the browser then renders. This also means that each request for a new page results in a full page load, as summarised in the illustration below.

render-modes-ssr

At this point you might be asking what the point of this is when Razor Pages and MVC already exist. That’s a good question, and the main reason is that neither of those frameworks offers a good story around building reusable components. Blazor has an excellent component model, and this mode allows it to be leveraged for more traditional server-rendered sites.

I think another interesting angle to consider is that Microsoft are positioning Blazor as their preferred UI framework going forward. Over the last few versions of .NET, there’s been a lot of focus on making .NET more approachable to new generations of developers. Along those lines, one criticism of .NET has been that there are so many options for doing things that it puts people off. This feels like an attempt to fix that: learn Blazor and you’ll be able to build any type of UI, whether web (both static and dynamic sites), mobile, or desktop. It’s just a theory, but I think it has some legs.

Streaming Rendering

Think of this mode as the middle ground between server and client rendering. Let’s pretend we had a page in an application that needed to make an async call to fetch some real-time data, either from a database or another API. If we used the SSR mode we just covered, we would have to wait for that async call to complete before returning any HTML to the client, which could delay loading the page. There also isn’t any other interactivity on our page, so using Server or WebAssembly mode would be massive overkill. This is where streaming rendering comes in.

When using streaming rendering, the initial HTML for the page is generated server-side with placeholders for any content that is being fetched asynchronously. This initial response is then sent down to the browser to be rendered. However, the connection is kept open and when the async call completes, the remaining HTML is generated and sent down to the browser over the existing open connection. On the client, Blazor replaces the placeholder content with new HTML.

It’s all about enhancing the user experience by minimizing wait times.

render-modes-streaming-rendering

Server Mode

While this retains the essence of the classic Blazor Server model, its granular application is the standout feature.

When selecting this mode, a page or component will be optionally pre-rendered on the server and then made interactive on the client via a SignalR connection. Once interactive, all events on the client are transmitted to the server over the SignalR connection to be processed. Any updates required to the DOM are then packaged up and sent back to the client over the same connection, where a small Blazor runtime patches them into the DOM.

render-modes-server

If you’ve been working with Blazor for a while, none of this is new, except that you’ll now be able to decide this on a page by page or component by component basis!
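As a rough sketch of what that granular selection looks like in the .NET 8 previews (the exact syntax may well change before release), a render mode can be applied right where a component is used:

```razor
@* Preview syntax, subject to change: render this one component interactively via Server mode *@
<Counter @rendermode="@RenderMode.Server" />
```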

WebAssembly Mode

The OG traditional SPA approach. This model is derived from the Blazor WebAssembly hosting model and fully capitalises on client-side capabilities, allowing C# code to run in the user’s browser.

Taking a stock prices page as an example, the page would be downloaded to the client along with the various framework DLLs and the WebAssembly runtime. Once on the client, the app would be bootstrapped and the page loaded. Any API calls to get data would then be made from the browser, and the UI would re-render as necessary to display the data returned.

render-modes-webassembly

One thing to note about any components marked as RenderMode.WebAssembly is that they need to be referenced in a separate Blazor WebAssembly project from the main Blazor Web project. This is so the framework can determine what code and dependencies need to be sent down to the client. If you want a component to run on both the server and the client, it should be placed in a Razor Class Library project that is referenced from both the main Blazor Web project and the Blazor WebAssembly project.
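As a sketch of that setup from the .NET CLI (the project names here are illustrative, not from the template):

```shell
# Create a Razor Class Library to hold the shared components
dotnet new razorclasslib -n MyApp.Components

# Reference it from both the server project and the WebAssembly client project
dotnet add MyApp/MyApp.csproj reference MyApp.Components/MyApp.Components.csproj
dotnet add MyApp.Client/MyApp.Client.csproj reference MyApp.Components/MyApp.Components.csproj
```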

Auto Mode

If there were an MVP among the rendering modes, this might just be it.

Since the early days of Blazor, developers have been asking for a way to combine the benefits of Blazor Server and Blazor WebAssembly. With .NET 8, that request becomes a reality. When setting a page or component to use Auto mode, the initial load of that component will be via Server mode, making it super fast. In the background, Blazor will then download the necessary assets to the client so that the next load can be done using WebAssembly mode.

render-modes-auto

This rendering mode addresses the biggest pain point for developers embarking on a new Blazor project: which hosting model should we use? But there are no free lunches.

Auto mode will increase the complexity of applications. Every component marked with RenderMode.Auto will need to execute on both the server and the client, meaning there will need to be some form of abstraction in place if the component needs to fetch any data.
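To make that abstraction point concrete, here is one common pattern, sketched with illustrative names rather than any prescribed API: put the data access behind an interface, with a direct implementation registered on the server and an HTTP-based one registered on the client.

```csharp
using System.Net.Http.Json;

// Shared contract - lives in a project referenced by both server and client
public interface IWeatherService
{
    Task<WeatherForecast[]> GetForecastsAsync();
}

// Server implementation - could query the database directly
public class ServerWeatherService : IWeatherService
{
    public Task<WeatherForecast[]> GetForecastsAsync()
        => Task.FromResult(Array.Empty<WeatherForecast>()); // database query would go here
}

// Client implementation - calls a web API that the server must also expose
public class ClientWeatherService : IWeatherService
{
    private readonly HttpClient _http;
    public ClientWeatherService(HttpClient http) => _http = http;

    public async Task<WeatherForecast[]> GetForecastsAsync()
        => await _http.GetFromJsonAsync<WeatherForecast[]>("api/weather")
           ?? Array.Empty<WeatherForecast>();
}
```

A component marked with RenderMode.Auto can then inject IWeatherService and behave identically wherever it happens to be running.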

Other things that spring to mind are pre-rendering and security. However, at the time of writing, Auto mode isn’t available as part of the .NET 8 previews so I’ve not been able to delve into these topics just yet, but as soon as I can, I will.

Summary

In this post, we’ve taken a first look at the major change coming to Blazor in .NET 8: full stack web UI. It represents the biggest shift in the Blazor ecosystem since the introduction of hosting models and is set to position Blazor as the “go to UI framework” for modern web applications built with .NET.

The new render modes give developers a huge amount of flexibility with their applications. We’ll be able to control how our applications are rendered at a per-component level. And Auto mode addresses one of the major pain points for developers getting started with new Blazor projects. But it’s not all sunshine and rainbows.

With great power comes great responsibility. Potentially having multiple render modes per page will create additional cognitive load for developers. When using Auto mode, developers will have to write server-based code for fetching data as well as traditional APIs, as components will run on both the client and the server.

Personally, I’m extremely excited about what’s coming, but I’d love to know what your thoughts are. Leave a comment and let me know.

Adding Tailwind CSS v3 to a Blazor app

There is no denying the increasing popularity of Tailwind CSS, and it’s no secret I’ve been a big fan of it for quite some time now, using it in both personal projects (this blog is styled using Tailwind) as well as professional ones.

Back in March 2020 (yes, that March! Right around the time the world went to s**t) I wrote a two-part mini-series, Integrating Tailwind CSS with Blazor using Gulp. A lot has changed since then: Tailwind CSS v3 has been released, and Gulp is no longer the tool I would use for integrating Tailwind into a Blazor project. So, I thought it was the right time for a new post covering the current state of play. This also marks my first blog post since finishing my book, Blazor in Action!

In this post, I’m going to give an overview of the new features available in Tailwind v3 and what’s changed about how Tailwind is used. I’m then going to show you several ways that you can get Tailwind CSS integrated into your Blazor application.

Let’s get to it!

What’s new in Tailwind CSS v3

The biggest change to Tailwind in v3 is the move to the new JIT (Just-in-Time) compiler. This was originally introduced in v2 as something you could opt in to; with v3, it’s now the default way to use Tailwind. This is an important change when integrating with Blazor. When developing a Blazor app, in addition to running the app via Visual Studio or dotnet watch, we’ll need an additional process running that watches for usage of Tailwind classes in our app and recompiles the CSS whenever a new class is found.

Another cool feature is the introduction of the new Play CDN. In previous versions of Tailwind, the CDN version was a fixed set of styles based on the default Tailwind configuration. If you wanted any form of customisation, you were out of luck.

The Play CDN is actually a JavaScript library hosted on a CDN; add a reference to the script and you can use every Tailwind feature. However, it’s important to point out that the Tailwind team do not recommend this for production applications, only for development purposes or demo applications.

As well as the two major features listed above, there were a whole host of new styles and effects added to help make our applications look even better. Here are my highlights:

For the full list of changes in v3, check out the release blog post on the Tailwind site.

Trying things out using the new Tailwind CDN

If you’re thinking about using Tailwind for the first time, or perhaps, like me, you build a lot of demo apps or test apps and want to take advantage of Tailwind without having to do too much setup work, the new Play CDN is something you’ll want to check out.

Unlike previous versions of the Tailwind CDN, where a CSS file was produced based on the default settings and hosted on the CDN for us to reference, this new version allows us to take full advantage of all the features Tailwind has to offer. How can it do that? Well, because it’s actually a JavaScript library rather than a static CSS file. Let’s take a look at how to set it up.

Adding the Play CDN to a Blazor app

To add the Play CDN to a Blazor application, we need to add the following script tag to the head element of the host page (index.html for Blazor WebAssembly or _Host.cshtml for Blazor Server).

<head>
    ...
    <title>Tailwind via Play CDN</title>
    <base href="/" />
    <script src="https://cdn.tailwindcss.com"></script>
</head>

Once the script is in place, we can run the app using hot reload, either via dotnet watch or Visual Studio, and start applying Tailwind classes. As we do, the Play CDN library will pick up the classes and generate the necessary styles into a style tag in the head element.

For example, say we made the following changes to MainLayout.razor.

<div class="flex">
    <div class="w-[250px] bg-slate-900 p-6">
        <NavMenu />
    </div>

    <main class="flex flex-col flex-grow">
        <div class="bg-slate-700 p-4 text-right shadow-lg">
            <a href="https://docs.microsoft.com/aspnet/" target="_blank">About</a>
        </div>

        <article class="p-4">
            @Body
        </article>
    </main>
</div>

The Play CDN generates the following style tag:

CSS classes generated by Tailwind's Play CDN

This alone is pretty cool, but we can also adjust pretty much anything we want. For example, we can include core plugins using the plugins query parameter.

<script src="https://cdn.tailwindcss.com?plugins=typography,line-clamp"></script>

The preceding code includes the Typography plugin and the Line Clamp plugin.

We can also customise any part of the Tailwind configuration, just as we could when working with Tailwind locally, by including a second script tag defining any additional configuration changes.

In the following example we’re adding a custom font size.

<script>
    tailwind.config = {
      theme: {
        extend: {
          fontSize: {
            xxs: ['.65rem', '.75rem']
          }
        }
      }
    }
</script>
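With that configuration in place, the custom size is available like any built-in utility (the class name text-xxs is derived from the xxs key we defined above):

```html
<p class="text-xxs">Fine print rendered at the custom 0.65rem size defined above</p>
```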

The new Play CDN is really impressive and quick to work with, exactly what you want from a tool like this. I’m definitely going to be using this a lot in my demo apps and prototype projects going forward.

But what about production apps? How do we integrate this new version of Tailwind into those? Let’s look at that next.

Adding Tailwind CSS to a Blazor project

As we learned earlier, with Tailwind v3 JiT mode is now the default. This means that we need to run a process that watches for usage of Tailwind CSS classes and recompiles the output CSS as required. There are two options for this:

  1. Tailwind CLI
  2. PostCSS integrated into an existing build tool such as webpack

In this post, I’m going to show you two options using the Tailwind CLI. However, if you already have some form of JavaScript build system in place, then PostCSS might be the better option for you. I’d suggest checking out the official docs on integrating PostCSS.

With the v3 release, we now have two options when it comes to running the Tailwind CLI. The first is installing and running the CLI via NPM. The second is a new option, the standalone CLI. As the name suggests, this option doesn’t require NPM; it’s a self-contained executable. We’re going to cover both options. Let’s start with the NPM version.

Integrating using the Tailwind CLI via NPM

I guess you could call this the ’traditional’ way to run the Tailwind CLI. I would suggest using this option if your application already uses NPM, or if you’re simply comfortable working with it.

If you don’t already have it, you will need to install Node.js. I’d suggest grabbing the LTS (Long Term Support) version. NPM will be installed along with Node.

Once you have Node installed, you can install the Tailwind CLI using the following command:

npm install -g tailwindcss

The -g flag installs the CLI globally on your machine.

We’re now ready to add Tailwind to Blazor. From a terminal in the root of your Blazor app, run the following command:

npx tailwindcss init

This will create a new default Tailwind configuration file called tailwind.config.js, in the same folder, which looks like this.

module.exports = {
  content: [],
  theme: {
    extend: {},
  },
  plugins: [],
}

All we need to do to this file is tell Tailwind which files contain Tailwind CSS classes so the CLI can monitor them for changes. In a Blazor app those files will be Razor files, plus either HTML files for Blazor WebAssembly (index.html) or CSHTML files for Blazor Server (_Host.cshtml). The code below will cover all of the file types mentioned.

module.exports = {
  content: ["./src/**/*.{razor,html,cshtml}"],
  theme: {
    extend: {},
  },
  plugins: [],
}
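To get a feel for which files a content pattern like this covers, here's a rough sketch that converts the brace/glob syntax into a regular expression. This is not Tailwind's real matcher (it uses a proper glob library internally); it's only an illustration of what `./src/**/*.{razor,html,cshtml}` matches.

```javascript
// Sketch only: NOT Tailwind's real glob matcher. Converts a simple glob
// with {a,b} braces, "**/" and "*" into a RegExp, to illustrate which
// files the content pattern covers.
function globToRegExp(glob) {
  const pattern = glob
    .replace(/\./g, "\\.") // escape literal dots
    .replace(/\{([^}]+)\}/g, // {razor,html,cshtml} -> (razor|html|cshtml)
      (_, alts) => "(" + alts.split(",").join("|") + ")")
    .replace(/\*\*\//g, "(.+/)?") // "**/" matches any folder depth, or none
    .replace(/\*/g, "[^/]*"); // "*" matches within a single path segment
  return new RegExp("^" + pattern + "$");
}

const content = globToRegExp("./src/**/*.{razor,html,cshtml}");

console.log(content.test("./src/Shared/NavMenu.razor")); // true
console.log(content.test("./src/wwwroot/index.html"));   // true
console.log(content.test("./src/Pages/_Host.cshtml"));   // true
console.log(content.test("./src/wwwroot/app.css"));      // false
```

If your component files don't live under a src folder, adjust the leading part of the pattern to match your project layout; the important bit is that every folder containing .razor, .html, or .cshtml files is covered.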

Now we need to set up a source CSS file. Tailwind will ingest this file and output the final compiled CSS our application will reference. I like to create a folder at the root of the application called Styles and add a CSS file in there called app.css, or something similar. Inside that file, we need to add 3 lines of code:

@tailwind base;
@tailwind components;
@tailwind utilities;

These are Tailwind directives and they will be replaced with whatever classes are needed based on what we use in our application. It’s also worth pointing out that we can add arbitrary CSS classes in this file and they will appear in the final output CSS file.

At this point we can go back to our terminal and start the Tailwind CLI. This will produce the output CSS file as well as put the CLI in watch mode.

npx tailwindcss -i ./Styles/app.css -o ./wwwroot/app.css --watch

With the arguments above, the compiled CSS file will be placed into the root of the wwwroot folder. If you want it somewhere else, change the path accordingly.

The final piece of setup is to add a reference to the output CSS file in the host page of the Blazor app.

<head>
    ...
    <title>Tailwind via NPM</title>
    <base href="/" />
    <link href="app.css" rel="stylesheet" />
</head>

At this point we can run the Blazor app using dotnet watch or Visual Studio with hot reload and start adding Tailwind classes to our components. As we do, the Tailwind CLI will pick up the usages and regenerate the output CSS file. Blazor’s hot reload will update the page and changes should show up almost instantly. I will caveat this with, “depending on how hot reload is feeling that day”. I have had days where changes take 10-15 seconds to show on the page and other times they’re instant. It’s important to remember that .NET’s hot reload is still very new and things are improving all the time. It’s just something to be aware of.

When it comes time to publish the application we can use the CLI to generate a minified version of the final CSS using the following command:

npx tailwindcss -i ./Styles/app.css -o ./wwwroot/app.css --minify

That’s about it for the NPM version. Now let’s move over to the standalone CLI.

Integrating using the new Tailwind standalone CLI

This section is going to be pretty short as everything is the same between the standalone CLI and the NPM version except for how we get it and how we run it.

We download the standalone CLI from the releases page of the Tailwind GitHub repo. You will need to pick the right version for your OS. For example, I’m on an Intel MacBook Pro, so I would download the tailwindcss-macos-x64 version. Mac and Linux users will also need to make the file executable using the following command:

chmod +x tailwindcss-macos-x64

To save a bit of typing, I’d also recommend renaming it to tailwindcss, but I’ll leave that up to you.

At this point we can copy the standalone CLI into the root of our Blazor project and then use it in the same way we did for the NPM version.

First, we can generate a new Tailwind configuration file.

./tailwindcss init

Then update it with the file types to watch.

module.exports = {
  content: ["./src/**/*.{razor,html,cshtml}"],
  theme: {
    extend: {},
  },
  plugins: [],
}

Create the source CSS file: Styles/app.css. Then add the Tailwind directives.

@tailwind base;
@tailwind components;
@tailwind utilities;

We can run the standalone CLI to generate the output CSS file and watch for changes.

./tailwindcss -i ./Styles/app.css -o ./wwwroot/app.css --watch

Finally, we can add a reference to the output CSS file in the host page.

<head>
    ...
    <title>Tailwind via Standalone CLI</title>
    <base href="/" />
    <link href="app.css" rel="stylesheet" />
</head>

As you can see, everything is exactly the same, just without the need for NPM. For those of you using Tailwind in multiple projects, I’d suggest moving the standalone CLI to an appropriate location on your system and then adding that location to your PATH. This will save you from having to put a copy into every project.

One final thing. If you’re attempting to do this with a Blazor WebAssembly Hosted solution using Hot Reload via Visual Studio 2022 you might hit this issue. However, it does appear to work fine when running via the .NET CLI.

Summary

In this post, we’ve talked about the new features introduced in Tailwind v3. The highlights being the move to JiT mode as the default, the new Play CDN, and the standalone Tailwind CLI.

We then looked at 3 different ways to configure our Blazor applications to use Tailwind. The first was using the new Play CDN, a great option for prototyping or demo apps. The second was using the Tailwind CLI via NPM. This requires installing Node.js and executing the CLI via the npx command. The third was the new standalone Tailwind CLI. This can be downloaded from the releases area of the Tailwind GitHub page. Once downloaded, it provides the same functionality as the NPM version without the need for Node.js to be installed.

Blazored hits 1,000,000 downloads on NuGet

At some point over the weekend of 13th and 14th February 2021, the total number of package downloads for Blazored ticked over the 1,000,000 mark. This is just a crazy number and something I’d never considered would happen when I wrote the first package, Blazored LocalStorage, a little over 2 years ago.

Over the past two years, there have been over 50 contributors to the various Blazored repos. And I’d like to take this opportunity to say a massive thank you to everyone who has ever submitted an issue, raised a PR to fix a bug, add a feature or improve the documentation.

I also want to highlight that three of the repos which make up the Blazored family are maintained by other members of the Blazor community–I’ve just helped out with some DevOps configuration. Those libraries are:

Let me finish by once again saying a huge thank you to everyone who has contributed or used any of the Blazored packages. Working in open source is a lot of hard work and it wouldn’t be possible without the help of others.

I will also take this opportunity to highlight that if you like the Blazored packages and want to show your appreciation through sponsorship, then please check out my GitHub sponsors profile.

Right, that’s all for this post, I’ve got to get back to writing some more chapters of Blazor in Action!

Talking Blazored on the Blazor Community Stand up

At the end of last year, I was asked by Safia Abdalla if I’d be interested in coming on the Blazor Community Standup to talk about the Blazored components and libraries. I of course jumped at the chance and agreed straight away. The show was last Tuesday (9th February) and I had a great time. We had some great questions from the chat and I didn’t seem to send anyone to sleep–at least I hope I didn’t!

If you didn’t get a chance to see it live then don’t worry. All the community stand ups are recorded and added to the dotNET YouTube channel. I’ve added a direct link to the video below.

I just want to finish up by saying a huge thank you to Safia and Jon Galloway for having me on the show. It was a blast!


P.S. A little update on the book. It’s coming along and we’ve now got 5 chapters in the bag. Which means we’re almost halfway there! I’m not going to lie, this has got to be the most challenging thing I’ve ever done but it will all be worth it when it’s finished.

Remember, you can purchase the book right now via the MEAP program and get each chapter as I write it. You can also provide feedback and help shape the final print copy.

Blazor in Action is now available on MEAP

I’ve been working really hard over the past few months on Blazor in Action and I’m pleased to announce that the book is now available via Manning’s Early Access Program (MEAP).

If you’ve not purchased a book through the MEAP program before let me tell you a bit about how it works. When purchasing via MEAP, you will get early access to the book, receiving the chapters as I write them. As of right now, I’ve written the first two chapters, which you’ll receive as soon as you purchase. Then every month you’ll receive a new chapter until the book is complete.

Apart from getting early access, you also have the ability to give feedback directly to me via the book’s forum. Working together, we can make the final printed version as useful as possible.

The awesome folks at Manning have also given me a 50% discount code to share with you all which will be valid until 25th October 2020. When checking out use the code mlsainty to receive the discount.

Building a simple tooltip component for Blazor in under 10 lines of code*

It has been so long since I’ve managed to get some time to write a blog post. Since my last post, I’ve been working hard writing the first few chapters for my book, Blazor in Action – which should be out on MEAP (Manning Early Access Program) very very soon. My wife and I also had our first child at the start of September which has been an amazing and interesting new challenge to take on. I’m loving every minute so far but I’m sooo thankful for my coffee machine!!

Anyway, back to the subject at hand. In this post I’m going to show you how to build a really simple, reusable tooltip component for your Blazor applications. I’ve used a bit of artistic license with the title of this post: while it’s less than 10 lines of Razor, that doesn’t include the CSS. But as the CSS will be minified at some point, I don’t think it should count…. Anyway, moving on.

If you’re just interested in the result, then you can check out the GitHub repo which has the final working version of the code in this post. I also want to point out that I’ve used the .NET 5 preview bits to build this, so you will need those installed to run the sample.

Creating the Tooltip component

The first thing we’re going to do is create a new component called Tooltip.razor and add the following code:

<div class="tooltip-wrapper">
    <span>@Text</span>
    @ChildContent
</div>

@code {
    [Parameter] public RenderFragment ChildContent { get; set; }
    [Parameter] public string Text { get; set; }
}

The component has two parameters, ChildContent and Text. When we use the component, it’s going to wrap the markup where we want the tooltip to display. The wrapped markup will be captured by the ChildContent parameter and displayed inside the main div of our tooltip. The other parameter, Text, is what we use to set the text that the tooltip will display.

That’s all there is in terms of Razor code! Not bad: 9 lines including a space. The real magic to making this all work is in the CSS. I’m using the new .NET 5 RC1 bits, which means I can take advantage of Blazor’s new CSS isolation feature.

Adding the CSS

To use CSS isolation in Blazor we need to create a CSS file with the same name as the component the styles are used by. In our case the component is called Tooltip.razor, so our stylesheet needs to be called Tooltip.razor.css. Once this is done we can add the following styles:

.tooltip-wrapper {
    position: relative;
    display: inline-block;
    border-bottom: 1px dotted black;
    cursor: help;
}

span {
    visibility: hidden;
    position: absolute;
    width: 120px;
    bottom: 100%;
    left: 50%;
    margin-left: -60px;
    background-color: #363636;
    color: #fff;
    text-align: center;
    padding: 5px 0;
    border-radius: 6px;
    z-index: 1;
}

span::after {
    content: "";
    position: absolute;
    top: 100%;
    left: 50%;
    margin-left: -5px;
    border-width: 5px;
    border-style: solid;
    border-color: #555 transparent transparent transparent;
}

.tooltip-wrapper:hover span {
    visibility: visible;
}

The first class, .tooltip-wrapper, is applied to the container div of the component. It makes the div render as inline-block so the document flow isn’t disrupted. It also sets its position to relative. This means we can then absolutely position the child span (the tooltip text) where we need it, relative to the parent div. The final two rules add some styling to show the user there is a tooltip available.

The next set of styles applies to the span element, which contains the tooltip text that will be shown. By default, the span is hidden, and it’s absolutely positioned relative to the parent div. With the styles shown above, the tooltip will be shown above the content contained between the start and end tags of the Tooltip component.

The next style uses what’s called a pseudo-element, which lets us style an extra piece of the page without adding any markup of our own. Here it creates a small arrow at the bottom of the tooltip text pointing to the content that the tooltip is wrapping.

The final style will show the span containing the tooltip text whenever the user hovers over the parent div with their cursor.

That’s all there is to it!

Using the Tooltip

In order to use the tooltip, just wrap it around the content you want the user to be able to hover over to show the tooltip text, and add whatever message you want to be displayed. As an example you could wrap some text like this…

Welcome to your new <Tooltip Text="Blazor is awesome!">app</Tooltip>.

If all goes well it should look something like this…

Summary

Just to recap, in this post I showed you how to create a simple, reusable tooltip component for your Blazor application. The whole component was only 9 lines of Razor code, required no JavaScript, and used about 36 lines of CSS. We also had a chance to use the new CSS isolation feature which will be shipping in .NET 5, which is right around the corner.

This was a much shorter post than I normally write but it’s just nice to actually write a blog post again! I hope to be announcing the MEAP for Blazor in Action any day now so as soon as that is available I will have a blog post up about it and how you can get your hands on it.

Creating a Custom Validation Message Component for Blazor Forms

As some of you may know, I’m a big fan of Tailwind CSS. If you’ve not heard of Tailwind before then please check out my previous posts about it, which can be found here and here. The reason I mention this is that when I was using it on a recent Blazor project, I hit a bit of a snag. I wanted to style my validation messages using Tailwind’s utility classes, but I couldn’t add them to the component. This is because the ValidationMessage component adds a hard-coded class which can’t be added to or overridden.

builder.OpenElement(0, "div");
builder.AddMultipleAttributes(1, AdditionalAttributes);
builder.AddAttribute(2, "class", "validation-message");
builder.AddContent(3, message);
builder.CloseElement();

The full source can be viewed here, but as you can see in the snippet above, while the component supports passing additional attributes, the component will always override any class attribute that is supplied.

I could’ve added a wrapper div around each usage of ValidationMessage and added my classes there, but that felt like a clunky solution. I’ve also seen several people in the community asking about customising the output of the ValidationSummary component. So I thought this would be a good opportunity to come up with a better way.

In this post, I’m going to show how you can create a ValidationMessage component with customisable UI. I’ll start by showing a more simplistic approach and then show a more robust and reusable solution.

How does ValidationMessage work?

Before we get into the solutions, I wanted to quickly cover how the standard ValidationMessage works.

The default component is pretty compact weighing in at about 100 lines of code. It accepts a cascaded EditContext and adds an event handler for the OnValidationStateChanged event. All this handler does is call StateHasChanged whenever the event fires.

It also creates a FieldIdentifier based on whichever model property has been specified via the For parameter. This is then used when calling the GetValidationMessages method on the EditContext. This method will return all of the current validation error messages for the given FieldIdentifier. Any messages returned are then rendered.

In summary, the component does three things:

  1. Subscribes to the EditContext’s OnValidationStateChanged event and calls StateHasChanged whenever it fires.
  2. Creates a FieldIdentifier for the model property specified via the For parameter.
  3. Renders any validation messages returned by GetValidationMessages for that FieldIdentifier.

Now we understand what the original component does, let’s move on to the solutions.

Creating a simple replacement

The first option, which was pretty quick to build, was largely a modified version of the original component.

The default component caters for a lot of potential scenarios, things like dynamically changing EditContexts or updates to the For parameter. Therefore there’s a lot of code which checks and caches values to support this and be as efficient as possible.

However, my use case was very straightforward: just a standard form which wouldn’t have anything dynamically changing. This meant that I could do away with a good amount of code. The result is the following.

@using System.Linq.Expressions

@typeparam TValue
@implements IDisposable

@foreach (var message in EditContext.GetValidationMessages(_fieldIdentifier))
{
    <div class="@Class">
        @message
    </div>
}

@code {
    [CascadingParameter] private EditContext EditContext { get; set; }

    [Parameter] public Expression<Func<TValue>> For { get; set; }
    [Parameter] public string Class { get; set; }

    private FieldIdentifier _fieldIdentifier;

    protected override void OnInitialized()
    {
        _fieldIdentifier = FieldIdentifier.Create(For);
        EditContext.OnValidationStateChanged += HandleValidationStateChanged;
    }

    private void HandleValidationStateChanged(object o, ValidationStateChangedEventArgs args) => StateHasChanged();

    public void Dispose()
    {
        EditContext.OnValidationStateChanged -= HandleValidationStateChanged;
    }
}

I implemented the same fundamental behaviour as the original component: I created a FieldIdentifier based on the model property defined via the For parameter and registered a handler for the OnValidationStateChanged event on EditContext. I also unregistered the handler, to avoid any memory leaks, by implementing IDisposable.

In my markup, I then output any validation messages as per the original component, but with one simple difference. I removed the hard-coded CSS applied and added a Class parameter. Now I could provide any classes I wanted to apply on a case by case basis. Here’s an example of usage.

<CustomValidationMessage For="@(() => _model.FirstName)"
                         Class="mt-2 sm:ml-4 font-semibold text-red-600" />

This implementation resolved the original issue, I could now use my Tailwind CSS classes to style my validation messages without any issues. Job done, right?

For my immediate problem, yes. But it got me thinking about some of those comments from the community I mentioned in the introduction. In those comments, developers were asking about changing the HTML that was rendered, not just adding custom CSS classes. This solution doesn’t help with that problem, which led me to create my second solution.

ValidationMessageBase for ultimate customisation

I really love the approach the Blazor team took with building the input components for forms. Providing us with InputBase<T> is great as we can focus on building custom UI, which is what needs to be changed in 99% of cases, while the boilerplate of integrating with the form and validation system is taken care of.

Wouldn’t it be great if there was something like that for validation messages as well…?

Creating ValidationMessageBase

By creating a base class for validation messages, as per the design of the input components, it would give developers the freedom to tweak the UI output to their exact needs. Here’s the code.

public class ValidationMessageBase<TValue> : ComponentBase, IDisposable
{
    private FieldIdentifier _fieldIdentifier;

    [CascadingParameter] private EditContext EditContext { get; set; }
    [Parameter] public Expression<Func<TValue>> For { get; set; }
    [Parameter] public string Class { get; set; }

    protected IEnumerable<string> ValidationMessages => EditContext.GetValidationMessages(_fieldIdentifier);

    protected override void OnInitialized()
    {
        _fieldIdentifier = FieldIdentifier.Create(For);
        EditContext.OnValidationStateChanged += HandleValidationStateChanged;
    }

    private void HandleValidationStateChanged(object o, ValidationStateChangedEventArgs args) => StateHasChanged();

    public void Dispose()
    {
        EditContext.OnValidationStateChanged -= HandleValidationStateChanged;
    }
}

This is essentially the logic from the code block in the first solution, except I’ve added a property which returns the current validation messages instead of calling the GetValidationMessages method directly on the EditContext. This is purely to make the developer experience a little nicer when implementing markup for the validation messages.

With this base class I can implement the same markup as I had for the first solution really easily.

@typeparam TValue
@inherits ValidationMessageBase<TValue>

@foreach (var message in ValidationMessages)
{
    <div class="@Class">
        @message
    </div>
}

And if I want to implement something different in a future project all I need to do is create a new derived component.

@typeparam TValue
@inherits ValidationMessageBase<TValue>

@if (ValidationMessages.Any())
{
    <ul class="validation-errors">
        @foreach (var message in ValidationMessages)
        {
            <li class="validation-error-message">
                @message
            </li>
        }
    </ul>
}

Summary

In this post, I’ve shown a limitation of the default ValidationMessage component which comes with Blazor: the inability to customise the markup it produces. I’ve offered two potential solutions.

The first is a modified version of the original component. The hard-coded styling was removed and replaced with a Class parameter allowing CSS classes to be specified per usage.

The second solution was based on the design of Blazor’s input components. A base class was used to abstract away the boilerplate code, allowing developers to focus on creating the specific markup they require, for ultimate flexibility.

Avoiding AccessTokenNotAvailableException when using the Blazor WebAssembly Hosted template with individual user accounts

Blazor WebAssembly has shipped with a host of new options for authentication. We now have the ability to create Blazor Wasm apps which can authenticate against Active Directory, Azure AD, Azure AD B2C, Identity Server, in fact any OIDC provider should work with Blazor. But, there is a little gotcha which has tripped a few people up when building applications which mix protected and unprotected endpoints using the Blazor WebAssembly ASP.NET Core Hosted template with Individual user accounts enabled.

The default configuration in the template uses an HTTP client with a custom message handler called BaseAddressAuthorizationMessageHandler. This handler attempts to attach an access token to any outgoing requests and if it can’t find one, it throws an exception.

This makes sense if you’re only calling protected endpoints, but if your application allows the user to call unprotected endpoints without logging in then you might see an AccessTokenNotAvailableException in the browser console.

In this post, I’m going to show you how to configure an additional HttpClient instance which can be used by unauthenticated users to call unprotected endpoints, avoiding an AccessTokenNotAvailableException.

Default Configuration

When you create a new Blazor WebAssembly application using ASP.NET Core Identity, you’ll find the following configuration in the Program.Main method.

builder.Services.AddHttpClient("BlazorApp.ServerAPI", client => client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress))
                .AddHttpMessageHandler<BaseAddressAuthorizationMessageHandler>();

This sets up the HttpClient for the application using a custom handler, BaseAddressAuthorizationMessageHandler. This handler is really useful as it saves us having to worry about adding access tokens to our API requests manually. However, as I explained in the introduction, this is also a potential issue.

If we delve into the source code, we can see that BaseAddressAuthorizationMessageHandler inherits from another class called AuthorizationMessageHandler, and this is where we find the potential gotcha.

var tokenResult = _tokenOptions != null ?
                        await _provider.RequestAccessToken(_tokenOptions) :
                        await _provider.RequestAccessToken();
                        
if (tokenResult.TryGetToken(out var token))
{
    _lastToken = token;
    _cachedHeader = new AuthenticationHeaderValue("Bearer", _lastToken.Value);
}
else
{
    throw new AccessTokenNotAvailableException(_navigation, tokenResult, _tokenOptions?.Scopes);
}

The piece of code above is looking for a token so it can add it to the authentication header of the outgoing request. As you can see, if one isn’t found then an AccessTokenNotAvailableException is thrown.

The default template configuration actually gives us a hint of this potential issue on the FetchData component.

protected override async Task OnInitializedAsync()
{
    try
    {
        forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>("WeatherForecast");
    }
    catch (AccessTokenNotAvailableException exception)
    {
        exception.Redirect();
    }
}

There is a try..catch set up specifically to check for this exception and redirect the user to log in.

Now that we can see where the problem comes from, what’s the solution?

Configuring a second HttpClient for unauthenticated requests

The answer is to add a second HTTP Client instance which doesn’t use this message handler. We can then use this instance when we want to make unprotected requests, and use the original one for everything else.

The easiest way we can achieve this is by adding the following line to our Program.Main.

builder.Services.AddHttpClient("BlazorApp.PublicServerAPI", client => client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress));

This will add another named HTTP Client instance to our application but, as you can see, there is no special message handler added this time.

Once this is done, we can get an instance of this client by injecting IHttpClientFactory into our component or service and calling the CreateClient method with the name of the client.

@inject IHttpClientFactory HttpClientFactory

...


@code {
    private async Task GetSomethingFromAPI()
    {
        var client = HttpClientFactory.CreateClient("BlazorApp.PublicServerAPI");
        var result = await client.GetFromJsonAsync<Something>("/api/something");
    }
}

Another option, which in my opinion is a bit nicer as I’m not a fan of magic strings, is to create a typed client. This means we can request an instance by type rather than by name.

To do this we first need to add a new class. We’ll call it PublicClient and add the following code.

public class PublicClient
{
    public HttpClient Client { get; }
    
    public PublicClient(HttpClient httpClient)
    {
        Client = httpClient;
    }
}

This code is quite straightforward: we’re just accepting an instance of HttpClient through DI and saving it to the Client property.

Then in Program.Main we add the following code.

builder.Services.AddHttpClient<PublicClient>(client => client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress));

This is going to inject an HttpClient instance, configured with the base address we’ve specified, into our PublicClient class. It will also add our PublicClient class into the DI container so we can request an instance of it with the preconfigured HttpClient.

We can then use it like this.

@inject PublicClient PublicClient

...


@code {
    private async Task GetSomethingFromAPI()
    {
        var result = await PublicClient.Client.GetFromJsonAsync<Something>("/api/something");
    }
}

I think this is a much cleaner way to do things: it saves us a line of code in our component and it’s more obvious at a glance which client we’re using. Using this approach also allows us to abstract away the underlying HttpClient entirely, if we wish. We can write methods on the PublicClient so we never expose the HttpClient to calling code.

public class PublicClient
{
    private readonly HttpClient _client;
    
    public PublicClient(HttpClient httpClient)
    {
        _client = httpClient;
    }
    
    public async Task<SomeThing> GetSomething()
    {
        var result = await _client.GetFromJsonAsync<SomeThing>("/api/something");
        return result;
    }
}
@inject PublicClient PublicClient

...


@code {
    private async Task GetSomethingFromAPI()
    {
        var result = await PublicClient.GetSomething();
    }
}

This code is now really succinct, we’ve also centralised all our API calling code which should make any future maintenance simpler.

Summary

In this post we’ve explored a potential issue with unauthenticated users calling unprotected endpoints using the default HttpClient that comes with the Blazor WebAssembly auth templates. We’ve investigated Blazor’s source to understand where this comes from and then applied a solution: adding a second HttpClient instance.

Once we had this working we also looked at a different implementation of the solution which made our code a bit simpler and easier to understand, as well as potentially making API calling code easier to maintain in the long run.

Blazor News from Build 2020

Last week saw the first ever virtual Build event from Microsoft and I thought they did an amazing job with it. Apparently over 200,000 people registered for the free event with around 65% of virtual attendees from outside the US. There were loads of great announcements on products ranging from Office to Azure, but as you would expect, I was most excited for the Blazor bits.

In this post, I’ve put together a round up of all the Blazor news from Build 2020.

Blazor WebAssembly goes GA

The biggest announcement by far was that on May 19th 2020, after just over 2 years in development, Blazor WebAssembly was officially released. I’m so happy for the Blazor team to hit this milestone and huge congratulations to them on their amazing work so far.

You can read the official announcement on the ASP.NET Blog. I’ve also written a more in-depth blog post about the release for Progress Telerik which can be found on their blog.

Modern Web UI with Blazor WebAssembly

Steve Sanderson gave a great new presentation on Blazor titled Modern Web UI with Blazor WebAssembly. In the 45-minute video Steve covers some of the major features of Blazor WebAssembly:

He also shows how to host a Blazor WebAssembly app on GitHub pages before finishing with a look at what the team are working on for .NET 5.

Build a web app with Blazor WebAssembly and Visual Studio Code

There was a great guided workshop session which walks developers through building Blazor WebAssembly apps using Visual Studio Code. The workshop is available on Microsoft Learn for anyone to work through at their own pace.

The workshop is aimed at newcomers to Blazor and starts with the basics, like getting your development environment set up, through to creating a new Blazor project and adding your own logic. If you’re someone who is looking to get into Blazor for the first time, this course would be a great place to get started.

Microsoft Learn - Build a web app with Blazor WebAssembly and Visual Studio Code

.NET Multi-platform App UI

Another interesting announcement for Blazor developers following Mobile Blazor Bindings (see my blog series on it if you want to know more) was the news of .NET MAUI.

.NET MAUI is the evolution of Xamarin Forms and will be coming in the .NET 6 time frame with previews hopefully by the end of 2020. .NET MAUI will come with a whole new renderer and project setup. There will only be one project needed now to target multiple platforms, rather than the current project per target. But what’s most interesting is that developers will be able to write .NET MAUI apps using either MVVM, RxUI, MVU or Blazor.

Once this is in place, it means Blazor developers can write UI for web, desktop and mobile platforms across Windows, macOS, iOS and Android. How amazing would that be!

Summary

That wraps things up from Build 2020. There were some great announcements and content for us Blazor fans, but there is also so much to look forward to in .NET 5, which is coming in November, only 6 months away.

Most of the content from Build 2020 is being made available on Channel 9. There were so many other great sessions – I would really recommend checking them out if you have some time.

I'm writing a book!

Announcing Blazor In Action

If you follow me on Twitter then you probably would’ve already seen the teaser tweet I put out about this a few days ago…

But I wanted to write a quick post here as well for those that don’t and also to explain in a little more detail what’s going on.

It’s going to be called Blazor In Action and will be part of the amazing “In Action” series from Manning Publications. I’m not going to lie, I’m feeling quite a bit of imposter syndrome looking at all the great authors who have written books for Manning, Jon Skeet and Andrew Lock being two prime examples. But I’m determined to do this book justice and I’m working with a great team who are helping me learn the ropes of long-form writing.

The book is going to take about a year to write and publish so this is the start of a long road but one I’m excited to travel (please remind me I said this in a few months! 😂). I’m planning on still publishing blog posts while I’m writing, but as you can imagine, they won’t be quite as frequent as they have been in the past. So please bear with me during this time.

Along the way I’m hoping to get you some sneak peeks of the chapters on here, but Manning also have a great program called MEAP (Manning Early Access Program). This allows you to buy a book while it’s being written – you literally get the chapters as I finish writing them – as well as a copy of the finished book. You can also give your feedback along the way and help make the book even better! As soon as MEAP opens for Blazor In Action I’ll let you know.

That’s about it for now, I really hope you all like the book when it’s done and I’ll keep you updated with its progress.

Auto Saving Form Data in Blazor

I got tagged in a thread on Twitter last week by Mr Rockford Lhotka. I’ve embedded the thread below so you can read it for yourself, but essentially he was voicing his frustration at losing data when an error, timeout, or server issue occurs in web applications.

Now, this type of situation very much depends on the application and how it’s been developed but, nonetheless, I agree. There is nothing worse than spending ages filling in a form, only for something to blow up and lose everything you’ve entered.

He and Dan Wahlin were discussing the idea of a form auto saving data so it could be recovered if something went wrong with the application. I got tagged by Rockford suggesting it would be a great addition to Blazored – Challenge accepted! 😃

In this post, I’m going to show you a solution I’ve come up with for auto saving and rehydrating form data in Blazor. I’ll start by outlining the goals and boundaries I set for this project, then cover some options I decided not to go with and why I didn’t pursue them. From there I’ll get into the meat of things and step through the solution I developed, showing how I satisfied each goal set out at the start. I’ll finish by showing you how using the solution compares to the existing EditForm experience.

I want to be clear this is by no means a bulletproof, cover-all-bases solution. It’s more an MVP to be built on. I’m hoping to release this as a new package under Blazored after a bit more development and refinement. If you want to help out with that then I’ve included a link to the new repo at the end of the post.

Defining the Goals

To give myself some focus and boundaries I made a list of four things that needed to be achieved in order for this to be considered useful.

  1. Save data to local storage when it’s entered into each form control
  2. Rehydrate data from local storage when the user returns to the page
  3. Clear saved data from local storage when the form is successfully submitted
  4. Try to keep things as close to the current experience with EditForm as possible for developers

There were also some things that I wanted to rule out for now.

Now I had a defined scope for the work, I could start playing around with some ideas and see what I could come up with.

The Cutting Room Floor

As you would expect, I went through many failed ideas before I settled on a solution I was happy with. I thought it might be interesting to briefly highlight what they were and why I didn’t pursue them.

First attempt

I started with the idea of creating a new component which could be nested inside an EditForm, similar to the DataAnnotationsValidator. I quickly got something in place which used the cascaded EditContext, provided by EditForm, to hook into the OnFieldChanged event. From here I could intercept model updates and persist the model to local storage.

The problem came when trying to rehydrate the form. Due to where my component sat in the component tree, the UI wasn’t updating to show the values in the textboxes. I needed a StateHasChanged call higher in the tree and I just couldn’t get that to work in a nice way.

I also would need to know when the form had been submitted so I could achieve number 3 on my goals list, clearing the saved values on a successful form post. There was no way I could reliably do that using the EditContext alone.

Second attempt

After some hacking about I came to the conclusion that the best option was going to be a new form component, replacing EditForm. I wanted to avoid duplicating all of the code from EditForm and just inherit from it. I could then add some additional functionality to save and load the form values.

Unfortunately, that was a short-lived hope. I needed access to some members which were private. I got round that with some reflection-fu, but the show-stopper was needing to add some additional logic in the HandleSubmitAsync method – there was no way to hack this.
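For anyone who hasn’t used this trick before, here’s a minimal, self-contained sketch of the kind of reflection involved – the class and field names are made up for illustration, they’re not from the real EditForm source:

```csharp
using System.Reflection;

public class BaseForm
{
    // A private member a derived class can't normally access
    private string _secret = "hidden";
}

public class DerivedForm : BaseForm
{
    public string ReadSecret()
    {
        // Reflection lets us bypass the access modifier. It's fragile though -
        // it breaks if the base class renames the field in a future release.
        var field = typeof(BaseForm).GetField("_secret",
            BindingFlags.Instance | BindingFlags.NonPublic);

        return (string)field.GetValue(this);
    }
}
```

This gets you read access to private state, but as I found, there’s no equivalent trick for injecting extra logic into the middle of a private method like HandleSubmitAsync.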

Conclusion

In the end, I concluded that I would need to duplicate the EditForm component and use it as a starting point for my new form component. It wasn’t the end of the world; after my experimenting, it was clear I needed to change the behaviour of the original component enough that just extending it wasn’t going to work.

With this decision made I then moved on to what became the solution I ended up with.

Building the AutoSaveEditForm Component

The starting point for the new component was the existing EditForm component produced by the Blazor team. If you want to see this in its unaltered state you can find it here. Let’s go through each of the changes I made and why.

Integrating with Local Storage

The first thing I did was to add the functionality to persist and retrieve the form to and from local storage. This required injecting IJSRuntime into the component and then adding the following two methods.

// Note: async void is used here because this method is registered as an event handler
private async void SaveToLocalStorage(object sender, FieldChangedEventArgs args)
{
    var model = Model ?? _fixedEditContext.Model;
    var serializedData = JsonSerializer.Serialize(model);
    await _jsRuntime.InvokeVoidAsync("localStorage.setItem", Id, serializedData);
}

private async Task<object> LoadFromLocalStorage()
{
    var serialisedData = await _jsRuntime.InvokeAsync<string>("localStorage.getItem", Id);
    if (serialisedData == null) return null;
    var modelType = EditContext?.Model.GetType() ?? Model.GetType();

    return JsonSerializer.Deserialize(serialisedData, modelType);
}

Both of these methods do exactly what they say on the tin. SaveToLocalStorage persists the current form model and LoadFromLocalStorage retrieves the form model.

In order to identify a particular form, I added an Id parameter to the component which you can see used in the code above. It’s worth pointing out that this is definitely something which will need to be improved before the component could be used in a real app – this method won’t be enough to guarantee uniqueness across an application.
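One idea for improving uniqueness – not part of the current implementation, just a sketch – would be to combine the Id with the current page’s path using NavigationManager:

```csharp
// Sketch only: scope the local storage key to the current page so two forms
// with the same Id on different pages don't overwrite each other's data.
[Inject] private NavigationManager Navigation { get; set; }

private string StorageKey => $"{new Uri(Navigation.Uri).AbsolutePath}:{Id}";
```

The SaveToLocalStorage and LoadFromLocalStorage methods would then use StorageKey in place of Id.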

Hooking into Field Changes

With the local storage interfaces in place I moved on to saving the form when fields were updated. This required making some changes to the existing OnParametersSet method. There’s an if statement at the end of the original method which looks like this.

if (_fixedEditContext == null || EditContext != null || Model != _fixedEditContext.Model)
{
    _fixedEditContext = EditContext ?? new EditContext(Model);
}

I updated it to this.

if (_fixedEditContext == null || EditContext != null || Model != _fixedEditContext.Model)
{
    _fixedEditContext = EditContext ?? new EditContext(Model);
    _fixedEditContext.OnFieldChanged += SaveToLocalStorage;
}

I added an extra line which registers a handler for the EditContext’s OnFieldChanged event. Now, whenever a field on the form model is updated, the SaveToLocalStorage method will be called and the form will be persisted to local storage.

With the above changes I’d satisfied number 1 on my goals list:

  1. Save data to local storage when it’s entered into each form control
  2. Rehydrate data from local storage when the user returns to the page
  3. Clear saved data from local storage when the form is successfully submitted
  4. Try to keep things as close to the current experience with EditForm as possible for developers

Rehydrating the Form Model

I played around with a lot of ideas for this and ended up settling on the use of reflection. This was because I needed to update the specific instance of the model passed in, anything else I tried ultimately ended up overwriting that instance with a new one.

I came up with the following method which is used to copy the values from the form model retrieved from local storage, to the active form model.

public void Copy(object savedFormModel, object currentFormModel)
{
    var savedFormModelProperties = savedFormModel.GetType().GetProperties();
    var currentFormModelProperties = currentFormModel.GetType().GetProperties();

    foreach (var savedFormModelProperty in savedFormModelProperties)
    {
        foreach (var currentFormModelProperty in currentFormModelProperties)
        {
            if (savedFormModelProperty.Name == currentFormModelProperty.Name && savedFormModelProperty.PropertyType == currentFormModelProperty.PropertyType)
            {
                var childValue = currentFormModelProperty.GetValue(currentFormModel);
                var parentValue = savedFormModelProperty.GetValue(savedFormModel);

                if (childValue == null && parentValue == null) continue;

                currentFormModelProperty.SetValue(currentFormModel, parentValue);
                
                var fieldIdentifier = new FieldIdentifier(currentFormModel, currentFormModelProperty.Name);
                _fixedEditContext.NotifyFieldChanged(fieldIdentifier);
                
                break;
            }
        }
    }
}

In a nutshell, this code is looping over each property on the saved form model and then finding that same property on the current form model and transferring the value.

A key thing to note is the call to _fixedEditContext.NotifyFieldChanged(fieldIdentifier);. This lets the current edit context know that the value has been updated and triggers all the right events, a key one being any validation for that field.

With that method in place, I overrode the OnAfterRenderAsync lifecycle method and added the following code.

protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        var savedModel = await LoadFromLocalStorage();

        if (Model is object && savedModel is object)
        {
            Copy(savedModel, Model);
            StateHasChanged();
        }
        else if (savedModel is object)
        {
            Copy(savedModel, _fixedEditContext.Model);
            StateHasChanged();
        }
    }
}

When the component first renders, this code will check local storage for an existing saved form model. If one is found then it will use the Copy method we just looked at to copy values from the saved form model to the current one. Once this has been done it will call StateHasChanged; this is important as it will refresh the UI and show the newly copied values in the form controls.

This was number 2 ticked off the list of goals, form values were now being reloaded from local storage when a user revisited a page.

  1. Save data to local storage when it’s entered into each form control
  2. Rehydrate data from local storage when the user returns to the page
  3. Clear saved data from local storage when the form is successfully submitted
  4. Try to keep things as close to the current experience with EditForm as possible for developers

Clearing saved form data on a successful submit

This one took me a bit of time to get working, and for a very specific reason. If I wanted to honour number 4 on my goals list – keeping things close to the current developer experience – I needed to allow developers to still use the same form events: OnSubmit, OnValidSubmit and OnInvalidSubmit.

The original method which handles the submit events is called HandleSubmitAsync and looks like this.

private async Task HandleSubmitAsync()
{
    if (OnSubmit.HasDelegate)
    {
        await OnSubmit.InvokeAsync(_fixedEditContext);
    }
    else
    {
        if (isValid && OnValidSubmit.HasDelegate)
        {
            await OnValidSubmit.InvokeAsync(_fixedEditContext);
        }

        if (!isValid && OnInvalidSubmit.HasDelegate)
        {
            await OnInvalidSubmit.InvokeAsync(_fixedEditContext);
        }
    }
}

Keeping OnInvalidSubmit doesn’t require any action in terms of the saved form model. For OnValidSubmit I only had to make the following change to clear local storage after the OnValidSubmit event has been invoked.

if (isValid && OnValidSubmit.HasDelegate)
{
    await OnValidSubmit.InvokeAsync(_fixedEditContext);

    // Clear saved form model from local storage
    await _jsRuntime.InvokeVoidAsync("localStorage.removeItem", Id);
}

The headache came when trying to keep the OnSubmit event. This event requires the developer to handle the validation of the form manually and then submit it. With the existing design, there was no way for me to know if the form had been submitted successfully and therefore that the saved form model could be removed.

After some back and forth, I came up with the following design. It does require some extra steps from the consumer, but I think it’s worth the trade-off.

The original OnSubmit event is typed as an EventCallback<EditContext> as follows.

[Parameter] public EventCallback<EditContext> OnSubmit { get; set; }

I updated the definition of the OnSubmit parameter to this.

[Parameter] public Func<EditContext, Task<bool>> OnSubmit { get; set; }

I’m now requiring the developer to register a method which returns a bool to let me know if the form was submitted successfully or not. Making this change did break a check performed in OnParametersSet.

if (OnSubmit.HasDelegate && (OnValidSubmit.HasDelegate || OnInvalidSubmit.HasDelegate))

This just required a simple update.

if (OnSubmit is object && (OnValidSubmit.HasDelegate || OnInvalidSubmit.HasDelegate))

Once that was fixed I could then update the code in HandleSubmitAsync to this.

if (OnSubmit is object)
{
    var submitSuccess = await OnSubmit.Invoke(_fixedEditContext);
    if (submitSuccess)
    {
        await _jsRuntime.InvokeVoidAsync("localStorage.removeItem", Id);
    }
}

I could now await the bool value from the consumer code, and if it’s true the saved form model is removed from local storage. The complete updated HandleSubmitAsync method looks like this.

private async Task HandleSubmitAsync()
{
    if (OnSubmit is object)
    {
        var submitSuccess = await OnSubmit.Invoke(_fixedEditContext);
        if (submitSuccess)
        {
            await _jsRuntime.InvokeVoidAsync("localStorage.removeItem", Id);
        }
    }
    else
    {
        if (isValid && OnValidSubmit.HasDelegate)
        {
            await OnValidSubmit.InvokeAsync(_fixedEditContext);
            await _jsRuntime.InvokeVoidAsync("localStorage.removeItem", Id);
        }

        if (!isValid && OnInvalidSubmit.HasDelegate)
        {
            await OnInvalidSubmit.InvokeAsync(_fixedEditContext);
        }
    }
}

At this point I was happy to tick off number 3 on my goals list.

  1. Save data to local storage when it’s entered into each form control
  2. Rehydrate data from local storage when the user returns to the page
  3. Clear saved data from local storage when the form is successfully submitted
  4. Try to keep things as close to the current experience with EditForm as possible for developers

The only remaining goal was number 4, and the only way to tick that off is to check a few things from the consuming developers point of view and see how it compares to the original EditForm component.

Using the AutoSaveEditForm Component

Let’s look at a simple example of form usage first. We’ll use a model which contains two fields, FirstName and LastName. To make sure validation is working we’ll also make the LastName required.

public class MyFormModel
{
    public string FirstName { get; set; }
    
    [Required]
    public string LastName { get; set; }
}

First we’ll look at using EditForm and its OnValidSubmit event. The code is as follows.

<EditForm Model="myFormModel" OnValidSubmit="HandleValidSubmit">
    <DataAnnotationsValidator />

    <InputText @bind-Value="myFormModel.FirstName" />
    <InputText @bind-Value="myFormModel.LastName" />

    <button type="submit">Submit</button>
</EditForm>

@code {
    private MyFormModel myFormModel = new MyFormModel();

    private void HandleValidSubmit()
    {
        Console.WriteLine($"Form Submitted For: {myFormModel.FirstName} {myFormModel.LastName}");
        myFormModel = new MyFormModel();
    }
}

So how does this compare with AutoSaveEditForm?

<AutoSaveEditForm Id="form-one" Model="myFormModel" OnValidSubmit="HandleValidSubmit">
    <DataAnnotationsValidator />

    <InputText @bind-Value="myFormModel.FirstName" />
    <InputText @bind-Value="myFormModel.LastName" />

    <button type="submit">Submit</button>
</AutoSaveEditForm>

@code {
    private MyFormModel myFormModel = new MyFormModel();

    private void HandleValidSubmit()
    {
        Console.WriteLine($"Form Submitted For: {myFormModel.FirstName} {myFormModel.LastName}");
        myFormModel = new MyFormModel();
    }
}

The only changes needed were the name of the component and the addition of the Id parameter. That’s pretty cool – really minimal changes for existing developers. But what about a slightly more complex example?

Let’s look at the scenario where the developer is using the OnSubmit event and manually validating. This is what the code looks like with the EditForm component.

<EditForm Model="myFormModel" OnSubmit="HandleSubmit">
    <DataAnnotationsValidator />

    <InputText @bind-Value="myFormModel.FirstName" />
    <InputText @bind-Value="myFormModel.LastName" />

    <button type="submit">Submit</button>
</EditForm>

@code {
    private MyFormModel myFormModel = new MyFormModel();

    private void HandleSubmit(EditContext editContext)
    {
        var isValid = editContext.Validate();

        if (isValid)
        {
            Console.WriteLine($"Form Submitted For: {myFormModel.FirstName} {myFormModel.LastName}");
            myFormModel = new MyFormModel();
        }
        else
        {
            Console.WriteLine($"Form Invalid");
        }
    }
}

Here’s the code with auto save.

<AutoSaveEditForm Id="form-one" Model="myFormModel" OnSubmit="HandleSubmit">
    <DataAnnotationsValidator />

    <InputText @bind-Value="myFormModel.FirstName" />
    <InputText @bind-Value="myFormModel.LastName" />

    <button type="submit">Submit</button>
</AutoSaveEditForm>

@code {
    private MyFormModel myFormModel = new MyFormModel();

    private async Task<bool> HandleSubmit(EditContext editContext)
    {
        var isValid = editContext.Validate();

        if (isValid)
        {
            Console.WriteLine($"Form Submitted For: {myFormModel.FirstName} {myFormModel.LastName}");
            myFormModel = new MyFormModel();
        
            return true;
        }
        else
        {
            Console.WriteLine($"Form Invalid");
            StateHasChanged();
            return false;
        }
    }
}

I think you’ll agree, that’s not bad and sticks pretty close to the original developer experience.

While I appreciate these are simple examples, I’m happy to take that as a win and tick off number 4 on the goals list.

  1. Save data to local storage when it’s entered into each form control
  2. Rehydrate data from local storage when the user returns to the page
  3. Clear saved data from local storage when the form is successfully submitted
  4. Try to keep things as close to the current experience with EditForm as possible for developers

I’ve created a gif to show what this looks like from the user’s perspective when using the component in an application.

I just want to reiterate: this is not a fool-proof and complete solution. It’s just a starting point. I do think it’s really awesome that all this has been possible using just C# code. As you may have noticed, I’ve not had to write any JavaScript to achieve this.

Blazored AutoSaveEditForm

As I mentioned at the start, this is the starting point for a new component I’m adding to Blazored – you can find the repo here. It contains the complete code from this post and any suggestions are welcome, just open an issue. Also if anyone wants to help out with getting the component to a state where it’s ready for public use, please let me know.

Summary

If you’ve made it this far, well done! For those looking for the complete source code, I haven’t published this yet, as what you’ve just read about is the starting point for a new component which is going to be added to Blazored very soon.

In this post I’ve walked you through my design for a new form component which will persist form values to local storage until the form is submitted successfully. I started by showing why I was attempting this and the goals and boundaries for this work.

Then I talked about some of the brainstorming I did and some other options I decided not to go with for the end design, before taking you through the design of the final component, which is based on the original EditForm component produced by the Blazor team.

Copy to Clipboard in Blazor

Recently I was creating a new repo on GitHub, a pretty common action for most of us nowadays, when I noticed a feature which I use every time but had never given much thought to: the copy to clipboard button.

This is a really useful feature – as I said a second ago, I literally use it every time. Another great example of this can be found on the Bootstrap site. Each code example has a copy button in the top right corner allowing developers to copy the sample code straight to their clipboard.

I thought this would be a cool little feature to be able to use in Blazor applications, so I did a bit of investigation to see how it could be replicated.

In this post I’m going to show you how to create a simple copy to clipboard feature for Blazor apps. We’re going to start by looking at the available APIs we can use for this functionality. From there we are going to create two solutions, one for short amounts of text, replicating the GitHub example above. The other for larger amounts of text, replicating the functionality from the Bootstrap docs.

Choosing the API

The first thing to understand is that we can’t create the feature using C# code alone; we’re going to have to use some JavaScript interop to achieve our goal. There are two API options available to us, Document.execCommand and Clipboard.

Document.execCommand

Historically, clipboard operations have been achieved using execCommand. A quick Google of “copy to clipboard in JavaScript” will bring up numerous examples using this API. execCommand is also well supported across the different browsers – a quick check on caniuse.com shows lots of green (95.63%).

However, there is a rather large issue with this API. It’s been marked obsolete.

The good news is there’s a new API which supersedes it called the Clipboard API.

Clipboard API

The new Clipboard API has the ability to read and write to the clipboard asynchronously, as well as integrating with the Permissions API to ensure the user has given permission to do so.

The API breaks down into 2 interfaces, [Clipboard](https://developer.mozilla.org/en-US/docs/Web/API/Clipboard) and [ClipboardEvent](https://developer.mozilla.org/en-US/docs/Web/API/ClipboardEvent). The ClipboardEvent interface gives us access to information about the modification of the clipboard by events such as cut, copy and paste. It’s good to know this is here, but the more interesting stuff is in the Clipboard interface.

The Clipboard interface provides us the functions to interact with the clipboard in our applications and contains the following 4 functions (from the MDN docs): read(), readText(), write() and writeText().

The adoption of this new API isn’t anywhere near as widespread as the old execCommand. The function we’re interested in, writeText, has 71.11% adoption according to caniuse.

However, the browsers that don’t support this also don’t support Blazor, so that makes things simple. Based on all the information I decided to go with the new clipboard API for this functionality.
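For completeness, if you did need to support older browsers, a common pattern is to feature-detect the Clipboard API and fall back to the deprecated execCommand – here’s a sketch of that idea (the solutions below just assume the new API is available):

```javascript
// Sketch: prefer the modern Clipboard API, falling back to the deprecated
// execCommand('copy') when it's unavailable.
function copyText(text) {
    if (navigator.clipboard && navigator.clipboard.writeText) {
        return navigator.clipboard.writeText(text);
    }

    // Fallback: copy via a temporary, off-screen textarea
    var textarea = document.createElement("textarea");
    textarea.value = text;
    textarea.style.position = "fixed";
    document.body.appendChild(textarea);
    textarea.select();
    document.execCommand("copy");
    document.body.removeChild(textarea);
    return Promise.resolve();
}
```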

Solution 1: Replicating GitHub’s copy to clipboard

In this first solution we’re going to replicate the functionality from GitHub. This is ideal for any small, single-line pieces of text you want to allow users to copy to their clipboards.

First, create a component called CopyToClipboard with the following code.

Note: I’m doing this using the standard Blazor project template which has Bootstrap included. So for styling, I’m just using classes from that and some inline styles where needed.

@inject IJSRuntime JSRuntime

<div class="form-inline">
    <input class="form-control" readonly type="text" value="@Text" />
    <button type="button" class="btn btn-primary" @onclick="CopyTextToClipboard">Copy</button>
</div>

@code {    
    [Parameter] public string Text { get; set; }

    private async Task CopyTextToClipboard()
    {
        await JSRuntime.InvokeVoidAsync("clipboardCopy.copyText", Text);
    }
}

The component takes in the text which can be copied by the user via the Text parameter. When the user clicks the button, the CopyTextToClipboard method is invoked, which calls the following JavaScript.

window.clipboardCopy = {
    copyText: function(text) {
        navigator.clipboard.writeText(text).then(function () {
            alert("Copied to clipboard!");
        })
        .catch(function (error) {
            alert(error);
        });
    }
};

The above function calls writeText to write the text provided from the CopyToClipboard component to the user’s clipboard.

We can use the component like this.

<CopyToClipboard Text="Copy this text" />

Which will produce the following output.

Solution 2: Replicating Bootstrap’s copy to clipboard

This time we’re going to replicate the functionality from Bootstrap docs. This is great for copying larger amounts of text. Here is the updated code for the CopyToClipboard component.

@inject IJSRuntime JSRuntime

<div class="position-relative" style="background-color: #f5f5f5">
    <pre>
    <code @ref="_codeElement">
            @ChildContent
        </code>
    </pre>
    <div style="position:absolute; top: 10px; right: 10px;">
        <button type="button" class="btn btn-primary" @onclick="CopyTextToClipboard">Copy</button>
    </div>
</div>

@code {

    private ElementReference _codeElement;

    [Parameter] public RenderFragment ChildContent { get; set; }

    private async Task CopyTextToClipboard()
    {
        await JSRuntime.InvokeVoidAsync("clipboardCopy.copyText", _codeElement);
    }
}

This time we’re taking in child content defined by the component’s consumer and rendering it inside a code tag. We’re capturing a reference to that element using Blazor’s @ref directive and passing that to JS when the copy button is clicked.

window.clipboardCopy = {
    copyText: function (codeElement) {
        navigator.clipboard.writeText(codeElement.textContent).then(function () {
            alert("Copied to clipboard!");
        })
        .catch(function (error) {
            alert(error);
        });
    }
}

The JavaScript code is largely the same as before. The only difference is we’re receiving an HTML element instead of a text string. When we call writeText we’re now passing in the text inside the code element, using the textContent property.

We can use the component like this.

<CopyToClipboard>
    @("<div class=\"clipboard-copy\">")
        @("<button type=\"button\" class=\"btn btn-primary\">Copy</button>")
    @("</div>")
</CopyToClipboard>

Which will produce the following output.

Summary

In this post I’ve shown a couple of solutions for copying text to the user’s clipboard in Blazor. We started off by understanding the two APIs available in JavaScript for interacting with the clipboard, execCommand and the Clipboard API. We concluded that using the new Clipboard API was a better choice due to the execCommand API being marked obsolete.

We then looked at two solutions for implementing copy to clipboard functionality. The first allowed short strings to be copied to the user’s clipboard via a simple component with a button which invoked a call into JavaScript. The second showed how to copy larger volumes of text by passing an ElementReference to JavaScript and accessing its textContent property to retrieve the text before copying it to the clipboard.

Mobile Blazor Bindings - Navigation and Xamarin Essentials

We’ve come to the final post in this series on Mobile Blazor Bindings (MBB). In part 3, we learned about different ways to manage the state of our MBB applications. From simple state held in components to a central state container. We then looked at data, and how we could persist data locally using the SQLite database, as well as back to an API using HttpClient. Finally, we applied what we’d learned to our Budget Tracker app.

In this post, we’re going to talk about navigation in MBB. We’re also going to take a look at Xamarin Essentials, a collection of cross platform operating system and platform APIs we can use from C# to do some cool stuff in our MBB applications.

Coming from the web, we’re used to being able to navigate between pages in our apps using URLs. For example, www.mysite.com/about would take us to the about page. And generally speaking, users are free to navigate between pages in any order they choose.

Mobile apps tend to use a stack-based navigation system where every time the user navigates to a new page it’s added to the top of a stack. They can then move around the app in a linear fashion, tapping through to new pages to go forwards and using the device’s back button or a swipe gesture to move backwards.
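In Xamarin.Forms terms (which MBB sits on top of), that stack navigation looks something like this sketch; DetailsPage here is a hypothetical page type, and Navigation is the property available on any Xamarin.Forms Page.

```csharp
// Push a new page onto the navigation stack (moving forwards)
await Navigation.PushAsync(new DetailsPage());

// Pop the current page off the stack (moving backwards)
await Navigation.PopAsync();
```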

There are different page types, such as MasterDetailPage and TabbedPage, which give the feeling of a more free movement. But essentially you are just moving around within a defined set of sub pages.

At the time of writing, there is no formal way to navigate between pages in MBB – this infrastructure isn’t in place yet. However, there is a sample application in the MBB repo called Xaminals. This demonstrates a way of navigating between multiple pages using something called Shell navigation.

Shell navigation is a URI-based navigation system for Xamarin applications. This is great for us developers coming from a web background as it naturally fits with the paradigms we’re used to working with. It also uses concepts like routes which fit well with Blazor’s web-based hosting models.
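To give a flavour of what Shell navigation looks like in Xamarin.Forms itself (the route names and page type here are hypothetical, and how well this maps onto MBB is exactly what the Xaminals sample explores):

```csharp
// Register a route name against a page type so it can be navigated to by URI
Routing.RegisterRoute("monkeydetails", typeof(MonkeyDetailsPage));

// Absolute navigation to a route defined in the Shell's visual hierarchy
await Shell.Current.GoToAsync("//animals/monkeys");
```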

However, I did some experimenting with this technique and I just couldn’t get things to do what I wanted. The biggest hurdle I couldn’t overcome was setting a starting page for the application. Also, as Shell navigation isn’t an official way to navigate in MBB, you can’t get parameters populated from the “route” as you would in web-based Blazor apps. However, you could work around this by pulling any required state from a central state container like we talked about in part 3.

I think for me, navigation is just a little too alpha right now, so I’m going to wait and see what comes down the line. Luckily, our Budget Tracker app doesn’t need multiple pages so it won’t impact us in any way.

Xamarin Essentials

So far, we’ve talked a lot about the fundamentals required to start building native mobile apps with MBB. But what about doing some more advanced/cool things like Geolocation or making the device vibrate? That’s where Xamarin Essentials comes in.

Xamarin Essentials offers a set of operating system and platform APIs which we can use from C#. Those of you who’ve been following this series will know that we’ve briefly mentioned Xamarin Essentials in part 3 where we talked about using the Connectivity feature to decide whether to save data locally or to an API. But that was just the tip of the iceberg, here’s the full list of features available from Xamarin Essentials right now.

I think you’ll agree there is a lot of cool stuff there to explore. I’m not going to talk about each item here; I’ll leave it to you to explore the particular features you’re interested in. I’ve included a link with each feature to its docs page to get you started.

Adding Dark Mode to Budget Tracker

To finish things off we’re going to use one of the features from Xamarin Essentials to add a dark mode option to Budget Tracker. I mean, let’s face it, it’s not a real app unless it has a dark mode! 😆

We’re going to use the App Theme feature, which allows us to ask for the current theme set on the OS. This will return one of three options: Dark, Light or Unspecified. Based on this, we can load a different stylesheet for either light or dark mode. It’s worth pointing out that this feature only works on the newer versions of Android and iOS, here are the specifics:

Creating a Dark Theme

To start we’ll rename the existing stylesheet to BudgetTrackerLight.css and then make a copy of it and call that BudgetTrackerDark.css. In here we’ll update the various styles with the new dark colour scheme.

entry {
    font-size: 30;
    color: #F7FAFC;
    background-color: #718096;
}

frame {
    border-radius: 10;
    background-color: #4A5568;
}

label {
    color: #CBD5E0;
    -xf-horizontal-text-alignment: center;
}

button {
    background-color: #718096;
    color: #E2E8F0;
}

.textSmall {
    font-size: 12;
}

.homeContainer {
    padding: 10;
    background-color: #1A202C;
}

.balanceContainer {
    margin: 0 0 10 0;
    -xf-spacing: 0;
}

    .balanceContainer > .currentBalance {
        font-size: 30;
        color: #F7FAFC;
        -xf-vertical-text-alignment: center;
        -xf-horizontal-text-alignment: center;
    }

.budgetContainer {
    -xf-spacing: 0;
}

    .budgetContainer > .currentBudget {
        font-size: 20;
        color: #48BB78;
        -xf-vertical-text-alignment: center;
        -xf-horizontal-text-alignment: center;
    }

.expensesContainer {
    -xf-spacing: 0;
}

    .expensesContainer > .currentExpenses {
        font-size: 20;
        color: #E53E3E;
        -xf-vertical-text-alignment: center;
        -xf-horizontal-text-alignment: center;
    }

.currencySymbol {
    color: #718096;
    font-size: 30;
    -xf-vertical-text-alignment: center;
}

.createExpenseContainer {
    margin: 20 0 0 0;
    -xf-orientation: horizontal;
}

    .createExpenseContainer > entry {
        font-size: initial;
        background-color: #718096;
        color: #F7FAFC;
        -xf-placeholder-color: #CBD5E0;
    }

.expenseListItem {
    margin: 10 0 0 0;
    -xf-orientation: horizontal;
}

    .expenseListItem > label {
        font-size: 20;
        color: #F7FAFC;
    }

.noExpenses {
    font-size: 16;
    color: #A0AEC0;
    padding: 10;
    -xf-horizontal-text-alignment: center;
}

Now we have the styles sorted out, we just need to add a bit of logic to the HomePage component to load the right stylesheet based on the device’s theme.

Loading the correct stylesheet

We’re going to add a new field to the HomePage component which will store the theme. Then in the OnInitializedAsync method we’re going to set the field using AppInfo.RequestedTheme, which is provided by Xamarin Essentials.

private AppTheme theme;

protected override async Task OnInitializedAsync()
{
    theme = AppInfo.RequestedTheme;
    
    // ... other code omitted
}

Now we have that in place it’s just a case of using it to load the correct stylesheet. At the top of the component where we declare the stylesheet we’re going to replace it with the following code.

@if (theme == AppTheme.Light)
{
    <StyleSheet Resource="BudgetTrackerLight.css" Assembly="GetType().Assembly" />
}
else if (theme == AppTheme.Dark)
{
    <StyleSheet Resource="BudgetTrackerDark.css" Assembly="GetType().Assembly" />
}
else
{
    <StyleSheet Resource="BudgetTrackerLight.css" Assembly="GetType().Assembly" />
}

I appreciate this isn’t the most sophisticated code in the world, but for our simple app it will work nicely. We can now run the app and see how things have turned out.

We now have a nice dark mode for our Budget Tracker app which will adjust based on the user’s preference set on the device. And it was all pretty painless to set up thanks to the features provided by Xamarin Essentials!

You can find the full source code on GitHub.

Summary

In this post, we’ve talked about navigation in Mobile Blazor Bindings, covering the current limitations in the framework. We then talked about Xamarin Essentials and what that offers us. We finished up by applying the App Theme feature from Xamarin Essentials to the Budget Tracker app so that we could show a dark mode if the user had that set as their display preference.

I hope you’ve enjoyed this series on Mobile Blazor Bindings and I’ve piqued your interest in the technology. As I’ve said many times through this series, this is an experiment and there is no official commitment to deliver this. So please only use it for fun right now. But if you’re interested in seeing this become a real product, let the team know either via GitHub or on Twitter.

Mobile Blazor Bindings - State Management and Data

Last time, we looked at layout and styling options for Mobile Blazor Bindings (MBB). We learned about the various page types and layout components available to us, as well as how we could style our components using either parameter styling or the more familiar CSS approach. We finished up by applying what we learned to our Budget Tracker app, adding various layout components and styling to make the app look a bit more appealing.

In this post, we’re going to explore state management and data in MBB. We’ll look at some options for managing the state of our applications, ranging from simple to more complex. Then we’ll talk about data; specifically, how to persist it. We’ll cover how to deal with local persistence as well as persisting back to a server. Just as before, we’ll finish up by applying what we’ve learned to our Budget Tracker app.

State Management

Just as with web applications, we need to be able to manage the state of our mobile apps. Even in a simple app like Budget Tracker, we have various bits of state to keep track of. The current budget, the current balance and the expenses that have been entered. Let’s explore a couple of options we could use.

Storing state in components

The simplest thing we can do when it comes to state is to manage it within a component. The Budget Tracker at the end of the last post stores and manages its state this way – all state is kept on the HomePage component.

@code {

    private decimal _budget;
    private List<Expense> _expenses = new List<Expense>();

    private decimal _currentBalance => _budget - _expenses.Sum(x => x.Amount);
    private decimal _expensesTotal => _expenses.Sum(x => x.Amount);

}

The other components in the app update these values using Blazor’s EventCallback approach.
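As a quick sketch of what that EventCallback approach might look like in a child component (the parameter names and the Expense property here are illustrative, not lifted from the actual app):

```razor
<!-- CreateExpense.razor (sketch) -->
<Entry @bind-Text="description" />
<Button Text="Add" OnClick="Add" />

@code {
    private string description;

    // The parent (HomePage) passes in a handler which updates its own state
    [Parameter] public EventCallback<Expense> OnExpenseAdded { get; set; }

    private Task Add()
        => OnExpenseAdded.InvokeAsync(new Expense { Description = description });
}
```

The HomePage would then render something like `<CreateExpense OnExpenseAdded="HandleNewExpense" />` and update its private fields inside the handler.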

This method works really well in simple scenarios such as the Budget Tracker app, where there is only a single page component. But in multi-page apps, depending on the app, this stops being a good option and we need to look at something a bit more advanced.

AppState class

The next level up would be implementing an AppState class. We can record all the bits of state we need to keep track of across our application in this one place. This class is registered with the DI container as a Singleton, giving all components we inject it into access to the same data.

Say for example we had an ecommerce application. We could use an AppState class to keep track of the contents of the shopping basket. It might look something like this.

public class AppState
{
    private readonly List<Item> _basket = new List<Item>();
    public IReadOnlyList<Item> Basket => _basket.AsReadOnly();

    public event Action OnChange;

    public void AddItem(Item newItem)
    {
        _basket.Add(newItem);
        NotifyStateChanged();
    }
    
    public void RemoveItem(Item item)
    {
        _basket.Remove(item);
        NotifyStateChanged();
    }

    private void NotifyStateChanged() => OnChange?.Invoke();
}

You’ll notice that the Basket property is read only. This is important, as we wouldn’t want it being changed randomly. We want changes to go through the public methods, AddItem and RemoveItem, so we can ensure the OnChange event is raised. All the components in our app that care about the state of the basket can subscribe to this event and be notified when updates happen.

@inject AppState AppState
@implements IDisposable

<StackLayout>
    <Label Text="@($"{AppState.Basket.Count} Items in Basket")" />
</StackLayout>

@code {

    protected override void OnInitialized()
    {
        AppState.OnChange += StateHasChanged;
    }

    public void Dispose() => AppState.OnChange -= StateHasChanged;
}

In the code above, whenever an item is added to or removed from the basket, the OnChange event will trigger the component to re-render by calling StateHasChanged. We’re also implementing IDisposable so we can safely unsubscribe from the OnChange event when the component is destroyed.

This method works really well and is pretty simple to implement and get going with. However, you may find it doesn’t work well in large applications. The more state that’s tracked, the bigger the class gets as we add more methods to update the various values. Eventually, it will become quite difficult to navigate and maintain.

At this point it would be worth looking at breaking the state down into smaller chunks. Alternatively, you could look at a Redux or MobX implementation for your MBB app. There are some examples out there of implementing Redux in a Xamarin Forms app which should be fairly easy to port to MBB. I think the Fluxor library from Peter Morris would also work, although I’ve not tested it myself.
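If the single AppState class does start to sprawl, one low-tech option is to split it into per-feature state classes, each registered as its own singleton so components only inject the state they care about. A rough sketch, with illustrative class and property names:

```csharp
// Each feature gets its own small state class with its own OnChange event
public class BasketState
{
    private readonly List<Item> _basket = new List<Item>();
    public IReadOnlyList<Item> Basket => _basket.AsReadOnly();

    public event Action OnChange;

    public void AddItem(Item newItem)
    {
        _basket.Add(newItem);
        OnChange?.Invoke();
    }
}

public class UserState
{
    public string DisplayName { get; private set; }

    public event Action OnChange;

    public void SetDisplayName(string name)
    {
        DisplayName = name;
        OnChange?.Invoke();
    }
}

// Registration - components inject only the state they need
services.AddSingleton<BasketState>();
services.AddSingleton<UserState>();
```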

Data

Let’s talk about what we can do with our data. In mobile apps we have three scenarios we potentially need to cater for. Online, offline and a mix of both.

Online - Saving data to an API

Saving data back to an API in MBB isn’t much different to what we would do in a web-based Blazor app, which is handy.

There isn’t an HttpClient configured out of the box, so we need to install the [Microsoft.Extensions.Http](https://www.nuget.org/packages/Microsoft.Extensions.Http) NuGet package. Once installed we add the following line to register the various services with the DI container.

.ConfigureServices((hostContext, services) =>
{
    // Register app-specific services
    services.AddHttpClient();
})

We can also install the same HttpClient helper methods we’re used to from Blazor by adding the System.Net.Http.Json package. This is currently a pre-release package; if you’re not familiar with it you can read more about it in this blog post. We now have access to GetFromJsonAsync, PostAsJsonAsync and PutAsJsonAsync.

We can now inject an IHttpClientFactory into any component and use the CreateClient method to get an instance of HttpClient to make our API calls with. From here, things are the same as they would be in a web-based Blazor app.
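Putting those two pieces together, a component might look something like this sketch (the endpoint URL is a placeholder, and the Car type is borrowed from the SQLite example below):

```razor
@inject IHttpClientFactory HttpClientFactory

@code {
    private List<Car> cars = new List<Car>();

    protected override async Task OnInitializedAsync()
    {
        // CreateClient comes from Microsoft.Extensions.Http;
        // GetFromJsonAsync comes from System.Net.Http.Json
        var client = HttpClientFactory.CreateClient();
        cars = await client.GetFromJsonAsync<List<Car>>("https://example.com/api/cars");
    }
}
```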

Offline - Storing data on the device

The most common option for storing data locally on the device is SQLite. It’s a small, fast, self-contained, high-reliability, full-featured, SQL database engine – it’s also free and open source.

Using SQLite is really easy. Once the sqlite-net-pcl NuGet package is installed, we need to define entity classes, similar to how we would with Entity Framework. These represent the shape of the data we’re going to persist. For example, if we were saving cars, a Car entity could be defined like this.

public class Car
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string Make { get; set; }
    public string Model { get; set; }
    public int Doors { get; set; }
}

This is a simple POCO (Plain Old CLR Object/Plain Old C# Object) class; the only bit of magic that’s been added is the PrimaryKey and AutoIncrement attributes. These mark Id as the primary key for the table and ensure it gets an auto-incrementing value.

Then we need to define a database class. This performs a few jobs: it ensures that the database is created along with any tables, and it’s also where we define any methods for saving or retrieving data. You can think of it as a mix between an Entity Framework DB context and a repository.

public class Database
{
    readonly SQLiteAsyncConnection _database;

    public Database(string dbPath)
    {
        _database = new SQLiteAsyncConnection(dbPath);
        _database.CreateTableAsync<Car>().Wait();
    }

    public Task<List<Car>> GetCarsAsync()
    {
        return _database.Table<Car>().ToListAsync();
    }

    public Task<int> SaveCarAsync(Car newCar)
    {
        return _database.InsertAsync(newCar);
    }
}

The final job is to add an instance of the Database class into the DI container so we can use it by injecting it into our components.

public App()
{
    var database = new Database(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "cars.db3"));

    var host = MobileBlazorBindingsHost.CreateDefaultBuilder()
        .ConfigureServices((hostContext, services) =>
        {
            // Register app-specific services
            services.AddSingleton<Database>(database);
        })
        .Build();
        
        // other code omitted
}

That’s all there is to it. Data can now be saved locally on the device and will persist between app restarts.
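To round the example off, here’s a sketch of a component consuming the Database class we registered above (the markup is kept deliberately minimal):

```razor
@inject Database Database

<StackLayout>
    @foreach (var car in cars)
    {
        <Label Text="@($"{car.Make} {car.Model}")" />
    }
</StackLayout>

@code {
    private List<Car> cars = new List<Car>();

    protected override async Task OnInitializedAsync()
    {
        // Load any previously saved cars from the local SQLite database
        cars = await Database.GetCarsAsync();
    }
}
```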

Both - Catering for offline scenarios

The final option we’ll look at is a mix of online and offline. This probably covers the majority of mobile apps. Really, this is about knowing whether we’re online or offline. If we’re online we can make an API call, and if we’re offline we can save data to a local database until the network is back and then push it up to the API. Now, we could just make API calls, see if they time out, and save locally if they do. But this doesn’t seem very elegant. Luckily for us, there is a much simpler way thanks to Xamarin Essentials.

We’ll talk more about Xamarin Essentials another time, but it’s a great library which gives us access to loads of OS and platform APIs from C#, one of which is the Connectivity class. This gives us a simple API we can call to determine the network status of the device.

var current = Connectivity.NetworkAccess;

if (current == NetworkAccess.Internet)
{
    // Connection to internet is available
}

We can even subscribe to an event which will tell us when the network status changes.

public ConnectivityTest()
{
    // Register for connectivity changes, be sure to unsubscribe when finished
    Connectivity.ConnectivityChanged += HandleConnectivityChanged;
}

void HandleConnectivityChanged(object sender, ConnectivityChangedEventArgs e)
{
    // Do something based on new network status
}

This gives us everything we need to cater for scenarios where the app may not have a network connection.
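Pulling those pieces together, the decision logic might look something like this sketch. The API endpoint is hypothetical, and _httpClient and _database are assumed to be fields set up as shown in the earlier sections:

```csharp
public async Task SaveCarAsync(Car newCar)
{
    if (Connectivity.NetworkAccess == NetworkAccess.Internet)
    {
        // Online - send straight to the API
        await _httpClient.PostAsJsonAsync("https://example.com/api/cars", newCar);
    }
    else
    {
        // Offline - save locally to SQLite until the connection returns
        await _database.SaveCarAsync(newCar);
    }
}
```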

Adding State Management & Data to Budget Tracker

Let’s put everything together and apply it to Budget Tracker. Now, as I mentioned earlier, Budget Tracker works perfectly fine storing all its state on the HomePage component. But using an AppState class would clean up some of the code keeping everything in sync. Another advantage comes when persisting data. We aren’t going to add an API to this project, as that seems a bit overkill, but it would be great to be able to save data locally using SQLite. With an AppState class in place this becomes much easier, as everything is in one place.

Adding an AppState Class

We’ll start by defining an AppState class, in the root of the project, which contains all of the state which was originally kept in the HomePage component.

public class AppState
{
    private readonly List<Expense> _expenses = new List<Expense>();

    public decimal Budget { get; private set; }
    public IReadOnlyList<Expense> Expenses => _expenses.AsReadOnly();
    public decimal CurrentBalance => Budget - _expenses.Sum(x => x.Amount);
    public decimal ExpensesTotal => _expenses.Sum(x => x.Amount);

    public event Action OnChange;

    public void SetBudget(decimal newBudget)
    {
        Budget = newBudget;
        NotifyStateChanged();
    }

    public void AddExpense(Expense newExpense)
    {
        _expenses.Add(newExpense);
        NotifyStateChanged();
    }

    private void NotifyStateChanged() => OnChange?.Invoke();
}

We’re also going to register it as a singleton in the DI container in App.cs

var host = MobileBlazorBindingsHost.CreateDefaultBuilder()
    .ConfigureServices((hostContext, services) =>
    {
        // Register app-specific services
        services.AddSingleton<AppState>();
    })
    .Build();

With those bits in place it’s just a case of updating the various components to pull their state directly from the AppState class. Here’s what the updated HomePage component looks like.

@inject AppState AppState
@implements IDisposable

<StyleSheet Resource="BudgetTracker.css" Assembly="GetType().Assembly" />

<StackLayout class="homeContainer">

    <Frame>
        <StackLayout>
            @if (AppState.Budget > 0)
            {
                <BudgetSummary />
            }
            else
            {
                <SetBudget />
            }
        </StackLayout>
    </Frame>

    @if (AppState.Budget > 0)
    {
        <Frame>
            <ScrollView>
                <StackLayout>
                    <Label Text="EXPENSES" />
                    <ExpenseList />
                    <CreateExpense />
                </StackLayout>
            </ScrollView>
        </Frame>
    }

</StackLayout>

@code {

    protected override void OnInitialized() => AppState.OnChange += StateHasChanged;

    public void Dispose() => AppState.OnChange -= StateHasChanged;
}

You can view all of the changes to the app over on the GitHub repo.

Storing Data via SQLite

Great! We now have the state of the app in a central place. Next we’re going to add in SQLite so we can save the values we enter between app restarts. First we need to define our database class.

public class BudgetTrackerDb
{
    private readonly SQLiteAsyncConnection _database;

    public BudgetTrackerDb(string dbPath)
    {
        _database = new SQLiteAsyncConnection(dbPath);
        _database.CreateTableAsync<Budget>().Wait();
        _database.CreateTableAsync<Expense>().Wait();
    }

    public async Task<int> SaveBudgetAsync(Budget newBudget)
    {
        var result = await _database.InsertAsync(newBudget);

        return result;
    }

    public async Task<decimal> GetBudgetAsync()
    {
        // This is nasty, but as we're only going to have one budget for now we'll let it slide
        var result = await _database.Table<Budget>().FirstOrDefaultAsync(x => x.Amount > 0);

        return result?.Amount ?? 0;
    }

    public async Task<int> SaveExpenseAsync(Expense newExpense)
    {
        var result = await _database.InsertAsync(newExpense);

        return result;
    }

    public Task<List<Expense>> GetExpensesAsync()
    {
        return _database.Table<Expense>().ToListAsync();
    }

}

Our DB has two tables: one to store the budget and one to store the expenses. This setup would also allow us to expand the functionality of the app at a later date and support multiple budgets. We’ve also defined a few methods to save and retrieve data from the database.

With the database class in place, next we’re going to update our AppState class.

public class AppState
{
    private readonly BudgetTrackerDb _budgetTrackerDb;

    public AppState()
    {
        _budgetTrackerDb = new BudgetTrackerDb(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "BudgetTrackerDb.db3"));
    }

    public event Func<Task> OnChange;

    public async Task SetBudget(decimal newBudget)
    {
        _ = await _budgetTrackerDb.SaveBudgetAsync(new Budget { Amount = newBudget });
        await NotifyStateChanged();
    }

    public async Task<decimal> GetBudget() => await _budgetTrackerDb.GetBudgetAsync();

    public async Task AddExpense(Expense newExpense)
    {
        _ = await _budgetTrackerDb.SaveExpenseAsync(newExpense);
        await NotifyStateChanged();
    }

    public async Task<IReadOnlyList<Expense>> GetExpenses() => await _budgetTrackerDb.GetExpensesAsync();

    public async Task<decimal> GetCurrentBalance()
    {
        var budget = await GetBudget();
        var expenses = await GetExpenses();

        return budget - expenses.Sum(x => x.Amount);
    }

    public async Task<decimal> GetTotalExpenses()
    {
        var expenses = await GetExpenses();

        return expenses.Sum(x => x.Amount);
    }

    private Task NotifyStateChanged() => OnChange?.Invoke() ?? Task.CompletedTask;
}

Essentially what we’ve done here is write methods which retrieve the data from our database rather than just storing the values in memory. An advantage of doing this comes if our app needs to support both online and offline functionality. We could easily add a check to see if the app is connected; if it isn’t, we save locally to the SQLite database, but if it is, we use an HttpClient to send and retrieve data from an API. The rest of our app wouldn’t need to care, it could all be handled inside the AppState class.

The final job now is to make some updates to the various components to work with the new methods. Here is the final HomePage component.

@inject AppState AppState
@implements IDisposable

<StyleSheet Resource="BudgetTracker.css" Assembly="GetType().Assembly" />

<StackLayout class="homeContainer">

    <Frame>
        <StackLayout>
            @if (budgetSet)
            {
                <BudgetSummary />
            }
            else
            {
                <SetBudget />
            }
        </StackLayout>
    </Frame>

    @if (budgetSet)
    {
        <Frame>
            <ScrollView>
                <StackLayout>
                    <Label Text="EXPENSES" />
                    <ExpenseList />
                    <CreateExpense />
                </StackLayout>
            </ScrollView>
        </Frame>
    }

</StackLayout>

@code {

    private bool budgetSet;

    protected override async Task OnInitializedAsync()
    {
        await UpdateState();
        AppState.OnChange += UpdateState;
    }

    public void Dispose() => AppState.OnChange -= UpdateState;

    private async Task UpdateState()
    {
        var budget = await AppState.GetBudget();
        budgetSet = budget > 0;

        StateHasChanged();
    }
}

That’s it, we’re done. You can see the full updated source code on the GitHub repo.

Summary

In this post we’ve looked at some options around managing state and storing data in a Mobile Blazor Bindings application.

Starting with state, we looked at the most simple options first, state in components. We then moved on to a more advanced method using a central class called AppState to store all state in the application.

Next we looked at data and how we can store it locally on the device using SQLite, a popular, open source, portable SQL database. We talked about how we could configure MBB to make API calls using the HttpClient so we could persist data back to the server. We also covered how we could use the Xamarin Essentials Connectivity class to determine if the app is connected in order to decide if data should be saved locally or an API call should be attempted.

Finally, we applied some of our learnings to the Budget Tracker application. We added an AppState class to hold all of the app state centrally. We also added in SQLite to store our budget and any expenses we’d added.

Mobile Blazor Bindings - Layout and Styling

In part 1, I covered what Mobile Blazor Bindings (MBB) is, how to get your dev environment set up, and how to create and run your first app. Over the next few posts we’re going to explore various topics to deepen our knowledge and help us build more feature-rich, native mobile apps with MBB.

In this post, we’re going to look at layout and styling. In the web world, we’re used to creating layouts and structure in our apps using elements such as divs, spans and tables. These are then usually enhanced using CSS features such as flexbox, CSS grid or general CSS rules. When it comes to styling, everything we do uses CSS, we don’t have any alternatives. So, how do we do this stuff in MBB? That’s what we’re going to find out.

Introducing Budget Tracker

Before we get started, to give us a bit of focus over the coming posts, we’re going to be working with a simple budget tracker app I’ve created. This will allow us to apply the various features we learn to a real app.

I’m not going to go into loads of detail about the app as our goal here is to learn about Mobile Blazor Bindings, not budget tracking. I’ll give a quick summary just so we understand the layout of the project.

The app is very simple and can currently perform 3 tasks:

This is what the project structure looks like.

I’m a big fan of feature folders for organising projects, and you can see them in use here. There’s a top-level feature called Home which has a single page component called HomePage – I tend to postfix any page components with Page so they’re easily distinguishable from regular components. Then there are two sub-features, BudgetManagement and ExpenseManagement. These contain regular components which are loaded into the HomePage based on certain logic.

The project is available on GitHub so you can check it out and play with it at your leisure. As I said, we’ll be evolving it over the next few posts so it may change from the above structure over time.

Let’s crack on and start looking at layout options for our app.

Layout Options

When compared to the web, our options for layouts in MBB are far more structured. As I mentioned in the intro, when creating web apps we tend to use a lot of div elements with CSS to add structure to our pages. But the div element is just a divider, something that marks a section of an HTML document. Other than being a block element (display: block;), it has no predefined abilities. This can be useful as it makes it flexible; however, we have to define what we want a particular div to do every time.

With MBB this is not the case. We have access to a set of predefined structural components allowing us to rapidly create interesting and efficient UIs. There are two categories of component for this: page components and layout components.

Page Components

Different page types are not something we’re used to in the web world. We have a single page type, an HTML document, and that’s it. It’s certainly possible to create the kinds of page configurations offered by MBB in HTML, but it involves doing all the configuration manually and can take a lot of time.

With MBB, at the time of writing, there are three ready-to-use page components out of the box (check out the latest list here).

There are also two lower level components which can be used to create your own custom page types.

Layout Components

In MBB, we can use dedicated layout components. At the time of writing there are five layout components available (you can check out the current list in the docs). They are:

Styling

Now we know about the various options available to use for structuring our pages, what about styling them? MBB offers us two options here, what I’m going to call parameter styles, and CSS styles.

Parameter Styles

Parameter-based styling is very similar to what Xamarin Forms refers to as XAML styles. When styling apps using this method, we set all style-related settings on the component using its exposed parameters.

This is probably easier to understand with an example. Let’s say we wanted to style a Label with a font size of 30 and have its text coloured blue. This is how we would do that using parameter styles.

<Label Text="EXPENSES"
       FontSize="30"
       TextColor="Color.Blue" />

We can also set other types of style information for components which is specific to the underlying Xamarin Forms platform. For example, a StackLayout stacks its child content vertically by default. We can change this to horizontal using the Orientation parameter.

<StackLayout Orientation="StackOrientation.Horizontal">
    <!-- ... other code omitted ... -->
</StackLayout>

This method of styling is pretty quick to get going with, and it makes it very clear which styles are applied to which components. But one big downside is that we have to set styles everywhere; we can’t just predefine a style for all Labels globally, we have to style each one individually.

Let’s look at the other option now, one which is much more familiar to us web devs: CSS.

CSS Styles

Please Note: CSS styles don’t appear to work correctly in the current official release (v0.2.42-preview). In order to use them, I’ve had to clone the repo and build my own NuGet packages to get the latest changes. Hopefully a new release will be out shortly and this step won’t be needed.

Styling components using CSS is probably the obvious go-to option if you’re coming from a web background, and it works pretty much how you would expect it to in MBB. We can create style classes and apply them to individual components or we can use CSS selectors to target all Labels in an app.

We create a stylesheet as normal to contain our styles and we reference it in each page of our app using a special component called StyleSheet which looks like this.

<StyleSheet Resource="BudgetTracker.css" Assembly="GetType().Assembly" />

Don’t put the StyleSheet component inside any layout components otherwise things won’t work and you’ll probably get a load of weird exceptions – trust me I lost a few hours to this 😭.

Another thing to be aware of is that right now stylesheets need to be at the root of your project; they don’t appear to work when moved into folders. There is an issue tracking this on the GitHub repo.

Using the examples we looked at with parameter styles, what do they look like using CSS? For our first example, we could create a normal style class in our stylesheet with the following rules.

.largeBlueLabel {
    font-size: 30;
    color: blue;
}

Then apply it to the relevant component using the class parameter.

<Label Text="EXPENSES"
       class="largeBlueLabel" />

Pretty simple right?! But what about handling the Xamarin Forms specific styling such as the orientation example? Well we can do that with CSS as well!

The CSS in MBB is a special flavour which includes a load of platform-specific selectors and properties, which means we can manipulate settings such as orientation via CSS as well.

.horizontal {
    -xf-orientation: horizontal;
}
<StackLayout class="horizontal">
    <!-- ... other code omitted ... -->
</StackLayout>

In these examples the changes aren’t huge, but imagine if we’d set several style parameters on a component; the markup starts to look noisy really quickly. Using CSS also removes the issue we identified with parameter styling, where we need to set the same styles over and over again. With CSS we can define a class once and just apply it to whatever components we choose.

Now we have all of this new knowledge about how to lay out and style Mobile Blazor Bindings apps, let’s put some of it into action by applying it to the budget tracker app.

Adding Layout & Styling to Budget Tracker

We don’t really need more than one page in our app, at least for now. We’ll keep the default page type, which is a ContentPage, and we’ll focus on applying layout components and styling.

Just for reference, if you check the App.cs you can see where the HomePage component is set as the child of the ContentPage.

MainPage = new ContentPage();
host.AddComponent<HomePage>(parent: MainPage);

Let’s start applying some of these layout components to our app. Currently the app has no layout components at all. This is the code for the HomePage component.

@if (_budget > 0)
{
    <BudgetSummary Budget="_budget"
                   TotalExpenses="_expensesTotal"
                   CurrentBalance="_currentBalance" />
}
else
{
    <SetBudget OnBudgetSet="@((newBudget) => _budget = newBudget)" />
}

@if (_budget > 0)
{
    <Label Text="EXPENSES" />
    <ExpenseList Expenses="_expenses" />
    <CreateExpense OnExpenseAdded="@((newExpense) => _expenses.Add(newExpense))" />
}

@code {

    private decimal _budget;
    private List<Expense> _expenses = new List<Expense>();

    private decimal _currentBalance => _budget - _expenses.Sum(x => x.Amount);
    private decimal _expensesTotal => _expenses.Sum(x => x.Amount);

}

What does this look like when we run it?

Umm… not the best, I think you’ll agree. As our budget is currently 0 the SetBudget component is being displayed, which looks like this.

<Label Text="SET YOUR BUDGET" />
<Label Text="£" />
<Entry @bind-Text="Budget"
       OnCompleted="@(() => OnBudgetSet.InvokeAsync(_budget))" />

But where are all the other UI elements? All we are seeing is the Entry component (the equivalent of an HTML input control). This is because the HomePage component is being displayed in a ContentPage which, as we learned earlier, can only display one child. It seems the last component wins, in this case the Entry component.

At the very least we need a single top level component to contain all our content. Let’s add a StackLayout to our HomePage and see what happens.

<StackLayout>
    @if (_budget > 0)
    {
        <BudgetSummary Budget="_budget"
                       TotalExpenses="_expensesTotal"
                       CurrentBalance="_currentBalance" />
    }
    else
    {
        <SetBudget OnBudgetSet="@((newBudget) => _budget = newBudget)" />
    }

    @if (_budget > 0)
    {
        <Label Text="EXPENSES" />
        <ExpenseList Expenses="_expenses" />
        <CreateExpense OnExpenseAdded="@((newExpense) => _expenses.Add(newExpense))" />
    }
</StackLayout>

That’s looking a bit better, we can see all 3 UI elements, the 2 Labels and the Entry. As we found out earlier, child components in a StackLayout are stacked vertically by default, but it would be much nicer if the £ symbol and the Entry were on the same line. We can add another StackLayout around the Label and the Entry and set it to display horizontally, which should achieve what we’re after.

<Label Text="SET YOUR BUDGET" />
<StackLayout Orientation="StackOrientation.Horizontal">
    <Label Text="£" />
    <Entry @bind-Text="Budget"
           OnCompleted="@(() => OnBudgetSet.InvokeAsync(_budget))" />
</StackLayout>

We now have the layout that we wanted but it looks pretty naff. Let’s apply some styling to try and improve things a bit.

First off we’ll apply some parameter styles, as it’s a pretty quick way to try things out.

<Label Text="SET YOUR BUDGET" />
<StackLayout Orientation="StackOrientation.Horizontal">
    <Label Text="£"
           TextColor="@(Color.FromHex("718096"))"
           FontSize="30"
           VerticalTextAlignment="TextAlignment.Center" />
    <Entry FontSize="30"
           TextColor="@(Color.FromHex("2D3748"))"
           HorizontalOptions="LayoutOptions.FillAndExpand"
           @bind-Text="Budget"
           OnCompleted="@(() => OnBudgetSet.InvokeAsync(_budget))" />
</StackLayout>

That’s now looking loads better but as I mentioned earlier, there is a lot of markup added just for a few style tweaks. Let’s swap this over to CSS styles and see what it looks like.

We first need to create a stylesheet in the root of the project, we’ll call it BudgetTracker.css and then add a reference to it in the HomePage component.

<StyleSheet Resource="BudgetTracker.css" Assembly="GetType().Assembly" />

<StackLayout>
    <!-- ... other code omitted ... -->
</StackLayout>

We can then add the following styles to our stylesheet and adjust the markup to use these new styles.

entry {
    font-size: 30;
    color: #2D3748;
}

.currencySymbol {
    color: #718096;
    font-size: 30;
    -xf-vertical-text-alignment: center;
}
<Label Text="SET YOUR BUDGET" />
<StackLayout Orientation="StackOrientation.Horizontal">
    <Label Text="£"
           class="currencySymbol" />
    <Entry HorizontalOptions="LayoutOptions.FillAndExpand"
           @bind-Text="Budget"
           OnCompleted="@(() => OnBudgetSet.InvokeAsync(_budget))" />
</StackLayout>

The final change we will make is to add in a Frame component to make our set budget section stand out. We’ll start by adding a new CSS class with some styling, then adjust the markup.

.setBudgetContainer {
    border-radius: 10;
    background-color: #ffffff;
    margin: 10;
}
<Frame class="setBudgetContainer">
    <StackLayout>
        <Label Text="SET YOUR BUDGET" />
        <StackLayout Orientation="StackOrientation.Horizontal">
            <Label Text="£"
                   class="currencySymbol" />
            <Entry ClearButtonVisibility="ClearButtonVisibility.WhileEditing"
                   HorizontalOptions="LayoutOptions.FillAndExpand"
                   @bind-Text="Budget"
                   OnCompleted="@(() => OnBudgetSet.InvokeAsync(_budget))" />
        </StackLayout>
    </StackLayout>
</Frame>

Like other layout components we talked about earlier, the Frame component can only have a single child, so we need to add an additional StackLayout to wrap the existing Label and StackLayout – told you StackLayouts are the div of the MBB world! 😂. With the above changes our app now looks like this.

Our app now looks loads better. Obviously we still need to apply layout and styling to the rest of it, but I think this blog post is long enough already! So if you’ve made it this far, then well done and thank you. I’ll carry on styling the app and you can check it out on GitHub.

Summary

In this post I’ve introduced the layout and styling options available to us in Mobile Blazor Bindings. I briefly introduced the Budget Tracker app we’ll be developing over the course of this series.

Then I talked about the layout options available in MBB; this included all the current page components and layout components. We learned about what each one offers us before moving on to talk about styling. In terms of styling we found out we had two options: parameter styling or CSS. We looked at examples of how to use each one and talked about some potential downsides of parameter styling and how CSS styles could solve those problems.

Finally, we took what we learned and applied it to the Budget Tracker app. We added some layout components to improve the look of the page and then added some styles. We first tried parameter styling and then quickly moved to CSS styles.

Mobile Blazor Bindings - Getting Started

This is the first in what will probably be a series of blog posts that I’ve been looking forward to writing for a while: I’m going to be exploring the new experimental Mobile Blazor Bindings project.

In this post, I’m going to be giving an introduction to the Mobile Blazor Bindings (MBB) project: what it is, why you might be interested in trying it, and what is and isn’t available. We’ll finish by creating our first app.

What is Mobile Blazor Bindings?

It’s a new experimental project led by Eilon Lipton, a principal software engineer at Microsoft. The unique selling point of the project is that it enables developers to author native, cross platform mobile apps using Blazor’s programming model.

What this means is instead of writing a mix of C# and HTML to create components, as we would in the web hosting models for Blazor, we write C# and native mobile controls. To give you an idea of what this looks like, below is a counter component written for a Blazor WebAssembly application, then below that is that same component but written for MBB.

<!-- Blazor WebAssembly -->

<p>Current count: @currentCount</p>
<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}
<!-- Mobile Blazor Bindings -->

<Label Text="@($"Current count: {currentCount}")" />
<Button Text="Click me" OnClick="@IncrementCount" />

@code
{
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

As you can see, the programming model is identical; it’s just the types of controls used that differ. This makes MBB a great stepping stone for web developers looking to get into cross platform native mobile app development using their existing skills.

The components we use to author apps with MBB are essentially wrappers around Xamarin Forms controls. At the time of writing, the following components are available.

Page components

Layout components

View components

Specialized components

You can check out the official docs to get the most up-to-date information on the current components available.

After reading the above, a few questions may be going round in your head: what about Xamarin? Do they know about this? Is Xamarin being replaced? These are all good questions, so let’s cover those next.

What about Xamarin? Is it being replaced?

The first thing to point out is that MBB is just an experiment; there is no commitment to developing and delivering this as a product. When the first version of MBB was announced, the blog post contained the following statement.

We have heard from a set of developers that come from a web programming background that having web specific patterns to build mobile applications would be ideal for them. The goal of these bindings is to see if developers would like to have the option of writing markup and doing data binding for native mobile applications using the Blazor-style programming model with Razor syntax and features.

The key part to pull out is “if developers would like to have the option”. MBB, if it was taken forward, would offer an alternative to writing native mobile apps using XAML.

I think this is a great idea and I’m keen to see where it goes; the big thing keeping me away from native mobile development is XAML, I just don’t like it. That’s not to say there is anything wrong with it, I know a lot of developers really enjoy working with it. I also know a lot of developers feel the same way about HTML.

I think giving developers the choice to write apps using the languages they enjoy, and have skills in, is a fantastic thing. The likes of Uno Platform are offering the same choice the other way, allowing developers familiar with XAML the option to write web applications using that instead of HTML. Everyone wins!

Getting Setup

Now that we’ve gotten a better understanding of what MBB is and why we might want to try it out, let’s move on and get set up so we can start playing with it.

Installing Workloads

You can try out MBB using either Visual Studio or Visual Studio for Mac, but you will need to install the following workloads:

Once you have installed the above workloads, if you don’t already have it, you will also need to download and install the latest version of the .NET SDK.

Installing the MBB Template

There is a project template we need to use to create new MBB applications; it can be installed via the dotnet CLI with the following command (you may want to check for a newer version of the templates before installing).

dotnet new -i Microsoft.MobileBlazorBindings.Templates::0.2.42-preview

Once you have the template installed, you should be able to see it when running the dotnet new command.

Enabling Windows Hypervisor Platform

This was something I bumped into when I first tried to run the Android emulator from Visual Studio. I then had to go and enable it which then required a restart of my PC. So I want to save you a bit of time and let you know about it now.

Windows Hypervisor Platform will greatly improve the performance of the Android emulator when working with MBB, or any Xamarin application for that matter. If you follow this link, full instructions are given on how to enable WHP on your machine.

Creating an MBB app

Once you’ve completed the above steps you should be ready to create your first Mobile Blazor Bindings application!

Currently there is no integration with the new project dialogue in Visual Studio so we will need to create the app from the command line using the dotnet CLI. To create a new MBB app use the following command (I’ve called my app HelloMBB but call yours whatever you want):

dotnet new mobileblazorbindings -o HelloMBB

You can now open Visual Studio and load up the solution. You should see 3 projects in the Solution Explorer, HelloMBB, HelloMBB.Android and HelloMBB.iOS.

The Android and iOS projects are essentially shells for the particular platform which our MBB app is going to load into. All of the application logic is kept in the HelloMBB project.

This is the same approach you can use to run a Blazor web app using either the Server or WebAssembly hosting model: putting all common components into a Razor class library and removing everything but the infrastructure code from the Server and WebAssembly projects. You can find an example of that approach on my GitHub if you’ve not seen it before.

If you want to run the iOS project you’re going to need a Mac in order to compile it. This is due to Apple’s licensing and there isn’t a way around it. I do have a Mac, but I’m currently working on a Windows machine so, for now, I’m going to set the Android project as the startup project and then hit F5 to run the application.

Creating an Android Device

If you’ve not done any Xamarin work before, after a few seconds, you will see this screen.

This is because we don’t currently have an Android device setup for the emulator to use. I’m not very familiar with Android devices so I’ve created the default device selected, Pixel 2 (+ Store). This seems to work really well, at least on my machine* 😋.

Once you create your device, it will be downloaded and then you will be able to use it. This can take a good few minutes to complete.

Running the App

You will probably end up back at Visual Studio, at least I did, but now the Start Debugging button should contain the name of the new device you’ve created. Hit F5 again and after a few moments you should see your new MBB app.

Making Changes

Let’s make a simple change: we’ll add in a button which updates the text to display “Hello, MBB!” instead of the default “Hello, World!”. Currently, there is no hot reload available for MBB, so we are going to have to stop debugging to make our changes.

Once you’ve stopped debugging, open up the HelloWorld.razor file, it should look like this.

<ContentView>
    <StackLayout Margin="new Thickness(20)">

        <Label Text="Hello, World!"
               FontSize="40" />

        <Counter />

    </StackLayout>
</ContentView>

We’re going to update the code to match the code below.

<ContentView>
    <StackLayout Margin="new Thickness(20)">

        <Label Text="@WelcomeMessage"
               FontSize="40" />

        <Button Text="Update Message" OnClick="@(() => WelcomeMessage = "Hello, MBB!")" />

        <Counter />

    </StackLayout>
</ContentView>

@code {

    public string WelcomeMessage { get; set; } = "Hello, World!";

}

Instead of the Label text being hardcoded, it’s now using the WelcomeMessage property. When we click the Button we’re updating the value of WelcomeMessage to be “Hello, MBB!”.

Press F5 to run the updated app, when you click on Update Message you should see the new message displayed.

Congratulations, you’ve just created, modified and run your first Mobile Blazor Bindings application!

Summary

In this post I have introduced Mobile Blazor Bindings. We started off by covering what MBB is, why we might choose to try it out and what components are available. We also talked about MBB in relation to Xamarin and how it complements the existing Xamarin platform.

We then moved on to setting up a machine to use MBB, covering the workloads required by Visual Studio and Visual Studio for Mac. How to install the template for MBB and improve the performance of the Android device emulator by enabling Windows Hypervisor Platform.

Finally, we created a new MBB app and ran it on an Android device emulator. We then made a simple change to the app, adding in a button which updated the default message displayed.

I hope I’ve piqued your interest in Mobile Blazor Bindings. Next time I’m going to delve deeper by building out a more complex app.

Working with Query Strings in Blazor

In this post I’m going to be taking a look at query strings and specifically how we can work with them in Blazor, because at this point in time, Blazor doesn’t give us any tools out of the box. In fact, Blazor pretty much ignores them.

We’re going to start off by covering what query strings are and why we’d want to use them over route parameters. Then we’ll get into some code and I’ll show you a couple of options which should make working with them in your Blazor applications much easier.

Example Code: A sample project to accompany this blog post can be found on GitHub

What are query strings?

Query strings are essentially an instance or collection of key value pairs encoded into a URL. Below is an example of what a query string looks like.

mysite.com/about?name=Chris&favouritecolor=orange

The start of a query string is separated from the rest of the URL by a ?. Then come the key value pairs: each key and value is separated by a =, and if there’s more than one pair a & is used to separate them.

In the example above, the query string contains two pairs: name with a value of Chris and favouritecolor with a value of orange.
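To see these parsing rules in action outside of Blazor, here’s a quick sketch using JavaScript’s built-in URLSearchParams. This is purely an illustration of the format, not something Blazor itself uses.

```javascript
// Parse the key/value pairs from the example URL's query string
const query = new URLSearchParams('name=Chris&favouritecolor=orange');

console.log(query.get('name'));           // "Chris"
console.log(query.get('favouritecolor')); // "orange"
console.log(query.get('missing'));        // null, absent keys simply return null
```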

One of the original use cases for query strings was to hold form data. When a form was submitted the field names and their values were encoded into the URL as query string values.

Why use them over route parameters?

A good question. For me, query strings provide more flexibility than route parameters when it comes to optional values. Having optional parameters in a route can be a real pain, if not impossible, in my experience at least. A good example of this is when using filters on a list page.

Let’s pretend we have a page listing cars (/carsearch) and we offer the user the ability to filter that list by make, model, and colour. If we wanted to use route parameters we’d have to use multiple route templates.

@page "/carsearch"
@page "/carsearch/{make}"
@page "/carsearch/{make}/{model}"
@page "/carsearch/{make}/{model}/{colour}"

The problem is what happens if the user only selected make and colour? The route would look like this /carsearch/ford/blue. Now we have a problem, the router is going to find a match with the 3rd template we’ve defined, @page "/carsearch/{make}/{model}". So we’d be trying to find all cars with a make of ford and a model of blue, oh dear.

Now, we could work round this by using defaults for the various filters, but it would be much simpler to use query strings instead.

Query strings don’t care about order of values, or even if a value is present or not, we don’t even need to define them in a route template. Which means we can go back to using a single route template for our car search page.

@page "/carsearch"

And when the user wants to filter we can just add the selected criteria to the URL using query strings.

/carsearch?make=ford&colour=blue
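As a language-neutral illustration of that flexibility (again using JavaScript’s built-in URLSearchParams purely as a sketch, with filter names mirroring the example), building the URL from only the filters the user actually selected is straightforward:

```javascript
// The user selected a make and a colour, but left model unset
const filters = { make: 'ford', model: null, colour: 'blue' };

const query = new URLSearchParams();
for (const [key, value] of Object.entries(filters)) {
  if (value) query.set(key, value); // unset filters are simply omitted
}

console.log(`/carsearch?${query.toString()}`); // /carsearch?make=ford&colour=blue
```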

The only problem now is how do we actually get hold of and use the values in our query string?

Introducing WebUtilites

There is a library called Microsoft.AspNetCore.WebUtilities which contains some fantastic helpers for dealing with query strings. It will chop up a query string for us and allow us to retrieve values in a straightforward way, meaning we don’t have to get into loads of string manipulation.

We’re going to update the Counter page in the default template to look for a query string which sets the initial count for the counter. So given the URL /counter?initialCount=10, we’d expect the counter to start at 10. Here’s how we can achieve that.

@page "/counter"
@using Microsoft.AspNetCore.WebUtilities
@inject NavigationManager NavManager

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    int currentCount = 0;
    
    protected override void OnInitialized()
    {
        var uri = NavManager.ToAbsoluteUri(NavManager.Uri);
        if (QueryHelpers.ParseQuery(uri.Query).TryGetValue("initialCount", out var _initialCount))
        {
            currentCount = Convert.ToInt32(_initialCount);
        }
    }

    void IncrementCount()
    {
        currentCount++;
    }
}

We can inject the NavigationManager which gives us access to the current URI. Once we have that, we can pass the query string (uri.Query) to the WebUtilities helper and ask for the value of initialCount. The last thing we need to do is convert the value of initialCount to an int as all values are returned as strings by the helper.

That wasn’t actually too bad and if we just needed to do this on one page then we’re now sorted. However, if we needed to use query strings in multiple places in our app this becomes a lot of boilerplate to have kicking around. So let’s make this better.

Moving to an Extension Method

We can encapsulate all this functionality into an extension method on the NavigationManager class. This should make working with query strings all over our app trivial going forward. Here’s the code.

using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.WebUtilities;

public static class NavigationManagerExtensions
{
    public static bool TryGetQueryString<T>(this NavigationManager navManager, string key, out T value)
    {
        var uri = navManager.ToAbsoluteUri(navManager.Uri);

        if (QueryHelpers.ParseQuery(uri.Query).TryGetValue(key, out var valueFromQueryString))
        {
            if (typeof(T) == typeof(int) && int.TryParse(valueFromQueryString, out var valueAsInt))
            {
                value = (T)(object)valueAsInt;
                return true;
            }

            if (typeof(T) == typeof(string))
            {
                value = (T)(object)valueFromQueryString.ToString();
                return true;
            }

            if (typeof(T) == typeof(decimal) && decimal.TryParse(valueFromQueryString, out var valueAsDecimal))
            {
                value = (T)(object)valueAsDecimal;
                return true;
            }
        }

        value = default;
        return false;
    }
}

A lot of the code above is the same as we just saw, except I’ve used some generics to allow the caller to specify the type they want the requested value converted to. I’ve then added some checks to convert values to string, int, or decimal; you could add whatever other types you wish.

If we refactor our counter page to use our new extension method this is what we end up with.

@page "/counter"
@inject NavigationManager NavManager

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    int currentCount = 0;
    
    protected override void OnInitialized()
    {
        NavManager.TryGetQueryString<int>("initialCount", out currentCount);
    }

    void IncrementCount()
    {
        currentCount++;
    }
}

That now looks much cleaner and we’ve got all our query string code in one place which is great for maintainability.

Dealing with updates to query string values

One last scenario I want to cover is how to react to updates in query string values. Let’s add a few links to our counter page which set the counter to different initial values, say 10, 20 and 30.

The problem we have now is that when we click any of these links, Blazor isn’t going to call the OnInitialized life cycle method again, as we are already on the correct component for the route. So how can we react to the new query string value? It turns out the NavigationManager.LocationChanged event still fires, so we can set up a handler for that event which will retrieve the new values.

@page "/counter"
@using Microsoft.AspNetCore.Components.Routing
@implements IDisposable
@inject NavigationManager NavManager

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

<hr />

<a href="/Counter?initialCount=10">Start counter at 10.</a> |
<a href="/Counter?initialCount=20">Start counter at 20.</a> |
<a href="/Counter?initialCount=30">Start counter at 30.</a>

@code {
    int currentCount = 0;
    
    protected override void OnInitialized()
    {
        GetQueryStringValues();
        NavManager.LocationChanged += HandleLocationChanged;
    }
    
    void HandleLocationChanged(object sender, LocationChangedEventArgs e)
    {
        GetQueryStringValues();
        StateHasChanged();
    }
    
    void GetQueryStringValues()
    {
        NavManager.TryGetQueryString<int>("initialCount", out currentCount);
    }

    void IncrementCount()
    {
        currentCount++;
    }
    
    public void Dispose()
    {
        NavManager.LocationChanged -= HandleLocationChanged;
    }
}

I think that wraps things up. We can now easily access query string values from any of our components both on initial load and when the URL is updated.

Summary

In this post I talked about working with query strings in Blazor. I started off by describing what query strings are and why you would choose to use them over route parameters.

I then suggested some options on how to achieve this in Blazor. I started off with a simple solution based on the Microsoft.AspNetCore.WebUtilities library. I then developed that into an extension method for the NavigationManager class to avoid code duplication and ease use and maintainability.

Integrating Tailwind CSS with Blazor using Gulp - Part 2

Last time, I introduced Tailwind CSS and showed how you can get it up and running with Blazor. This time I’m going to go a little deeper and show how you can customise and configure Tailwind for your application, optimise the final CSS, as well as how to integrate it all into a CI pipeline.

Customising Tailwind

I mentioned in part 1 that it’s possible to customise Tailwind to your specific needs. The default settings include a set of colour palettes, fonts, padding and margin spacing, plus many more. And you can customise all of them via the tailwind.config.js file.

The file is split into three different sections: theme, variants and plugins. The theme section allows us to customise things like fonts, colour palettes and border widths, essentially anything related to the visual style of your site. The variants section allows you to configure the responsive and pseudo-class variants which are generated; these control the generation of classes which style things like hover and focus events. Finally, the plugins section allows you to add third party plugins which can generate even more CSS classes for your app. For a full breakdown of every option you can check out the fantastic official docs.

To see this in action we’re going to add a custom colour on top of the default colour palettes. In the root of your app, create a new file called tailwind.config.js, if you don’t have one already, and paste in the following code.

module.exports = {
  theme: {
    extend: {
      colors: {
        blazoredorange: '#ff6600'
      }
    }
  }
};

The key here is the use of the extend object, this tells Tailwind we want to ADD to the existing colours. If we had written the config like this instead.

module.exports = {
  theme: {
    colors: {
      blazoredorange: '#ff6600'
    }
  }
};

Then we would be telling Tailwind to replace the existing colour palettes. It’s worth noting that everything in Tailwind’s configuration is optional, so you only need to include the things you want to override or extend.

This file is automatically picked up by Tailwind during the build process, so once you’ve finished adding your custom configuration, just rerun the gulp css task to generate your new custom CSS. With the above configuration change we now get access to extra utility classes such as text-blazoredorange, bg-blazoredorange and border-blazoredorange.

Optimising Your CSS

One downside of Tailwind is that there are a lot of classes generated by default and you’re potentially not using a lot of them in your application. This means that the size of your CSS file can be a bit large, relatively speaking. So what can we do about this?

Removing unused CSS with PurgeCSS

There is a great plugin which solves this exact problem called PurgeCSS. It scans through your files, tries to determine which CSS classes you are using, and then removes the ones you aren’t. If we look back at the configuration we had for our Gulp file from part 1, it looks like this.

const gulp = require('gulp');

gulp.task('css', () => {
  const postcss = require('gulp-postcss');

  return gulp.src('./Styles/site.css')
    .pipe(postcss([
      require('precss'),
      require('tailwindcss'),
      require('autoprefixer')
    ]))
    .pipe(gulp.dest('./wwwroot/css/'));
});

If we look at the size of the CSS file produced from this configuration, it’s pretty large, 1073kb.

This is fine while we are working locally, in fact it’s needed so we can have access to all those utility classes while we’re developing. But once we’ve finished there will be a lot of unused classes that are now just taking up space. Let’s update our config to use PurgeCSS and see what kind of saving we can make.

First we need to install PurgeCSS using npm with the following command.

npm install --save-dev gulp-purgecss

Once that has completed we can update our gulpfile.js to use PurgeCSS.

const gulp = require('gulp');

gulp.task('css', () => {
  const postcss = require('gulp-postcss');
  const purgecss = require('gulp-purgecss');

  return gulp.src('./Styles/site.css')
    .pipe(postcss([
      require('precss'),
      require('tailwindcss'),
      require('autoprefixer')
    ]))
    .pipe(purgecss({ content: ['**/*.html', '**/*.razor'] }))
    .pipe(gulp.dest('./wwwroot/css/'));
});

As part of the configuration for PurgeCSS I’ve passed in the types of files I want it to scan in order to look for CSS classes. I’ve specified it should look for any files ending in .html or .razor within my application. We can then run our Gulp task using the command gulp css, as before, to regenerate our CSS.

Wow! That’s a pretty impressive size reduction! 1073KB down to 12KB. At this point it’s worth checking your application to make sure everything still looks as it should, but so far I haven’t found any issues.

I want to point out that I have not applied some settings recommended in the Tailwind docs regarding the configuration of PurgeCSS. During my testing I didn’t find any issues with the outputted CSS after purging unused classes, but I have a relatively small app so that could be why. If you notice classes being purged which you are using, then please check out the Tailwind docs for additional settings you can apply to fix it.
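For reference, the extractor those docs suggest looks along these lines. Treat this as a sketch based on the Tailwind docs rather than a drop-in: tailwindExtractor is my name for it, and it would be passed to gulp-purgecss via its defaultExtractor option.

```javascript
// A sketch of the kind of custom extractor the Tailwind docs suggest for
// PurgeCSS. The regex keeps the characters Tailwind class names can contain
// (letters, digits, '-', '/' and ':'), so variant classes such as
// md:bg-blue-600 survive the purge in one piece.
const tailwindExtractor = (content) => content.match(/[\w-/:]+(?<!:)/g) || [];

// Example: pull candidate class names out of some markup.
const markup = '<button class="md:bg-blue-600 hover:text-white">Save</button>';
console.log(tailwindExtractor(markup));
// The result includes 'md:bg-blue-600' and 'hover:text-white'.

// In the gulpfile it would be wired up something like:
// .pipe(purgecss({
//   content: ['**/*.html', '**/*.razor'],
//   defaultExtractor: tailwindExtractor
// }))
```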

We could stop there but I think there are more savings to be had. After all, we’ve not even minimised the file. Let’s do that next and see how we get on.

Minimising CSS

In order to minimise our CSS we need to install another plugin or two. First, we’ll install CleanCSS which is going to do the minimising for us.

npm install gulp-clean-css --save-dev

Optionally, you can also install gulp-sourcemaps, which will generate sourcemap files so you can debug your CSS once it’s been minified. This isn’t essential, and I’ll leave it up to you to decide if you want to include it or not.

npm install gulp-sourcemaps --save-dev

With those plugins installed we can return to our gulpfile and add some more configuration.

const gulp = require('gulp');

gulp.task('css', () => {
  const postcss = require('gulp-postcss');
  const purgecss = require('gulp-purgecss');
  const sourcemaps = require('gulp-sourcemaps');
  const cleanCSS = require('gulp-clean-css');

  return gulp.src('./Styles/site.css')
    .pipe(sourcemaps.init())
    .pipe(postcss([
      require('precss'),
      require('tailwindcss'),
      require('autoprefixer')
    ]))
    .pipe(purgecss({ content: ['**/*.html', '**/*.razor'] }))
    .pipe(cleanCSS({ level: 2 }))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('./wwwroot/css/'));
});

The changes here call into CleanCSS once we’ve purged any unused classes, then generate the sourcemap for the minified CSS file. If you don’t want to create a sourcemap file, just leave those lines out of the config.

Once you’re done, rerun your gulp task, gulp css, and let’s see what the savings are.

4KB! Now that is a pretty small file, I think you’ll agree. I’m pretty happy with our work, we’ve taken the original file weighing in at 1073KB and we’ve managed to reduce it to only 4KB. This is great but we do have one problem.

With the current configuration we only have one gulp task which produces our CSS and now it’s producing a version which doesn’t include all the utility classes we might need for future development. It would be much better to create two gulp tasks, one to use during development and one to use when we want to ship to production.

const gulp = require('gulp');
const postcss = require('gulp-postcss');
const sourcemaps = require('gulp-sourcemaps');
const cleanCSS = require('gulp-clean-css');
const purgecss = require('gulp-purgecss');

gulp.task('css:dev', () => {
  return gulp.src('./Styles/site.css')
    .pipe(sourcemaps.init())
    .pipe(postcss([
      require('precss'),
      require('tailwindcss'),
      require('autoprefixer')
    ]))
    .pipe(gulp.dest('./wwwroot/css/'));
});

gulp.task('css:prod', () => {
  return gulp.src('./Styles/site.css')
    .pipe(sourcemaps.init())
    .pipe(postcss([
      require('precss'),
      require('tailwindcss'),
      require('autoprefixer')
    ]))
    .pipe(purgecss({ content: ['**/*.html', '**/*.razor'] }))
    .pipe(cleanCSS({ level: 2 }))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('./wwwroot/css/'));
});

Here is the final configuration. We now have two gulp tasks, css:dev and css:prod.

By running the css:dev task we can get the full CSS file with all of Tailwind’s utility classes and any customisations we’ve made. The great thing is we only need to run this once to generate the initial CSS locally, and then again only if we change any customisations.

The css:prod task is going to give us the super-optimised version of our CSS to use in production. But we wouldn’t want to run this locally; at least in an ideal world, it would be much better to run it in a CI pipeline when we deploy our application. So to finish things up, let’s look at how we can integrate all our work so far into a CI pipeline using Azure DevOps.

Using Tailwind CSS in a CI pipeline

As our CSS file is compiled, we shouldn’t check it into source control; instead, we should have it dynamically generated by our build process for each release. In order to do this we will need to install the tools we’ve been using locally via npm and then run our Gulp task to produce the production CSS.

Unsurprisingly, there are already tasks available on Azure DevOps to do this. There is an npm task which will install all our npm packages and a Gulp task we can use to build our CSS.

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  buildConfiguration: 'Release'

steps:
- task: UseDotNet@2
  displayName: 'Installing .NET Core SDK...'
  inputs:
    packageType: sdk
    version: 3.1.102
    installationPath: $(Agent.ToolsDirectory)/dotnet

- task: Npm@1
  displayName: 'Running NPM Install...'
  inputs:
    command: 'install'
    workingDir: 'MySite'

- task: gulp@1
  displayName: 'Running Gulp tasks'
  inputs:
    gulpFile: 'MySite/gulpfile.js' 
    targets: css:prod

- script: dotnet build --configuration $(buildConfiguration) MySite/MySite.csproj
  displayName: 'Building $(buildConfiguration)...'

The first task installs the .NET Core SDK. Then the npm install command is run via the npm task. Use the workingDir setting to point to where your package.json file lives in your project structure. Then the gulp css:prod task is run before building the app.

You can then publish your app wherever you wish, and if you need some guidance on how to do that with Azure DevOps, I have a couple of posts which should help you out, here and here.

Summary

That brings us to an end. We now have Tailwind CSS integrated into our Blazor app, allowing us to take advantage of its awesome utility-based approach to styling UI.

We have learned how to customise Tailwind using its extensive configuration options. We then saw how we can optimise the CSS we produced using PurgeCSS and CleanCSS to create the smallest file possible for a production scenario. Then we finished up by integrating Tailwind into a CI pipeline using Azure DevOps.

Integrating Tailwind CSS with Blazor using Gulp - Part 1

I’ve been a fan of Tailwind CSS for quite some time now. It was created by Adam Wathan and has been developed by Jonathan Reinink, David Hemphill, and Steve Schoger. I actually discovered Tailwind due to following Steve and his excellent tweets on UI tips and tricks. In fact, if you are a developer with an interest in UI design, I couldn’t recommend Steve’s eBook, RefactoringUI, enough.

Tailwind is a utility-first CSS framework, which means that instead of providing you with a set of premade UI components, which lead to every website looking the same, you are provided with a massive array of low-level utility classes that allow you to rapidly create custom designs with ease. Let me give you an example to illustrate.

Let’s look at styling a button with Bootstrap first. We can define our button and apply the btn and btn-primary CSS classes as per the code below.

<button class="btn btn-primary">Primary</button>

This will render a button that looks like this.

Next, we’ll style a button using Tailwind.

<button class="px-3 py-2 rounded-md bg-blue-600 text-white">Primary</button>

Which will render a button that looks like this.

As you can see straight away, there are no tailored CSS classes in this version. The button is built up using multiple classes which each provide a single element of the styling. The px and py classes provide padding, rounded-md rounds the corners of the button, bg-blue-600 colours the background and text-white defines the text colour. You can check out the full array of options in the Tailwind docs.

You’re probably thinking: but Chris, that’s a lot more styles I have to apply to get the same result. I agree, but what you’re getting in return is flexibility and the option to not have your site look like the thousands of others that use the same CSS framework. I appreciate this isn’t something that interests everyone, and there is nothing wrong with using frameworks like Bootstrap; I have for years and it’s awesome. But when you’re trying to do something a little different, Tailwind is your friend.

Another huge benefit of using Tailwind is that you pretty much never have to write any CSS classes yourself. As everything is composable using utility classes, you can create any look, any style from what’s provided by the framework. Need your site to be responsive? No problem, Tailwind has utilities for that. Need cool transitions? Tailwind has you covered there, too.

Tailwind’s approach sits really well with component-based UIs, such as Blazor. Building on the example above, we could create a PrimaryButton component which looks like this.

<!-- PrimaryButton.razor -->

<button class="px-3 py-2 rounded-md bg-blue-600 text-white"
        @onclick="HandleClick">@Text</button>

@code {
    [Parameter] public Action HandleClick { get; set; }
    [Parameter] public string Text { get; set; }
}

We’ve now nicely encapsulated the styles for our primary button in a component. And if we need to update the styles later down the road, we can come to a single place to make the changes.

It doesn’t stop there either. Tailwind can also be customised. This is because Tailwind is more than just a CSS framework; it’s built on PostCSS and can be configured to your needs. We’re going to be covering how to do this in part 2.

Integrating Tailwind with Blazor

Hopefully, if you’ve gotten this far, I’ve convinced you to try Tailwind—or you’re already a fan and just want to know how to use it with your Blazor app.

In order to get the most out of Tailwind, we’re going to be using some tools you may or may not be familiar with from the JavaScript ecosystem: NPM and Gulp. If you’re not familiar with these tools, I’ve added a link to each of them so you can read up on what they do.

Installing NodeJS to get NPM

If you haven’t already got NodeJS installed, then you will need to do that first. This is because installing Node installs NPM (Node Package Manager). Head over to the Node site and grab the latest LTS version (12.16.0 at the time of writing). Run the installer; you should be able to accept all the defaults, just make sure that npm package manager is selected.

Once the installer is finished, open a terminal and type npm -v. If you see a version number printed out then you’re good to go.

Adding Gulp and Tailwind via NPM

The next thing we need to do is install the Gulp CLI globally; this will allow us to work with Gulp on the command line. At your terminal, type npm install gulp-cli -g. Once the install has completed, we can turn our attention to our Blazor project.

In the root of your Blazor app, add a new file called package.json and then add the following code.

{
  "devDependencies": {
    "gulp": "^4.0.2",
    "gulp-postcss": "^8.0.0",
    "precss": "^4.0.0",
    "tailwindcss": "^1.2.0",
    "autoprefixer": "^9.7.4"
  }
}

This tells NPM what packages to install for our project. We’ll see how these are used a little later on, when we configure Gulp.

The next step is to set up our CSS file as per the Tailwind docs. The way I do this is by creating a folder in the root of my app called Styles and in there adding a file called site.css (you can name this file whatever you want). The reason this file is sitting outside of the wwwroot folder is because it is going to be processed to produce the final CSS file, which will be outputted to the wwwroot folder. Then add the following code.

@tailwind base;
@tailwind components;
@tailwind utilities;

#blazor-error-ui {
    background: lightyellow;
    bottom: 0;
    box-shadow: 0 -1px 2px rgba(0, 0, 0, 0.2);
    display: none;
    left: 0;
    padding: 0.6rem 1.25rem 0.7rem 1.25rem;
    position: fixed;
    width: 100%;
    z-index: 1000;
}

    #blazor-error-ui .dismiss {
        cursor: pointer;
        position: absolute;
        right: 0.75rem;
        top: 0.5rem;
    }

The three lines at the top are all we need to add to get all of those awesome Tailwind utilities. When the file is processed, those three lines will be replaced with all the generated classes from Tailwind. You can also add any custom styles you want present in the final CSS file. I’ve made sure to keep the styling for the Blazor error messages which come in the default project template.

The last part is to add another file to the root of the project called gulpfile.js. This is where we are going to configure Gulp to build Tailwind for us. Add the following code, remember to update the filename if you didn’t use site.css.

const gulp = require('gulp');

gulp.task('css', () => {
  const postcss = require('gulp-postcss');

  return gulp.src('./Styles/site.css')
    .pipe(postcss([
      require('precss'),
      require('tailwindcss'),
      require('autoprefixer')
    ]))
    .pipe(gulp.dest('./wwwroot/css/'));
});

We’re defining a single Gulp task called css. It takes the site.css we defined above as an input and then runs it through a plugin called gulp-postcss. Essentially, this runs the PostCSS plugins we installed earlier, which build all of those Tailwind utility classes. Once it’s finished, the processed CSS is output into the wwwroot/css folder.

With all of the above in place, open a terminal and navigate to the root of the Blazor app, the same folder where the package.json and gulpfile.js reside. Once there, run the following commands.

npm install
gulp css

The first command will install all the packages in the package.json file we created earlier. Once that’s finished, the second command executes the css task we defined in our gulpfile.js. If all has gone well you should see a new file, site.css, in your wwwroot/css folder. All that’s left to do is to reference the file in your app’s index.html or _Host.cshtml.

Congratulations! You can now go and start building your UI using the utility of Tailwind.

Summary

In this post, I’ve shown how to integrate the awesome, utility-first CSS framework, Tailwind CSS into your Blazor project. We started by looking at what makes Tailwind different to other CSS frameworks such as Bootstrap and reasons why we’d want to use it. We then looked at how we can integrate it with a Blazor application using NPM and Gulp.

I hope you enjoyed this post and, if you didn’t know about Tailwind before, that I’ve made you curious about it. Next time, we’re going to go a little deeper down the rabbit hole. We’ll look at how we can customise Tailwind using its configuration system, as well as how we can optimise our outputted CSS by removing unused classes. To finish off, we’ll go through how to integrate Tailwind into a CI pipeline with Azure DevOps.

I’m using Tailwind to build a new documentation site for the Blazored UI packages. So if you want to see this in a real world application then feel free to check it out on GitHub.

Fragment Routing with Blazor

A common question I’ve been asked, and seen asked, many times is: how can I route to a fragment in my Blazor app? If you’re not aware what I mean by a “fragment”, let me explain.

Fragment routing, or fragment linking, is the term given to linking to a specific element on a page, say a header, for example. This technique is often used in FAQ pages or technical documentation, and links using it look like this: www.mysite.com/faq#some-header. In this example, if an element was present on the page with an id of some-header, the page would automatically scroll to that element when it loads.

In this post, I’m going to show you how you can achieve this in Blazor as it’s not something which we can do out of the box.

I’ve added a sample project on my GitHub account showing this solution in action.

The Problems

Blazor doesn’t include anything out of the box which allows us to handle fragment routing. In fact, Blazor’s router will actively ignore any fragments, or query strings for that matter, attached to a URL.

The next problem we face is that there is no feature in Blazor which enables us to scroll to a certain point on a page. Scrolling to a specific place in a web page is something that can currently only be achieved with JavaScript.

The final issue we need to overcome is how to get hold of the fragment from the URL in the first place. We could use the NavigationManager’s Uri property and then use some string manipulation to find a fragment and pull it out. But that sounds like a lot of hard work — surely there must be a better way.

The Solution

Now we’ve understood the problems, what’s the solution?

The first thing we’re going to do is write a small piece of JavaScript; as we identified above, it’s the only option right now.

window.blazorHelpers = {
    scrollToFragment: (elementId) => {
        var element = document.getElementById(elementId);

        if (element) {
            element.scrollIntoView({
                behavior: 'smooth'
            });
        }
    }
};

The code above takes an element ID. We then try to find an element matching that ID on the page using the getElementById function. If we find an element, then we invoke the scrollIntoView function on that element. As part of doing that we’re passing in a configuration object which sets the behaviour of the scroll to smooth. This will give us a nice smooth scrolling effect to the target element.

Now we have the JavaScript piece in place, we’re going to create an extension method for the NavigationManager class.

public static class Extensions
{
    public static ValueTask NavigateToFragmentAsync(this NavigationManager navigationManager, IJSRuntime jSRuntime)
    {
        var uri = navigationManager.ToAbsoluteUri(navigationManager.Uri);

        if (uri.Fragment.Length == 0)
            return default;

        return jSRuntime.InvokeVoidAsync("blazorHelpers.scrollToFragment", uri.Fragment.Substring(1));
    }
}

We start by getting the current URI using the NavigationManager’s ToAbsoluteUri method. This returns the URI as a Uri object, which makes our life a lot easier, as the Uri class allows us to easily check for a fragment using its Fragment property.

If no fragment is present, then we return and do nothing. However, if there is a fragment, we call our JavaScript function, passing in the fragment. You may have noticed that we’re actually cutting off the first character of the fragment when we do this. This is because the Fragment property on the Uri class returns the fragment with the # symbol included. So if we had a URI which looked like this, https://mysite.com/faq#contact, the Fragment property would return #contact.
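As an aside, the browser’s URL API behaves the same way, which makes the trimming easy to see in isolation. This is just a JavaScript illustration of the same idea; the extension method itself relies on .NET’s Uri class.

```javascript
// Illustration only: the URL API mirrors .NET's Uri here.
// Like Uri.Fragment, the hash property includes the leading '#'.
const url = new URL('https://mysite.com/faq#contact');

console.log(url.hash);          // '#contact'
console.log(url.hash.slice(1)); // 'contact' - the element id we scroll to
```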

That’s it, we should now be able to navigate to fragments by doing the following.

@inject IJSRuntime _jsRuntime
@inject NavigationManager _navManager

...

@code {
    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            await _navManager.NavigateToFragmentAsync(_jsRuntime);
        }
    }
}

That’s better, but we’re not quite there yet

At first glance it looks like we’ve solved our issues, but there is another use case we haven’t covered. Say we’re on the home page and navigate to a fragment on an FAQ page using our fragment helper above. All works as expected, but if we try to navigate to another fragment on the same page, nothing happens.

This is because Blazor doesn’t care about URI fragments; clicking the link updates the fragment in the URI but doesn’t trigger Blazor to re-render the page. And even if the page did re-render, we’re only doing fragment navigation on the first render. This isn’t very good at all, so how can we fix it?

In order to get this scenario working we need to hook into the NavigationManager’s LocationChanged event. By providing a handler for this event we can call our fragment navigation helper whenever the URI changes. Our updated implementation code now looks like this.

protected override void OnInitialized()
{
    _navManager.LocationChanged += TryFragmentNavigation;
}

protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        await _navManager.NavigateToFragmentAsync(_jsRuntime);
    }
}

private async void TryFragmentNavigation(object sender, LocationChangedEventArgs args)
{
    await _navManager.NavigateToFragmentAsync(_jsRuntime);
}

void IDisposable.Dispose()
{
    _navManager.LocationChanged -= TryFragmentNavigation;
}

Now that we are using event handlers, our component must implement IDisposable, which has added a lot of extra code. Having to add all this code to every page where we want to enable fragment navigation would be a real pain. So what can we do about it?

Using a base class to create a nice reusable solution

I think the best option at this point would be to put all this code into a base class; that way, any pages we want to enable fragment navigation on can just inherit from it and they’re away!

public class FragmentNavigationBase : ComponentBase, IDisposable
{
    [Inject] NavigationManager NavManager { get; set; }
    [Inject] IJSRuntime JsRuntime { get; set; }

    protected override void OnInitialized()
    {
        NavManager.LocationChanged += TryFragmentNavigation;
    }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        await NavManager.NavigateToFragmentAsync(JsRuntime);
    }

    private async void TryFragmentNavigation(object sender, LocationChangedEventArgs args)
    {
        await NavManager.NavigateToFragmentAsync(JsRuntime);
    }

    public void Dispose()
    {
        NavManager.LocationChanged -= TryFragmentNavigation;
    }
}

Now in order to have a page use fragment navigation we can simply have it inherit from FragmentNavigationBase and everything will just work.

Summary

In this post, we have created a solution for fragment navigation in Blazor. We started off by identifying the problems: Blazor’s router ignores URI fragments, there is no built-in way to scroll to a specific point on a page without JavaScript, and the fragment itself has to be extracted from the URI.

We then created a simple solution using a small amount of JavaScript and an extension method on the NavigationManager to allow navigation to a fragment. We finished by wrapping that all up in a reusable base class which our page components can inherit from.

All of the code from this post can be found on GitHub.

.NET Conf: Focus on Blazor

I was fortunate enough to be asked to speak at the first of a new series of one day virtual conferences, .NET Conf: Focus on Blazor. This event is the first of many focus events being run throughout the year by Microsoft and the .NET Conf team. Each event deep dives on a specific technology with a mix of speakers from Microsoft and the community.

If you’ve not heard of .NET Conf before, it’s a free, 3-day virtual conference run by Microsoft to celebrate all things .NET. It’s a fantastic event which covers a wide range of topics from top speakers, both from Microsoft and the community.

The idea behind the focus event series is to provide single-day virtual conferences, in the same style as .NET Conf, which target a specific technology. These events will be spread throughout the year, whereas the full .NET Conf event is only held once a year. The other great thing about these events is that they are all recorded and made available via the Channel 9 site and YouTube. So if you’re unable to tune in and watch live, you can catch up at your earliest convenience.

I gave a talk covering routing in Blazor. The talk is divided into two halves: the first half is all about understanding the various parts which make up routing in Blazor, while the second half is spent in Visual Studio running through various demos. There are definitely some deep-dive parts, but hopefully it had something for everyone to take away.

There were so many great talks which covered a wide range of Blazor topics such as UX, state management, testing, authorization and authentication, custom component design, the list goes on. All the speakers did an amazing job, I really think the event was a great success and I’m very proud to have been a part of it.

If you didn’t get the chance to watch the event live, then don’t worry. I’ve linked the video of the whole event below so you can catch up on all the sessions - Including my talk on routing in Blazor.

Year In Review - 2019

Let me start by wishing everyone a happy new year! This post is a little late as I wasn’t 100% sure I was going to write it, and I kept fiddling with it and hovering over the publish button, but I got there in the end!

2019 was a fantastic year and full of firsts. It’s also been tiring and stressful at times, but I really wouldn’t change a thing. So what were the highlights of 2019?

MVP Award

The most significant achievement for me this year was being nominated and receiving the MVP award from Microsoft. It was certainly not something I went out in search of, in fact, I would never have believed you if you told me that one day I would be an MVP.

I can’t thank Ed Charbeneau enough for the nomination and advice he gave me. Also to Tom Morgan, who again gave me some great advice and encouragement.

Public Speaking

My First Podcast - .NET Core Podcast

I did my first ever podcast this year; I was a guest on the .NET Core Podcast, hosted by Jamie Taylor. Jamie contacted me on Twitter and asked if I’d be interested; I couldn’t reply quick enough.

I’ve been a fan of Jamie’s podcast from the start, he’s had some great guests on the show. I’d be lying if I said I wasn’t feeling a bit of imposter syndrome after the invite. We had a great chat before the recording and Jamie was very good at making me feel at ease, and before I knew it we were done!

I loved the experience, and I hope to do more podcasts in the future. If you haven’t heard the episode, then you can check it out here.

My First User Group Talk - .NET Cambridge User Group

I gave my first ever talk at .NET Cambridge in June. I’m not going to lie, I was terrified on the drive down. A good friend of mine came with me and kept pointing out that maybe I should ease off the accelerator a bit otherwise we’d be there an hour early!

I was the first speaker of the night, which really helped: I could get straight into my talk and then enjoy the rest of the meetup. The first 5-10 minutes were really nerve-racking, but I’d been given some great advice from my Dad, who’s an accomplished public speaker. He told me to remember to breathe and focus on talking slowly. I also practised that first 10 minutes so I could repeat it in my sleep; this really helped as well, as everything was muscle memory.

Once I got into my stride, I loved it. It was great to see people in the audience reacting to what I was saying and nodding at the times I hoped they would. All too quickly I was done; the talk was around 40 minutes, and I pretty much hit my timing spot on, which I was amazed by. I then spent the next 20 minutes fielding questions from the audience, which I really enjoyed.

I came away from the night absolutely buzzing. I had survived! I’d not burst into flames or thrown up, or screwed up in one of the 1000 ways that had run through my head. As an added bonus, people seemed to really enjoy my talk. I had some great feedback and so many questions, which even spilled over into the break and the end of the event.

I want to say a big thank you to Andrea for allowing me the opportunity to speak, as well as giving me some great advice about writing a talk and presenting.

My First Conference Talk - DDD Reading

After the success of my talk at .NET Cambridge, I was in search of another opportunity. I was scrolling through Twitter, and I saw a tweet asking for submissions for Developer Day 2019, one of the DDD events. It was being hosted at Microsoft’s HQ in Reading which I thought was really cool. I put together an abstract and submitted.

DDD events are a bit different to other conferences, as the talks submitted are voted for by the public; whichever talks get the most votes are accepted. I was on holiday when I received the email telling me my talk had been voted in - I was really excited.

I travelled down the day before, as there was a speakers’ event that night. We all met up for a drink and a really nice curry; everyone was so friendly and welcoming.

On the day of the event, everything went really well. I was first on again, for which I was quite thankful. The talk seemed to go well and resonate with people in the audience. Some of the other speakers, whom I’d met for the first time the night before, came and sat in the front row to give me some support and encouragement, which was terrific.

All in all, I couldn’t recommend the DDD events enough. It’s a very welcoming experience if you’re a new speaker. Everyone is really friendly and ready to help if needed. The events are also free for attendees!

I hope I can speak at many more of these events in the future.

Blazored

I’ve managed to add a few more packages to the Blazored organisation this year, as well as help some members of the community get going hosting their packages under Blazored. I find open source a really rewarding experience. While it can be hard fitting it all in at times, it’s definitely worth the effort.

We managed over 50k downloads across all packages in 2019, which I feel pretty proud of.

Blog

2019 has been my second year of blogging, and I would like to say it’s getting easier but it still takes me forever to write a post. I’m really not a natural writer, but I also have a terrible memory, and I need to write this stuff down! :-)

I was pretty happy with managing to publish 37 blog posts in 2019, which is about one every week and a half. In terms of stats for 2019, I had:

I was pretty stunned when I looked at these figures. I find it amazing that so many people have viewed my posts, and I hope you’ve found them interesting or that they’ve helped solve a problem for you.

And remember, if you haven’t already, you can subscribe to my newsletter so that you’ll never miss another post. Just use the link at the top of the page or the end of this post.

Looking Ahead To 2020

I’m really amazed at everything that’s happened in 2019. There have been some great highlights, but what’s the plan for 2020?

I’m definitely going to be trying my best to do more talks in 2020. I’ve already got a few planned, you can check these out on my Speaking page. An early highlight though is the .NET Conf - Focus on Blazor event which I’m really excited about.

I’ll be attending my first ever MVP Summit in March. This is going to be a fantastic experience and I’m really looking forward to meeting all the other MVPs from around the world.

I will be continuing to blog in 2020 and I’ve been working on some new ideas for posts. I’ll also be continuing my work on open source and Blazored. A goal for this year is to add better documentation for the libraries, hopefully in a central site somewhere. I also have some ideas for additional libraries I would like to add to the collection.

To finish up, I want to say a massive thank you to everyone I’ve met, spoken to, or chatted with in 2019. Let’s see what 2020 brings!

Introduction to Blazor Component Testing

This post is part of the third annual C# advent. Two new posts are published every day between 1st December and 25th December.

One significant area which has been lacking in Blazor is testing. There is a placeholder issue for it, but for a very long time, there wasn’t any progress. This is because testing was out of scope for the first official release of Blazor - which shipped with .NET Core 3 back in September.

However, just before the release in August, Steve Sanderson published a blog post introducing a prototype unit testing library for Blazor components. Steve’s prototype has the goal of getting the conversation going around testing in Blazor. I’ve wanted to test the library for a while, but I haven’t had the chance until now.

Let’s start by covering some of the high level questions about this prototype.

I want to stress this is a prototype library and there is ZERO support for it, there isn’t even a NuGet package right now. Therefore, anything you read here can, and most likely will, change.

What types of testing does it cover?

There are various ways to test web applications but the two most common are unit tests and end-to-end tests (E2E tests).

Unit testing

Unit tests are lightweight and fast to run, if written correctly, of course. They test small “units” of code in isolation. For example, given a method which takes two numbers and returns their sum, you could write a unit test which checks that providing the inputs 2 and 2 returns 4. However, testing in isolation can also be their downfall. It’s possible that various units of code run fine in isolation, but when put together they don’t quite match up, and errors occur.
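To make that concrete, here’s a minimal sketch of such a test using xUnit (the Calculator class is hypothetical, just to illustrate the idea):

```csharp
public class Calculator
{
    // The "unit" under test: a small, pure method with no dependencies
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfInputs()
    {
        var calculator = new Calculator();

        var result = calculator.Add(2, 2);

        Assert.Equal(4, result);
    }
}
```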

End to end testing

E2E tests can help to combat the issues with unit tests, as they exercise the whole application stack. E2E tests often use a headless browser to run tests which assert against the DOM. This is achieved using tools such as Selenium, which drive a headless browser and provide an interface to access the HTML. The drawback of these types of tests is that they’re much more heavyweight. They’re also known to be brittle and slow; it takes substantial effort to both make them reliable and keep them that way.
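For illustration, a typical E2E test using Selenium’s C# bindings might look something like this. The URL and expected heading text are placeholders, not from a real app:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Xunit;

public class HomePageE2ETests
{
    [Fact]
    public void HomePage_DisplaysWelcomeHeading()
    {
        var options = new ChromeOptions();
        options.AddArgument("--headless"); // run without a visible browser window

        using var driver = new ChromeDriver(options);

        // Drive a real browser against the running application
        driver.Navigate().GoToUrl("https://localhost:5001"); // placeholder URL

        // Assert against the rendered DOM, just as a user would see it
        var heading = driver.FindElement(By.CssSelector("h1"));
        Assert.Equal("Welcome", heading.Text);
    }
}
```

Note how the test needs a browser driver and a running instance of the app, which is exactly why these tests are heavier than unit tests.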

The best of both

What Steve’s prototype library attempts to do is to bring the best of both of these testing approaches but without the drawbacks - sounds pretty good to me!

How does it work?

Steve’s library supplies a TestHost which allows components to be mounted using a TestRenderer under the hood. It’s compatible with traditional unit testing frameworks such as XUnit and NUnit and gives us a straightforward and clean way of writing tests. This is an example of what a test looks like using XUnit.

<!-- MyComponent.razor -->

<h1>Testing is awesome!</h1>

public class MyComponentTests
{
    private TestHost _host = new TestHost();

    [Fact]
    public void MyBlazingUnitTest()
    {
        var component = _host.AddComponent<MyComponent>();
        
        Assert.Equal("Testing is awesome!", component.Find("h1").InnerText);
    }
}

We use CSS selectors to locate points of interest in the rendered output. In the test above, I’m locating the h1 tag and asserting that it contains the text “Testing is awesome!”.

We can run this test like we would a traditional unit test using the Visual Studio Test Explorer.

The tests also run exactly how you would expect when using a CI pipeline. Here’s an example from Azure DevOps.

Using the library

As I mentioned earlier, there isn’t a NuGet package available for the library just yet. To use it, you have to download, clone, or fork the repo from Steve’s GitHub account. I’ve created a fork of it on my GitHub, where I’ve also updated all the packages to the latest versions.

Once you have a copy of the code, you can open up the solution included with the repo. You’ll find a sample application and some sample tests which give a few good examples of how to use the library.

You can play around with it from here if you just want to get to know it better. But I’ve wanted to get some tests added into my Blazored libraries for quite a while now. So I thought this would be the perfect opportunity to do that and see how the library works with a real-world project.

Testing Blazored Modal

We’re going to start by adding some tests to Blazored Modal. This is a reasonably straightforward component which is controlled via a service. By calling a method on the service and passing different options or parameters, the modal is displayed in different configurations.

I’ve decided on two test groupings, display tests and modal options tests. I like to try and group my tests to make them easier to find and maintain.

To start, we’ll add a copy of Steve’s testing library to the solution and also a new XUnit test project called Blazored.Modal.Tests.

Next, we need to add a reference to the testing library from the XUnit project and a reference to the Blazored.Modal project.

We’re going to create each of the test groupings I mentioned earlier as classes in the Blazored.Modal.Tests project. But so we don’t have to duplicate boilerplate code we’re going to create a test base class to encapsulate it.

public class TestBase
{
    protected TestHost _host;
    protected IModalService _modalService;

    public TestBase()
    {
        _host = new TestHost();
        _modalService = new ModalService();
        _host.AddService<IModalService>(_modalService);
    }
}

We start by creating a new TestHost instance, which is provided by Steve’s testing library. As the modal component relies on an IModalService to function, we also need to add an instance of one to the TestHost’s DI container. This is the place to replace any services with mocks if your components are using services which make external calls.
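For example, if a component under test depended on a service that calls an HTTP API, you could register a fake implementation instead. IWeatherService and FakeWeatherService below are hypothetical, just to sketch the pattern:

```csharp
public interface IWeatherService
{
    Task<int> GetTemperatureAsync(string city);
}

// The fake returns canned data so tests stay fast and deterministic,
// with no external HTTP calls
public class FakeWeatherService : IWeatherService
{
    public Task<int> GetTemperatureAsync(string city) => Task.FromResult(21);
}

public class WeatherComponentTests
{
    private TestHost _host = new TestHost();

    public WeatherComponentTests()
    {
        // Register the fake in the TestHost's DI container in place of the real service
        _host.AddService<IWeatherService>(new FakeWeatherService());
    }
}
```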

Now we have our TestBase sorted, let’s get cracking with our display tests. We’ll start by making sure that the modal is not visible in its initial state.

public class DisplayTests : TestBase
{
    [Fact]
    public void ModalIsNotVisibleByDefault()
    {
        var component = _host.AddComponent<BlazoredModal>();
        var modalContainer = component.Find(".blazored-modal-container.blazored-modal-active");

        Assert.Null(modalContainer);
    }
}

In the test we’re creating an instance of the BlazoredModal component via the TestHost. Once we have that instance, we can use it to check for a particular state. In our case, we’re checking that no element has the .blazored-modal-active CSS class, as it’s this class which makes the modal visible.

We now need a test to make sure the modal becomes visible when we call the Show method on the IModalService.

[Fact]
public void ModalIsVisibleWhenShowCalled()
{
    var component = _host.AddComponent<BlazoredModal>();
    _modalService.Show<TestComponent>("");

    var modalContainer = component.Find(".blazored-modal-container.blazored-modal-active");

    Assert.NotNull(modalContainer);
}

This time we’re using the _modalService instance we set up in the TestBase. Once we’ve created the instance of our BlazoredModal component, we call the Show method on the service. We then look for an element which has the .blazored-modal-active CSS class and check that it’s not null.

If you’re wondering about the TestComponent type in the Show call, it’s a simple component I created to use for these tests and looks like this.

internal class TestComponent : ComponentBase
{
    public const string TitleText = "My Test Component";

    [CascadingParameter] public ModalParameters ModalParameters { get; set; }

    public string Title
    {
        get
        {
            var cascadedTitle = ModalParameters.TryGet<string>("Title");
            return string.IsNullOrWhiteSpace(cascadedTitle) ? TitleText : cascadedTitle;
        }
    }

    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        base.BuildRenderTree(builder);

        builder.OpenElement(1, "h1");
        builder.AddContent(2, Title);
        builder.CloseElement();
    }
}

We’ll see this used more later on.

So far so good, now we are going to test cancelling and closing the modal. Here are the tests.

[Fact]
public void ModalHidesWhenCloseCalled()
{
    var component = _host.AddComponent<BlazoredModal>();
    _modalService.Show<TestComponent>("");
    var modalContainer = component.Find(".blazored-modal-container.blazored-modal-active");

    Assert.NotNull(modalContainer);

    _modalService.Close(ModalResult.Ok("Ok"));
    modalContainer = component.Find(".blazored-modal-container.blazored-modal-active");

    Assert.Null(modalContainer);
}

[Fact]
public void ModalHidesWhenCancelCalled()
{
    var component = _host.AddComponent<BlazoredModal>();
    _modalService.Show<TestComponent>("");
    var modalContainer = component.Find(".blazored-modal-container.blazored-modal-active");

    Assert.NotNull(modalContainer);

    _modalService.Cancel();
    modalContainer = component.Find(".blazored-modal-container.blazored-modal-active");

    Assert.Null(modalContainer);
}

These two tests are very similar; the only difference is the method we call on the modal service, either Close or Cancel. As you can see, I’ve got two Asserts in each test. The first asserts that the modal is visible; then, after the Close or Cancel method is called, the second asserts that it no longer is.

That’s it for the display tests. Let’s move on and look at the modal options tests next. I’m not going to go through them all as I think things will get very repetitive, but I do want to look at two of them.

public class ModalOptionsTests : TestBase
{
    // Other tests omitted for brevity

    [Fact]
    public void ModalDisplaysCorrectContent()
    {
        var component = _host.AddComponent<BlazoredModal>();

        _modalService.Show<TestComponent>("");

        var content = component.Find("h1");

        Assert.Equal(content.InnerText, TestComponent.TitleText);
    }
    
    [Fact]
    public void ModalDisplaysCorrectContentWhenUsingModalParameters()
    {
        var testTitle = "Testing Components";
        var parameters = new ModalParameters();
        parameters.Add("Title", testTitle);

        var component = _host.AddComponent<BlazoredModal>();

        _modalService.Show<TestComponent>("", parameters);

        var content = component.Find("h1");

        Assert.Equal(content.InnerText, testTitle);
    }
}

The first test is checking that the correct content gets rendered by the modal component. In the test, we’re checking that the content of the TestComponent (see code from earlier) is rendered correctly inside the modal. The TestComponent just contains a simple h1 tag which will display the string “My Test Component” by default.

The next test is checking that the component being displayed renders correctly based on a value passed using ModalParameters, which are passed into child components via a CascadingParameter. In this test, we’re setting a Title parameter with the value “Testing Components”. We then check to make sure the correct title is displayed by the TestComponent.

The reason I wanted to highlight these two tests is that they show the more E2E style of testing achievable with this library. In order to test the BlazoredModal component fully, we need to make sure it interacts with other components in the right way. This is a very common scenario when building component-based UIs; testing in isolation here wouldn’t give us the same amount of confidence as testing these components together.

You can view all the tests mentioned in this post at the Blazored Modal repo.

Summary

That’s where we’re going to leave things. We’ve managed to get set up with Steve’s library and write some decent tests to check the functionality of Blazored Modal.

In this post, we’ve taken a look at how to test Blazor components using Steve Sanderson’s prototype testing library. We looked at how to get up and running with the library before using it to write some tests for the Blazored Modal component library.

I hope you’ve found this post useful and if you have any feedback for Steve about his prototype, then please head over to the repo and open an issue.

Getting Started with Blazor - Experts Panel Discussion

Recently I was asked to be part of a panel discussion on Blazor by Ed Charbeneau, Senior Developer Advocate at Progress. I’ll talk about Blazor at literally any opportunity so I, of course, said yes!

The panel was made up of some great people. First, there was Daniel Roth, Program Manager for Blazor at Microsoft. Next was Egil Hansen, a Managing Architect at Netcompany and active community member, who’s doing some excellent work on testing for Blazor. Then there was our host and moderator Ed, who’s also a fellow MVP. Finally, there was me.

We had a fantastic chat and covered some great topics, and I hope we’ve created an excellent resource for people to get to know a bit more about Blazor.

I want to say thanks to Ed for asking me to be part of this. I would also like to say thanks to everyone on the panel for the great conversation.

You can check out the full dialogue over on the Progress site.

Creating Bespoke Input Components for Blazor from Scratch

In my last post, we looked at how we could build custom input components on top of InputBase. Using InputBase saves us loads of extra work by managing all of the interactions with the EditForm component and the validation system, which is excellent.

However, sometimes, there are situations where we want or need to have a bit more control over how our input components behave. In this post, we are going to look at how we can build input components from scratch.

Why build from scratch?

When you consider all the functionality that we get from Blazor’s out-of-the-box input components, plus the ability to make customisations using the InputBase class that we looked at last time, why would we want to build input components from scratch?

The biggest reason I’ve found so far is the ability to use input components outside of an EditForm component. Any input component which uses InputBase has to be inside of an EditForm component; otherwise, an exception is thrown. That’s because of a check in the OnParametersSet method of InputBase. It checks for an EditContext which is cascaded down via the EditForm component.

if (CascadedEditContext == null)
{
    throw new InvalidOperationException($"{GetType()} requires a cascading parameter " + $"of type {nameof(Forms.EditContext)}. For example, you can use {GetType().FullName} inside " + $"an {nameof(EditForm)}.");
}

Building from scratch is also beneficial if you’re looking to have total control over how your component acts. By creating everything yourself, you’ll be able to tailor every detail to work precisely the way you want. For example, you could create an EditForm replacement, and that could require you to build custom input components.

Building from scratch

We’re going to build a simple text input component which can be used both inside and outside an EditForm component. Text inputs are useful inside a form for any text-based entry, but equally useful outside of one for things like search boxes, where validation isn’t necessarily a concern.

<input value="@Value" @oninput="HandleInput" />

@code {
    [Parameter] public string Value { get; set; }
    [Parameter] public EventCallback<string> ValueChanged { get; set; }

    private async Task HandleInput(ChangeEventArgs args)
    {
        await ValueChanged.InvokeAsync(args.Value.ToString());
    }
}

This is the basic setup of our CustomInputText component. We have set up a couple of parameters, Value and ValueChanged, which allow us to use Blazor’s bind directive when consuming the control. We’ve hooked onto the input control’s oninput event, and every time it fires, the HandleInput method invokes the ValueChanged EventCallback to update the value for the consumer.

Working as part of EditForm

For an input component to work with EditForm, it has to integrate with EditContext. EditContext is the brain of a Blazor form; it holds all of the metadata regarding the state of the form: whether a field has been modified, whether it’s valid, as well as a collection of all of the current validation messages.

EditContext is also responsible for raising events to signal that field values have been changed, or that an attempt has been made to submit the form. This triggers the validation aspect of the form.
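To illustrate those events outside of a component, you can create an EditContext directly and subscribe to its OnFieldChanged event. The Person model here is just an example:

```csharp
var person = new Person { Name = "Chris" };
var editContext = new EditContext(person);

// Raised whenever a field change is notified; validation hooks onto this
editContext.OnFieldChanged += (sender, args) =>
    Console.WriteLine($"{args.FieldIdentifier.FieldName} changed");

// Components signal changes like this, which is exactly what
// our custom input will do via NotifyFieldChanged
editContext.NotifyFieldChanged(editContext.Field(nameof(Person.Name)));

public class Person
{
    public string Name { get; set; }
}
```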

Integrating with EditContext

To integrate with EditContext, we need to add a CascadingParameter to our component requesting it; then we need to create a FieldIdentifier.

The FieldIdentifier class uniquely identifies a specific field or property in the form. To create an instance, we need to pass in an expression which identifies the field our component is handling. To get this expression, we can add another parameter to our component called ValueExpression. Blazor populates this expression for us based on a convention in a similar way to two-way binding using Value and ValueChanged.

[Parameter] public Expression<Func<string>> ValueExpression { get; set; }

Now that we have an expression, we can create an instance of FieldIdentifier; we’ll do this in the OnInitialized life cycle method.

protected override void OnInitialized()
{
    _fieldIdentifier = FieldIdentifier.Create(ValueExpression);
}

We need to tell the EditContext when the value of our field has been updated. This will trigger any validation logic that needs to run against our field. We do this by calling the NotifyFieldChanged method on the EditContext.

private async Task HandleInput(ChangeEventArgs args)
{
    await ValueChanged.InvokeAsync(args.Value.ToString());
    CascadedEditContext?.NotifyFieldChanged(_fieldIdentifier);
}

Once the field has been validated it will be marked as either valid or invalid. We can use this value to assign CSS classes to the component and style it appropriately. To access these values we can use the following code.

private string _fieldCssClasses => CascadedEditContext?.FieldCssClass(_fieldIdentifier) ?? "";

This is going to set the _fieldCssClasses field to some combination of modified, valid, or invalid, depending on the field’s current state.

The final component looks like this.

<input class="@_fieldCssClasses" value="@Value" @oninput="HandleInput" />

@code {

    private FieldIdentifier _fieldIdentifier;
    private string _fieldCssClasses => CascadedEditContext?.FieldCssClass(_fieldIdentifier) ?? "";

    [CascadingParameter] private EditContext CascadedEditContext { get; set; }

    [Parameter] public string Value { get; set; }
    [Parameter] public EventCallback<string> ValueChanged { get; set; }
    [Parameter] public Expression<Func<string>> ValueExpression { get; set; }

    protected override void OnInitialized()
    {
        _fieldIdentifier = FieldIdentifier.Create(ValueExpression);
    }

    private async Task HandleInput(ChangeEventArgs args)
    {
        await ValueChanged.InvokeAsync(args.Value.ToString());
        CascadedEditContext?.NotifyFieldChanged(_fieldIdentifier);
    }

}

Working without EditForm

Actually, we’ve already covered this one. You may have noticed in the last code snippet that we used the null-conditional operator (?.) when calling the NotifyFieldChanged method. This means that if the EditContext is null, the method simply won’t be called.

Why would the EditContext be null? It will be if the control isn’t inside an EditForm component. That simple check allows the control to work outside of an EditForm component without any issue.

What are the costs?

When we use this component without the EditForm component we will no longer be able to use the standard validation mechanisms. Depending on your use case this may or may not matter.

For the use cases I’ve had, such as site searches or date pickers, it doesn’t matter; I can use default values or simply don’t need validation. You could, of course, deal with this manually if you choose. You’re in complete control of the component, after all.

For example, we could add a Required parameter to the component. When this is true, we can check if there is an EditContext. If there isn’t, we can set a private variable to show an error message if the current value is empty.

<input class="@_fieldCssClasses" value="@Value" @oninput="HandleInput" />

@if (_showValidation)
{
    <div class="validation-message">You must provide a name</div>
}

@code {

    private FieldIdentifier _fieldIdentifier;
    private string _fieldCssClasses => CascadedEditContext?.FieldCssClass(_fieldIdentifier) ?? "";
    private bool _showValidation;

    [CascadingParameter] private EditContext CascadedEditContext { get; set; }

    [Parameter] public string Value { get; set; }
    [Parameter] public EventCallback<string> ValueChanged { get; set; }
    [Parameter] public Expression<Func<string>> ValueExpression { get; set; }
    [Parameter] public bool Required { get; set; }

    protected override void OnInitialized()
    {
        _fieldIdentifier = FieldIdentifier.Create(ValueExpression);
    }

    private async Task HandleInput(ChangeEventArgs args)
    {
        await ValueChanged.InvokeAsync(args.Value.ToString());

        if (CascadedEditContext != null)
        {
            CascadedEditContext.NotifyFieldChanged(_fieldIdentifier);
        }
        else if (Required)
        {
            _showValidation = string.IsNullOrWhiteSpace(args.Value.ToString());
        }
    }

}

If we don’t want the error message to be hardcoded, that’s cool too; we can add a parameter for the error message so that it can be passed in. The point here is that you can customise the behaviour as much as you like based on your needs.

Summary

In this post, we’ve looked at how we can build bespoke input components that work inside and outside of the EditForm component. We started by looking at why we would want to do this in the first place. Then we looked at how to integrate with the built-in forms and validation system of Blazor, as well as how to make the component work without that system. Finally, we talked about some of the trade-offs of working outside of EditForm.

Building Custom Input Components for Blazor using InputBase

Out of the box, Blazor gives us some great components to get building forms quickly and easily. The EditForm component allows us to manage forms, coordinating validation and submission events. There’s also a range of built-in input components which we can take advantage of: InputText, InputTextArea, InputNumber, InputSelect, InputDate, and InputCheckbox.

And of course, we wouldn’t get very far without being able to validate form input, and Blazor has us covered there as well. By default, Blazor uses the data annotations method for validating forms, which if you’ve had any experience developing ASP.NET MVC or Razor Page applications, will be quite familiar.
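As a quick reminder of what data annotations validation looks like, a model bound to an EditForm might be annotated like this (the Person model and its rules are illustrative):

```csharp
using System.ComponentModel.DataAnnotations;

public class Person
{
    [Required(ErrorMessage = "First name is required")]
    public string FirstName { get; set; }

    [Required]
    [StringLength(50, ErrorMessage = "Last name must be 50 characters or fewer")]
    public string LastName { get; set; }

    [Range(0, 130, ErrorMessage = "Age must be between 0 and 130")]
    public int Age { get; set; }
}
```

Adding a DataAnnotationsValidator component inside the EditForm wires these rules into the form’s validation.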

Out of the many things I love about Blazor, the ability to customise things which don’t quite suit your tastes or needs is one of my favourites! And forms are no exception. I’ve previously blogged about how you can swap out the default data annotations validation for FluentValidation. In this post, I’m going to show you how you can create your own input components using InputBase as a starting point.

Some issues when building real-world apps

The Blazor team have provided us with some great components to use out of the box that cover many scenarios. But when building real-world applications, we start to hit little problems and limitations.

Lots and lots of repeated code

Most applications, especially line of business applications, require quite a few forms. These often have a set style and layout throughout the application. When using the built-in input components, this means things can get verbose and repetitive quite quickly.

<EditForm Model="NewPerson" OnValidSubmit="HandleValidSubmit">
    <DataAnnotationsValidator />
    
    <div class="form-group">
        <label for="firstname">First Name</label>
        <InputText @bind-Value="NewPerson.FirstName" class="form-control" id="firstname" />
        <ValidationMessage For="@(() => NewPerson.FirstName)" />
    </div>
    
    <div class="form-group">
        <label for="lastname">Last Name</label>
        <InputText @bind-Value="NewPerson.LastName" class="form-control" id="lastname" />
        <ValidationMessage For="@(() => NewPerson.LastName)" />
    </div>
    
    <div class="form-group">
        <label for="occupation">Occupation</label>
        <InputText @bind-Value="NewPerson.Occupation" class="form-control" id="occupation" />
        <ValidationMessage For="@(() => NewPerson.Occupation)" />
    </div>
    
    <button type="submit">Save</button>
</EditForm>

The more significant issue, however, is maintenance. This code uses Bootstrap for layout and styling, but what happens if that changes and we move to a different CSS framework? Or if, for some reason, we stop using a CSS framework altogether and write our own CSS? We’d have to go everywhere we had a form in the application and update the code. Having experienced this first hand, I can safely say it isn’t fun.

A solution: Building custom input components

The approach my team and I have taken at work is to create custom input components which suit our applications’ needs. By doing this, we’ve greatly reduced the amount of code we write, while also making updates to styling and functionality much quicker and simpler.

All our form components can have an optional label, input control and validation message. If we didn’t use our custom components, the code would look like this.

<!-- Control with label -->
<div class="form-control-wrapper">
    <label class="form-control-label" for="catalogue">Catalogue</label>
    <InputText class="form-control" id="catalogue" @bind-Value="Form.Catalogue" />
    <div class="form-control-validation">
        <ValidationMessage For="@(() => Form.Catalogue)" />
    </div>
</div>

<!-- Control without label -->
<div class="form-control-wrapper">
    <InputText class="form-control" id="client" @bind-Value="Form.Client" />
    <div class="form-control-validation">
        <ValidationMessage For="@(() => Form.Client)" />
    </div>
</div>

But with our custom components the same functionality is achieved using far less code.

<!-- Control with label -->
<SwInputText Label="Catalogue" @bind-Value="Form.Catalogue" ValidationFor="@(() => Form.Catalogue)" />

<!-- Control without label -->
<SwInputText @bind-Value="Form.Client" ValidationFor="@(() => Form.Client)" />

Now if we want to update the styling of the SwInputText component, we can do it in one place, and the whole of our app is updated.

How do we do this?

All of the standard input components in Blazor inherit from a single base class called InputBase. This class handles all of the heavy lifting when it comes to validation by integrating with EditContext. It also manages the value binding boilerplate by exposing a Value parameter of type T. Hence, whenever you use one of the built-in form controls, you bind to it like this: @bind-Value="myForm.MyValue".

Building on InputBase

We didn’t want to recreate all the integration with the built-in form component. So we took InputBase as a starting point and built our own components on top of it. This is what the code looks like for our SwInputText component.

@using System.Linq.Expressions

@inherits InputBase<string>

<div class="form-control-wrapper">
    @if (!string.IsNullOrWhiteSpace(Label))
    {
        <label class="form-control-label" for="@Id">@Label</label>
    }
    <input class="form-control @CssClass" id="@Id" @bind="@CurrentValue" />
    <div class="form-control-validation">
        <ValidationMessage For="@ValidationFor" />
    </div>
</div>

@code {

    [Parameter, EditorRequired] public Expression<Func<string>> ValidationFor { get; set; } = default!;
    [Parameter] public string? Id { get; set; }
    [Parameter] public string? Label { get; set; }

    protected override bool TryParseValueFromString(string? value, out string result, out string? validationErrorMessage)
    {
        result = value ?? string.Empty;
        validationErrorMessage = null;
        return true;
    }
}

The SwInputText component inherits from InputBase and the only real work we have to do is provide an implementation for the TryParseValueFromString method and a few additional parameters.

Because all but one (InputCheckbox) of the built-in input components bind to string representations of the bound value internally, this method is required to convert the string value back to whatever the original type was. In our case, we’re only binding to strings, so it’s just a case of setting the result parameter to equal the value parameter and we’re done.

The majority of the effort has gone into the markup side of the component. This is where we’re encapsulating our UI design and the logic for showing a label or not.

Summary

In this post, we talked about the issue of maintenance and maintainability of forms in real-world Blazor applications. As a solution, we looked at building our own Input component, using InputBase as a starting point. This allowed us to encapsulate the UI design in a single place making future maintenance much easier.

Building a Custom Router for Blazor

In this post we are going to build a simple custom router component which will replace the default router Blazor ships with. I just want to say from the start that this isn’t meant to be an all-singing, all-dancing replacement for the default router. The default router is quite sophisticated, and replicating all that functionality is a bit much for a blog post. Hopefully though, this will give you an idea of what’s possible and maybe provide some inspiration.

To help guide things a bit, I want to set a few requirements for our new router, they are:

- It should be convention based, rather than using the @page directive
- It should be able to handle nested routes
- It should allow parameters to be passed via the query string (strings only)
- External links should continue to work
- It should reuse existing code from the default router where possible

All code from this post is available on GitHub

The Plan

With the requirements set, let’s start by creating a plan for how our new router will work.

The first requirement is that it should be convention based and not use the @page directive. In order to achieve this we are going to use namespaces to define a page component. Taking the default project as a base, we’ll assume any components in the ProjectName.Pages.* namespace are page components.

Taking this approach should also allow us to achieve the second requirement, handling nested routes. If a user requests https://coolblazorapp.com/admin/settings, we will look for a component called Settings.razor in the following namespace ProjectName.Pages.Admin.
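A rough sketch of how that lookup could work is below. The names and helper are illustrative, not the final implementation:

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class PageLocator
{
    // Maps a relative URI like "admin/settings" to a component type named
    // "{rootNamespace}.Pages.Admin.Settings" in the application assembly
    public static Type FindPageComponent(string relativeUri, Assembly appAssembly, string rootNamespace)
    {
        var path = relativeUri.Split('?')[0]; // ignore any query string
        var segments = path.Split('/', StringSplitOptions.RemoveEmptyEntries);

        // Capitalise each segment to match C# type and namespace naming conventions
        var typeName = rootNamespace + ".Pages." +
            string.Join(".", segments.Select(s => char.ToUpper(s[0]) + s.Substring(1)));

        // Returns null if no matching component exists, which the router
        // can treat as a "not found" result
        return appAssembly.GetType(typeName);
    }
}
```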

As we’ll be passing parameters via the query string and only be supporting strings, we’ll have to deal with type conversions somehow. We can do this by using the getter and setter on the target parameters to convert incoming values to the correct type. Not very pretty, but it should work for our scenario.
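As a sketch of that conversion approach, a page component could expose a string parameter backed by a typed field, converting in the setter. The ProductDetails component and its Id parameter are assumed for illustration:

```csharp
using Microsoft.AspNetCore.Components;

public class ProductDetails : ComponentBase
{
    private int _productId;

    // The router only supplies strings from the query string,
    // so we convert to the real type in the setter
    [Parameter]
    public string Id
    {
        get => _productId.ToString();
        set => _productId = int.TryParse(value, out var parsed) ? parsed : 0;
    }
}
```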

Allowing external links to work should come for free. If you read last week’s blog post, you’ll know why. Blazor’s JavaScript NavigationManager should handle this requirement for us.

We shouldn’t have to reinvent the wheel when it comes to rendering page components. Once we have located the correct page component, based on our convention, we should be able to render it using the same Found and NotFound template approach which is used in the default router. We should also be able to use the existing RouteView and LayoutView components as well. That takes care of our final requirement to reuse any existing code, if possible.

I think that’s everything covered, so let’s get into the code.

Creating The New Router

We’re going to start by creating the new router component, named ConventionRouter. This is going to be defined as a C# class, the same as the default router is. Here is the full code.

public class ConventionRouter : IComponent, IHandleAfterRender, IDisposable
{
    RenderHandle _renderHandle;
    bool _navigationInterceptionEnabled;
    string _location;

    [Inject] private NavigationManager NavigationManager { get; set; }
    [Inject] private INavigationInterception NavigationInterception { get; set; }
    [Inject] RouteManager RouteManager { get; set; }

    [Parameter] public RenderFragment NotFound { get; set; }
    [Parameter] public RenderFragment<RouteData> Found { get; set; }

    public void Attach(RenderHandle renderHandle)
    {
        _renderHandle = renderHandle;
        _location = NavigationManager.Uri;
        NavigationManager.LocationChanged += HandleLocationChanged;
    }

    public Task SetParametersAsync(ParameterView parameters)
    {
        parameters.SetParameterProperties(this);

        if (Found == null)
        {
            throw new InvalidOperationException($"The {nameof(ConventionRouter)} component requires a value for the parameter {nameof(Found)}.");
        }

        if (NotFound == null)
        {
            throw new InvalidOperationException($"The {nameof(ConventionRouter)} component requires a value for the parameter {nameof(NotFound)}.");
        }

        RouteManager.Initialise();
        Refresh();

        return Task.CompletedTask;
    }

    public Task OnAfterRenderAsync()
    {
        if (!_navigationInterceptionEnabled)
        {
            _navigationInterceptionEnabled = true;
            return NavigationInterception.EnableNavigationInterceptionAsync();
        }

        return Task.CompletedTask;
    }

    public void Dispose()
    {
        NavigationManager.LocationChanged -= HandleLocationChanged;
    }

    private void HandleLocationChanged(object sender, LocationChangedEventArgs args)
    {
        _location = args.Location;
        Refresh();
    }

    private void Refresh()
    {
        var relativeUri = NavigationManager.ToBaseRelativePath(_location);
        var parameters = ParseQueryString(relativeUri);

        if (relativeUri.IndexOf('?') > -1)
        {
            relativeUri = relativeUri.Substring(0, relativeUri.IndexOf('?'));
        }

        var segments = relativeUri.Trim().Split('/', StringSplitOptions.RemoveEmptyEntries);
        var matchResult = RouteManager.Match(segments);

        if (matchResult.IsMatch)
        {
            var routeData = new RouteData(
                matchResult.MatchedRoute.Handler,
                parameters);

            _renderHandle.Render(Found(routeData));
        }
        else
        {
            _renderHandle.Render(NotFound);
        }
    }

    private Dictionary<string, object> ParseQueryString(string uri)
    {
        var querystring = new Dictionary<string, object>();

        foreach (string kvp in uri.Substring(uri.IndexOf("?") + 1).Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries))
        {
            if (kvp != "" && kvp.Contains("="))
            {
                var pair = kvp.Split('=');
                // Decode the values and use the indexer so a repeated key doesn't throw
                querystring[Uri.UnescapeDataString(pair[0])] = Uri.UnescapeDataString(pair[1]);
            }
        }

        return querystring;
    }
}

Let’s work through this see how it works. We won’t cover every single method, as some of it’s self explanatory.

RenderHandle _renderHandle;
bool _navigationInterceptionEnabled;
string _location;

[Inject] private NavigationManager NavigationManager { get; set; }
[Inject] private INavigationInterception NavigationInterception { get; set; }
[Inject] RouteManager RouteManager { get; set; }

[Parameter] public RenderFragment NotFound { get; set; }
[Parameter] public RenderFragment<RouteData> Found { get; set; }

We start by defining some local members and injecting some services; we'll talk about these later. We also define two parameters, Found and NotFound, which are lifted straight from the default router.
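One thing to note: because RouteManager is injected, it needs to be registered with the DI container. The exact registration depends on your hosting model and template, but a sketch for a Blazor WebAssembly Program.cs would be:

```csharp
// Blazor WebAssembly (Program.cs). For Blazor Server, add the same registration
// in Startup.ConfigureServices. A singleton is a reasonable choice here, as
// RouteManager only holds the route table discovered at startup.
builder.Services.AddSingleton<RouteManager>();
```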

public void Attach(RenderHandle renderHandle)
{
    _renderHandle = renderHandle;
    _location = NavigationManager.Uri;
    NavigationManager.LocationChanged += HandleLocationChanged;
}

Next we have the Attach method. We have to implement this because we're implementing the IComponent interface. Normally this isn't something we'd need to care about, as it's dealt with in the ComponentBase class which most components inherit from.

This is a low-level method which attaches the component to a RenderHandle. The RenderHandle provides a link between the component and its renderer, allowing the component to be rendered.

Here we’re saving a reference to the RenderHandle as well as recording the current URI. We’re also registering a handler for the NavigationManager’s LocationChanged event. This handler updates the routers _location field with the new location. Then calls the Refresh method to update the view with the new page, if one is found.

public Task OnAfterRenderAsync()
{
    if (!_navigationInterceptionEnabled)
    {
        _navigationInterceptionEnabled = true;
        return NavigationInterception.EnableNavigationInterceptionAsync();
    }

    return Task.CompletedTask;
}

In the OnAfterRenderAsync method, we're setting up navigation interception. This instructs Blazor to intercept any link click events within the application. If you want to understand how all this works, I suggest reading my last post, which covered this in detail.

public Task SetParametersAsync(ParameterView parameters)
{
    parameters.SetParameterProperties(this);

    if (Found == null)
    {
        throw new InvalidOperationException($"The {nameof(ConventionRouter)} component requires a value for the parameter {nameof(Found)}.");
    }

    if (NotFound == null)
    {
        throw new InvalidOperationException($"The {nameof(ConventionRouter)} component requires a value for the parameter {nameof(NotFound)}.");
    }

    RouteManager.Initialise();
    Refresh();

    return Task.CompletedTask;
}

The SetParametersAsync method is also part of the IComponent interface. We’re doing some basic checks to make sure we have values for the Found and NotFound parameters.

We then call RouteManager.Initialise. We'll look at the RouteManager class in detail in the next section, but essentially it's going to go off and find all of the page components in our project and store them.

Finally, we call the Refresh method. Let’s check that out now.

private void Refresh()
{
    var relativeUri = NavigationManager.ToBaseRelativePath(_location);
    var parameters = ParseQueryString(relativeUri);

    if (relativeUri.IndexOf('?') > -1)
    {
        relativeUri = relativeUri.Substring(0, relativeUri.IndexOf('?'));
    }

    var segments = relativeUri.Trim().Split('/', StringSplitOptions.RemoveEmptyEntries);
    var matchResult = RouteManager.Match(segments);

    if (matchResult.IsMatch)
    {
        var routeData = new RouteData(
            matchResult.MatchedRoute.Handler,
            parameters);

        _renderHandle.Render(Found(routeData));
    }
    else
    {
        _renderHandle.Render(NotFound);
    }
}

Similar to the Refresh method on the default router, our version is going to look at the current URI and try to load the correct page component for it. If it can't find a matching page component, it will render the NotFound template.

We start by getting the relative URI and extracting any query string parameters. We store these so they can be passed to the matching page component, if one is found. Once this is complete, we remove the query string from the relative URI, if present. Then split the URI into segments removing any empty ones. The array of segments is then passed to the RouteManager’s Match method which will attempt to find a page component for that route.

A MatchResult is returned which indicates whether a match was found. If a match was found, the matching Route is included. The Route and any parameters found in the query string are then used to construct a RouteData object - this is the same RouteData object used by the default router implementation. The renderer is then instructed to render the Found template using the RouteData object, which results in the page component being displayed to the user.

If a match isn’t found then the renderer is instructed to render the NotFound template.

Finding Page Components With RouteManager

The RouteManager class is used to find page components when the application first starts up. It is also responsible for finding page components which match the requested route.

public class RouteManager
{
    public Route[] Routes { get; private set; }

    public void Initialise()
    {
        var pageComponentTypes = Assembly.GetExecutingAssembly()
                                         .ExportedTypes
                                         .Where(t => t.IsSubclassOf(typeof(ComponentBase))
                                                     && t.Namespace?.Contains(".Pages") == true);

        var routesList = new List<Route>();
        foreach (var pageType in pageComponentTypes)
        {
            var newRoute = new Route
            {
                UriSegments = pageType.FullName.Substring(pageType.FullName.IndexOf("Pages") + 6).Split('.'),
                Handler = pageType
            };

            routesList.Add(newRoute);
        }

        Routes = routesList.ToArray();
    }

    public MatchResult Match(string[] segments)
    {
        if (segments.Length == 0)
        {
            var indexRoute = Routes.SingleOrDefault(x => x.Handler.FullName.ToLower().EndsWith("index"));
            return MatchResult.Match(indexRoute);
        }

        foreach (var route in Routes)
        {
            var matchResult = route.Match(segments);

            if (matchResult.IsMatch)
            {
                return matchResult;
            }
        }

        return MatchResult.NoMatch();
    }
}

The Initialise method is called in the router's SetParametersAsync method, as we saw earlier. It uses some reflection to scan the current assembly and find any components with .Pages in their namespace, as per the convention we stated at the start.

Once we have the page components, we create a Route for each one. We break the full name into segments, which we'll use to compare against the requested route. We also store the handler for the route, which is the type of the component. Once all of the Routes are created they're stored as an array on the RouteManager.
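To make the segment extraction concrete, here's what that Substring and Split logic produces for a hypothetical component:

```csharp
var fullName = "ProjectName.Pages.Admin.Settings"; // a hypothetical page component

// IndexOf("Pages") + 6 skips past "Pages." itself,
// leaving "Admin.Settings" to be split on '.'.
var uriSegments = fullName.Substring(fullName.IndexOf("Pages") + 6).Split('.');

// uriSegments is now ["Admin", "Settings"], which matches /admin/settings
```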

The Route class looks like this.

public class Route
{
    public string[] UriSegments { get; set; }
    public Type Handler { get; set; }

    public MatchResult Match(string[] segments)
    {
        if (segments.Length != UriSegments.Length)
        {
            return MatchResult.NoMatch();
        }

        for (var i = 0; i < UriSegments.Length; i++)
        {
            if (string.Compare(segments[i], UriSegments[i], StringComparison.OrdinalIgnoreCase) != 0)
            {
                return MatchResult.NoMatch();
            }
        }

        return MatchResult.Match(this);
    }
}

It’s Match method is the most interesting part. It starts by checking if the number of segments in the requested route matches the number of segments it has. If that’s not the case then a NotMatch result is returned. It then loops over each of it’s segments and compares them to the segments passed in. If they all match then a Match result is returned, if they don’t, then a NoMatch result is returned.

Back to the RouteManager and its Match method.

public MatchResult Match(string[] segments)
{
    if (segments.Length == 0)
    {
        var indexRoute = Routes.SingleOrDefault(x => x.Handler.FullName.ToLower().EndsWith("index"));
        return MatchResult.Match(indexRoute);
    }

    foreach (var route in Routes)
    {
        var matchResult = route.Match(segments);

        if (matchResult.IsMatch)
        {
            return matchResult;
        }
    }

    return MatchResult.NoMatch();
}

Match is called by the router's Refresh method. Its job is to find a page component which matches the requested route. It starts by checking if the segments length is zero. If it is, we'll assume the request is for the root page, https://mycoolblazorapp.com/ for example. By convention, we'll look for a page component called Index.razor and return the MatchResult.

Otherwise, we’ll loop over each route we have stored on the RouteManager, calling it’s Match method. If a match is found, then we’ll return it. If we get through all the routes and a match isn’t found then we return a NoMatch result.

This is what the MatchResult class looks like.

public class MatchResult
{
    public bool IsMatch { get; set; }
    public Route MatchedRoute { get; set; }

    public MatchResult(bool isMatch, Route matchedRoute)
    {
        IsMatch = isMatch;
        MatchedRoute = matchedRoute;
    }

    public static MatchResult Match(Route matchedRoute)
    {
        return new MatchResult(true, matchedRoute);
    }

    public static MatchResult NoMatch()
    {
        return new MatchResult(false, null);
    }
}

This is a simple class which gives us a consistent way of returning the result of a route match.

Summary

I think that about wraps things up. We've built a new router to replace the existing default router. It works on a convention basis and, while it's nowhere near as feature-rich and flexible as the default one, we've managed to hit all of the requirements set out at the start of the post.

I think it’s really cool that Blazor has been built in such as way that we can easily replace parts as we choose. If you would like to see another, and far more sophisticated, example of a custom router for Blazor. I would recommend checking out this post by Shaun Walker who’s building an open source CMS using Blazor, called Oqtane.

An In-depth Look at Routing in Blazor

In this post, I want to build on my last post, Introduction to Routing in Blazor, and take a deep dive into the nuts and bolts of routing in Blazor.

We’re going to look at each part of Blazor’s routing model in detail, starting in the JavaScript world where navigation events are picked up. And following the code over the divide to the C# world, to the point of rendering either the correct page or the not found template.

Intercepting navigation events with NavigationManager (JavaScript)

We’re going to start off looking at the NavigationManager service. But this isn’t the NavigationManager we’re used to interacting with in our C# code, this is the JavaScript version.

Blazor uses something called an EventDelegator to manage the various events produced by DOM elements. This service exposes a function called notifyAfterClick, which the NavigationManager hooks into in order to intercept navigation link click events. When a navigation link click event occurs the following code is run.

if (!hasEnabledNavigationInterception) {
  return;
}

if (event.button !== 0 || eventHasSpecialKey(event)) {
  return;
}

if (event.defaultPrevented) {
  return;
}

const anchorTarget = findClosestAncestor(event.target as Element | null, 'A') as HTMLAnchorElement | null;
const hrefAttributeName = 'href';
if (anchorTarget && anchorTarget.hasAttribute(hrefAttributeName)) {
  const targetAttributeValue = anchorTarget.getAttribute('target');
  const opensInSameFrame = !targetAttributeValue || targetAttributeValue === '_self';
  if (!opensInSameFrame) {
    return;
  }

  const href = anchorTarget.getAttribute(hrefAttributeName)!;
  const absoluteHref = toAbsoluteUri(href);

  if (isWithinBaseUriSpace(absoluteHref)) {
    event.preventDefault();
    performInternalNavigation(absoluteHref, true);
  }
}

We’re going to break this code down a piece at a time so we can understand it.

First there are some checks being made before anything more invasive is done.

if (!hasEnabledNavigationInterception) {
  return;
}

if (event.button !== 0 || eventHasSpecialKey(event)) {
  // Don't stop ctrl/meta-click (etc) from opening links in new tabs/windows
  return;
}

if (event.defaultPrevented) {
  return;
}

The first check is to see if navigation interception has been enabled - this gets enabled by Blazor's router component during its OnAfterRenderAsync life-cycle method.

Then there’s a check to see if the link was clicked with a modifier key being held - for example, holding ctrl when clicking a link will open the link in a new tab. If a modifier was being held, then the event is allowed to continue normally and open in a new tab. Finally, a check is made to see if the event has had its default behaviour prevented already.

Determining internal navigation

const anchorTarget = findClosestAncestor(event.target as Element | null, 'A') as HTMLAnchorElement | null;
const hrefAttributeName = 'href';

if (anchorTarget && anchorTarget.hasAttribute(hrefAttributeName)) {
  const targetAttributeValue = anchorTarget.getAttribute('target');
  const opensInSameFrame = !targetAttributeValue || targetAttributeValue === '_self';
  
  if (!opensInSameFrame) {
    return;
  }

  const href = anchorTarget.getAttribute(hrefAttributeName)!;
  const absoluteHref = toAbsoluteUri(href);

  if (isWithinBaseUriSpace(absoluteHref)) {
    event.preventDefault();
    performInternalNavigation(absoluteHref, true);
  }
}

The next section of code checks if the target of the click was an <a> tag, and if it was, that it has an href attribute. If either of these checks fail then the event will be allowed to continue as normal.

Next, a check happens to decide if the link should be opened in the same frame (tab) or not. If not, then again, the event is allowed to continue as normal.

Finally, the value of the href attribute is converted to an absolute URI - if it isn't one already. It's then checked to see if it falls within the scope of the base URI. This is set in the <head> tag of either index.html (Blazor WebAssembly) or _Host.cshtml (Blazor Server) using the <base> element.
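For reference, the element in question is the standard template default, which looks like this:

```html
<!-- In index.html (Blazor WebAssembly) or _Host.cshtml (Blazor Server) -->
<base href="/" />
```

Any absolute URI that starts with this base href is considered to be inside the app.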

If the link falls within the scope of the base element, then it’s considered internal navigation. The performInternalNavigation function is called, passing the absolute URI and a boolean value to indicate it was intercepted.

Simulating browser navigation

function performInternalNavigation(absoluteInternalHref: string, interceptedLink: boolean) {
  resetScrollAfterNextBatch();

  history.pushState(null, /* ignored title */ '', absoluteInternalHref);
  notifyLocationChanged(interceptedLink);
}

The first call, resetScrollAfterNextBatch isn’t of much interest to us. It stops unwanted flickering when resetting the scroll position during navigation. But the next part is more interesting.

The new location is pushed into the browser's history. This is what allows the forward and back buttons to function as they would in a traditional web app. By adding the new location to the browser's history it's simulating traditional app navigation. Another important job this call performs is updating the URL in the browser's address bar.

At the end, the notifyLocationChanged function is called.

The gateway to C#

async function notifyLocationChanged(interceptedLink: boolean) {
  if (notifyLocationChangedCallback) {
    await notifyLocationChangedCallback(location.href, interceptedLink);
  }
}

The final step before we head into the C# world is the notifyLocationChanged function above. This function checks if there is a notifyLocationChangedCallback and then invokes it, passing the location and whether the link was intercepted.

But where does the notifyLocationChangedCallback come from? Well, that depends.

If we’re running on WebAssembly then the callback is registered during the application startup in Boot.WebAssembly.ts.

// Configure navigation via JS Interop
window['Blazor']._internal.navigationManager.listenForNavigationEvents(async (uri: string, intercepted: boolean): Promise<void> => {
    await DotNet.invokeMethodAsync(
      'Microsoft.AspNetCore.Blazor',
      'NotifyLocationChanged',
      uri,
      intercepted
    );
});

If we’re running on .NET Core (Blazor Server) then the callback is registered in Boot.Server.ts.

// Configure navigation via SignalR
window['Blazor']._internal.navigationManager.listenForNavigationEvents((uri: string, intercepted: boolean): Promise<void> => {
    return connection.send('OnLocationChanged', uri, intercepted);
});

This leads us into the C# side of things and what responds to the location changed event.

The C# version of NavigationManager listens for the location changed event. But the NavigationManager class is abstract. There are actually two implementations: one for Blazor Server called RemoteNavigationManager, and one for Blazor WebAssembly called WebAssemblyNavigationManager.

The NavigationManager class performs lots of useful operations, but right now we're only interested in the LocationChanged event. This event gets invoked from different places depending on whether we're in a Blazor WebAssembly or Blazor Server application.

Blazor WebAssembly

When the NotifyLocationChanged event is invoked from the JS world it enters the C# world via a class called JSInteropMethods.

public static class JSInteropMethods
{
    /// <summary>
    /// For framework use only.
    /// </summary>
    [JSInvokable(nameof(NotifyLocationChanged))]
    public static void NotifyLocationChanged(string uri, bool isInterceptedLink)
    {
        WebAssemblyNavigationManager.Instance.SetLocation(uri, isInterceptedLink);
    }
}

The NotifyLocationChanged method calls the SetLocation method on the WebAssemblyNavigationManager which looks like this.

public void SetLocation(string uri, bool isInterceptedLink)
{
    Uri = uri;
    NotifyLocationChanged(isInterceptedLink);
}

This method records the new URI and calls the NotifyLocationChanged method on the base NavigationManager - this method invokes an event called LocationChanged.

Blazor Server

In this version the NotifyLocationChanged event enters the C# world via the ComponentHub’s OnLocationChanged method.

public async ValueTask OnLocationChanged(string uri, bool intercepted)
{
    var circuitHost = await GetActiveCircuitAsync();
    if (circuitHost == null)
    {
        return;
    }

    _ = circuitHost.OnLocationChangedAsync(uri, intercepted);
}

This method calls the CircuitHost’s OnLocationChangedAsync method.

public async Task OnLocationChangedAsync(string uri, bool intercepted)
{
    AssertInitialized();
    AssertNotDisposed();

    try
    {
        await Renderer.Dispatcher.InvokeAsync(() =>
        {
            Log.LocationChange(_logger, uri, CircuitId);
            var navigationManager = (RemoteNavigationManager)Services.GetRequiredService<NavigationManager>();
            navigationManager.NotifyLocationChanged(uri, intercepted);
            Log.LocationChangeSucceeded(_logger, uri, CircuitId);
        });
    }
    
    // Remaining code omitted for brevity
}

The interesting part for us is in the try block. Essentially, an instance of the RemoteNavigationManager is being retrieved from the DI container and then its NotifyLocationChanged method is called.

public void NotifyLocationChanged(string uri, bool intercepted)
{
    Log.ReceivedLocationChangedNotification(_logger, uri, intercepted);

    Uri = uri;
    NotifyLocationChanged(intercepted);
}

In much the same way as the WebAssemblyNavigationManager, the new URI is recorded and the NotifyLocationChanged method on the base NavigationManager is called.

But what’s listening?

Technically, it could be a few things. The NavigationManager’s LocationChanged event is public for anyone to handle after all. But what we’re interested in is Blazor’s Router component.
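As a quick illustration before we get to the router, here's a minimal sketch of one of our own components handling that same event (the names are mine, and I'm assuming the Routing namespace isn't already in _Imports.razor):

```razor
@using Microsoft.AspNetCore.Components.Routing
@inject NavigationManager NavigationManager
@implements IDisposable

<p>You are currently at: @_currentUri</p>

@code {
    private string _currentUri;

    protected override void OnInitialized()
    {
        _currentUri = NavigationManager.Uri;
        NavigationManager.LocationChanged += HandleLocationChanged;
    }

    private void HandleLocationChanged(object sender, LocationChangedEventArgs args)
    {
        _currentUri = args.Location;
        StateHasChanged();
    }

    // Unsubscribe to avoid leaking the handler, just as the router does in Dispose.
    public void Dispose()
        => NavigationManager.LocationChanged -= HandleLocationChanged;
}
```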

The Router Component

When the router is initialised it registers a handler for the LocationChanged event, which looks like this.

private void OnLocationChanged(object sender, LocationChangedEventArgs args)
{
    _locationAbsolute = args.Location;
    if (_renderHandle.IsInitialized && Routes != null)
    {
        Refresh(args.IsNavigationIntercepted);
    }
}

But in order for the router to function it needs to know what components to load for a particular URI, or route. How does it do this?

Finding Page Components

We looked at the parameters the router accepts in the last post. The router accepts a parameter called AppAssembly, which is required. It also accepts another optional parameter, AdditionalAssemblies. The router passes these assemblies to a class called RouteTableFactory via its Create method.

public static RouteTable Create(IEnumerable<Assembly> assemblies)
{
    var key = new Key(assemblies.OrderBy(a => a.FullName).ToArray());
    if (Cache.TryGetValue(key, out var resolvedComponents))
    {
        return resolvedComponents;
    }

    var componentTypes = key.Assemblies.SelectMany(a => a.ExportedTypes.Where(t => typeof(IComponent).IsAssignableFrom(t)));
    var routeTable = Create(componentTypes);
    Cache.TryAdd(key, routeTable);
    return routeTable;
}

This method loops over each assembly and pulls out any types which implement IComponent. It then passes them to an internal version of Create for further processing.

internal static RouteTable Create(IEnumerable<Type> componentTypes)
{
    var templatesByHandler = new Dictionary<Type, string[]>();
    foreach (var componentType in componentTypes)
    {
        var routeAttributes = componentType.GetCustomAttributes<RouteAttribute>(inherit: false);

        var templates = routeAttributes.Select(t => t.Template).ToArray();
        templatesByHandler.Add(componentType, templates);
    }
    return Create(templatesByHandler);
}

This next method loops over each component and extracts any RouteAttributes. It then selects the template for each route. A template is what's inside the quotes of a @page directive - @page "/my/route/template", for example.
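A component can declare more than one template. For instance (routes hypothetical):

```razor
@page "/products"
@page "/catalogue"

<h1>Products</h1>
```

Both /products and /catalogue would end up in the route table pointing at this same component.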

It then adds the component type and its templates (there can be more than one @page directive on a component) to a dictionary, which is passed to the final overload of Create.

internal static RouteTable Create(Dictionary<Type, string[]> templatesByHandler)
{
    var routes = new List<RouteEntry>();
    foreach (var keyValuePair in templatesByHandler)
    {
        var parsedTemplates = keyValuePair.Value.Select(v => TemplateParser.ParseTemplate(v)).ToArray();
        var allRouteParameterNames = parsedTemplates
            .SelectMany(GetParameterNames)
            .Distinct(StringComparer.OrdinalIgnoreCase)
            .ToArray();

        foreach (var parsedTemplate in parsedTemplates)
        {
            var unusedRouteParameterNames = allRouteParameterNames
                .Except(GetParameterNames(parsedTemplate), StringComparer.OrdinalIgnoreCase)
                .ToArray();
            var entry = new RouteEntry(parsedTemplate, keyValuePair.Key, unusedRouteParameterNames);
            routes.Add(entry);
        }
    }

    return new RouteTable(routes.OrderBy(id => id, RoutePrecedence).ToArray());
}

In this last method, a RouteTable is constructed, which is what will be used later by the Router to know which components to load for a given URI.

Essentially, this method does some housekeeping to remove any duplication, check that templates are valid, and so on, before constructing a RouteEntry, which holds the route template, the component type and any unused route parameters. Finally, a new RouteTable is returned.

The router then stores the returned RouteTable so it can use it for route lookups during LocationChanged events.

Loading Page Components

We now understand how the Router knows where to find the correct components for a given route. So let’s get back to the OnLocationChanged method.

private void OnLocationChanged(object sender, LocationChangedEventArgs args)
{
    _locationAbsolute = args.Location;
    if (_renderHandle.IsInitialized && Routes != null)
    {
        Refresh(args.IsNavigationIntercepted);
    }
}

In the code above the router stores the new URI and then performs some checks, one of which is checking that it has a RouteTable. If everything is present and correct, the Refresh method is called.

private void Refresh(bool isNavigationIntercepted)
{
    var locationPath = NavigationManager.ToBaseRelativePath(_locationAbsolute);
    locationPath = StringUntilAny(locationPath, _queryOrHashStartChar);
    var context = new RouteContext(locationPath);
    Routes.Route(context);

    if (context.Handler != null)
    {
        if (!typeof(IComponent).IsAssignableFrom(context.Handler))
        {
            throw new InvalidOperationException($"The type {context.Handler.FullName} " +
                $"does not implement {typeof(IComponent).FullName}.");
        }

        Log.NavigatingToComponent(_logger, context.Handler, locationPath, _baseUri);

        var routeData = new RouteData(
            context.Handler,
            context.Parameters ?? _emptyParametersDictionary);
        _renderHandle.Render(Found(routeData));
    }
    else
    {
        if (!isNavigationIntercepted)
        {
            Log.DisplayingNotFound(_logger, locationPath, _baseUri);
            _renderHandle.Render(NotFound);
        }
        else
        {
            Log.NavigatingToExternalUri(_logger, _locationAbsolute, locationPath, _baseUri);
            NavigationManager.NavigateTo(_locationAbsolute, forceLoad: true);
        }
    }
}

We’ll work through the code a piece at a time to understand what’s going on.

var locationPath = NavigationManager.ToBaseRelativePath(_locationAbsolute);
locationPath = StringUntilAny(locationPath, _queryOrHashStartChar);
var context = new RouteContext(locationPath);
Routes.Route(context);

The code above is converting the current URL to a relative URL, then stripping off any querystrings (?name=chris) or hash strings (#my-div). Then a new RouteContext is created using the remaining path.

A RouteContext takes the string provided and splits it on each / into segments. Finally, the Route method is called on the routing table.

Inside the Route method, each route in the routing table is checked to see if it matches the route in the RouteContext being passed in. This is done by calling the Match method on each RouteEntry.

internal void Match(RouteContext context)
{
    if (Template.Segments.Length != context.Segments.Length)
    {
        return;
    }

    // Parameters will be lazily initialized.
    IDictionary<string, object> parameters = null;
    for (int i = 0; i < Template.Segments.Length; i++)
    {
        var segment = Template.Segments[i];
        var pathSegment = context.Segments[i];
        if (!segment.Match(pathSegment, out var matchedParameterValue))
        {
            return;
        }
        else
        {
            if (segment.IsParameter)
            {
                GetParameters()[segment.Value] = matchedParameterValue;
            }
        }
    }

    context.Parameters = parameters;
    context.Handler = Handler;

    IDictionary<string, object> GetParameters()
    {
        if (parameters == null)
        {
            parameters = new Dictionary<string, object>();
        }

        return parameters;
    }
}

The Match method first checks to see if the number of segments in the routes are the same. If that succeeds, then each route segment is checked individually to ensure a match.

If the segment on the RouteEntry is marked as a parameter, then the value for that segment on the RouteContext is added to a parameters collection. Once each segment has been checked, any parameters are added to the RouteContext along with the Handler for that route, which is the component type.
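To connect this back to everyday usage, a parameter segment in a template flows into a component parameter of the same name. A hypothetical example:

```razor
@page "/products/{Id}"

<h1>Product @Id</h1>

@code {
    // Populated from the matched route segment via the RouteData parameters.
    [Parameter]
    public string Id { get; set; }
}
```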

A match was found - load the page!

if (context.Handler != null)
{
    if (!typeof(IComponent).IsAssignableFrom(context.Handler))
    {
        throw new InvalidOperationException($"The type {context.Handler.FullName} " +
            $"does not implement {typeof(IComponent).FullName}.");
    }

    Log.NavigatingToComponent(_logger, context.Handler, locationPath, _baseUri);

    var routeData = new RouteData(
        context.Handler,
        context.Parameters ?? _emptyParametersDictionary);
    _renderHandle.Render(Found(routeData));
}

This executes if a handler was assigned, i.e. a match was found for the route. A final check is made to make sure the handler component definitely implements IComponent. If that passes, then a RouteData object is constructed with the handler and any parameters which need to be passed to it.

A render is then queued which will use the Found template with the route data. This will then render the correct page component and supply it with any necessary parameters.

No match found - Load not found template

else
{
    if (!isNavigationIntercepted)
    {
        Log.DisplayingNotFound(_logger, locationPath, _baseUri);
        _renderHandle.Render(NotFound);
    }
    else
    {
        Log.NavigatingToExternalUri(_logger, _locationAbsolute, locationPath, _baseUri);
        NavigationManager.NavigateTo(_locationAbsolute, forceLoad: true);
    }
}

First, a check is made to see if the navigation was intercepted. If it wasn’t intercepted, the navigation can only have been triggered programmatically, so the NotFound template is queued to be rendered.

If it was intercepted, then a browser reload is forced to the new location. The main scenario for this is linking to another page on the same domain which isn’t a Blazor component, for example, a standard HTML page, a Razor Page, or an MVC view.

Summary

That’s it! We’ve reached the end of the journey. We’ve followed the flow of navigation events from the source, starting in the JavaScript world all the way to the point of rendering either the correct page component or the not found template.

I hope you’ve found this post interesting, I’ve certainly learned a lot about how the mechanics of client-side routers work writing this post. Next time, we’ll have a go at writing our own router and replacing the default implementation.

Introduction to Routing in Blazor

Most of us probably don’t spend much time thinking about the mechanics of navigating between pages in web applications. Which is fair enough. After all, when navigation is done right and works, you shouldn’t notice it. But a few years ago there was a big change in how navigation was performed - when single page applications, also known as SPAs, came on the scene…

This is the first post, of what I’m sure will turn into a few, looking at how routing and navigation works in Blazor.

Traditional Navigation

Let’s start by understanding the traditional model for navigating between pages in web applications.

In order to load a web page we first need to make an HTTP GET request to the web server, for example by entering chrissainty.com into the address bar of a web browser. The web server is then going to respond and send us back some content such as HTML, CSS, JavaScript, etc…

The browser will parse all of the content and if all is well, load the web page. This is a simplification of the actual process, but you get the idea.

We now have a page we can view and interact with, so what happens when we want to move to another page? Most often, we click on a link to the new page which starts the whole process again. The browser makes a new request to the server and a new page and its assets are sent back and displayed.

It’s only right to point out that there are some slight changes this time. For example, some of the CSS and JavaScript might have been cached so they won’t be downloaded again. But essentially the process is the same.

In a nutshell, that’s the basics of navigation in traditional web apps. So what about SPA applications? After all, how do you navigate between pages when there is only ever one page?

SPA Navigation

The obvious point to make right from the start is that we’re not navigating between physical pages in SPA applications, as we do in traditional web apps.

What we’re doing would be better described as virtual navigation. This is achieved by dynamically adding and removing content from the DOM (document object model) depending on the route that has been requested. This is most commonly handled by some form of router provided by the particular SPA framework.

Other manipulations are also utilised to further reinforce the illusion of navigation, such as manipulating the browser’s navigation history so the forward and back buttons function as expected.

So what does this look like?

When initially loading a site, not much different to traditional web apps. We make the HTTP GET request and then download the HTML, CSS, JavaScript and any other static assets.

The difference shows when we click on a link to move to a different area of the site.

This time the payload we get back is a little different, it’s just data. Why data? It’s because SPA applications tend to download the whole application when the site is first loaded, so everything is already there. When changing between pages the only additional content that is required is the data to be displayed. In fact, depending on the page, it’s perfectly possible that a page in a SPA application might not need to make any additional request to the server at all.

I appreciate this has all been quite general and high level so far, but I wanted to cover the basics first. So let’s get into specifics and talk a bit about how routing and navigation work in Blazor.

Routing in Blazor

Similar to other SPA frameworks, Blazor has a router which is responsible for performing this virtual navigation. Blazor’s router is actually a component and you can find it in the App.razor component - the default implementation looks like this.

<Router AppAssembly="typeof(Startup).Assembly">
    <Found Context="routeData">
        <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
    </Found>
    <NotFound>
        <LayoutView Layout="@typeof(MainLayout)">
            <p>Sorry, there's nothing at this address.</p>
        </LayoutView>
    </NotFound>
</Router>

Discovering Page Components

In order to be able to route to different pages the router has to know what components to load for a given route. This is achieved by passing in an AppAssembly to the router. The provided assembly will be scanned when the application boots up in order to discover any components declaring a route via the @page directive.

For example, a component which had the following code declared would be loaded by the router if a request was made for mysite.com/about.

@page "/about"

I like to refer to these components as page components as opposed to regular components. The official docs also use the term routable components.
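A page component isn’t limited to a single route either; multiple @page directives can be stacked so the same component matches several routes. A minimal sketch, with the route values purely illustrative:

```razor
@page "/about"
@page "/about-us"

<h1>About</h1>
```

A request for either /about or /about-us would render this same component.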

You can also specify additional assemblies to be scanned by using the AdditionalAssemblies parameter, which takes an IEnumerable<System.Reflection.Assembly>. This is really useful if you want to include page components from other Razor Class Libraries.
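As a sketch, the router declaration with an extra assembly might look something like the following, where ComponentFromLibrary is a hypothetical component defined in a referenced Razor Class Library:

```razor
<Router AppAssembly="typeof(Startup).Assembly"
        AdditionalAssemblies="new[] { typeof(ComponentFromLibrary).Assembly }">
    @* Found and NotFound templates as shown earlier *@
</Router>
```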

Found and NotFound Templates

Two template parameters must also be specified when declaring the router component, Found and NotFound.

The Found template is used when the router finds a page component which matches the requested route. Inside the Found template is the RouteView component. This component is responsible for rendering the correct page component based on the component type passed to it by the router via the RouteData.

<RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />

If the page component has a layout specified then the RouteView will make sure that is rendered as well. It does this by using the LayoutView component under the covers. A default layout can also be declared, as demonstrated in the code above. If the page component being rendered does not specify a layout then it will be rendered using the default one.

When the router is unable to find a page component which matches the requested route the NotFound template is used. By default, this shows a simple p tag with a friendly message nested inside of the LayoutView component. As you can probably guess, the LayoutView component is responsible for rendering the specified layout component. If you prefer, you could use a component instead of the p tag.

<NotFound>
    <LayoutView Layout="@typeof(MainLayout)">
        <MyNotFoundComponent />
    </LayoutView>
</NotFound>

Handling Navigation

Now we know how to configure the required parts of the router, let’s wrap this post up by covering how it actually handles navigation. This will still be quite high level, as I’ll deep dive into the process in future posts.

When a link is clicked, that event is intercepted by Blazor, in the JavaScript world. This event is then passed into the C# world to the NavigationManager class. This in turn fires an event, with some metadata, which the router component is listening for.

The router handles this event and uses the data supplied to check for any page components which match the requested route. If the router finds a match it will render the Found template we looked at, passing it the RouteData - which contains the type of component to render and any parameters it requires. If a match couldn’t be found then the router will render the NotFound template.
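You can observe the same event the router listens for by injecting NavigationManager into a component and subscribing to LocationChanged. This is just an illustrative sketch, not something the router requires you to do:

```razor
@using Microsoft.AspNetCore.Components.Routing
@inject NavigationManager NavigationManager
@implements IDisposable

@code {
    protected override void OnInitialized()
        => NavigationManager.LocationChanged += OnLocationChanged;

    private void OnLocationChanged(object sender, LocationChangedEventArgs e)
    {
        // e.Location is the new URI; IsNavigationIntercepted tells us whether
        // the navigation came from an intercepted link click or was programmatic.
        Console.WriteLine($"Navigated to {e.Location} (intercepted: {e.IsNavigationIntercepted})");
    }

    public void Dispose()
        => NavigationManager.LocationChanged -= OnLocationChanged;
}
```

Note the unsubscribe in Dispose, without it the component would leak its handler for the lifetime of the circuit.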

There is one other scenario which I haven’t mentioned yet: how does the router know when to intercept a navigation event and when not to? This is controlled by the <base href> tag defined in the _Host.cshtml or index.html. If a clicked link has an href which falls within the base href then Blazor will intercept the event. If it doesn’t, then Blazor will trigger a normal navigation.
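For reference, the tag sits in the head of the host page. For an app served from the site root it is simply:

```html
<head>
    <!-- other head content -->
    <base href="/" />
</head>
```

If the app were hosted under a sub-path, say a hypothetical /myapp/, the value would be "/myapp/" instead, and the trailing slash matters.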

Summary

We’ll leave things there for this post. Next time we will dive into the detail of what is going on behind the scenes and understand each part of the navigation and routing process in Blazor.

In this post, we have taken a preliminary look at routing in Blazor. We started by covering off the basics, understanding how navigation happens in traditional web applications vs SPA applications. We then looked at Blazor specifically, covering the default router setup and a high-level overview of how Blazor handles a navigation event.

Blazor Roundup From .NET Conf 2019

.NET Conf has been and gone for another year, but this one was particularly special. .NET Core 3 was officially released and with it comes the first official version of Blazor!

Personally, I’m so excited to see this happen. I found Blazor in early March 2018, a few weeks before the first preview (0.1.0) was released. I immediately became a fan, and it’s fair to say, Blazor changed my life.

It inspired me to focus on this blog and to get more involved in the open source community. A year and a half later and I’ve authored more than 40 blog posts on Blazor, I’ve written about Blazor for both Visual Studio Magazine and Progress Telerik, I’ve created several open source packages for Blazor with over 20,000 downloads and started speaking. Not to mention being awarded as a Microsoft MVP! I have a lot to thank Blazor and the Blazor community for.

Finally, I’m so happy for everyone on the Blazor team, they have done a fantastic job getting to this point and there are so many more great things to come. Thank you for all the work you’ve done.

Anyway, let’s move on and talk about all the Blazor news and content from .NET Conf 2019!

Blazor Server is now released

This is the one we have all been waiting for, Blazor Server was officially released with .NET Core 3 during the keynote from Scott Hunter. I know there are a lot of people who are waiting for Blazor WebAssembly but this is a great first step for Blazor as a framework.

I’ve noticed a big spike in the number of people talking about Blazor since .NET Conf and having the framework out there as a production supported product, I think, makes a big difference to its perception.

Blazor WebAssembly - Release date announced

Something a lot of us have been hoping to hear for a while was also announced - The release date for Blazor WebAssembly. There has been a lot of speculation in the community about when it might be production ready. Daniel Roth said back at Build that, unofficially, he felt we might see it around Q1/Q2 2020.

Well, the speculation is now over and Blazor WebAssembly is expected in May 2020! Which should put it right around Build time next year.

It’s great to finally have a due date for what a lot of people consider to be the main draw of Blazor. We should start seeing a lot more development on it as well, especially once .NET Core 3.1 is released in November.

I think it’s going to be an exciting time in the build up to May!

New Blazor e-book from Microsoft

Microsoft has released a new e-book, Blazor for ASP.NET Web Forms Developers. The free publication focuses on introducing Blazor and its concepts to ASP.NET Web Forms developers. It also has advice on how to plan a migration from Web Forms to Blazor.

Blazor is a good migration path for legacy Web Forms applications and Microsoft want to help out devs looking to take that journey as much as possible.

Here is a little summary of the e-book from the Microsoft site:

This book is for ASP.NET Web Forms developers looking for an introduction to Blazor that relates to their existing knowledge and skills. This book can help with quickly getting started on a new Blazor-based project or to help chart a roadmap for modernizing an existing ASP.NET Web Forms application.

Blazor Talks

There were 3 great Blazor talks delivered during the online event. Two came from Daniel Roth - Program Manager on the ASP.NET team. And one came from Jeff Hollan - Principal PM for Azure Serverless.

All three are worth a watch and have great information - as always.

Building Full-stack C# Web Apps with Blazor in .NET Core 3.0

Dan’s first talk, Building Full-stack C# Web Apps with Blazor in .NET Core 3.0, focuses on building Blazor applications using the newly released, Blazor Server.

The Future of Blazor on the Client

Dan’s second talk, The Future of Blazor on the Client, focuses on Blazor WebAssembly. Dan also shows the potential of Blazor as a framework, showing how it can be used to build desktop applications via Electron and even full native applications!

Blazor and Azure Functions for Serverless Websites

Jeff’s talk focuses on building serverless applications using Blazor and Azure Functions.

Summary

Unsurprisingly, it was a pretty fantastic event for us Blazor fans. Blazor became an officially released product and we found out when we should see Blazor WebAssembly released.

There was also great content with the three awesome Blazor sessions from Daniel Roth and Jeff Hollan. Not to mention a free e-book being released as well!

It’s been an exciting journey watching Blazor grow from the experiment it started off as, to being an officially released product ready for production. What’s more, it’s all happened in about 18 months! I don’t know about you, but if this is how far we’ve come in 18 months, I really can’t wait to see what the next 18 will bring for Blazor.

An Introduction to OwningComponentBase

In this post, we’re going to explore the OwningComponentBase class, which was added to Blazor in .NET Core 3 Preview 9. It has a single purpose, to create a service provider scope. This is really useful in Blazor as service scopes behave a bit differently to how they do in say, MVC or Razor Pages applications. Using this class as a base, we can create components which control the lifetime of the service provider scope.

Why do we need OwningComponentBase?

The reason we need OwningComponentBase is down to how service scopes behave in Blazor. Let’s start by covering what service scopes are and how they work in more traditional ASP.NET Core applications, like MVC. Then we’ll look at how they differ in Blazor.

Service Scopes

Service scopes are used to define the lifetime of a service. In .NET Core there are 3 scopes available to us.

Transient services have the shortest lifetime of the 3 scopes available. With this scope, a new instance of the service is created every time it’s requested, meaning that multiple instances of the service could be created in a single request.

Scoped services are created only once per request. So within a single request you will always get the same instance of the service every time you request it.

Singleton services are created once for the lifetime of the application. This means that every user of the application will receive the same instance of the service until the application is restarted.
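These lifetimes map directly onto the registration methods in ConfigureServices. A quick sketch, where the interface and implementation names are purely illustrative:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IEmailBuilder, EmailBuilder>(); // new instance on every resolution
    services.AddScoped<IOrderContext, OrderContext>();    // one instance per scope/"request"
    services.AddSingleton<IAppCache, AppCache>();         // one instance for the app's lifetime
}
```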

In MVC or Razor Pages these scopes work well and do what they say on the tin. This is because these types of applications have a traditional request/response model. But how do these scopes work in Blazor?

The answer, probably not quite as you’d expect. I wrote a blog post a while ago, Service Lifetimes in Blazor, covering how scopes work in Blazor. Please give that a read for an in-depth explanation, but I’ll summarise the important parts here.

Blazor Server Apps

While transient and singleton services still behave the same way, service lifetimes are scoped using the SignalR connection in Blazor Server apps. This means that scoped services behave in a very similar way to singletons.

This is because while the user is interacting with the application, clicking buttons, navigating between pages, and so on, these actions are all happening on the same SignalR connection. Therefore, in terms of service scopes it’s all one request, which keeps scoped services around for much longer than they normally would be.

Blazor WebAssembly Apps

Once again, transient and singleton services behave the same way. Scoped services behave in a similar way to Blazor Server, but for a slightly different reason. This time it’s because the application lives and dies inside the browser on the client. The user is the only user and again there is only ever one request.

Effectively, there are only two service scopes available in Blazor WebAssembly, transient and singleton.

How OwningComponentBase Helps

OwningComponentBase allows us to create components which can control the lifetime of the service scope. This means that when we create a component which inherits from OwningComponentBase, any services (and their related services) we request via it are disposed of when the component is disposed.

This essentially gives us back a version of scoped lifetime which makes sense in Blazor. For example, if we used OwningComponentBase as the base for a page component, any services requested by that page would only live for the lifetime of that page.

Because OwningComponentBase will dispose of any services and related services that share its scope, it makes it a great choice when dealing with repositories or other database abstractions. In these scenarios, without OwningComponentBase, services can hang around for a very long time. As we covered earlier, scoped services are tied to the lifetime of the SignalR connection in Blazor Server apps. So a repository which uses a DbContext, for example, could live much longer than expected which may end up being quite problematic.

Using OwningComponentBase

Now we have a good idea about what the class does and the problem it solves, how do we actually use it?

For starters, it is an abstract class which means we can’t just new it up, we must inherit from it. OwningComponentBase itself inherits from ComponentBase so you can just replace any ComponentBase usages with OwningComponentBase and everything will continue to function as before.

There are two versions of the class, OwningComponentBase and OwningComponentBase<T>, and they have slightly different abilities.

OwningComponentBase&lt;T&gt;

OwningComponentBase<T> provides a single service, of type T, which we can access via a property called Service. Say we had a work item repository which looked like this.

public interface IWorkItemRepository
{
    IEnumerable<WorkItem> GetAllWorkItems();
}

We can use OwningComponentBase<T> to create an instance of IWorkItemRepository and consume it like this.

@page "/workitems"
@inherits OwningComponentBase<IWorkItemRepository>

<h1>View Work Items</h1>

<ul>
    @foreach (var workItem in Service.GetAllWorkItems())
    {
        <li>@workItem.Description</li>
    }
</ul>

That’s great but what if we need to use multiple services in our component? That’s where OwningComponentBase comes in.

OwningComponentBase

While OwningComponentBase<T> provides a simple and efficient way to request and access a single service, OwningComponentBase gives us the ability to request as many services as we wish, albeit with a bit more work required on our part.

Continuing with the previous example code, let’s pretend we want to show a work order and its related work items. To do this we need both an IWorkOrderRepository instance and an IWorkItemRepository instance.

public interface IWorkOrderRepository
{
    WorkOrder GetWorkOrder(int workOrderId);
}
public interface IWorkItemRepository
{
    IEnumerable<WorkItem> GetWorkItems(int workOrderId);
}

We can request both of these using OwningComponentBase’s ScopedServices property. This property is of type IServiceProvider and contains a method called GetService. We can use this to request any services we need in our component, for example.

@page "/workorder/{WorkOrderId:int}"
@inherits OwningComponentBase

<h1>View Work Order</h1>

<p>@workOrder.Description</p>

<h2>Work Items</h2>
<ul>
    @foreach (var workItem in _workItemRepository.GetWorkItems(workOrder.Id))
    {
        <li>@workItem.Description</li>
    }
</ul>

@code {

    [Parameter] public int WorkOrderId { get; set; }

    WorkOrder workOrder = new WorkOrder();
    IWorkOrderRepository _workOrderRepository;
    IWorkItemRepository _workItemRepository;

    protected override void OnInitialized()
    {
        _workOrderRepository = (IWorkOrderRepository)ScopedServices.GetService(typeof(IWorkOrderRepository));
        _workItemRepository = (IWorkItemRepository)ScopedServices.GetService(typeof(IWorkItemRepository));

        workOrder = _workOrderRepository.GetWorkOrder(WorkOrderId);
    }
}

As you can see, there is a bit more work involved on our part but it does give us access to as many services as we need.
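As a small aside, if you’d rather avoid the casts, the GetRequiredService<T> extension method from the Microsoft.Extensions.DependencyInjection namespace does the same job and throws if the service isn’t registered, rather than returning null. The OnInitialized method above could then be written as:

```csharp
// Requires @using Microsoft.Extensions.DependencyInjection
// in the component or in _Imports.razor
protected override void OnInitialized()
{
    _workOrderRepository = ScopedServices.GetRequiredService<IWorkOrderRepository>();
    _workItemRepository = ScopedServices.GetRequiredService<IWorkItemRepository>();

    workOrder = _workOrderRepository.GetWorkOrder(WorkOrderId);
}
```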

Summary

I think that about wraps things up for this post. We’ve had a good look at the new OwningComponentBase class which came with .NET Core 3 Preview 9. A class which allows authoring of components which control the lifetime of a service provider scope.

We’ve taken a look at service scopes and how they differ in Blazor compared to traditional ASP.NET Core applications, such as MVC and Razor Pages. Then seen how the OwningComponentBase class can help us manage these differences. We then finished up with some examples of how to use OwningComponentBase and OwningComponentBase<T>.

Deploying Containerised Apps to Azure Web App for Containers

In the first half of this series, we covered how to containerise both Blazor Server and Blazor WebAssembly apps. In part 3, we automated that process using Azure Pipelines. We configured a CI pipeline which automatically builds code upon check-in to GitHub and publishes the Docker image to Azure Container Registry.

In this post, we’re going to learn how to deploy a Docker image from our container registry to an Azure App Service, which completes the cycle of development to release. We’re going to start by taking a look at the Web App for Containers service and why we would want to use it. Then we’ll create a Web App for Containers instance using the Azure Portal. Finally, we’ll create a build pipeline in Azure Pipelines to automate the deployment of our Docker image to that instance.

Web App for Containers - Introduction

Web App for Containers (WAC) is part of the Azure App Service platform. It allows us to “easily deploy and run containerised applications on Windows and Linux”. The service offers built-in load balancing and auto scaling as well as full CI/CD deployment from both Docker Hub and private registries such as Azure Container Registry.

It has never been easier to deploy container-based web apps…

Another great thing about this service is that it manages and maintains the underlying container orchestrator, so we can focus on building our apps - which I really like. Plus, you can even run multi-container apps now!

Creating a container instance in WAC

We’re going to start by walking through creating a container instance via the Azure Portal. Once you’re logged into your Azure account, go to App Services then click the Add button near the top of the blade or the Create app service button in the centre of the blade.

You’ll then see the Create app service blade. Start by selecting your subscription and resource group before moving on to the app service plan details section.

Give the instance a name, set the publish option to Docker Image and select the OS to use, then pick a region to host it. Finally either select an existing App Service Plan or create a new one if you wish.

Once you’re done, click Next: Docker.

On this screen we can configure our container settings. We’re only deploying a single container so we can leave the first setting alone. For Image Source, select Azure Container Registry then fill in the Registry, Image and Tag.

When you’re happy, click Review and create. You’ll be presented with a summary screen, double check the details and then click Create to finish.

After a moment you should see a message stating Your deployment is complete. Click on the Go to resource link and you can see the overview screen for the instance. You can click on the URL link in the top right of the blade to view the running instance - this does take a few seconds to fire the app up the first time.

Congratulations, you’ve now got a fully working Blazor application running in a container on Azure!

So far so good, we have successfully deployed an image from the container registry to an app service but we don’t want to have to come here and do this manually every time we update our code.

To complete our pipeline we’re going to set up a release pipeline which will automatically deploy new images to our Web App for Containers instance.

Deploying from Azure Pipelines

We’re going to continue where we left off in part 3. This time we’re going to head to the Releases option under the Pipelines menu. Then we’re going to click the New pipeline button.

To start, we need to select a template to use for our pipeline, we’re going to select empty job here.

Once we select empty job we need to click the view stage tasks link in Stage 1.

Now click the + next to agent job to add a new task.

Using the add task search box, type in Azure App Service and then add the Azure Web App for Containers task.

Click on the task to load its configuration screen. Select your Azure subscription, then the app name, which is the instance we created in the first half of the post. Finally, fill in the image name, this is the fully qualified image name (your_registry.azurecr.io/repo:tag).

The last thing we need to configure is where to get the image we are going to deploy. Click on the Pipeline tab, then in the artifacts box click Add an artifact. Then under source type click the show more link and then select Azure Container Registry.

Go ahead and select your service connection, resource group, azure container registry and repository. Then click Add.

That should be everything we need. If you save your pipeline using the save button at the top of the screen, you can also give your pipeline a more meaningful name by clicking on the New release pipeline text in the top left.

Now it’s time to test it all out. Click the Create Release button in the top right and then click the Create button to start the release.

If all goes well, after a minute or two, you should see the stage turn green with a succeeded message.

Type in the FQDN for your app instance and you should be able to see your application working.

Enabling Continuous Deployment (optional)

To create a full CD pipeline, you’ll want to trigger a new deployment every time a new image is built and pushed to the container registry repository. To enable this, edit the release pipeline and click the Continuous deployment trigger.

From here toggle the Enabled switch to turn on continuous deployments.

This will now create a new release automatically every time a new image is pushed to the container registry repository. However, if you want a bit more control, say you only want to create a release when the image has a certain tag, you can add a tag filter.

This is just a regular expression; in the image below I’ve added the filter ^latest$ so that only images with a tag of latest will get deployed.

That’s it, we now have a continuous deployment pipeline configured.

Summary

In this post, we started by creating a new Web App for Containers instance using the Azure Portal. Then configuring that instance to deploy and run an image from our Azure Container Registry.

Next, we built on our CI pipeline from earlier posts, adding the ability to automatically deploy new versions to our app instance via Azure Pipelines. Then finished off with an optional step to enable full continuous deployment on our pipeline.

Publishing to Azure Container Registry using Azure Pipelines

In part 1 and part 2 we looked at how to containerise Blazor applications with Docker. We can now run Blazor Server and Blazor WebAssembly apps in containers locally - which is great! But how do we go about automating the building of Docker images as part of a CI pipeline? And where do we keep our images once they’re built?

In this post, we’re going to answer at those two questions. We’re going to see how to automate the building of Docker images using Azure Pipelines, then how to publish them to Azure Container Registry. I’m going to use the Blazor Server app from Part 1 as the example project in this post. It’s hosted on GitHub so all instructions will be based on code being hosted there. If your code is hosted elsewhere don’t worry, Azure Pipelines can connect to lots of different code repositories.

Creating an Azure Container Registry

We’re going to start by creating an Azure Container Registry (ACR). ACR is a service for hosting Docker images in Azure, similar to Docker Hub, allowing us to store and manage our container images in a central place.

Start by logging into the Azure Portal and then select All Services from the left menu and search for container registries.

Once the blade loads click on Add at the top.

You will then see the Create container registry screen. Give your registry a name, select your subscription, resource group and location. Leave Admin user disabled, we’ll be using a service connection to connect to the registry from Azure Pipelines for the moment. Finally, select the SKU (pricing level) that fits your needs. I’m going to select Basic for now.

Once you’re done click Create at the bottom of the screen and after a minute or two the new ACR will be available. That’s all we need to do in the Azure Portal for the moment, the rest of our time is going to be spent in Azure Pipelines.

Azure Pipelines

If you’re new to Azure Pipelines, it’s a Continuous Integration (CI)/Continuous Deployment (CD) service which allows developers to build, test and deploy their code anywhere - it’s also free to use! If you don’t already have an account then you can head over to dev.azure.com to sign up.

Once you have logged into your account click the New Project button in the top right hand corner and give your project a name and select its visibility.

Once you’re done click the Create button, your project will be created and you should see the project home screen.

Using the left hand menu, head to Pipelines and then Builds. Then select the New pipeline button from the main panel.

This starts the new pipeline wizard which is made up of 4 steps. The first step is to connect to source control. As I mentioned at the start, my example project is hosted on GitHub but choose whichever option you need.

You’ll then see a list of the available repositories, select the one you want to connect to.

Step three is where we configure the pipeline. You may have to click the Show More button to see the full list. Once you see the full list scroll down and select Docker.

We now get presented with a drawer asking us to specify where to find our dockerfile.

One thing to note here is that the name of the file is case sensitive. In my project, the dockerfile has a lower-case d, so I’m going to change the default value to **/dockerfile, then click Validate and configure.

The last step presents us with the final yaml file which will be used to build our image.

Go ahead and click Save and run in the top right and you will see the Save and run confirmation dialogue.

Here you can choose whether to commit the yaml file to your repository’s master branch or to create a new branch and commit it there. I’m going to commit it to a new branch called azure-pipelines. That way I can play with the configuration until everything is set up how I want, then raise a PR to merge it to master. Once you’re done click Save and run to complete the process.

Azure Pipelines will then commit the yaml file and start a new build. Once it’s complete you should see the build summary screen.

We now have our image being built by Azure Pipelines - which is great! The next step is to publish it to Azure Container Registry.

We’re going to start by adding a service connection to the container registry. Go to Project settings in the bottom left of the screen and then select Service connections under the Pipelines sub-menu.

From there, click on New service connection then select Docker registry from the list.

You’ll then see the following modal.

Select Azure Container Registry and give the connection a name. Select which Azure subscription to use and then select the container registry you want to connect to. When you’re done click OK to save the connection.

We need to make some changes to our yaml file to tell it to publish to the container registry. Click on Pipelines then Builds on the main menu. Then click the Edit button in the top right of the screen.

This will load up the yaml editor. Update the Docker task to match the following code.

- task: Docker@2
  displayName: Build and push image to container registry
  inputs:
    command: buildAndPush
    repository: demos/blazorserverwithdocker
    dockerfile: '**/dockerfile'
    containerRegistry: AzureContainerRegistry
    tags: |
      $(Build.BuildId)
      latest

We’re now using the buildAndPush command instead of the build command, and specifying which repository to publish the image to. Repositories are a way of organising your images in the registry, similar to how a GitHub account contains repositories. Note that Docker repository names must be lowercase. If a repository doesn’t exist, it will be created for you when the image is published.

Another change is specifying the containerRegistry to use; this is where we use the name of the service connection we just set up. Finally, we’ve updated the tags section to tag images with both the build number and the latest tag. This means other parts of the pipeline can request the latest tag and always get the most recent version of an image, while the build number tag lets us request a specific version if we need to.
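To sketch how the two tags are used together - note the registry login server below is hypothetical, so substitute your own ACR name:

```shell
# Hypothetical ACR login server - replace with your registry's name
docker login myregistry.azurecr.io

# "latest" always resolves to the most recently pushed image
docker pull myregistry.azurecr.io/demos/blazorserverwithdocker:latest

# ...while a build number pins a specific version, e.g. Azure Pipelines build 1234
docker pull myregistry.azurecr.io/demos/blazorserverwithdocker:1234
```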

Once you’re done click Save and you will see the following modal where you can add a commit message before the build file is committed.

This should trigger a build using the new build file. If it hasn’t then after you’ve committed the changes, click Run at the top right of the screen.

Hopefully, after a minute or two, you will see the build summary screen and lots of green!

You can check everything was published successfully by heading back over to Azure and checking the Repositories link in your Azure Container Registry. You should see your repository, if you click on it you will be able to view your image.

Summary

In this post I’ve shown how to automate the building of a Docker image using Azure Pipelines, as well as how to automatically publish the image to an Azure Container Registry.

I’m really impressed with how easy it’s been to automate the building and publishing of the image with Azure Pipelines; it really is a fantastic service, and the fact it’s freely available to everyone is just amazing. I also want to point out that this post isn’t in any way specific to Blazor, and you should be able to use the information here to build any Docker image.

Containerising a Blazor WebAssembly App

In part 1 of the series, we took a look at Docker and some of its key concepts. Then we took the default template for a Blazor Server app and containerised it using Docker. In this post, we’re going to do the same thing but with a Blazor WebAssembly app.

All the code for this post is available on GitHub.

Different Challenges

Creating a dockerfile for a Blazor Server app was pretty trivial. In fact, if you use Visual Studio then it generates the file automatically for you with just a couple of clicks, albeit with some quirks.

Blazor WebAssembly projects present us with a different challenge: when published, they produce static files. Unlike Blazor Server apps, we don’t need the ASP.NET Core runtime to serve them. This means we can drop the ASP.NET Core runtime Docker image we used in part 1 as the base for our final image. So how are we going to serve our files? The answer is NGINX.

What is NGINX?

If you’ve not come across it before, NGINX is a free and open source web server which can also be used as a reverse proxy, load balancer and HTTP cache. It’s really great at serving static content, fast. When compared to Apache, it uses significantly less memory and can handle up to 4 times the number of requests per second.

Of course there’s a Docker image for NGINX, several in fact, but the one we’ll be using is nginx:alpine. This is a really tiny image, less than 5MB! And it has everything we’ll need to serve our Blazor WebAssembly application.

Prerequisites

If you’ve not done any work with Docker before you will need to install Docker Desktop for Windows or Docker Desktop for Mac. Just follow the setup instructions and you will be up and running in a couple of minutes. For the purpose of this post we’re going to be using the default project template for a Blazor WebAssembly app. I’m going to be working in VS Code for this project but use whatever IDE/Editor you choose.

Adding NGINX Configuration

We’re going to be using NGINX to serve our application inside our container. However, as our app is a SPA (Single Page Application), we need to tell NGINX to route all requests to index.html.

In the root of the project add a new file called nginx.conf and add in the following code.

events { }
http {
    include mime.types;

    server {
        listen 80;

        location / {
            root /usr/share/nginx/html;
            try_files $uri $uri/ /index.html =404;
        }
    }
}

This is a really bare bones configuration which will allow our app to be served. But if you’re looking to move into production with this then I would highly recommend you head over to the NGINX docs site and have a read of all the options you can configure.

Essentially we’ve set up a simple web server listening on port 80 with files being served from /usr/share/nginx/html. The try_files configuration tells NGINX to serve index.html whenever it can’t find the requested file on disk.

Above the server block we’ve included the default mime types. As NGINX configuration is all opt-in it doesn’t handle different mime types unless we tell it to.

Adding a Dockerfile

Now let’s add a dockerfile to the root of our project with the following code.

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY BlazorWasmWithDocker.csproj .
RUN dotnet restore BlazorWasmWithDocker.csproj
COPY . .
RUN dotnet build BlazorWasmWithDocker.csproj -c Release -o /app/build

FROM build AS publish
RUN dotnet publish BlazorWasmWithDocker.csproj -c Release -o /app/publish

FROM nginx:alpine AS final
WORKDIR /usr/share/nginx/html
COPY --from=publish /app/publish/wwwroot .
COPY nginx.conf /etc/nginx/nginx.conf

Just as we did in part 1, let’s break this down a section at a time to understand what is going on.

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY BlazorWasmWithDocker.csproj .
RUN dotnet restore BlazorWasmWithDocker.csproj
COPY . .
RUN dotnet build BlazorWasmWithDocker.csproj -c Release -o /app/build

This first section is going to build our app. We’re using Microsoft’s official .NET 6 SDK image as the base for the build. We set the WORKDIR in the container to /src and then COPY over the csproj file from our project. Next we run a dotnet restore before COPYing over the rest of the files from our project to the container. Finally, we build the project by RUNing dotnet build on our project file setting the configuration to release.

FROM build AS publish
RUN dotnet publish BlazorWasmWithDocker.csproj -c Release -o /app/publish

The next section publishes our app. This is pretty straightforward, we use the previous section as a base and then RUN the dotnet publish command to publish the project.

FROM nginx:alpine AS final
WORKDIR /usr/share/nginx/html
COPY --from=publish /app/publish/wwwroot .
COPY nginx.conf /etc/nginx/nginx.conf

The last section produces our final image. We use the nginx:alpine image as a base and start by setting the WORKDIR to /usr/share/nginx/html - this is the directory where we’ll serve our application from. Next, we COPY over our published app from the previous publish section to the current working directory. Finally, we COPY over the nginx.conf we created earlier to replace the default configuration file.

Building the image

Now we have our dockerfile all set up and ready to go, we need to build our image.

docker build -t blazor-webassembly-with-docker .

Just as in part 1, we’re using the docker build command, the -t switch allows us to tag the image with a friendly name so we can identify it a bit easier later on. The dot (.) at the end tells docker to look for the dockerfile in the current directory.

The output from the build looks like this.

[+] Building 100.8s (17/17) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 500B
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load metadata for docker.io/library/nginx:alpine
 => [internal] load metadata for mcr.microsoft.com/dotnet/sdk:6.0
 => [build 1/6] FROM mcr.microsoft.com/dotnet/sdk:6.0@sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c94
 => => resolve mcr.microsoft.com/dotnet/sdk:6.0@sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c94108a83b
 => => sha256:08af7dd3c6400833072349685c6aeaf7b86f68441f75b5ffd46206924c6b0267 15.17MB / 15.17MB
 => => sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c94108a83ba10309 2.17kB / 2.17kB
 => => sha256:e86d68dca8c7c8106c1599d293fc00aabaa59dac69e4c849392667e9276d55a9 7.31kB / 7.31kB
 => => sha256:7423077999145aa09211f3b975495be42a009a990a72d799e1cb55833abc8745 31.61MB / 31.61MB
 => => sha256:148a3465a035ddc2e0ac2eebcd5f5cb3db715843d784d1b303d1464cd978a391 2.01kB / 2.01kB
 => => sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea 31.36MB / 31.36MB
 => => sha256:a260dbcd03fce6db3fe06b0998f5f3e54c437f647220aa3a89e5ddd9495f707e 156B / 156B
 => => sha256:96c3c696f47eb55c55e43c338922842013fc980b21c457826fd97f625c0ab497 9.44MB / 9.44MB
 => => sha256:d81364490ceb3caecbe62b7c722959258251458e6d1ba5acfc60db679c4411f8 25.36MB / 25.36MB
 => => sha256:3e56f7c4d95f973a8cd8cf1187e56ee59c1cc1f0eb4a6c9690a1d6d6adf72b4e 136.50MB / 136.50MB
 => => sha256:9939dbdaf4a702d0243b574a728eca401402f305a80b277acbfa5b3252625135 13.37MB / 13.37MB
 => => extracting sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea
 => => extracting sha256:08af7dd3c6400833072349685c6aeaf7b86f68441f75b5ffd46206924c6b0267
 => => extracting sha256:7423077999145aa09211f3b975495be42a009a990a72d799e1cb55833abc8745
 => => extracting sha256:a260dbcd03fce6db3fe06b0998f5f3e54c437f647220aa3a89e5ddd9495f707e
 => => extracting sha256:96c3c696f47eb55c55e43c338922842013fc980b21c457826fd97f625c0ab497
 => => extracting sha256:d81364490ceb3caecbe62b7c722959258251458e6d1ba5acfc60db679c4411f8
 => => extracting sha256:3e56f7c4d95f973a8cd8cf1187e56ee59c1cc1f0eb4a6c9690a1d6d6adf72b4e
 => => extracting sha256:9939dbdaf4a702d0243b574a728eca401402f305a80b277acbfa5b3252625135
 => [internal] load build context
 => => transferring context: 1.71MB
 => [final 1/4] FROM docker.io/library/nginx:alpine@sha256:eb05700fe7baa6890b74278e39b66b2ed1326831f9ec3ed4bdc636
 => [build 2/6] WORKDIR /src
 => [build 3/6] COPY BlazorWasmWithDocker.csproj .
 => [build 4/6] RUN dotnet restore BlazorWasmWithDocker.csproj
 => [build 5/6] COPY . .
 => [build 6/6] RUN dotnet build BlazorWasmWithDocker.csproj -c Release -o /app/build
 => [publish 1/1] RUN dotnet publish BlazorWasmWithDocker.csproj -c Release -o /app/publish
 => CACHED [final 2/4] WORKDIR /usr/share/nginx/html
 => CACHED [final 3/4] COPY --from=publish /app/publish/wwwroot .
 => [final 4/4] COPY nginx.conf /etc/nginx/nginx.conf
 => exporting to image
 => => exporting layers
 => => writing image sha256:c785a78daf241c7be4fde0d7335971a48901b05f9f70afca8451f5887b2e9a97
 => => naming to docker.io/library/blazor-webassembly-with-docker
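Before starting a container, you can optionally confirm the image was created with docker images:

```shell
# List the image we just built (the tag defaults to "latest")
docker images blazor-webassembly-with-docker
```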

Starting a container

Now we have built our image we can go ahead and start a container and check if everything is working.

docker run -p 8080:80 blazor-webassembly-with-docker

This command tells Docker to start a container from the image tagged blazor-webassembly-with-docker. The -p switch maps port 8080 on the host to port 80 in the container.

Once you have run the command then open a browser and navigate to http://localhost:8080 and you should be able to load the app.
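If you want to check the try_files fallback from our nginx.conf is doing its job, request a client-side route directly - the /counter page exists in the default Blazor WebAssembly template, and NGINX should answer with index.html rather than a 404:

```shell
# A deep link to a client-side route should return 200, not 404
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/counter
```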

Detached Mode

If you want to leave your container running but you don’t want it hogging a terminal window, you can start it in detached mode. This mode runs the container in the background so it doesn’t receive any inputs or display any outputs. To use detached mode add the -d switch to the docker run command.

docker run -d -p 8080:80 blazor-webassembly-with-docker

When executed, you’ll see the unique identifier for your container appear on the screen and then you’ll be returned to the terminal prompt.

To view any containers you currently have running in the background, use the docker ps command.
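For example (the ID and auto-generated name will be different on your machine):

```shell
# Lists running containers; the NAMES column holds the auto-generated
# name (e.g. youthful_wozniak) that can be passed to docker stop
docker ps
```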

If you want to stop a container running in the background, use the docker stop command with either the container’s ID or name.

docker stop youthful_wozniak

Summary

In this post, we’ve looked at the different challenges we face running a Blazor WebAssembly application in a container. We then built an image for our app which uses NGINX to serve the static content that Blazor WebAssembly applications produce. We finished up by starting a container from our new image to check everything worked.

Next time we’ll take a look at how we can automate building and deploying with Azure DevOps and hopefully get our containers running on Azure.

Containerising a Blazor Server App

Containers are all the rage nowadays, and for good reason. They solve the problem of how to have an application work consistently regardless of the environment it runs on. This is achieved by bundling the whole runtime environment - the application, its dependencies, configuration files, and so on - into a single image. This image can then be shared, and instances of it, known as containers, can then be run.

In this post, I’m going to show you how to run a Blazor Server application in a container. We’re going to have a look at how to create images and from there how to create containers.

All the code for this post is available on GitHub.

Before we get into things, let’s cover what Docker is and a few key concepts.

What is Docker?

Docker is a platform which provides services and tools to allow the building, sharing and running of containers. These containers are isolated from one another but run on a shared OS kernel, making them far more lightweight than virtual machines, which means more containers can be run on the same physical hardware.

As containers only contain what is needed to run the application it makes them extremely quick to spin up. This makes them exceptionally good at scaling on demand. Where a traditional VM would need a few minutes before additional capacity comes online, a container can be started in a few fractions of a second.

Dockerfile

You can think of a dockerfile as a blueprint which contains all the commands, in order, needed to create an image of your application. Docker images are created by running the docker build command against a dockerfile.

Image

Docker images are the result of running a dockerfile. Images are built up in layers, just like an onion, and each layer can also be cached to help speed up build times. Images are immutable once created, but they can be used as base images in a dockerfile to allow customisation. Images can be stored in an image repository such as Docker Hub or Azure Container Registry - think NuGet but for containers - which allows them to be shared with others.
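If you want to see these layers for yourself, the docker history command lists each layer of a local image along with the instruction that created it - for example, once you’ve built the image later in this post:

```shell
# Show the layers of a local image, newest first,
# along with the dockerfile instruction that produced each one
docker history blazor-server-with-docker
```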

Container

A container is an instance of an image. You can spin up many containers from a single image. They’re started by using the docker run command and specifying the image to use to create the container.

Containerising a Blazor Server App

Prerequisites

If you’ve not done any work with Docker before you will need to install Docker Desktop for Windows or Docker Desktop for Mac. Just follow the setup instructions and you will be up and running in a couple of minutes. For the purpose of this post we’re going to be using the default project template for a Blazor Server app.

Creating a Dockerfile

The first thing we’re going to do is create a dockerfile in the root of the project. If you’re using something other than Visual Studio, such as VS Code, then just create a new file in the root of your project called dockerfile with no extension and paste in the code from a bit further down.

If you’re using Visual Studio then right click on your project and select Add > Docker Support…

You will then be asked what target OS you want.

I’m choosing Linux as I’m on a Mac, plus hosting is cheaper when I want to push this to Azure. If your application does require something Windows specific then make sure to choose Windows here. Once you’re done, click OK. After a few seconds you should see a Dockerfile appear in the root of the project.

A word of warning here - I’ve found this file doesn’t always seem to work properly. It seems to expect a certain folder structure, where the dockerfile is one level higher than the project; if that’s not the case then things won’t work. Below is a version of the dockerfile with a couple of modifications to remove the folder structure assumption.

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["BlazorServerWithDocker.csproj", "."]
RUN dotnet restore "BlazorServerWithDocker.csproj"
COPY . .
RUN dotnet build "BlazorServerWithDocker.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "BlazorServerWithDocker.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "BlazorServerWithDocker.dll"]

You can see that there is a repeating pattern: each section starts with the FROM keyword. As I mentioned earlier, images are like onions - they’re built up with lots of layers, one on top of the other. Let’s break this all down to understand what each step is doing.

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

The first section defines the base image that we’re going to use to create our application’s image, although we’re not actually going to use it till the end. It’s provided by Microsoft and contains just the ASP.NET Core runtime. We’re setting the working directory to be app and exposing ports 80 and 443, which are the ports the container will listen on at runtime. We’ll come back to this one at the end.

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["BlazorServerWithDocker.csproj", "."]
RUN dotnet restore "BlazorServerWithDocker.csproj"
COPY . .
RUN dotnet build "BlazorServerWithDocker.csproj" -c Release -o /app/build

The next section is responsible for building the application. This is based on another image provided by Microsoft which contains the full .NET SDK. The WORKDIR command sets the working directory inside the container - any actions will now be relative to that directory.

We COPY the csproj from our project to the container’s working directory, then run a dotnet restore. After that, the COPY command copies over all the other files in the project to the working directory before running a dotnet build in release configuration.

FROM build AS publish
RUN dotnet publish "BlazorServerWithDocker.csproj" -c Release -o /app/publish

This section publishes our app. Here we’re specifying the previous build image as the base for this layer, then calling dotnet publish.

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "BlazorServerWithDocker.dll"]

The last section is what creates our final image. Here you can see we’re using the base image from the start of the file, which was the ASP.NET Core runtime image. We set the WORKDIR to app then copy over the published files from the previous publish layer. Finally, we set the entry point for the application. This is the instruction that tells the image how to start the process it will run for us.

Building an Image

Now we have a dockerfile which defines our image we need to use a docker command to actually create it.

docker build -t blazor-server-with-docker .

The -t switch tells docker to tag the image with blazor-server-with-docker which is useful for identifying the image later on. The dot (.) at the end tells docker to look for the dockerfile in the current directory.

This is the output when the command is run.

[+] Building 61.1s (17/17) FINISHED
 => [internal] load build definition from Dockerfile 
 => => transferring dockerfile: 590B
 => [internal] load .dockerignore
 => => transferring context: 380B 
 => [internal] load metadata for mcr.microsoft.com/dotnet/sdk:6.0
 => [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:6.0
 => [build 1/6] FROM mcr.microsoft.com/dotnet/sdk:6.0@sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c94 
 => => resolve mcr.microsoft.com/dotnet/sdk:6.0@sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c94108a83b
 => => sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c94108a83ba10309 2.17kB / 2.17kB
 => => sha256:e86d68dca8c7c8106c1599d293fc00aabaa59dac69e4c849392667e9276d55a9 7.31kB / 7.31kB
 => => sha256:7423077999145aa09211f3b975495be42a009a990a72d799e1cb55833abc8745 31.61MB / 31.61MB
 => => sha256:148a3465a035ddc2e0ac2eebcd5f5cb3db715843d784d1b303d1464cd978a391 2.01kB / 2.01kB
 => => sha256:08af7dd3c6400833072349685c6aeaf7b86f68441f75b5ffd46206924c6b0267 15.17MB / 15.17MB
 => => sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea 31.36MB / 31.36MB
 => => sha256:a260dbcd03fce6db3fe06b0998f5f3e54c437f647220aa3a89e5ddd9495f707e 156B / 156B
 => => sha256:96c3c696f47eb55c55e43c338922842013fc980b21c457826fd97f625c0ab497 9.44MB / 9.44MB
 => => sha256:d81364490ceb3caecbe62b7c722959258251458e6d1ba5acfc60db679c4411f8 25.36MB / 25.36MB
 => => sha256:3e56f7c4d95f973a8cd8cf1187e56ee59c1cc1f0eb4a6c9690a1d6d6adf72b4e 136.50MB / 136.50MB
 => => sha256:9939dbdaf4a702d0243b574a728eca401402f305a80b277acbfa5b3252625135 13.37MB / 13.37MB
 => => extracting sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea
 => => extracting sha256:08af7dd3c6400833072349685c6aeaf7b86f68441f75b5ffd46206924c6b0267
 => => extracting sha256:7423077999145aa09211f3b975495be42a009a990a72d799e1cb55833abc8745
 => => extracting sha256:a260dbcd03fce6db3fe06b0998f5f3e54c437f647220aa3a89e5ddd9495f707e
 => => extracting sha256:96c3c696f47eb55c55e43c338922842013fc980b21c457826fd97f625c0ab497
 => => extracting sha256:d81364490ceb3caecbe62b7c722959258251458e6d1ba5acfc60db679c4411f8
 => => extracting sha256:3e56f7c4d95f973a8cd8cf1187e56ee59c1cc1f0eb4a6c9690a1d6d6adf72b4e
 => => extracting sha256:9939dbdaf4a702d0243b574a728eca401402f305a80b277acbfa5b3252625135
 => [base 1/2] FROM mcr.microsoft.com/dotnet/aspnet:6.0@sha256:edb108fddbb69db67ad136e4ffc93d5d9ddcfd28fc7f269be
 => => resolve mcr.microsoft.com/dotnet/aspnet:6.0@sha256:edb108fddbb69db67ad136e4ffc93d5d9ddcfd28fc7f269be541790
 => => sha256:edb108fddbb69db67ad136e4ffc93d5d9ddcfd28fc7f269be541790423399f55 2.17kB / 2.17kB
 => => sha256:5b4a077a17943113fee94818046e6f9839e11ec692481bf122ffacb849cf67de 1.37kB / 1.37kB
 => => sha256:8d32e18b77a4db7f10ec4985cc85c1e385dc6abd16f9573a8c2bc268cad4aab9 3.38kB / 3.38kB
 => => sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea 31.36MB / 31.36MB
 => => sha256:08af7dd3c6400833072349685c6aeaf7b86f68441f75b5ffd46206924c6b0267 15.17MB / 15.17MB
 => => sha256:7423077999145aa09211f3b975495be42a009a990a72d799e1cb55833abc8745 31.61MB / 31.61MB
 => => sha256:a260dbcd03fce6db3fe06b0998f5f3e54c437f647220aa3a89e5ddd9495f707e 156B / 156B
 => => sha256:96c3c696f47eb55c55e43c338922842013fc980b21c457826fd97f625c0ab497 9.44MB / 9.44MB
 => => extracting sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea
 => => extracting sha256:08af7dd3c6400833072349685c6aeaf7b86f68441f75b5ffd46206924c6b0267
 => => extracting sha256:7423077999145aa09211f3b975495be42a009a990a72d799e1cb55833abc8745
 => => extracting sha256:a260dbcd03fce6db3fe06b0998f5f3e54c437f647220aa3a89e5ddd9495f707e
 => => extracting sha256:96c3c696f47eb55c55e43c338922842013fc980b21c457826fd97f625c0ab497
 => [internal] load build context
 => => transferring context: 802.94kB
 => [base 2/2] WORKDIR /app
 => [final 1/2] WORKDIR /app
 => [build 2/6] WORKDIR /src
 => [build 3/6] COPY [BlazorServerWithDocker.csproj, .]
 => [build 4/6] RUN dotnet restore "BlazorServerWithDocker.csproj"
 => [build 5/6] COPY . .
 => [build 6/6] RUN dotnet build "BlazorServerWithDocker.csproj" -c Release -o /app/build
 => [publish 1/1] RUN dotnet publish "BlazorServerWithDocker.csproj" -c Release -o /app/publish
 => [final 2/2] COPY --from=publish /app/publish .
 => exporting to image
 => => exporting layers
 => => writing image sha256:4f2237c5ef4cd8038224f6892c7056a7412e58c41313023c5f62941f8b331396
 => => naming to docker.io/library/blazor-server-with-docker

As you can see each step in the dockerfile is executed until the final image is built and tagged.

Another great thing about Docker is it’s really efficient when building images. It caches each layer so future builds can be sped up. If you run the build command again you will see this in action.

[+] Building 0.3s (17/17) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 37B
 => [internal] load .dockerignore
 => => transferring context: 35B
 => [internal] load metadata for mcr.microsoft.com/dotnet/sdk:6.0
 => [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:6.0
 => [base 1/2] FROM mcr.microsoft.com/dotnet/aspnet:6.0@sha256:edb108fddbb69db67ad136e4ffc93d5d9ddcfd28fc7f269be5
 => [build 1/6] FROM mcr.microsoft.com/dotnet/sdk:6.0@sha256:90b566b141a8e2747f2805d9e4b2935ce09040a2926a1591c941
 => [internal] load build context
 => => transferring context: 2.11kB
 => CACHED [base 2/2] WORKDIR /app
 => CACHED [final 1/2] WORKDIR /app
 => CACHED [build 2/6] WORKDIR /src
 => CACHED [build 3/6] COPY [BlazorServerWithDocker.csproj, .]
 => CACHED [build 4/6] RUN dotnet restore "BlazorServerWithDocker.csproj"
 => CACHED [build 5/6] COPY . .
 => CACHED [build 6/6] RUN dotnet build "BlazorServerWithDocker.csproj" -c Release -o /app/build
 => CACHED [publish 1/1] RUN dotnet publish "BlazorServerWithDocker.csproj" -c Release -o /app/publish
 => CACHED [final 2/2] COPY --from=publish /app/publish .
 => exporting to image
 => => exporting layers
 => => writing image sha256:4f2237c5ef4cd8038224f6892c7056a7412e58c41313023c5f62941f8b331396
 => => naming to docker.io/library/blazor-server-with-docker

As nothing has changed Docker has used the cached version of all the images used during the first build, resulting in a near instant build.
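This caching is also why the dockerfile copies the csproj and runs dotnet restore as a separate step before copying the rest of the source: the restore layer is only invalidated when the project file itself changes. To stop local build output from needlessly invalidating the COPY . . layer, it’s also worth adding a .dockerignore file to the project root - a minimal sketch (typical entries for a .NET project, adjust to suit) might look like this:

```
bin/
obj/
.git/
```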

Starting a container

All that’s left now is to start an instance of our new image and make sure everything works. We can start a new container using the docker run command.

docker run -p 8080:80 blazor-server-with-docker

The -p switch tells Docker to map port 8080 on the host machine to port 80 on the container. Earlier, we used the EXPOSE keyword when creating the image to define which ports our container would listen on; this is where it comes into play. Also, having tagged our image has made things much simpler here - we can just use the tag name to specify the image rather than its ID.

If all goes well you should see something like this.

Open a browser and go to http://localhost:8080/ and you should see the app load.

Summary

In this post, we’ve looked at what Docker and containers are, the benefits they offer over more traditional virtual machines, and some of Docker’s core concepts. We then used the standard Blazor Server App template to build a Docker image by adding and configuring a dockerfile. Finally, we used that image to create a container which ran our Blazor Server application.

Next time we’ll look at how we can do the same thing with a Blazor WebAssembly application.

Investigating Drag and Drop with Blazor

Drag and drop has become a popular interface solution in modern applications. It’s common to find it in productivity tools; great examples of this are Trello, JIRA and Notion. As well as being an intuitive interface for the user, it can definitely add a bit of “eye-candy” to an application.

We’ve been thinking about incorporating drag and drop into some screens of the product my team are building at work. This has given me a great opportunity to see how drag and drop can be accomplished with Blazor.

This post is going to cover what I’ve found while I’ve been experimenting and a walk through of a simple prototype app I built to test things out. Below is a sneak peek of the finished prototype.

The code for this post is available on GitHub.

The drag and drop API - A brief introduction

The drag and drop API is part of the HTML5 spec and has been around for a long time now. The API defines a set of events and interfaces which can be used to build a drag and drop interface.

Events

Certain events will only fire once during a drag-and-drop interaction such as dragstart and dragend. However, others will fire repeatedly such as drag and dragover.

Interfaces

There are a few interfaces for drag and drop interactions but the key ones are the DragEvent interface and the DataTransfer interface.

The [DragEvent](https://developer.mozilla.org/en-US/docs/Web/API/DragEvent) interface is a DOM event which represents a drag and drop interaction. It contains a single property, dataTransfer, which is a DataTransfer object.

The [DataTransfer](https://developer.mozilla.org/en-US/docs/Web/API/DataTransfer) interface has several properties and methods available. It contains information about the data being transferred by the interaction as well as methods to add or remove data from it.

Properties

Methods

Drag and drop API in Blazor

As with most UI events, Blazor has C# representations for the drag and drop API. Below are the DragEventArgs and DataTransfer classes, which represent the DragEvent and DataTransfer interfaces I mentioned earlier.

/// <summary>
/// Supplies information about an drag event that is being raised.
/// </summary>
public class DragEventArgs : MouseEventArgs
{
    /// <summary>
    /// The data that underlies a drag-and-drop operation, known as the drag data store.
    /// See <see cref="DataTransfer"/>.
    /// </summary>
    public DataTransfer DataTransfer { get; set; }
}
/// <summary>
/// The <see cref="DataTransfer"/> object is used to hold the data that is being dragged during a drag and drop operation.
/// It may hold one or more <see cref="DataTransferItem"/>, each of one or more data types.
/// For more information about drag and drop, see HTML Drag and Drop API.
/// </summary>
public class DataTransfer
{
    /// <summary>
    /// Gets the type of drag-and-drop operation currently selected or sets the operation to a new type.
    /// The value must be none, copy, link or move.
    /// </summary>
    public string DropEffect { get; set; }

    /// <summary>
    /// Provides all of the types of operations that are possible.
    /// Must be one of none, copy, copyLink, copyMove, link, linkMove, move, all or uninitialized.
    /// </summary>
    public string EffectAllowed { get; set; }

    /// <summary>
    /// Contains a list of all the local files available on the data transfer.
    /// If the drag operation doesn't involve dragging files, this property is an empty list.
    /// </summary>
    public string[] Files { get; set; }

    /// <summary>
    /// Gives a <see cref="DataTransferItem"/> array which is a list of all of the drag data.
    /// </summary>
    public UIDataTransferItem[] Items { get; set; }

    /// <summary>
    /// An array of <see cref="string"/> giving the formats that were set in the dragstart event.
    /// </summary>
    public string[] Types { get; set; }
}

This was a great start to my investigation; however, it was short-lived. After a quick bit of experimenting, it seems that at this point in time there isn’t a way to populate these values and pass data around using them, at least from C#, which is my goal at the moment. What is available, though, are the various events of the drag and drop API; I just needed to come up with a way of tracking the data as it moved about.

Building the prototype - A todo list

As you’ve seen from the gif at the start of this post, the prototype is a highly original todo list. I set myself some goals I wanted to achieve from the exercise:

Overview

My solution ended up with three components, JobsContainer, JobList and Job which are used to manipulate a list of JobModels.

public class JobModel
{
    public int Id { get; set; }
    public JobStatuses Status { get; set; }
    public string Description { get; set; }
    public DateTime LastUpdated { get; set; }
}

public enum JobStatuses
{
    Todo,
    Started,
    Completed
}

The JobsContainer is responsible for the overall list of jobs, keeping track of the job being dragged, and raising an event whenever a job is updated.

The JobsList component represents a single job status. It creates a drop-zone where jobs can be dropped and renders any jobs which have its status.

The Job component renders a JobModel instance. If the instance is dragged then it lets the JobsContainer know so it can be tracked.

JobsContainer Component

<div class="jobs-container">
    <CascadingValue Value="this">
        @ChildContent
    </CascadingValue>
</div>

@code {
    [Parameter] public List<JobModel> Jobs { get; set; }
    [Parameter] public RenderFragment ChildContent { get; set; }
    [Parameter] public EventCallback<JobModel> OnStatusUpdated { get; set; }

    public JobModel Payload { get; set; }

    public async Task UpdateJobAsync(JobStatuses newStatus)
    {
        var job = Jobs.SingleOrDefault(x => x.Id == Payload.Id);

        if (job != null)
        {
            job.Status = newStatus;
            job.LastUpdated = DateTime.Now;
            await OnStatusUpdated.InvokeAsync(Payload);
        }
    }
}

Its main job (no pun intended!) is to coordinate updates to jobs as they’re moved between the various statuses. It takes a list of JobModel as a parameter and exposes an event which consuming components can handle to know when a job gets updated.

It passes itself as a CascadingValue to the various JobsList components, which are child components. This allows them access to the list of jobs as well as the UpdateJobAsync method, which is called when a job is dropped onto a new status.

JobsList Component

<div class="job-status">
    <h3>@ListStatus (@Jobs.Count())</h3>

    <ul class="dropzone @dropClass" 
        ondragover="event.preventDefault();"
        ondragstart="event.dataTransfer.setData('', event.target.id);"
        @ondrop="HandleDrop"
        @ondragenter="HandleDragEnter"
        @ondragleave="HandleDragLeave">

        @foreach (var job in Jobs)
        {
            <Job JobModel="job" />
        }

    </ul>
</div>

@code {

    [CascadingParameter] JobsContainer Container { get; set; }
    [Parameter] public JobStatuses ListStatus { get; set; }
    [Parameter] public JobStatuses[] AllowedStatuses { get; set; }

    List<JobModel> Jobs = new List<JobModel>();
    string dropClass = "";

    protected override void OnParametersSet()
    {
        Jobs.Clear();
        Jobs.AddRange(Container.Jobs.Where(x => x.Status == ListStatus));
    }

    private void HandleDragEnter()
    {
        if (ListStatus == Container.Payload.Status) return;

        if (AllowedStatuses != null && !AllowedStatuses.Contains(Container.Payload.Status))
        {
            dropClass = "no-drop";
        }
        else
        {
            dropClass = "can-drop";
        }
    }

    private void HandleDragLeave()
    {
        dropClass = "";
    }

    private async Task HandleDrop()
    {
        dropClass = "";

        if (AllowedStatuses != null && !AllowedStatuses.Contains(Container.Payload.Status)) return;

        await Container.UpdateJobAsync(ListStatus);
    }
}

There is quite a bit of code so let’s break it down.

[Parameter] public JobStatuses ListStatus { get; set; }
[Parameter] public JobStatuses[] AllowedStatuses { get; set; }

The component takes a ListStatus and an array of AllowedStatuses. The AllowedStatuses are used by the HandleDragEnter and HandleDrop methods to decide whether a job can be dropped.

The ListStatus is the job status that the component instance is responsible for. It’s used to fetch the jobs from the JobsContainer component which match that status so the component can render them in its list.

This is performed using the OnParametersSet lifecycle method, making sure to clear out the list each time to avoid duplicates.

protected override void OnParametersSet()
{
    Jobs.Clear();
    Jobs.AddRange(Container.Jobs.Where(x => x.Status == ListStatus));
}

I’m using an unordered list to display the jobs. The list is also a drop-zone for jobs, meaning you can drop other elements onto it. This is achieved by defining the ondragover event, but note there’s no @ symbol in front of it. This isn’t a typo.

<ul class="dropzone @dropClass" 
    ondragover="event.preventDefault();"
    ondragstart="event.dataTransfer.setData('', event.target.id);"
    @ondrop="HandleDrop"
    @ondragenter="HandleDragEnter"
    @ondragleave="HandleDragLeave">

    @foreach (var job in Jobs)
    {
        <Job JobModel="job" />
    }

</ul>

The event is just a normal JavaScript event, not a Blazor version, calling preventDefault. The reason for this is that by default you can’t drop elements onto each other; calling preventDefault stops this default behaviour and allows the drop to occur.

I’ve also defined the ondragstart JavaScript event as well. It’s there to satisfy Firefox’s requirement for enabling drag and drop and doesn’t do anything else.

The rest of the events are all Blazor versions. The ondragenter and ondragleave events are both used to set the CSS class for the drop-zone.

private void HandleDragEnter()
{
    if (ListStatus == Container.Payload.Status) return;

    if (AllowedStatuses != null && !AllowedStatuses.Contains(Container.Payload.Status))
    {
        dropClass = "no-drop";
    }
    else
    {
        dropClass = "can-drop";
    }
}

private void HandleDragLeave()
{
    dropClass = "";
}

HandleDragEnter manages the border of the drop-zone to give the user visual feedback if a job can be dropped.

If the job being dragged has the same status as the drop-zone it’s over then nothing happens. If a job is dragged over the drop-zone, and it’s a valid target, then a green border is added via the can-drop CSS class. If it’s not a valid target then a red border is added via the no-drop CSS class.

The HandleDragLeave method just resets the class once the job has been dragged away.

private async Task HandleDrop()
{
    dropClass = "";

    if (AllowedStatuses != null && !AllowedStatuses.Contains(Container.Payload.Status)) return;

    await Container.UpdateJobAsync(ListStatus);
}

Finally, HandleDrop is responsible for making sure a job is allowed to be dropped, and if so, updating its status via the JobsContainer.
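The can-drop and no-drop classes referenced above aren’t shown in the component code. A sketch of the kind of styling they might apply (the class names come from the component; the actual styles are assumptions):

```css
.dropzone { border: 2px dashed transparent; }
.dropzone.can-drop { border-color: green; }
.dropzone.no-drop { border-color: red; }
```

Keeping a transparent border on the base dropzone class stops the layout shifting when the coloured border appears.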

Job Component

<li class="draggable" draggable="true" title="@JobModel.Description" @ondragstart="@(() => HandleDragStart(JobModel))">
    <p class="description">@JobModel.Description</p>
    <p class="last-updated"><small>Last Updated</small> @JobModel.LastUpdated.ToString("HH:mm.ss tt")</p>
</li>

@code {
    [CascadingParameter] JobsContainer Container { get; set; }
    [Parameter] public JobModel JobModel { get; set; }

    private void HandleDragStart(JobModel selectedJob)
    {
        Container.Payload = selectedJob;
    }
}

It’s responsible for displaying a JobModel and for making it draggable. Elements are made draggable by adding the draggable="true" attribute. The component is also responsible for handling the ondragstart event.

When ondragstart fires, the component assigns the job to the JobsContainer’s Payload property. This keeps track of the job being dragged, which is used when handling drop events, as we saw in the JobsList component.

Usage

Now we’ve gone through each component let’s see what it looks like all together.

<JobsContainer Jobs="Jobs" OnStatusUpdated="HandleStatusUpdated">
    <JobList ListStatus="JobStatuses.Todo" AllowedStatuses="@(new JobStatuses[] { JobStatuses.Started})" />
    <JobList ListStatus="JobStatuses.Started" AllowedStatuses="@(new JobStatuses[] { JobStatuses.Todo})" />
    <JobList ListStatus="JobStatuses.Completed" AllowedStatuses="@(new JobStatuses[] { JobStatuses.Started })" />
</JobsContainer>

@code {
    List<JobModel> Jobs = new List<JobModel>();

    protected override void OnInitialized()
    {
        Jobs.Add(new JobModel { Id = 1, Description = "Mow the lawn", Status = JobStatuses.Todo, LastUpdated = DateTime.Now });
        Jobs.Add(new JobModel { Id = 2, Description = "Go to the gym", Status = JobStatuses.Todo, LastUpdated = DateTime.Now });
        Jobs.Add(new JobModel { Id = 3, Description = "Call Ollie", Status = JobStatuses.Todo, LastUpdated = DateTime.Now });
        Jobs.Add(new JobModel { Id = 4, Description = "Fix bike tyre", Status = JobStatuses.Todo, LastUpdated = DateTime.Now });
        Jobs.Add(new JobModel { Id = 5, Description = "Finish blog post", Status = JobStatuses.Todo, LastUpdated = DateTime.Now });
    }

    void HandleStatusUpdated(JobModel updatedJob)
    {
        Console.WriteLine(updatedJob.Description);
    }
}

Looking back at the goals I set for this exercise:

I feel pretty happy that each one of those has been achieved with the above solution. Please keep in mind this was just a fact-finding exercise and the code above is just a prototype. There are probably quite a few bits which could use a tweak or a refactor before actually using it.

One thing which I thought about after I started was the ability to re-order using dragging and dropping. But that isn’t something I could make work in a way I would’ve been happy with. In traditional JavaScript applications, this is achieved by manipulating the DOM directly. This is something which isn’t possible right now with Blazor. I have a few ideas about ways to achieve this using C# but I’m leaving them for another time.

Summary

I had a lot of fun experimenting with drag and drop with Blazor. As usual, I found that getting something up and working was pretty quick and easy. I would definitely want to iterate on this code a bit before I started using it in a real app but I hope it will give people a good starting point.

In this post, I’ve given an overview of the HTML drag and drop API as well as showing what parts are available to us in Blazor. I then walked through a prototype for a drag and drop interface using a todo list as the example.

I’m now a Microsoft MVP!

It’s been a bit of a crazy week, on Thursday 1st August a very special e-mail appeared in my inbox. It was from Microsoft telling me I’d been awarded an MVP in developer technologies!

If I’m honest, it still hasn’t really sunk in yet and I find myself re-reading it every now and then just to prove to myself it wasn’t a dream. I’m deeply honoured and humbled to be part of such a talented group of community leaders and experts. I’d also be lying if I didn’t say imposter syndrome is running rife at the moment! 😃 But I hope to live up to the award by continuing to contribute and provide value to the community.

I’d like to say a special thank you to both Ed Charbeneau and Tom Morgan. Ed nominated me for the award and I really can’t thank him enough for that. While Tom has given me some fantastic advice and guidance along the way.

Most importantly of all, I would like to say a huge thank you to the Blazor community. I feel very lucky to be involved with such an inspiring and active community that cares so deeply about the technology.

Becoming an MVP is a really exciting step in my career and I can’t wait to see what happens next!

Configuring Policy-based Authorization with Blazor

In part 3 of this series, I showed how to add role based authorization to a client-side Blazor application. In this post, I’m going to show you how to configure the newer, and recommended, policy-based authorization with Blazor.

All the code for this post is available on GitHub.

Introduction to Policy-based Authorization

Introduced with ASP.NET Core, policy-based authorization allows a much more expressive way of creating authorization rules. The policy model is made up of three concepts: policies, requirements, and handlers.

Policies are most commonly registered at application startup, in the Startup class’s ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthorization(config =>
    {
        config.AddPolicy("IsDeveloper", policy => policy.RequireClaim("IsDeveloper", "true"));
    });
}

In the example above, the policy IsDeveloper requires that a user have the claim IsDeveloper with a value of true.

Just as with roles, you can apply policies via the Authorize attribute.

[Route("api/[controller]")]
[ApiController]
public class SystemController 
{
    [Authorize(Policy = "IsDeveloper")]
    public IActionResult LoadDebugInfo()
    {
        // ...
    }
}

Blazor’s directives and components also work with policies.

@page "/debug"
@attribute [Authorize(Policy = "IsDeveloper")]
<AuthorizeView Policy="IsDeveloper">
    <p>You can only see this if you satisfy the IsDeveloper policy.</p>
</AuthorizeView>

Easier Management

The big advantage of policy-based authorization is the improvement to managing authorization within an application. With role-based auth, if we had a couple of roles which were allowed access to protected resources, let’s say admin and moderator, we would need to go to every area they were permitted to access and add an Authorize attribute.

[Authorize(Roles = "admin,moderator")]

This doesn’t seem too bad initially, but what if a new requirement comes in and a third role, superuser, needs the same access? We now need to go round every area and update all of the roles. With policy-based auth we can avoid this.

We can define a policy in a single place and then apply it once to all the resources which require it. Then when extra roles need to be added, we can just update the policy from the central point without the need to update the individual resources.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthorization(config =>
    {
        config.AddPolicy("IsAdmin", policy => policy.RequireRole("admin", "moderator", "superuser"));
    });
}

[Authorize(Policy = "IsAdmin")]

Building Custom Requirements

Policies are very flexible: you can build requirements based on roles or claims, or you can even create your own custom requirements. Let’s look at how we can create a custom requirement.

Normally custom requirements are used when you have complex logic. As mentioned above, we will need to define a requirement and a handler which we then tie together using a policy.

As an example, let’s create a requirement that checks if a user’s email address is using a company domain. We need to start by creating a requirement. This class needs to implement the IAuthorizationRequirement interface, which is just an empty marker interface.

public class CompanyDomainRequirement : IAuthorizationRequirement
{
    public string CompanyDomain { get; }

    public CompanyDomainRequirement(string companyDomain)
    {
        CompanyDomain = companyDomain;
    }
}

Next we need to create a handler for our requirement. This needs to inherit from AuthorizationHandler<T> where T is the requirement to be handled.

public class CompanyDomainHandler : AuthorizationHandler<CompanyDomainRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, CompanyDomainRequirement requirement)
    {
        if (!context.User.HasClaim(c => c.Type == ClaimTypes.Email))
        {
            return Task.CompletedTask;
        }
        
        var emailAddress = context.User.FindFirst(c => c.Type == ClaimTypes.Email).Value;
        
        if (emailAddress.EndsWith(requirement.CompanyDomain))
        {
            // Succeed returns void, so it can't be returned directly
            context.Succeed(requirement);
        }
        
        return Task.CompletedTask;
    }
}

In the code above, we check if an email claim is present. If it is, then we check whether it ends with the domain specified in the requirement. If it does, we mark the requirement as succeeded; otherwise we just return.

We just need to wire up our requirement with a policy and register the CompanyDomainHandler with the dependency injection container.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthorization(config =>
    {
        config.AddPolicy("IsCompanyUser", policy =>
            policy.Requirements.Add(new CompanyDomainRequirement("newco.com")));
    });

    services.AddSingleton<IAuthorizationHandler, CompanyDomainHandler>();
}

For more in-depth information on custom requirements I recommend checking out the official docs.

Using policies with Blazor

Now we have an understanding of what policies are, let’s look at how we can use them in an application.

We’re going to swap the client-side Blazor application from part 3 over to policy-based authorization. As part of doing this we’ll see another advantage of policy-based authorization: the ability to define policies in a shared project and reference them on both the server and the client.

Creating shared policies

We’re going to start by creating the policies in the shared project. We need to install the Microsoft.AspNetCore.Authorization package from NuGet in order to do this.

Once that’s installed create a new class called Policies with the following code.

public static class Policies
{
    public const string IsAdmin = "IsAdmin";
    public const string IsUser = "IsUser";

    public static AuthorizationPolicy IsAdminPolicy()
    {
        return new AuthorizationPolicyBuilder().RequireAuthenticatedUser()
                                               .RequireRole("Admin")
                                               .Build();
    }

    public static AuthorizationPolicy IsUserPolicy()
    {
        return new AuthorizationPolicyBuilder().RequireAuthenticatedUser()
                                               .RequireRole("User")
                                               .Build();
    }
}

We start by defining a couple of constants, IsAdmin and IsUser, which we’ll use in a bit when registering the policies. Then there are the two policies themselves, IsAdminPolicy and IsUserPolicy. Here we’re using the AuthorizationPolicyBuilder to define each policy; both require the user to be authenticated and in either the Admin role or the User role, depending on the policy.

Configuring the server

Now we have defined our policies we need to tell our server application to use them. We’ll start by registering the policies in ConfigureServices in the Startup class. Add the following code under the existing call to AddAuthentication.

services.AddAuthorization(config =>
{
    config.AddPolicy(Policies.IsAdmin, Policies.IsAdminPolicy());
    config.AddPolicy(Policies.IsUser, Policies.IsUserPolicy());
});

The code is pretty self-explanatory: we’re registering each policy, using the constants we defined in the Policies class to declare their names, which saves using magic strings.

If we move over to the SampleDataController we can update the Authorize attribute to use the new IsAdmin policy instead of the old role.

[Authorize(Policy = Policies.IsAdmin)]
[Route("api/[controller]")]
public class SampleDataController : Controller

Again, we can use our name constant to avoid the magic strings.

Configuring the client

Our server is now using the new policies we defined, all that’s left to do is to swap over our Blazor client to use them as well.

As with the server, we’ll start by registering the policies in ConfigureServices in the Startup class. We already have a call to AddAuthorizationCore, so we just need to update it.

services.AddAuthorizationCore(config =>
{
    config.AddPolicy(Policies.IsAdmin, Policies.IsAdminPolicy());
    config.AddPolicy(Policies.IsUser, Policies.IsUserPolicy());
});

In Index.razor, update the AuthorizeView component to use policies - still avoiding the magic strings.

<AuthorizeView Policy="@Policies.IsUser">
    <p>You can only see this if you satisfy the IsUser policy.</p>
</AuthorizeView>

<AuthorizeView Policy="@Policies.IsAdmin">
    <p>You can only see this if you satisfy the IsAdmin policy.</p>
</AuthorizeView>

Finally, update FetchData.razor’s Authorize attribute.

@attribute [Authorize(Policy = Policies.IsAdmin)]

That’s it! Our application is now moved over to policy-based authorization. We now have a more flexible authorization system which can use roles, claims, custom policies or any mixture of the above.

Server-side Blazor

I’ve not specifically talked about server-side Blazor for the simple reason that what we’ve done above should translate into server-side Blazor without any issues. However, I have included a server-side example in the code sample which accompanies this post on GitHub.

Note: The server-side sample currently has a build failure caused by this issue.

Summary

In this post, we’ve looked at policy-based authorization in ASP.NET Core and Blazor. We’ve covered some of the advantages policy-based authorization has over the older role-based approach. Then we migrated the application from part 3 from role-based auth to policy-based auth.

Configuring Role-based Authorization with client-side Blazor

In parts 1 and 2 of this series I’ve shown how to create both server-side and client-side Blazor apps with authentication. In this post, I’m going to show you how to configure role-based authorization in a client-side Blazor application.

All the code for this post is available on GitHub.

What is role-based authorization?

When it comes to authorization in ASP.NET Core we have two options: role-based and policy-based (there’s also claims-based, but that’s just a special type of policy-based).

Role-based authorization has been around for a while now and was originally introduced in ASP.NET (pre-Core). It’s a declarative way to restrict access to resources.

Developers can specify the name of the particular role a user must be a member of in order to access a certain resource. This is most commonly done using the [Authorize] attribute by specifying a role or list of roles, e.g. [Authorize(Roles = "Admin")]. Users can be a member of a single role or of multiple roles.

How roles are created and managed depends on the backing store used. As we’ve been using ASP.NET Core Identity in the series so far, we’ll continue to use it to manage and store our roles.

We’ll be building on top of the application we built in part 2 of this series.

Setting up Roles with ASP.NET Core Identity

We need to add the role specific services to our application. To do this, we need to update the code in the ConfigureServices method of the Startup class.

services.AddDefaultIdentity<IdentityUser>()
        .AddRoles<IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>();

The IdentityRole type is the default role type provided by ASP.NET Core Identity. But you can provide a different type if it doesn’t fit your requirements.
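For example, a custom role type carrying extra data could look like this. This is a hypothetical sketch; ApplicationRole and Description are illustrative names, not part of the sample app:

```csharp
// Hypothetical custom role type with an extra property
public class ApplicationRole : IdentityRole
{
    public string Description { get; set; }
}
```

It would then be registered via .AddRoles<ApplicationRole>(), and the ApplicationDbContext would need to derive from IdentityDbContext<IdentityUser, ApplicationRole, string> so EF Core knows about the new role type.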

Next, we’re going to seed our database with some roles: a User role and an Admin role. To do this we’re going to override the OnModelCreating method of the ApplicationDbContext.

public class ApplicationDbContext : IdentityDbContext
{
    public ApplicationDbContext(DbContextOptions options) : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<IdentityRole>().HasData(new IdentityRole { Name = "User", NormalizedName = "USER", Id = Guid.NewGuid().ToString(), ConcurrencyStamp = Guid.NewGuid().ToString() });
        builder.Entity<IdentityRole>().HasData(new IdentityRole { Name = "Admin", NormalizedName = "ADMIN", Id = Guid.NewGuid().ToString(), ConcurrencyStamp = Guid.NewGuid().ToString() });
    }
}

Once this is done we need to generate a migration and then apply it to the database. (One caveat: because Guid.NewGuid() is called inside HasData, new IDs are produced every time the model is built, which can cause EF Core to generate changed seed data in later migrations; hard-coding the IDs avoids this.)

Add-Migration SeedRoles
Update-Database
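Those are Package Manager Console commands; the equivalent .NET CLI commands are:

```shell
dotnet ef migrations add SeedRoles
dotnet ef database update
```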

Adding users to roles

Now we have some roles available, we’re going to update the action on the Accounts controller which creates new users.

We’re going to add all new users to the User role, except if the new user’s email starts with admin. If it does, then we’re going to add them to both the User and Admin roles.

[HttpPost]
public async Task<IActionResult> Post([FromBody]RegisterModel model)
{
    var newUser = new IdentityUser { UserName = model.Email, Email = model.Email };

    var result = await _userManager.CreateAsync(newUser, model.Password);

    if (!result.Succeeded)
    {
        var errors = result.Errors.Select(x => x.Description);

        return BadRequest(new RegisterResult { Successful = false, Errors = errors });
    }

    // Add all new users to the User role
    await _userManager.AddToRoleAsync(newUser, "User");
    
    // Add new users whose email starts with 'admin' to the Admin role
    if (newUser.Email.StartsWith("admin"))
    {
        await _userManager.AddToRoleAsync(newUser, "Admin");
    }

    return Ok(new RegisterResult { Successful = true });
}

We’re now assigning users to roles at signup, but we need to pass this information down to Blazor. To do this, we need to update the claims we’re putting into our JSON Web Token.

Adding roles as claims to the JWT

In the Login controller we’re going to update the Login method. Let’s remove the current claims-generation code.

var claims = new[]
{
    new Claim(ClaimTypes.Name, login.Email)
};

And replace it with the following.

var user = await _signInManager.UserManager.FindByEmailAsync(login.Email);
var roles = await _signInManager.UserManager.GetRolesAsync(user);

var claims = new List<Claim>();

claims.Add(new Claim(ClaimTypes.Name, login.Email));

foreach (var role in roles)
{
    claims.Add(new Claim(ClaimTypes.Role, role));
}

We start off by getting the current user via the UserManager, which we then use to get their roles. The original Name claim is added with the user’s email, as before. If any roles are present, we loop over them and add each one as a Role claim.

It’s important to understand a quirk about role claims at this point. You may expect that if a user is in two roles then two role claims will be added to the JWT.

http://schemas.microsoft.com/ws/2008/06/identity/claims/role - "User"
http://schemas.microsoft.com/ws/2008/06/identity/claims/role - "Admin"

But that’s not what happens. Instead, the two role claims get combined into a single claim whose value is an array.

http://schemas.microsoft.com/ws/2008/06/identity/claims/role - ["User", "Admin"]

This is important because on the client we’re going to have to work out whether we’re dealing with an array or a single value. If it’s an array, then we’ll need to do some extra work to get the individual roles out.
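For illustration, the decoded payload of a token for a user in both roles might look something like this (the email and exp values are assumed):

```json
{
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "admin@example.com",
  "http://schemas.microsoft.com/ws/2008/06/identity/claims/role": ["User", "Admin"],
  "exp": 1565000000
}
```

A single-role user would instead have a plain string value for the role claim, which is exactly the two cases the client-side parsing code has to handle.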

Working with roles in client-side Blazor

We’re looking pretty good so far. We have new users being added to roles and once they have signed in we are returning those roles via the JWT. But how can we use roles inside of Blazor?

At this point in time there isn’t anything official to help us with roles, so we’ve got to deal with it manually.

In part 2 of the series we added the ApiAuthenticationStateProvider class, which has a method called ParseClaimsFromJwt that looks like this.

private IEnumerable<Claim> ParseClaimsFromJwt(string jwt)
{
    var claims = new List<Claim>();
    var payload = jwt.Split('.')[1];
    var jsonBytes = ParseBase64WithoutPadding(payload);
    var keyValuePairs = JsonSerializer.Deserialize<Dictionary<string, object>>(jsonBytes);

    keyValuePairs.TryGetValue(ClaimTypes.Role, out object roles);

    if (roles != null)
    {
        if (roles.ToString().Trim().StartsWith("["))
        {
            var parsedRoles = JsonSerializer.Deserialize<string[]>(roles.ToString());

            foreach (var parsedRole in parsedRoles)
            {
                claims.Add(new Claim(ClaimTypes.Role, parsedRole));
            }
        }
        else
        {
            claims.Add(new Claim(ClaimTypes.Role, roles.ToString()));
        }

        keyValuePairs.Remove(ClaimTypes.Role);
    }

    claims.AddRange(keyValuePairs.Select(kvp => new Claim(kvp.Key, kvp.Value.ToString())));

    return claims;
}

private byte[] ParseBase64WithoutPadding(string base64)
{
    switch (base64.Length % 4)
    {
        case 2: base64 += "=="; break;
        case 3: base64 += "="; break;
    }
    return Convert.FromBase64String(base64);
}

As we saw in part 2 it takes a JWT, decodes it, extracts the claims and returns them. But what we didn’t cover was that I modified it to handle roles as a special case.

If a role claim is present, then we check if the first character is a [, indicating it’s a JSON array. If it is, roles is parsed again to extract the individual role names. We then loop through the role names and add each one as a claim. If roles is not an array, then it’s added as a single role claim.

I admit this is not the prettiest code and I’m sure it could be made much better but it serves our purpose for now.

We need to update the MarkUserAsAuthenticated method to call ParseClaimsFromJwt.

public void MarkUserAsAuthenticated(string token)
{
    var authenticatedUser = new ClaimsPrincipal(new ClaimsIdentity(ParseClaimsFromJwt(token), "jwt"));
    var authState = Task.FromResult(new AuthenticationState(authenticatedUser));
    
    NotifyAuthenticationStateChanged(authState);
}

Finally, we need to update the Login method on the AuthService to pass the token rather than the email when calling MarkUserAsAuthenticated.

public async Task<LoginResult> Login(LoginModel loginModel)
{
    var result = await _httpClient.PostJsonAsync<LoginResult>("api/Login", loginModel);

    if (result.Successful)
    {
        await _localStorage.SetItemAsync("authToken", result.Token);
        ((ApiAuthenticationStateProvider)_authenticationStateProvider).MarkUserAsAuthenticated(result.Token);
        _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("bearer", result.Token);

        return result;
    }

    return result;
}

We should now have the ability to apply role based authorization to our app. Let’s start at the API.

Applying role-based authorization to the API

Let’s set the WeatherForecasts action on the SampleDataController to only be accessible to authenticated users in the Admin role. We do this by using the Authorize attribute and specifying the roles that are allowed to access it.

[Authorize(Roles = "Admin")]
[HttpGet("[action]")]
public IEnumerable<WeatherForecast> WeatherForecasts()
{
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    });
}

If you create a new user in the Admin role and go to the Fetch Data page in the Blazor app you should still see everything load as expected.

But if you create a normal user and do the same, you should see the page stuck with a Loading… message.

Just for reference, as well as applying the Authorize attribute to actions, you can also apply it to a controller. When applied at controller level, all actions on that controller are protected.
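For example, a sketch of a hypothetical controller protected at the class level might look like this (the controller name and actions here are made up for illustration):

```csharp
// Hypothetical example: applying Authorize at the controller level
// protects every action on the controller.
[Authorize(Roles = "Admin")]
[Route("api/[controller]")]
[ApiController]
public class AdminReportsController : ControllerBase
{
    // Both actions require an authenticated user in the Admin role.
    [HttpGet]
    public IActionResult GetReports() => Ok();

    [HttpGet("{id}")]
    public IActionResult GetReport(int id) => Ok();
}
```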

Applying role-based authorization in Blazor

Blazor can also use the Authorize attribute to protect pages. This is achieved by using the @attribute directive to apply the [Authorize] attribute. You can also restrict access to parts of a page using the AuthorizeView component.

Warning - Any client-side checks can be bypassed as the user can potentially modify any of the code. This is true for any client-side technology, so make sure you always have checks on your API as well.

As the forecast data is only available to Admin users let’s restrict access to that page using the Authorize attribute.

@page "/fetchdata"
@attribute [Authorize(Roles = "Admin")]

Now try logging into that page using your admin user. Everything should continue to work. Then try logging in as the standard user; you should now see a Not authorized message.

Let’s test out the AuthorizeView as well. On the home page (index.razor) add the following code.

<AuthorizeView Roles="User">
    <p>You can only see this if you're in the User role.</p>
</AuthorizeView>

<AuthorizeView Roles="Admin">
    <p>You can only see this if you're in the Admin role.</p>
</AuthorizeView>

Again, log in with your admin and user accounts. When you’re logged in as the admin user you should see both messages, as you’re in both roles.

When you’re logged in as a standard user you should only see the first message.

Summary

In this post, we’ve looked at what role-based authorization is and how to use ASP.NET Core Identity to set up and manage roles. We then moved on to how to pass roles as claims from the API to the client using JSON Web Tokens. Then we worked through processing those role claims in Blazor, and finally implemented some role-based authorization checks on both the API and Blazor.

I just want to reiterate that you cannot just rely on client-side authentication or authorization, the client can never be trusted. You must always perform authentication and authorization checks on the server as well.

Authentication with client-side Blazor using WebAPI and ASP.NET Core Identity

In part 1 of this series, I showed how to create a server-side Blazor application with authentication enabled.

In this post, I’m going to show how to setup authentication with client-side Blazor using WebAPI and ASP.NET Core Identity.

All the code for this post is available on GitHub.

If you are not familiar with ASP.NET Core Identity then you can checkout the Microsoft Docs site for full and in-depth information.

Getting Setup: Creating the solution

Start by creating a new Blazor WebAssembly App (remember to tick the ASP.NET Core hosted checkbox). This template will create a Blazor application which runs in the client’s browser on WebAssembly, hosted by an ASP.NET Core WebAPI. Once the solution has been created, we’re going to start making some changes to the server project.

Configuring WebAPI

We’re going to configure the API first, but before we begin let’s get some NuGet packages installed.

    <PackageReference Include="Microsoft.AspNetCore.Blazor.Server" Version="3.1.0-preview3.19555.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.NewtonsoftJson" Version="3.1.0-preview3.19555.2" />

    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="3.1.0-preview3.19555.2" />
    <PackageReference Include="Microsoft.AspNetCore.Identity.UI" Version="3.1.0-preview3.19555.2" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="3.1.0-preview3.19555.2" />
    <PackageReference Include="Microsoft.AspNetCore.Identity.EntityFrameworkCore" Version="3.1.0-preview3.19555.2" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="3.1.0-preview3.19554.8" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.1.0-preview3.19554.8">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="3.1.0-preview3.19553.2" />
    <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="3.1.0-preview3.19558.8" />

You can either add the above packages to your server project’s .csproj file, or install them via the command line or NuGet package manager.

Setting up the Identity database: Connection string

Before we can set anything up database-wise, we need a connection string. This is usually kept in the appsettings.json file, but the Blazor hosted template doesn’t supply one, so we are going to have to add it.

Right click on the server project and select Add > New Item. Then select App Settings File from the list.

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\MSSQLLocalDB;Database=AuthenticationWithClientSideBlazor;Trusted_Connection=True;MultipleActiveResultSets=true"
  }
}

The file comes with a connection string already in place; feel free to point this wherever you need to. I’m just going to add a database name and leave the rest as default.

Setting up the Identity database: DbContext

In the root of the server project create a folder called Data then add a new class called ApplicationDbContext with the following code.

public class ApplicationDbContext : IdentityDbContext
{
    public ApplicationDbContext(DbContextOptions options) : base(options)
    {
    }
}

Because we are using Identity, which needs to store information in a database, we’re not inheriting from DbContext but instead from IdentityDbContext. The IdentityDbContext base class contains all the configuration EF needs to manage the Identity database tables.

Setting up the Identity database: Registering services

In the Startup class we need to add a constructor which takes an IConfiguration and a property to store it. IConfiguration allows us to access the settings in the appsettings.json file, such as the connection string.

public IConfiguration Configuration { get; }

public Startup(IConfiguration configuration)
{
    Configuration = configuration;
}

Now we need to add the following lines to the top of the ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<ApplicationDbContext>(options =>
                    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    services.AddDefaultIdentity<IdentityUser>()
        .AddEntityFrameworkStores<ApplicationDbContext>();

    // other code removed for brevity

}

Essentially, these two lines add the ApplicationDbContext to the services collection, then register the various services for ASP.NET Core Identity and tell it to use Entity Framework as a backing store via the ApplicationDbContext.

Setting up the Identity database: Creating the database

We’re now in a position to create the initial migration for the database. In the package manager console run the following command.

Add-Migration CreateIdentitySchema -o Data/Migrations

Once the command has run you should see the migrations file in Data > Migrations. Run Update-Database in the console to apply the migration to your database.

If you have any issues with running the migration command, make sure that the server project is selected as the default project in the package manager console.
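If you’re not using the Visual Studio package manager console, the equivalent .NET CLI commands should work as well (this assumes the dotnet-ef tool is available):

```shell
# Run from the server project directory.
# Requires the EF Core CLI tool: dotnet tool install --global dotnet-ef
dotnet ef migrations add CreateIdentitySchema -o Data/Migrations
dotnet ef database update
```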

Enabling Authentication: Registering services

The next step is to enable authentication in the API. Again, in ConfigureServices add the following code after the code we added in the previous section.

public void ConfigureServices(IServiceCollection services)
{

    // other code removed for brevity
    
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateIssuer = true,
                ValidateAudience = true,
                ValidateLifetime = true,
                ValidateIssuerSigningKey = true,
                ValidIssuer = Configuration["JwtIssuer"],
                ValidAudience = Configuration["JwtAudience"],
                IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["JwtSecurityKey"]))
            };
        });
        
     // other code removed for brevity   

}

The code above adds the services required for authentication to the service container, then adds a handler for JSON Web Tokens (JWT) and configures how received JWTs should be validated. Feel free to tweak these settings to your requirements.

Enabling Authentication: App settings

There are a few settings which are being loaded from the appsettings.json file.

We haven’t actually added them to the appsettings file yet, so let’s do that now. While we’re there, we’ll also add a setting to control how long the tokens last, which we’ll use in a bit.

"JwtSecurityKey": "RANDOM_KEY_MUST_NOT_BE_SHARED",
"JwtIssuer": "https://localhost",
"JwtAudience": "https://localhost",
"JwtExpiryInDays": 1,

It’s really important that the JwtSecurityKey is kept secret as this is what is used to sign the tokens produced by the API. If this is compromised then your app would no longer be secure.
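During local development, one way to keep the key out of source control is the Secret Manager tool. As a sketch (the key value here is just a placeholder):

```shell
# Run from the server project directory.
dotnet user-secrets init
dotnet user-secrets set "JwtSecurityKey" "REPLACE_WITH_A_LONG_RANDOM_VALUE"
```

Values set this way are read through the same IConfiguration mechanism, so no code changes are needed.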

As I’m running everything locally, I have my Issuer and Audience set to localhost. But if you’re using this in a real app, you would set the Issuer to the domain the API is running on and the Audience to the domain the client app is running on.

Enabling Authentication: Adding middleware

Finally, in the Configure method we need to add the necessary middleware to the pipeline. This will enable the authentication and authorization features in our API. Add them just above the app.UseEndpoints middleware.

app.UseAuthentication();
app.UseAuthorization();

That should be everything we need to do in the Startup class. Authentication is now enabled for the API.

You can test everything is working by adding an [Authorize] attribute to the WeatherForecasts action on the SampleDataController. Then start up the app and navigate to the Fetch Data page; no data should load and you should see a 401 error in the console.
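The change is just the attribute on the existing action; as a sketch, assuming the template’s standard action body:

```csharp
// Temporarily protect the action to test authentication is working.
// Without a valid token, requests will return 401 Unauthorized.
[Authorize]
[HttpGet("[action]")]
public IEnumerable<WeatherForecast> WeatherForecasts()
{
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    });
}
```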

Adding the account controller

In order for people to log in to our app, they need to be able to sign up. We’re going to add an accounts controller which will be responsible for creating new accounts.

[Route("api/[controller]")]
[ApiController]
public class AccountsController : ControllerBase
{
    private static UserModel LoggedOutUser = new UserModel { IsAuthenticated = false };

    private readonly UserManager<IdentityUser> _userManager;

    public AccountsController(UserManager<IdentityUser> userManager)
    {
        _userManager = userManager;
    }

    [HttpPost]
    public async Task<IActionResult> Post([FromBody]RegisterModel model)
    {
        var newUser = new IdentityUser { UserName = model.Email, Email = model.Email };

        var result = await _userManager.CreateAsync(newUser, model.Password);

        if (!result.Succeeded)
        {
            var errors = result.Errors.Select(x => x.Description);

            return Ok(new RegisterResult { Successful = false, Errors = errors });

        }

        return Ok(new RegisterResult { Successful = true });
    }
}

The Post action uses the ASP.NET Core Identity UserManager to create a new user in the system from a RegisterModel.

We haven’t added the register model yet so we can do that now, put this in the shared project as this will be used by our Blazor app in a bit.

public class RegisterModel
{
    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [StringLength(100, ErrorMessage = "The {0} must be at least {2} and at max {1} characters long.", MinimumLength = 6)]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }

    [DataType(DataType.Password)]
    [Display(Name = "Confirm password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }
}

If all goes well then a successful RegisterResult is returned, otherwise a failed RegisterResult is returned. Again, we need to create the RegisterResult and again it needs to go in the shared project.

public class RegisterResult
{
    public bool Successful { get; set; }
    public IEnumerable<string> Errors { get; set; }
}

Adding the login controller

Now that we have a way for users to sign up, we need a way for them to log in.

[Route("api/[controller]")]
[ApiController]
public class LoginController : ControllerBase
{
    private readonly IConfiguration _configuration;
    private readonly SignInManager<IdentityUser> _signInManager;

    public LoginController(IConfiguration configuration,
                           SignInManager<IdentityUser> signInManager)
    {
        _configuration = configuration;
        _signInManager = signInManager;
    }

    [HttpPost]
    public async Task<IActionResult> Login([FromBody] LoginModel login)
    {
        var result = await _signInManager.PasswordSignInAsync(login.Email, login.Password, false, false);

        if (!result.Succeeded) return BadRequest(new LoginResult { Successful = false, Error = "Username and password are invalid." });

        var claims = new[]
        {
            new Claim(ClaimTypes.Name, login.Email)
        };

        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_configuration["JwtSecurityKey"]));
        var creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);
        var expiry = DateTime.Now.AddDays(Convert.ToInt32(_configuration["JwtExpiryInDays"]));

        var token = new JwtSecurityToken(
            _configuration["JwtIssuer"],
            _configuration["JwtAudience"],
            claims,
            expires: expiry,
            signingCredentials: creds
        );

        return Ok(new LoginResult { Successful = true, Token = new JwtSecurityTokenHandler().WriteToken(token) });
    }
}

The sole job of the login controller is to verify the username and password in the LoginModel using the ASP.NET Core Identity SignInManager. If they’re correct, then a new JSON Web Token is generated and passed back to the client in a LoginResult.

Just like before we need to add the LoginModel and LoginResult to the shared project.

public class LoginModel
{
    [Required]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    public bool RememberMe { get; set; }
}
public class LoginResult
{
    public bool Successful { get; set; }
    public string Error { get; set; }
    public string Token { get; set; }
}

That’s everything we need on our API. We have now configured it to use authentication via JSON Web Tokens, as well as set up the controllers our Blazor client-side app needs to register new users and to log in.

Configuring client-side Blazor

Let’s turn our attention to Blazor. The first thing we’re going to do is install Blazored.LocalStorage; we will need this later to persist the auth token from the API when we log in.
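The package can be installed from NuGet, for example via the .NET CLI (the version you get will depend on when you run this):

```shell
# Run from the Blazor client project directory.
dotnet add package Blazored.LocalStorage
```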

We also need to update the App component to wrap the Router in a CascadingAuthenticationState component and to use the AuthorizeRouteView component instead of the RouteView component. Without the CascadingAuthenticationState wrapper, AuthorizeRouteView has no authentication state to work with.

<CascadingAuthenticationState>
    <Router AppAssembly="@typeof(Program).Assembly">
        <Found Context="routeData">
            <AuthorizeRouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
        </Found>
        <NotFound>
            <LayoutView Layout="@typeof(MainLayout)">
                <p>Sorry, there's nothing at this address.</p>
            </LayoutView>
        </NotFound>
    </Router>
</CascadingAuthenticationState>

The CascadingAuthenticationState component provides a cascading parameter of type Task<AuthenticationState>. This is used by the AuthorizeView component to determine the current user’s authentication state.

But any component can request the parameter and use it to do procedural logic, for example.

@code {
    [CascadingParameter] private Task<AuthenticationState> authenticationStateTask { get; set; }

    private async Task LogUserAuthenticationState()
    {
        var authState = await authenticationStateTask;
        var user = authState.User;

        if (user.Identity.IsAuthenticated)
        {
            Console.WriteLine($"User {user.Identity.Name} is authenticated.");
        }
        else
        {
            Console.WriteLine("User is NOT authenticated.");
        }
    }
}

Creating a Custom AuthenticationStateProvider

As we are using client-side Blazor we need to provide our own implementation for the AuthenticationStateProvider class. Because there are so many options when it comes to client-side apps there is no way to design a default class that would work for everyone.

We need to override the GetAuthenticationStateAsync method. In this method we need to determine if the current user is authenticated or not. We’re also going to add a couple of helper methods which we will use to update the authentication state when the user logs in or out.

public class ApiAuthenticationStateProvider : AuthenticationStateProvider
{
    private readonly HttpClient _httpClient;
    private readonly ILocalStorageService _localStorage;

    public ApiAuthenticationStateProvider(HttpClient httpClient, ILocalStorageService localStorage)
    {
        _httpClient = httpClient;
        _localStorage = localStorage;
    }
    public override async Task<AuthenticationState> GetAuthenticationStateAsync()
    {
        var savedToken = await _localStorage.GetItemAsync<string>("authToken");

        if (string.IsNullOrWhiteSpace(savedToken))
        {
            return new AuthenticationState(new ClaimsPrincipal(new ClaimsIdentity()));
        }

        _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("bearer", savedToken);

        return new AuthenticationState(new ClaimsPrincipal(new ClaimsIdentity(ParseClaimsFromJwt(savedToken), "jwt")));
    }

    public void MarkUserAsAuthenticated(string email)
    {
        var authenticatedUser = new ClaimsPrincipal(new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, email) }, "apiauth"));
        var authState = Task.FromResult(new AuthenticationState(authenticatedUser));
        NotifyAuthenticationStateChanged(authState);
    }

    public void MarkUserAsLoggedOut()
    {
        var anonymousUser = new ClaimsPrincipal(new ClaimsIdentity());
        var authState = Task.FromResult(new AuthenticationState(anonymousUser));
        NotifyAuthenticationStateChanged(authState);
    }

    private IEnumerable<Claim> ParseClaimsFromJwt(string jwt)
    {
        var claims = new List<Claim>();
        var payload = jwt.Split('.')[1];
        var jsonBytes = ParseBase64WithoutPadding(payload);
        var keyValuePairs = JsonSerializer.Deserialize<Dictionary<string, object>>(jsonBytes);

        keyValuePairs.TryGetValue(ClaimTypes.Role, out object roles);

        if (roles != null)
        {
            if (roles.ToString().Trim().StartsWith("["))
            {
                var parsedRoles = JsonSerializer.Deserialize<string[]>(roles.ToString());

                foreach (var parsedRole in parsedRoles)
                {
                    claims.Add(new Claim(ClaimTypes.Role, parsedRole));
                }
            }
            else
            {
                claims.Add(new Claim(ClaimTypes.Role, roles.ToString()));
            }

            keyValuePairs.Remove(ClaimTypes.Role);
        }

        claims.AddRange(keyValuePairs.Select(kvp => new Claim(kvp.Key, kvp.Value.ToString())));

        return claims;
    }

    private byte[] ParseBase64WithoutPadding(string base64)
    {
        switch (base64.Length % 4)
        {
            case 2: base64 += "=="; break;
            case 3: base64 += "="; break;
        }
        return Convert.FromBase64String(base64);
    }
}

There is a lot of code here so let’s break it down step by step.

The GetAuthenticationStateAsync method is called by the CascadingAuthenticationState component to determine if the current user is authenticated or not.

In the code above, we check to see if there is an auth token in local storage. If there is no token in local storage, then we return a new AuthenticationState with a blank claims principal. This is the equivalent of saying the current user is not authenticated.

If there is a token, we retrieve it and set the default authorization header for the HttpClient. We then return a new AuthenticationState with a new claims principal containing the claims from the token. The claims are extracted from the token by the ParseClaimsFromJwt method. This method decodes the token and returns the claims contained within it.

Full disclosure - the ParseClaimsFromJwt method is borrowed from Steve Sanderson’s Mission Control demo app, which he showed at NDC Oslo 2019.

The MarkUserAsAuthenticated method is a helper that’s used when a user logs in. Its sole purpose is to invoke the NotifyAuthenticationStateChanged method, which fires an event called AuthenticationStateChanged. This cascades the new authentication state via the CascadingAuthenticationState component.

As you may expect, MarkUserAsLoggedOut does almost exactly the same as the previous method but when a user logs out.

Auth Service

The auth service is what we’ll use in our components to register users and log them in and out of the application. It’s a nice abstraction for all of the stuff going on in the background.

public class AuthService : IAuthService
{
    private readonly HttpClient _httpClient;
    private readonly AuthenticationStateProvider _authenticationStateProvider;
    private readonly ILocalStorageService _localStorage;

    public AuthService(HttpClient httpClient,
                       AuthenticationStateProvider authenticationStateProvider,
                       ILocalStorageService localStorage)
    {
        _httpClient = httpClient;
        _authenticationStateProvider = authenticationStateProvider;
        _localStorage = localStorage;
    }

    public async Task<RegisterResult> Register(RegisterModel registerModel)
    {
        var result = await _httpClient.PostJsonAsync<RegisterResult>("api/accounts", registerModel);

        return result;
    }

    public async Task<LoginResult> Login(LoginModel loginModel)
    {
        var loginAsJson = JsonSerializer.Serialize(loginModel);
        var response = await _httpClient.PostAsync("api/Login", new StringContent(loginAsJson, Encoding.UTF8, "application/json"));
        var loginResult = JsonSerializer.Deserialize<LoginResult>(await response.Content.ReadAsStringAsync(), new JsonSerializerOptions { PropertyNameCaseInsensitive = true });

        if (!response.IsSuccessStatusCode)
        {
            return loginResult;
        }

        await _localStorage.SetItemAsync("authToken", loginResult.Token);
        ((ApiAuthenticationStateProvider)_authenticationStateProvider).MarkUserAsAuthenticated(loginModel.Email);
        _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("bearer", loginResult.Token);

        return loginResult;
    }

    public async Task Logout()
    {
        await _localStorage.RemoveItemAsync("authToken");
        ((ApiAuthenticationStateProvider)_authenticationStateProvider).MarkUserAsLoggedOut();
        _httpClient.DefaultRequestHeaders.Authorization = null;
    }
}

The Register method posts the registerModel to the accounts controller and then returns the RegisterResult to the caller.

The Login method is similar to the Register method; it posts the LoginModel to the login controller. But when a successful result is returned, it extracts the auth token and persists it to local storage.

It then calls the MarkUserAsAuthenticated method we just looked at on the ApiAuthenticationStateProvider. Finally, it sets the default authorization header on the HttpClient.

The Logout method is just doing the reverse of the Login method.
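The post doesn’t show the IAuthService interface the class implements, but based on the methods above, a minimal version would look like this:

```csharp
// Minimal interface matching the AuthService implementation above.
public interface IAuthService
{
    Task<RegisterResult> Register(RegisterModel registerModel);
    Task<LoginResult> Login(LoginModel loginModel);
    Task Logout();
}
```

Components depend on this interface rather than the concrete AuthService, which keeps them easy to test and swap.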

Register Component

We’re on the home stretch now. We can now turn our attention to the UI and creating a component which will allow people to register with the site.

@page "/register"
@inject IAuthService AuthService
@inject NavigationManager NavigationManager

<h1>Register</h1>

@if (ShowErrors)
{
    <div class="alert alert-danger" role="alert">
        @foreach (var error in Errors)
        {
            <p>@error</p>
        }
    </div>
}

<div class="card">
    <div class="card-body">
        <h5 class="card-title">Please enter your details</h5>
        <EditForm Model="RegisterModel" OnValidSubmit="HandleRegistration">
            <DataAnnotationsValidator />
            <ValidationSummary />

            <div class="form-group">
                <label for="email">Email address</label>
                <InputText Id="email" class="form-control" @bind-Value="RegisterModel.Email" />
                <ValidationMessage For="@(() => RegisterModel.Email)" />
            </div>
            <div class="form-group">
                <label for="password">Password</label>
                <InputText Id="password" type="password" class="form-control" @bind-Value="RegisterModel.Password" />
                <ValidationMessage For="@(() => RegisterModel.Password)" />
            </div>
            <div class="form-group">
                <label for="password">Confirm Password</label>
                <InputText Id="password" type="password" class="form-control" @bind-Value="RegisterModel.ConfirmPassword" />
                <ValidationMessage For="@(() => RegisterModel.ConfirmPassword)" />
            </div>
            <button type="submit" class="btn btn-primary">Submit</button>
        </EditForm>
    </div>
</div>

@code {

    private RegisterModel RegisterModel = new RegisterModel();
    private bool ShowErrors;
    private IEnumerable<string> Errors;

    private async Task HandleRegistration()
    {
        ShowErrors = false;

        var result = await AuthService.Register(RegisterModel);

        if (result.Successful)
        {
            NavigationManager.NavigateTo("/login");
        }
        else
        {
            Errors = result.Errors;
            ShowErrors = true;
        }
    }

}

The register component contains a form which allows the user to enter their email address and desired password. When the form is submitted the Register method on the AuthService is called passing in the RegisterModel. If the result of the registration is a success then the user is navigated to the login page. Otherwise any errors are displayed to the user.

Login Component

Now we can register a new account, we need to be able to login. The login component is going to be responsible for that.

@page "/login"
@inject IAuthService AuthService
@inject NavigationManager NavigationManager

<h1>Login</h1>

@if (ShowErrors)
{
    <div class="alert alert-danger" role="alert">
        <p>@Error</p>
    </div>
}

<div class="card">
    <div class="card-body">
        <h5 class="card-title">Please enter your details</h5>
        <EditForm Model="loginModel" OnValidSubmit="HandleLogin">
            <DataAnnotationsValidator />
            <ValidationSummary />

            <div class="form-group">
                <label for="email">Email address</label>
                <InputText Id="email" Class="form-control" @bind-Value="loginModel.Email" />
                <ValidationMessage For="@(() => loginModel.Email)" />
            </div>
            <div class="form-group">
                <label for="password">Password</label>
                <InputText Id="password" type="password" Class="form-control" @bind-Value="loginModel.Password" />
                <ValidationMessage For="@(() => loginModel.Password)" />
            </div>
            <button type="submit" class="btn btn-primary">Submit</button>
        </EditForm>
    </div>
</div>

@code {

    private LoginModel loginModel = new LoginModel();
    private bool ShowErrors;
    private string Error = "";

    private async Task HandleLogin()
    {
        ShowErrors = false;

        var result = await AuthService.Login(loginModel);

        if (result.Successful)
        {
            NavigationManager.NavigateTo("/");
        }
        else
        {
            Error = result.Error;
            ShowErrors = true;
        }
    }

}

Following a similar design to the register component, there is a form for the user to input their email address and password.

When the form is submitted the AuthService is called and the result is returned. If the login was successful then the user is redirected to the home page, otherwise they are shown the error message.

Logout Component

We can now register and log in, but we also need the ability to log out. I’ve gone with a page component to do this, but you could also implement this on a button click somewhere.

@page "/logout"
@inject IAuthService AuthService
@inject NavigationManager NavigationManager

@code {

    protected override async Task OnInitializedAsync()
    {
        await AuthService.Logout();
        NavigationManager.NavigateTo("/");
    }

}

The component doesn’t have any UI; when the user navigates to it, the Logout method on the AuthService is called and the user is redirected back to the home page.

Adding a LoginDisplay and updating the MainLayout

The final task is to add a LoginDisplay component and then update the MainLayout component to use it.

The LoginDisplay component is the same one used in the server-side Blazor template. If the user is unauthenticated, it shows the Register and Log in links; if authenticated, it shows the user’s email and the Log out link.

<AuthorizeView>
    <Authorized>
        Hello, @context.User.Identity.Name!
        <a href="LogOut">Log out</a>
    </Authorized>
    <NotAuthorized>
        <a href="Register">Register</a>
        <a href="Login">Log in</a>
    </NotAuthorized>
</AuthorizeView>

We just need to update the MainLayout component now.

@inherits LayoutComponentBase

<div class="sidebar">
    <NavMenu />
</div>

<div class="main">
    <div class="top-row px-4">
        <LoginDisplay />
        <a href="http://blazor.net" target="_blank" class="ml-md-auto">About</a>
    </div>

    <div class="content px-4">
        @Body
    </div>
</div>

Registering Services

The last thing that’s needed is to register the various services we’ve been building in the Startup class.

public void ConfigureServices(IServiceCollection services)
{
    services.AddBlazoredLocalStorage();
    services.AddAuthorizationCore();
    services.AddScoped<AuthenticationStateProvider, ApiAuthenticationStateProvider>();
    services.AddScoped<IAuthService, AuthService>();
}

If everything has gone to plan then you should have something that looks like this.

Summary

In this post I showed how to create a new Blazor client-side application with authentication using WebAPI and ASP.NET Core Identity.

I showed how to configure the API to process and issue JSON Web Tokens, as well as how to set up the various controller actions to service the client application. I then showed how to configure Blazor to use the API and the tokens it issued to set the app’s authentication state.

As I mentioned at the start of this post, all the code is available on GitHub.

Introduction to Authentication with server-side Blazor

Authentication and authorisation are two fundamental functions in most applications today. Until recently, it wasn’t very clear how to best achieve these functions in Blazor applications. But with the release of ASP.NET Core 3 Preview 6 that all changed.

In this post, I’ll show you how you can create a new server-side Blazor application with authentication enabled. Then we’ll take a high level look at the services and components which are used in the application.

What is the difference between authentication and authorisation?

Let’s start with the difference between authentication and authorisation as this can sometimes be a bit confusing for new developers.

Authentication is the process of determining if someone is who they claim to be.

This can be done in many different ways, but the most common for web applications is a username and password check. Another example of authentication is using your PIN with your debit card at an ATM.

Whatever the mechanism, authentication verifies you are who you say you are. But what it doesn’t do is define what you have access to, that is where authorisation comes in.

Authorisation is the process of checking if someone has the rights to access a resource.

Authorisation occurs after an identity has been established via authentication and determines what parts of a system you can access. For example, if you have administrator rights on a system you can access everything. But if you’re a standard user, you may only be able to access specific screens.
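In Blazor this distinction maps directly onto code: authentication establishes who the user is, while the [Authorize] attribute enforces what they may see. A page restricted to administrators could be sketched like this (the route and role name here are illustrative):

```csharp
@page "/admin"
@attribute [Authorize(Roles = "Admin")]

@* Only authenticated users in the Admin role reach this content;
   everyone else is handled by the router's not-authorized UI. *@
<h3>Admin Dashboard</h3>
```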

Creating a Blazor application with Authentication

We’ll get stuck in straight away by creating a new Blazor server-side application with authentication enabled.

Follow the normal steps for creating a server-side Blazor application.

When you hit the project type screen, select Blazor Server App then select the Change link under Authentication.

From the popup window select Individual User Accounts and then OK.

Make sure that Authentication is set to Individual User Accounts then click Create.

Once the app has been generated press F5 to run it and you should see the following.

Click the Register link in the top right and fill in your details. Then click Register.

You should then be presented with the following screen prompting you to run migrations. This will setup the database which holds the account details. Click the Apply Migrations button, then refresh the page when prompted.

You should then be redirected back to the home page as an authenticated user. You should see the Register link has been replaced with your email address and a Log out button.

We now have a working Blazor app with authentication, so how is all this achieved? Let’s take a look at the various features which enable all this to happen.

ASP.NET Core Identity

Blazor’s authentication system is built to work with different configurations including ASP.NET Core Identity. The registration and login screens aren’t Blazor components but Razor Pages. You won’t find them in the project structure either; they are provided by the following call in Startup.cs.

services.AddDefaultIdentity<IdentityUser>()

This adds the default identity UI to the application along with the necessary configuration and services. Don’t worry, you can override these pages and customise/create your own and we’ll cover that in a future post.
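For context, the template wires Identity up to a database as well. The full registration typically chains EF Core stores onto the default setup, something like this (ApplicationDbContext is the EF Core context the template generates; the exact chained calls vary between template versions):

```csharp
// Default Identity UI plus EF Core stores pointing at the
// app's Identity database context
services.AddDefaultIdentity<IdentityUser>()
    .AddEntityFrameworkStores<ApplicationDbContext>();
```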

AuthenticationStateProvider Service

This service provides the authentication state for the current user and is used by the CascadingAuthenticationState component. The service provides a consistent way of serving this information regardless of whether it is being used in a client-side or server-side Blazor application.

When used within a server-side application the current user information is hydrated using the HttpContext which established the connection to the server.

Whereas in a client-side application we would have to configure a custom provider, which might populate the user information from an API endpoint. We’ll cover this in a future post.
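The shape of such a custom provider is a class deriving from AuthenticationStateProvider and overriding GetAuthenticationStateAsync. A bare-bones sketch, which simply reports an anonymous user until real claims are loaded, looks like this (where the claims actually come from - a stored token, an API call - is up to the application):

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.Authorization;

public class ApiAuthenticationStateProvider : AuthenticationStateProvider
{
    public override Task<AuthenticationState> GetAuthenticationStateAsync()
    {
        // A ClaimsIdentity with no authentication type is unauthenticated
        var anonymous = new ClaimsPrincipal(new ClaimsIdentity());
        return Task.FromResult(new AuthenticationState(anonymous));
    }
}
```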

CascadingAuthenticationState Component

Inside the app.razor you will find the following code.

<CascadingAuthenticationState>
    <Router AppAssembly="typeof(Startup).Assembly">
        <NotFoundContent>
            <p>Sorry, there's nothing at this address.</p>
        </NotFoundContent>
    </Router>
</CascadingAuthenticationState>

The part we are interested in is the CascadingAuthenticationState component. It’s responsible for providing the current authentication state to its descendant components.

Currently this value is used by the Router and AuthorizeView components to control access to various parts of the UI.

AuthorizeView Component

Inside LoginDisplay.razor we can find an example of the AuthorizeView component.

<AuthorizeView>
    <Authorized>
        <a href="Identity/Account/Manage">Hello, @context.User.Identity.Name!</a>
        <a href="Identity/Account/LogOut">Log out</a>
    </Authorized>
    <NotAuthorized>
        <a href="Identity/Account/Register">Register</a>
        <a href="Identity/Account/Login">Log in</a>
    </NotAuthorized>
</AuthorizeView>

This component allows us to control what parts of the UI are displayed depending on what the user is authorised to view. By default, if no other policy is applied, then non-authenticated users are treated as not authorised. Logged in users are treated as authorised.
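You can go further than the authenticated/anonymous split: AuthorizeView also accepts a Roles parameter (and a Policy parameter) for finer-grained rules. For example (the Admin role here is illustrative):

```csharp
@* Restrict a fragment of UI to a specific role. Users outside
   the Admin role fall through to NotAuthorized. *@
<AuthorizeView Roles="Admin">
    <Authorized>
        <a href="admin">Admin settings</a>
    </Authorized>
    <NotAuthorized>
        <p>You need administrator rights to see this.</p>
    </NotAuthorized>
</AuthorizeView>
```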

It also exposes a context parameter which can be used to view the currently logged-in user’s information, as well as the following three templates, two of which you can see used in the code snippet above.

Summary

In this post, I showed you how to create a new server-side Blazor application with authentication. I then talked through some of the services and components used in the application to enable authentication.

In the next post, I’ll dive into client-side Blazor and how to setup authentication using WebAPI and ASP.NET Core Identity.

Using Blazor Components In An Existing MVC Application

I’ve had this post waiting for a while but due to a bug in Preview 5 I’ve been holding off sharing it. But with the release of Preview 6 the bug is gone and I can finally hit the publish button. So let’s get to it!

One of the many awesome things about Blazor is the migration story it offers. If you’re currently developing or maintaining a MVC or Razor Pages application, then you’re in for a treat! You can now replace parts of your application with Blazor components.

In this post, we’re going to cover how to make all the necessary changes to your existing app to allow Blazor components to be used. Then we’ll also look at replacing part of an existing MVC view with a Blazor component.

All of the source code from this blog post is available on GitHub.

Getting Setup

We’re going to be using the Contoso University sample application. This should make things a little more interesting than using the default MVC templates.

As Blazor is only available in .NET Core 3, I’ve taken the current .NET Core 2.2 version and upgraded it using this guide on the Microsoft Docs site.

Adding Blazor Support to an existing MVC application

In order to use Blazor in an existing MVC or Razor Pages application we need to make a few changes, which we’ll walk through below.

Referencing Blazor’s JS

First on our list is to add a reference to blazor.server.js, we need to do this in the main _Layout.cshtml file.

<footer class="border-top footer text-muted">
    <div class="container">
        &copy; 2019 - Contoso University - <a asp-area="" asp-controller="Home" asp-action="Privacy">Privacy</a>
    </div>
</footer>

<script src="_framework/blazor.server.js"></script>

<environment include="Development">
    <script src="~/lib/jquery/dist/jquery.js"></script>
    <script src="~/lib/bootstrap/dist/js/bootstrap.bundle.js"></script>
</environment>

Adding Blazor’s services

Next, we need to add the services Blazor requires to the service container. We do this in the Startup.cs file.

public void ConfigureServices(IServiceCollection services)
{
    ...

    services.AddServerSideBlazor();
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_3_0);
}

Mapping Blazor’s SignalR hub

Last but not least, we need to hook up Blazor’s SignalR hub to the endpoint routing, again in Startup.cs.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ...

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllerRoute("default", "{controller=Home}/{action=Index}/{id?}");
        endpoints.MapBlazorHub();
    });
}

With those three things in place our application is now ready to use Blazor components.

Adding a Blazor component

With our application primed for using Blazor, we are going to replace the grid used on the Courses Index view with a Blazor component. The Index view currently looks like this.

@model IEnumerable<ContosoUniversity.Models.Course>

@{
    ViewData["Title"] = "Courses";
}

<h2>Courses</h2>

<p>
    <a asp-action="Create">Create New</a>
</p>
<table class="table">
    <thead>
        <tr>
            <th>
                @Html.DisplayNameFor(model => model.CourseID)
            </th>
            <th>
                @Html.DisplayNameFor(model => model.Title)
            </th>
            <th>
                @Html.DisplayNameFor(model => model.Credits)
            </th>
            <th>
                @Html.DisplayNameFor(model => model.Department)
            </th>
            <th></th>
        </tr>
    </thead>
    <tbody>
        @foreach (var item in Model)
        {
            <tr>
                <td>
                    @Html.DisplayFor(modelItem => item.CourseID)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.Title)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.Credits)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.Department.Name)
                </td>
                <td>
                    <a asp-action="Edit" asp-route-id="@item.CourseID">Edit</a> |
                    <a asp-action="Details" asp-route-id="@item.CourseID">Details</a> |
                    <a asp-action="Delete" asp-route-id="@item.CourseID">Delete</a>
                </td>
            </tr>
        }
    </tbody>
</table>

Creating the CoursesList component

We’re going to start by creating a new folder in the root called Components. This is my preference, but you can call this folder whatever you want - in fact, you could put your components anywhere you want in the project.

Then we’re going to add a new component called CoursesList.razor with the following code.

@using Microsoft.AspNetCore.Components
@using ContosoUniversity.Models

<table class="table">
    <thead>
        <tr>
            <th>
                Number
            </th>
            <th>
                Title
            </th>
            <th>
                Credits
            </th>
            <th>
                Department
            </th>
            <th></th>
        </tr>
    </thead>
    <tbody>
        @foreach (var item in Courses)
        {
            <tr>
                <td>
                    @item.CourseID
                </td>
                <td>
                    @item.Title
                </td>
                <td>
                    @item.Credits
                </td>
                <td>
                    @item.Department.Name
                </td>
                <td>
                    <a href="Courses/Edit/@item.CourseID">Edit</a> |
                    <a href="Courses/Details/@item.CourseID">Details</a> |
                    <a href="Courses/Delete/@item.CourseID">Delete</a>
                </td>
            </tr>
        }
    </tbody>
</table>

@code {
    [Parameter] public IEnumerable<Course> Courses { get; set; }
}

As you can see the code is based on the table from the original Index view. We’ve declared the list of courses as a Parameter to be passed in. Then we’re just iterating over the courses and displaying each one as a row in the table, mimicking the behaviour of the original Index view.

Using the CoursesList component in a view

To use our new CoursesList component, we’re going to update the original Index view to look like this.

@model IEnumerable<ContosoUniversity.Models.Course>

@{
    ViewData["Title"] = "Courses";
}

<h2>Courses</h2>
    
<p>
    <a asp-action="Create">Create New</a>
</p>

@(await Html.RenderComponentAsync<CoursesList>(RenderMode.ServerPrerendered, new { Courses = Model }))

We’ve removed the original table and replaced it with a call to the RenderComponentAsync HTML helper. This helper is responsible for adding and wiring up our component correctly.

It’s worth noting that this way of adding components is not going to be the long term solution. The plan is to be able to add components using normal elements, so in our case the above code would look like this.

@model IEnumerable<ContosoUniversity.Models.Course>

@{
    ViewData["Title"] = "Courses";
}

<h2>Courses</h2>
    
<p>
    <a asp-action="Create">Create New</a>
</p>
    
<CoursesList Courses="Model" />

This work is being tracked by the following issue on GitHub, #6348.

And that’s it! We’ve now successfully replaced a section of an MVC view with a Blazor component. You can fire up the app and browse to the page, and everything should look exactly the same as before.

Summary

The ability to replace parts of an MVC view or a Razor page with Blazor components is incredibly powerful. And it offers a great migration path for anyone looking to modernise an existing application.

In this post, I showed how you can enable an existing MVC application to use Blazor components. Then showed how to replace part of an existing MVC view with a Blazor component.

Prerendering a Client-side Blazor Application

While prerendering is now the default for server-side Blazor applications, I only recently discovered (as in the last 48 hours via Daniel Roth’s work) that client-side Blazor applications can take advantage of this as well. In this post, I’m going to show you how you can setup your client-side Blazor application for prerendering.

The example project for this post can be found on GitHub.

What is prerendering?

Prerendering is a process where all the elements of a web page are compiled on the server and static HTML is served to the client. This technique is used to help SPAs (Single Page Applications) improve their SEO (Search Engine Optimisation). Another benefit is that sites appear to load much faster.

What this means for a Blazor application is that the requested page will be built on the server and compiled to static HTML. This static HTML will include the blazor.webassembly.js file which is present in the standard client-side Blazor template. When the client receives this static HTML it will be processed and rendered as normal.

When the blazor.webassembly.js file is executed, the Mono runtime will be downloaded along with the application DLLs, and your application will be run. At this point all of the static prerendered elements will be replaced with interactive components and the application will become interactive.

Now, this may sound like a lot has to happen before your application becomes usable. But all of this happens in a very short space of time and is imperceptible to most end users.

The Prerendering Trade-off

Before we look at how to enable prerendering I want to point out that there are some trade-offs with using it.

You will no longer be able to deploy your Blazor application as static files. As we’ll see in a second, prerendering requires a Razor Page, and that means a .NET runtime is required on the server. While this is probably not a big issue, I do want to make sure you’re aware of it.

The other trade-off is that you must manage any JavaScript calls to account for prerendering. If you attempt to perform JavaScript interop in the OnInitialized or OnInitializedAsync method of a component which is being prerendered then you will get an exception. When using prerendering, all JavaScript interop calls should be moved to the OnAfterRenderAsync life-cycle method, which is only called once the page is fully rendered.
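Concretely, a component that needs interop can defer the call like this (a sketch using IJSRuntime; the console.log call is just a stand-in for whatever JS function you need):

```csharp
@inject IJSRuntime JSRuntime

@code {
    // During prerendering there is no browser to talk to, so any
    // interop in the initialisation methods throws. Waiting for the
    // first real render guarantees the JS runtime is available.
    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            await JSRuntime.InvokeVoidAsync("console.log", "Now interactive");
        }
    }
}
```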

Enabling Prerendering

We’re going to start with a stand alone Blazor application and go though the steps to enable prerendering. I’ve created a new stand alone Blazor application using the .NET CLI. You can also use the template in Visual Studio.

dotnet new blazorwasm -o BlazorPrerendering.Client

If you are looking to enable prerendering on a client-side Blazor app that is already using the “Hosted” template, you can just add in the bits of configuration as we go along.

Adding a Host Project

The first thing we are going to do is add a new empty ASP.NET Core Web App. Again, I’m using the .NET CLI but you can use the templates in Visual Studio if you prefer.

dotnet new web -o BlazorPrerendering.Server

Our solution should now look like this.

Then we need to add a project reference from the server project to the client project as well as a NuGet package. So the easiest thing to do is edit the csproj file directly.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.Server" Version="3.2.1" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\BlazorPrerendering.Client\BlazorPrerendering.Client.csproj" />
  </ItemGroup>

</Project>

Once you’re done, your project file should look like the code above.

Configuring The Host

Now our projects are set up, we are going to make some changes to the server project’s Startup.cs.

First, add the following code to the ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    services.AddScoped<HttpClient>(s =>
    {
        var navigationManager = s.GetRequiredService<NavigationManager>();
        return new HttpClient
        {
            BaseAddress = new Uri(navigationManager.BaseUri)
        };
    });
}

Then replace the code in the Configure method with the code below.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapFallbackToPage("/_Host");
    });
}

Finally, we need to create a folder called Pages in the root of the server project and create a file called _Host.cshtml with the following code.

@page "/"
@namespace BlazorPrerendering.Server.Pages
@using BlazorPrerendering.Client
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Prerendering client-side Blazor</title>
    <base href="~/" />
    <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
    <link href="css/site.css" rel="stylesheet" />
</head>
<body>
    <app>
        <component type="typeof(App)" render-mode="Static" />
    </app>

    <script src="_framework/blazor.webassembly.js"></script>
</body>
</html>

Testing Prerendering

We should now be able to start up the server project and launch the application. Once the application has loaded, the easiest way to test prerendering is to disable JavaScript in your browser. Then reload the page, if the page loads then prerendering is working.

To achieve this in Chrome or Edge (Chromium), open the dev tools and press ctrl+shift+p or cmd+shift+p, depending on your OS. Then start typing “JavaScript” and you should see the option appear to disable JavaScript.

You should still be able to navigate around the app but you will find components will not be interactive. Go ahead and enable JavaScript again (just repeat the steps to disable but now the option will be to Enable JavaScript) and refresh the page, you should now have an interactive application once more.

Summary

Prerendering is a really useful tool to have available and it’s great that we can now use it with both server-side and client-side Blazor. In this post, I’ve shown how you can enable prerendering of your client-side Blazor applications by use of a hosted server project.

Getting Started With Blazored Typeahead

Update: I’ve now released a new version of the Typeahead which supports Blazors forms and validation. I’ve updated this post accordingly as there were some breaking changes. Check out the repo for full details.

This week I thought I’d give an overview of the most recent addition to the Blazored collection, Blazored.Typeahead.

For those for you not familiar with Blazored, it’s a collection of components and libraries to help with developing Blazor applications. There are libraries such as Blazored.LocalStorage and Blazored.SessionStorage which wrap browser APIs with C# APIs, so you don’t have to. Then there are components such as Blazored.Modal and Blazored.Toast which save you having to build common UI features.

What is Blazored.Typeahead

Blazored.Typeahead is a flexible autocomplete/typeahead/suggestion component for Blazor applications, which looks like this…

Features

Here are some of the key features of the component.

Forms Integration

There are two versions of the component, standalone and forms integrated. The first is designed to work independently, while the second will only work inside Blazor’s built-in forms and validation components.

Searching Data

The primary feature of the component is in the way it handles searching. You provide the component with a search method which it will call with the search text entered. This means you’re in full control of where your data comes from, it could be a local collection or it could be the result of an API call.

Debounce Control

The control has built in debounce functionality which delays searches being performed until a period of inactivity has been reached. This is extremely useful when calling external APIs for searching as performing a search on every keypress would not be a good thing, especially for long queries.
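The delay is configurable via a parameter on the component. A sketch (assuming the parameter is named Debounce and takes milliseconds - check the repo’s README for the exact name and default):

```csharp
@* Wait 500ms after the last keypress before calling SearchMethod.
   The Debounce parameter name is an assumption - verify against
   the component's documentation. *@
<BlazoredTypeahead SearchMethod="SearchFilms"
                   @bind-Value="SelectedFilm"
                   Debounce="500">
    <SelectedTemplate>@context.Title</SelectedTemplate>
    <ResultTemplate>@context.Title</ResultTemplate>
</BlazoredTypeahead>
```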

Templating

Developers are able to provide templates for the various parts of the component, such as the selected item and the individual search results.

This allows total control over how the results look and you can make them as rich and stylish as you want.

Getting Started

Now we’ve looked at a few of the features let’s look at how to get the component into a Blazor application.

Installing

Just like all Blazored packages, Blazored.Typeahead is available on NuGet. You can add it to your apps by either searching for it in the package explorer in Visual Studio.

Or if you prefer the command line, you can install it via the Package Manager Console or dotnet CLI using the following commands, respectively.

Install-Package Blazored.Typeahead
dotnet add package Blazored.Typeahead

Once the package is installed I would suggest adding the following using statement to your main _Imports.razor. This will save you having to use the fully qualified name when using the component.

@using Blazored.Typeahead

You will also need to add the following tag to the head tag of either the _Host.cshtml or index.html file, depending on if you’re running a Blazor Server App or a Blazor WebAssembly App.

<link href="_content/Blazored.Typeahead/blazored-typeahead.css" rel="stylesheet" />

As well as the following script tag at the bottom.

<script src="_content/Blazored.Typeahead/blazored-typeahead.js"></script>

Configuration Options

In order to use the component there are a few required parameters and templates that must be provided. There are also several optional parameters which allow you to fine tune the components behaviour.

Parameters

Templates

Usage Example

In this example, we’ll look at the minimum needed to get up and working. Most settings on the component have default values; the only things that must be specified are the search method, the value to bind to, and the selected and result templates.

To keep things simple we’ll be querying a local collection of Films.

<BlazoredTypeahead SearchMethod="SearchFilms"
                   @bind-Value="SelectedFilm">
    <SelectedTemplate>
        @context.Title
    </SelectedTemplate>
    <ResultTemplate>
        @context.Title (@context.Year)
    </ResultTemplate>
</BlazoredTypeahead>

@if (SelectedFilm != null)
{
    <p>Selected Film is: @SelectedFilm.Title</p>
}

@code {

    private List<Film> Films;
    private Film SelectedFilm;

    protected override void OnInitialized()
    {
        Films = new List<Film> {
            new Film("The Matrix", 1999),
            new Film("Hackers", 1995),
            new Film("War Games", 1983) };
    }

    private async Task<IEnumerable<Film>> SearchFilms(string searchText)
    {
        return await Task.FromResult(Films.Where(x => x.Title.ToLower().Contains(searchText.ToLower())).ToList());
    }

    class Film
    {
        public string Title { get; set; }
        public int Year { get; set; }

        public Film(string title, int year)
        {
            Title = title;
            Year = year;
        }
    }

}

The key thing to note in the code above is the SearchFilms method. This is what the typeahead is calling to perform the search. This method must have the following signature.

Task<IEnumerable<T>> MethodName(string searchText)

This gives us lots of flexibility on how data is sourced and queried. For example, we could change the SearchFilms method to query an API instead.

private async Task<IEnumerable<Film>> SearchFilms(string searchText)
{
    var result = await httpClient.GetJsonAsync<List<Film>>($"https://awesomefilmsearch.com/api/films/?title={searchText}");
    return result;
}

Forms Integration

Blazored.Typeahead supports Blazor’s forms and validation without any additional setup or code changes. Just put it inside an EditForm component and it will work as expected.

<EditForm Model="@FormModel" OnValidSubmit="HandleFormSubmit">
    <DataAnnotationsValidator />

    <BlazoredTypeahead SearchMethod="GetPeopleLocal"
                       @bind-Value="FormModel.SelectedPerson"
                       Placeholder="Search by first name...">
        <SelectedTemplate Context="person">
            @person.Firstname
        </SelectedTemplate>
        <ResultTemplate Context="person">
            @person.Firstname @person.Lastname
        </ResultTemplate>
    </BlazoredTypeahead>
    <ValidationMessage For="@(() => FormModel.SelectedPerson)" />

    <button class="btn btn-primary" type="submit">Submit</button>
</EditForm>

Summary

That’s about it for Blazored.Typeahead! If you like what you see then please go and add it to your Blazor project. This is still an early version of the component, so if there are any features you would like to see, or if you find a bug, then please head over to the GitHub repo and open an issue and we can have a chat about it.

Calling gRPC Services With Server-side Blazor

In this post, I want to show you how you can call gRPC services using server-side Blazor. I just want to say that I’ve only been experimenting with gRPC for a couple of days so I’m very much still learning, but it’s been a great experience so far.

Before we get into any code I want to just explain what gRPC is, for those who’ve not heard of it before.

All of the code for this post is available on GitHub.

What is gRPC

gRPC is a fast and efficient open source remote procedure call (RPC) framework initially created by Google. It’s an evolution of an internal RPC technology called “Stubby”. It uses HTTP/2 as its transport protocol and a technology called Protocol Buffers, or protobuf, to describe interfaces and messages.

Currently, gRPC stands for gRPC Remote Procedure Call. I say currently as the ‘g’ stands for something different in every release. Previous definitions include ‘green’, ‘good’, ‘gentle’ and ‘gregarious’. If you’re interested, you can see the full list on the main repo on GitHub.

So why use it? Simply put, it’s really fast and highly scalable. Because of this, unsurprisingly, gRPC has gained a lot of traction in microservices.

When comparing REST with gRPC, REST does look more limited. With REST you’re essentially limited to the primary HTTP verbs - GET, PUT, POST and DELETE. With gRPC you can define any kind of function, be that synchronous, asynchronous, uni-directional, or even streams. gRPC also requires much less boilerplate code than REST, with much of the code needed on the server and the client generated automatically from protobuf files.
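As an illustration of that flexibility, a server-streaming method is declared in a .proto file just by adding the stream keyword to the return type (the SayHellos name here is illustrative):

```protobuf
// A unary call and a server-streaming call side by side. The
// streaming method sends a sequence of replies for one request.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
  rpc SayHellos (HelloRequest) returns (stream HelloReply) {}
}
```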

As I said at the start, I’m still learning about gRPC so I’m sure there’s loads more good stuff to discover. But for now let’s move on and look at how we can setup a simple gRPC service and configure a Blazor client.

Getting Setup

Everything I’m going to be showing you requires the latest .NET Core 3 SDK and Visual Studio 2019 Preview.

Microsoft are actively contributing to the gRPC for .NET project and as of .NET Core 3 Preview 3, there is now a template for building gRPC services. You can find it by creating a new ASP.NET Core Web Application.

The Service

I’m going to use the default gRPC service template as is for this post. I’m not going to go into detail about how it’s set up, but I do want to focus on the greet.proto file in the Protos folder.

syntax = "proto3";

package Greet;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}

This file defines the contract for the service. The Greeter service has a single method, SayHello, which takes a HelloRequest and returns a HelloReply.

The message definition for HelloRequest specifies one field called name. While HelloReply specifies one called message. Fields are either scalar types or composite types.

In the example above, all fields are scalar types with a type, name and unique number. It’s important that once defined, these unique numbers don’t change as they’re used to identify fields once in binary format.

The gRPC tooling uses this file to generate a service base class which is then used in the GreeterService in the Services folder.

The Client

Now the service is in place I’m going to add a Blazor server-side project to the solution. This is going to be the gRPC client.

To make things easier to test, I’m going to allow multiple startup projects for the solution. This is done by right-clicking on the solution and selecting Set Startup Projects…, then setting both projects’ action to Start.

I’m going to start by making a copy of greet.proto from the server project and adding it to the client project.

Next I’m going to make some additions to the client’s csproj file.

I’m going to include three package references for Google.Protobuf, Grpc.Core and Grpc.Tools. I’m also including an ItemGroup referencing the protobuf file so the tooling can generate the client code needed to communicate with the server. The final csproj file looks like this.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <LangVersion>7.3</LangVersion>
  </PropertyGroup>

  <ItemGroup>
    <Protobuf Include="Protos\greet.proto" GrpcServices="Client" Generator="MSBuild:Compile"/>
    <Content Include="@(Protobuf)" />
    <None Remove="@(Protobuf)" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Google.Protobuf" Version="3.8.0-rc.1" />
    <PackageReference Include="Grpc.Core" Version="1.21.0" />
    <PackageReference Include="Grpc.Tools" Version="1.21.0">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>
  </ItemGroup>

</Project>

I need to add a couple of using statements to my main _Imports.razor.

@using Greet
@using Grpc.Core

Finally, I’m going to replace the contents of the Index.razor page with the following code.

@page "/"

<h1>Hello, gRPC!</h1>

<button class="btn btn-primary" onclick="@SayHello">Say Hello</button>

<hr />

<p>@Greeting</p>

@functions {

    private string Greeting;

    async Task SayHello()
    {
        var channel = new Channel("localhost:50051", ChannelCredentials.Insecure);
        var client = new Greeter.GreeterClient(channel);

        var reply = await client.SayHelloAsync(new HelloRequest { Name = "Blazor gRPC Client" });
        Greeting = reply.Message;

        await channel.ShutdownAsync();
    }

}

All the magic is happening in the SayHello method.

I start by creating a gRPC channel to the server. Next, I create an instance of the GreeterClient. This class is created by the code gen tools using the greet.proto file.

With an instance of the client I can call the methods defined in the proto file. I can send a HelloRequest specifying the client’s name and await the reply.

Once I have a reply I can assign the Message to the Greeting field and then close the channel to the server.

All that’s left to do is fire everything up and click the button.

Summary

This has been a high-level overview of gRPC and how you can configure server-side Blazor as a client. We started with a quick introduction to what gRPC is and why we might use it, before diving into a small sample app with a gRPC service and a server-side Blazor Client.

I have only scratched the surface with gRPC and I’m looking forward to learning more about it and trying out some more advanced scenarios.

A Detailed Look At Data Binding in Blazor

Binding data is a fundamental task in single page applications (SPAs). At some point every application needs to either display data (e.g. labels) or receive data (e.g. forms).

While most SPA frameworks have similar concepts for data binding, either one way binding or two way binding, the way they work and are implemented varies widely. In this post, we’re going to have a good look at how one way and two way binding work in Blazor.

One Way Binding

One way bindings have a unidirectional flow, meaning that updates to the value only flow one way. A couple of examples of one way binding are rendering a label dynamically or dynamically outputting a CSS class name in markup.

One way bindings can be constant and not change, but often the value will have a reason to be updated. Otherwise it would probably be better to avoid using a bound value altogether and just type the value out directly.

In Blazor, when modifying a one way binding, the application is responsible for making the change. This could be in response to a user action or an event such as a button click. The point is that the user will never be able to modify the value directly, hence one way binding.

Now we have an idea of what one way binding is, let’s take a look at some examples.

<h1>@Title</h1>

@code {
    private string Title { get; set; } = "Hello, World!";
}

In the code above, we have a component which displays a heading. The contents of that heading, Title, is a one way bound value. In order to bind one way values we use the @ symbol followed by the property, field, or even the method we want to bind to.
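Binding to a method works in just the same way; a minimal sketch (the GetTitle name is illustrative, not from the original example):

```razor
<h1>@GetTitle()</h1>

@code {
    // The method is evaluated during rendering, just like a property
    private string GetTitle() => "Hello, World!";
}
```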

<h1>@Title</h1>

<button @onclick="UpdateTitle">Update Title</button>

@code {
    private string Title { get; set; } = "Hello, World!";

    private void UpdateTitle()
    {
        Title = "Hello, Blazor!";
    }
}

In the first example the value is set and never changed, in this example we’ve added a method which updates the value of Title when the button is clicked.

As we talked about previously, values can only be updated in one direction and here we can see that in action. When the button’s onclick event is triggered, the UpdateTitle method is called and the Title property is updated to the new value. Executing event handlers in Blazor triggers a re-render which updates the UI.

One Way Binding Between Components

In the previous examples, we looked at one way binding inside of a component. But what if we want one way binding across components? Using our previous example, say we wanted to display the title of a parent component in a child component, how could we achieve this? By using component parameters.

<!-- Parent Component -->

<h1>@Title</h1>

<button @onclick="UpdateTitle">Update Title</button>

<ChildComponent ParentsTitle="@Title" />

@code {
    private string Title { get; set; } = "Hello, World!";

    private void UpdateTitle()
    {
        Title = "Hello, Blazor!";
    }
}
<!-- Child Component -->

<h2>Parent Title is: @ParentsTitle</h2>

@code {
    [Parameter] public string ParentsTitle { get; set; }
}

In the example, the parent component is passing its title into the child component via the child component’s ParentsTitle parameter. When the components are first rendered, the headings will be the following.

<!-- Parent Component -->
<h1>Hello, World!</h1>

<!-- Child Component -->
<h2>Parent Title is: Hello, World!</h2>

When the Update Title button is pressed then the output will become the following.

<!-- Parent Component -->
<h1>Hello, Blazor!</h1>

<!-- Child Component -->
<h2>Parent Title is: Hello, Blazor!</h2>

Similar to what happened with the earlier example inside a single component, the button click event calls the UpdateTitle method and the Title property is updated. The running of the event handler then triggers a re-render of the parent component.

This also updates the Title parameter passed to the child component. Updating the component parameter triggers a re-render of the child component, updating its UI with the new title.

Two way binding

Now we know all about one way binding, you could probably guess that two way bindings have a bidirectional flow, allowing values to be updated from both directions.

The primary use case for two way binding is in forms, although it can be used anywhere that an application requires input from the user. The primary method of achieving two way binding in Blazor is to use the bind attribute.

The Bind Attribute

The bind attribute is a very versatile tool for binding in Blazor and has 3 different forms which allow developers to be very specific about how they want binding to occur.

We’re going to look at each of these over the next few examples to see how they work.

Basic Two Way Binding

<h1>@Title</h1>

<input @bind="@Title" />

@code {
    private string Title { get; set; } = "Hello, World!";
}

Continuing with our previous examples, we have added an input control which is two way bound to the Title value using the bind attribute. If you run this code you will notice that the value of Title doesn’t actually update until you tab out of the input.

This is because, under the covers, bind is actually setting the value attribute of the input to Title and setting up an onchange handler which will update Title when the input loses focus. We can see this if we look at lines 8 and 9 of the compiled component’s BuildRenderTree method.

protected override void BuildRenderTree(Microsoft.AspNetCore.Components.RenderTree.RenderTreeBuilder builder)
{
    builder.OpenElement(0, "h1");
    builder.AddContent(1, Title);
    builder.CloseElement();
    builder.AddMarkupContent(2, "\r\n\r\n");
    builder.OpenElement(3, "input");
    builder.AddAttribute(4, "value", Microsoft.AspNetCore.Components.BindMethods.GetValue(Title));
    builder.AddAttribute(5, "onchange", Microsoft.AspNetCore.Components.EventCallback.Factory.CreateBinder(this, __value => Title = __value, Title));
    builder.CloseElement();
}

The bind attribute understands different control types. For example, a checkbox does not use a value attribute, it uses a checked attribute. bind knows this and will apply the correct attributes accordingly.
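To make that concrete, a checkbox binding looks identical in markup, but compiles down to the checked attribute rather than value; a minimal sketch (the IsEnabled name is illustrative):

```razor
<input type="checkbox" @bind="IsEnabled" />

<p>Enabled: @IsEnabled</p>

@code {
    // bind wires this to the checkbox's checked attribute automatically
    private bool IsEnabled { get; set; }
}
```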

This is all good, but what if we want our Title to update as we type and not just when the input loses focus? Well, we can do that using a more specific version of bind.

Two Way Binding To A Specific Event

<h1>@Title</h1>

<input @bind-value="Title" @bind-value:event="oninput" />

@code {
    private string Title { get; set; } = "Hello, World!";
}

We can specify which event the bind attribute should use to handle updating the value. As we now know, by default this event is onchange, but in the example above we’ve specified the oninput event. This event is fired for each character typed, so the value of Title is updated continually.

Two Way Binding Between Components

To create two way binding between components we can once again take advantage of the bind attribute. We also need to set up our components with a certain convention, so let’s look at an example.

<h1>@Title</h1>

<ChildComponent @bind-ParentsTitle="Title" />

<button @onclick="UpdateTitle">Update Title</button>

@code {
    private string Title { get; set; } = "Hello, World!";

    private void UpdateTitle()
    {
        Title = "Hello, Blazor!";
    }
}
<h2>Parent Title is: @ParentsTitle</h2>

<button @onclick="UpdateParentsTitle">Update Parents Title</button>


@code {

    [Parameter] public string ParentsTitle { get; set; }
    [Parameter] public EventCallback<string> ParentsTitleChanged { get; set; }

    private async Task UpdateParentsTitle()
    {
        ParentsTitle = "Hello, From Child Component!";
        await ParentsTitleChanged.InvokeAsync(ParentsTitle);
    }
}

We’re adapting our previous example from one way binding and making it two way. There’s now a button on the child component which triggers a method to update ParentsTitle and invoke the ParentsTitleChanged EventCallback. The parent component has also been updated to use bind-ParentsTitle when passing its Title to the child component.

If we run the above code we’re able to click the button on the parent or the button on the child, and both components’ titles will be updated. So how does this work?

The two key factors here are the EventCallback on the child component and the use of the bind attribute on the child component in the parent. By using this version of bind in the parent, it’s the equivalent of writing this.

<ChildComponent @bind-ParentsTitle="Title" @bind-ParentsTitle:event="ParentsTitleChanged" />

By default Blazor will look for an event on the child component using the naming convention of {PropertyName}Changed. This allows us to use the shorthand version of bind, only specifying the property name. It is important to note that the event property will need to be marked with the Parameter attribute.

However, it’s also possible to use a completely different name for the EventCallback property, for example, ParentsTitleUpdated. But in this case we would need to use the long form version of bind and specify the event name like so.

<ChildComponent @bind-ParentsTitle="Title" @bind-ParentsTitle:event="ParentsTitleUpdated" />

Summary

I think that brings us to a nice conclusion. We’ve had a fairly deep look into binding in Blazor covering one way and two way binding.

We looked at how we use one way binding inside of a component and how values are updated, then moved on to one way binding between components. We then moved on to two way binding, both within a component and between components, using the bind attribute. We looked at how we can use the various forms of the bind attribute to specify which values and/or events to use when binding.

Blazor Bites Updated and Build 2019 Blazor Roundup

With it being Build week and there being loads of cool Blazor stuff talked about I thought I would do a news post this week. I’ve also been wanting to get some of my older posts updated to the latest version of Blazor, which I’ve managed to get done as well. So let’s start there and then check out all the cool bits from Build 2019.

Blazor Bites Updated

I’ve spent the bank holiday weekend updating all of my Blazor Bites posts to the latest version of Blazor. If you’ve not read any of them before, they’re the first posts I wrote about Blazor, just after Blazor 0.1.0 came out back in early 2018.

At that time there was no official documentation about Blazor, only a community effort called Learn Blazor. So I wanted to create some posts to help people understand the various areas of Blazor and get up and running as painlessly as possible. I’ve always tried to keep them updated as time has gone on, but recently they had fallen a little behind; that is no longer the case.

If you are new to Blazor then you should find them helpful; if you’re more experienced then it’s probably worth going straight to my more in-depth posts about the various features of Blazor.

Build 2019 Blazor Roundup

I’m writing this on the final day of Build 2019 and it’s been another great conference by Microsoft. Unfortunately I wasn’t able to attend in person, but I’ve been watching the live streams as much as possible as well as various recorded sessions.

In terms of Blazor, there has been plenty going on with a couple of great sessions which I’ll link below. There was also some interesting news regarding release schedules for the various Blazor models.

Blazor Sessions

There were two fantastic sessions at this year’s Build which featured Blazor.

The first was hosted by Jeff Hollan, called Serverless web apps with Blazor, Azure Functions, and Azure Storage. In his session Jeff pairs Blazor with Azure Functions to create a fully serverless, full stack C# web application.

The second session was hosted by Daniel Roth titled, Full stack web development with ASP.NET Core 3.0 and Blazor. In his session Daniel builds out a full stack C# application using Blazor for the frontend as well as some cool new .NET tech such as gRPC and worker processes.

Blazor Release Schedules

At Build we also found out the official release schedule for server-side Blazor, which ships with .NET Core 3. .NET Core 3 will ship in September 2019, with a couple of RCs being released in July and August.

Client-side Blazor is now in official preview but it will not be shipping with .NET Core 3; it will arrive sometime after. While we still don’t have a firm date for client-side Blazor, in a Q&A on Ed Charbeneau’s StateHasChanged stream, Daniel Roth (PM for Blazor) said he felt we may be looking at Q1 of 2020, though he was clear that this was a gut feeling and not an official date.

Get Some Sass Into Your Blazor App

I’ve been doing a lot of styling work recently so I thought it might be useful to write a post about it. We’re going to have a run through of what Sass is and why you might want to use it. And then we’re going to have a look at how we can get it integrated into a Blazor application.

What is Sass?

As you can probably tell from my very subtle title, I use Sass for managing my styles. If you’ve never heard of Sass, it stands for Syntactically Awesome Style Sheets and it’s a CSS extension language, or pre-processor.

Sass has two syntax options: SCSS and indented. The latter is the older, original syntax for Sass. Files using this option have a .sass extension. It uses indentation to describe the structure of the document instead of the braces and semi-colons found in traditional CSS.

SCSS is the more modern syntax and files using this option have the .scss extension. Minus a few small exceptions, SCSS is a superset of CSS and therefore any valid CSS is also valid SCSS. This option uses braces and semi-colons and looks almost identical to traditional CSS, which makes it easier for most people to pick up.
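To make the difference concrete, here’s a nested rule written in the indented syntax (a minimal sketch; the selectors are illustrative):

```sass
// .sass (indented) syntax: nesting is expressed purely by
// indentation, with no braces or semi-colons
nav
  ul
    margin: 0
    list-style: none
```

In SCSS the same rules would read nav { ul { margin: 0; list-style: none; } }, which is why existing CSS can usually be pasted straight into a .scss file.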

We can’t use Sass files directly; they must be compiled into valid CSS first. There are a few different ways of doing this, but traditionally the common approach is to install npm and a task runner like Gulp, along with various other packages to handle the compilation and eventual minification. Don’t worry though, in a bit I’m going to show you how you can avoid all of that and stay within the .NET ecosystem to compile your Sass.

Why use Sass?

Now we know a bit about Sass, why would we want to use it? CSS has always been tricky to manage, especially in large applications or team environments. Stylesheets become very large, and making changes or keeping things organised quickly becomes an issue. Not to mention the amount of repeated code that inevitably happens.

Sass offers a solution to this by enabling the use of basic programming paradigms such as modules, variables, functions and inheritance.

For example, how many times have you gone searching back through your CSS for a certain colour value or padding size? Sass variables remove this issue: you can define your value in a variable which you can then use throughout your styles.

$primary-color: #f4f4f3;

a { color: $primary-color; }
.logo { color: $primary-color; }

Another common problem with CSS is repeating the same code. Sass gives us an answer to that as well, in the form of mixins. Mixins allow us to write re-usable chunks of code which we can then apply anywhere across our site.

@mixin flex-row() {
  display: flex;
  flex-direction: row;
}

.row {
  @include flex-row;
}

The above example compiles to the following CSS.

.row {
  display: flex;
  flex-direction: row;
}

We can even pass arguments into them to make them more flexible.

@mixin transform($degrees) {
  transform: translateZ($degrees);
  -webkit-transform: translateZ($degrees);
  -ms-transform: translateZ($degrees);
}

.header {
  @include transform(180deg);
}

Which compiles to this.

.header {
  transform: translateZ(180deg);
  -webkit-transform: translateZ(180deg);
  -ms-transform: translateZ(180deg);
}

You can see examples of all the great features of Sass in the guide on the official site. But hopefully, even with the examples above, you can see what an advantage using Sass can give you.

Using Sass with Blazor

Now we know some of the benefits of using Sass, let’s look at how we can take advantage of it in Blazor. We’re also looking to avoid the JavaScript tool chain for compiling the Sass, so what are our options?

The answer to this question is Web Compiler by Mads Kristensen. This extension for Visual Studio allows us to compile Sass to CSS without having to install npm or Gulp. The even better news is that there is also a NuGet package which adds an MSBuild task, so we can even compile Sass in our DevOps pipeline.

Installing Web Compiler

Start by opening up Visual Studio 2019 and going to the Extensions > Manage Extensions menu. Select Online from the left menu and then search for web compiler in the search box.

Click Download and then close Visual Studio to start the install process.

Once the install process is complete you can re-open Visual Studio. I’m going to create a fresh Blazor client-side app for this process but you can follow along using your own application.

Enabling Sass Compilation

I’m going to start by adding a Sass file to my project. I’m going to add this in a new folder at the root of my app called Styles.

Next, I’m going to right click on that file and go to the Web Compiler menu and then Compile File.

Once that is done there should be some new files in the project.

There is now a .css version of my Sass file, BlazorSass.css, and there is also a minified version, BlazorSass.min.css. In the root there are two new files, compilerconfig.json and compilerconfig.json.defaults. Let’s have a closer look at these two.

// compilerconfig.json

[
  {
    "outputFile": "Styles/BlazorSass.css",
    "inputFile": "Styles/BlazorSass.scss"
  }
]

The compilerconfig.json allows us to specify an input and an output file. We input Sass and it outputs CSS. By default, the output is located next to the input file, but I’m going to make a change here and move the output location to the wwwroot folder. This will allow our compiled CSS to be included when we publish our app.

// compilerconfig.json

[
  {
    "outputFile": "wwwroot/css/BlazorSass.css",
    "inputFile": "Styles/BlazorSass.scss"
  }
]

You will notice that as soon as you save the changes to the file your Sass will be recompiled and a new file will appear in the new output location.

The great thing about Web Compiler is that whenever a change is made to either a Sass file or the compilerconfig.json a recompile will be triggered. This means that when you have your Blazor app running you can make style changes and you won’t need to do a rebuild, you will be able to just refresh the browser to see your changes.

Let’s have a look at the compilerconfig.json.defaults file next.

{
  "compilers": {
    "less": {
      "autoPrefix": "",
      "cssComb": "none",
      "ieCompat": true,
      "strictMath": false,
      "strictUnits": false,
      "relativeUrls": true,
      "rootPath": "",
      "sourceMapRoot": "",
      "sourceMapBasePath": "",
      "sourceMap": false
    },
    "sass": {
      "autoPrefix": "",
      "includePath": "",
      "indentType": "space",
      "indentWidth": 2,
      "outputStyle": "nested",
      "Precision": 5,
      "relativeUrls": true,
      "sourceMapRoot": "",
      "lineFeed": "",
      "sourceMap": false
    },
    "stylus": {
      "sourceMap": false
    },
    "babel": {
      "sourceMap": false
    },
    "coffeescript": {
      "bare": false,
      "runtimeMode": "node",
      "sourceMap": false
    },
    "handlebars": {
      "root": "",
      "noBOM": false,
      "name": "",
      "namespace": "",
      "knownHelpersOnly": false,
      "forcePartial": false,
      "knownHelpers": [],
      "commonjs": "",
      "amd": false,
      "sourceMap": false
    }
  },
  "minifiers": {
    "css": {
      "enabled": true,
      "termSemicolons": true,
      "gzip": false
    },
    "javascript": {
      "enabled": true,
      "termSemicolons": true,
      "gzip": false
    }
  }
}

Ok, there’s a lot of stuff in there. As well as compiling Sass files, Web Compiler can compile a lot of other things. From this defaults file, we are able to configure the settings for the various compilers.

I’m only interested in Sass and minification so everything else can be safely removed.

{
  "compilers": {
    "sass": {
      "autoPrefix": "",
      "includePath": "",
      "indentType": "space",
      "indentWidth": 2,
      "outputStyle": "nested",
      "Precision": 5,
      "relativeUrls": true,
      "sourceMapRoot": "",
      "lineFeed": "",
      "sourceMap": true
    }
  },
  "minifiers": {
    "css": {
      "enabled": true,
      "termSemicolons": true,
      "gzip": false
    }
  }
}

I’m not going to change any of the defaults other than enabling source maps, which will make life a bit easier when debugging style issues.

The final piece of configuration needed is to enable compile on build. This will issue a prompt to install the NuGet package I mentioned earlier, which provides an MSBuild task so the Sass can be compiled during builds. To do this, right click on the compilerconfig.json and go to the Web Compiler menu and then Enable compile on build…

When you see the prompt to install the NuGet package, select Yes.

You should then be able to see the BuildWebCompiler NuGet package in your project’s dependencies.

Linking the Css

The last thing to do is to add a link to the compiled Css file into the index.html.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <title>BlazorSass</title>
    <base href="/" />
    <link href="css/BlazorSass.min.css" rel="stylesheet" />
</head>
<body>
    <app>Loading...</app>

    <script src="_framework/blazor.webassembly.js"></script>
</body>
</html>

That’s it, the project is now set up to use Sass for styling.

Summary

Sass is a very powerful tool to help us manage styles over traditional CSS. We can even avoid the JavaScript tool chain by taking advantage of Mads Kristensen’s excellent Web Compiler extension.

If you would like to learn more about Sass, then here are a few resources which will help you get started.

In a future post I’m going to share some ideas on how to structure Blazor applications which will include Sass structure as well. So check back for that one soon.

3 Ways to Communicate Between Components in Blazor

One of the most common questions I get asked, or I see asked, is: what is the best way to communicate between components? I think the answer, like so many things in software development, is that it depends. We’re going to look at 3 different ways to communicate between components and how you can best use them.

The three techniques that we’re going to look at are EventCallbacks, Cascading Values, and State Containers.

1. EventCallbacks

We’re going to start with EventCallbacks. EventCallback and EventCallback<T> were added to Blazor in .NET Core 3 Preview 3. They give us a better way to define component callbacks over using Action or Func.

The reason for this is that when using Action or Func the callback method had to make a call to StateHasChanged in order to render any changes. With EventCallback this call is made for you, automatically by the framework. You can also provide a synchronous or asynchronous method to an EventCallback without having to make any code changes.

EventCallbacks are great for situations where you have nested components and you need a child component to trigger a parent component’s method upon a certain event.

<!-- Child Component -->

<button @onclick="@(() => OnClick.InvokeAsync("Hello from ChildComponent"))">Click me</button>

@code {

    [Parameter] public EventCallback<string> OnClick { get; set; }

}
<!-- Parent Component -->

<ChildComponent OnClick="ClickHandler"></ChildComponent>

<p>@message</p>

@code {

    string message = "Hello from ParentComponent";

    void ClickHandler(string newMessage)
    {
        message = newMessage;
    }

}

In the example above, the child component exposes an EventCallback<string> parameter, OnClick. The parent component has registered its ClickHandler method with the child component. When the button is clicked, the parent component’s method is invoked with the string from the child. Due to the automatic call to StateHasChanged, the message the parent component displays is automatically updated.

2. Cascading Values

The second method we are going to look at is Cascading Values. If you want a detailed rundown of Cascading Values and Parameters then check out one of my earlier posts. But in a nutshell, cascading values and parameters are a way to pass a value from a component to all of its descendants without having to use traditional component parameters.

This makes them a great option when building UI controls which need to manage some common state. One prominent example is Blazor’s form and validation components. The EditForm component cascades an EditContext value to all the controls in the form. This is used to coordinate validation and invoke form events.

Let’s have a look at an example.

<!-- Tab Container -->

<h1>@SelectedTab</h1>

<CascadingValue Value="this">
    @ChildContent
</CascadingValue>

@code {

    [Parameter] public RenderFragment ChildContent { get; set; }

    public string SelectedTab { get; private set; }

    public void SetSelectedTab(string selectedTab)
    {
        SelectedTab = selectedTab;
        StateHasChanged();
    }
}
<!-- Tab Component -->

<div @onclick="SetSelectedTab">@Title @(TabContainer.SelectedTab == Title ? "Selected" : "")</div>


@code {

    [CascadingParameter] TabContainer TabContainer { get; set; }

    [Parameter] public string Title { get; set; }

    void SetSelectedTab()
    {
        TabContainer.SetSelectedTab(Title);
    }
}

In the code above, we’ve set up a TabContainer component. It displays the currently selected tab and also sets up a cascading value. In this example, the value that is cascaded is the TabContainer component itself.

In the Tab component, we receive that cascaded value and use it to call the SetSelectedTab method on the TabContainer whenever the div is clicked. We also check the value of the current SelectedTab, again using the cascaded value, and if it matches the title of the tab, we add “Selected” to the tab title.

3. State Container

The last method we’re going to look at is using a state container. There are various degrees of complexity you can go to when implementing a state container. It can be a simple class injected as a singleton or a scoped service, depending on whether you’re using Blazor client-side or server-side respectively. Or you could implement a much more complex pattern such as Flux.

This is the most complex solution out of the 3. With this solution it is possible to manage and coordinate many components across whole applications.

In the example, we’ll look at using a simple AppState class as our state container.

public class AppState
{
    public string SelectedColour { get; private set; }

    public event Action OnChange;

    public void SetColour(string colour)
    {
        SelectedColour = colour;
        NotifyStateChanged();
    }

    private void NotifyStateChanged() => OnChange?.Invoke();
}

This is our AppState class. It stores the currently selected colour as well as exposing a method to update it. There’s also an OnChange event, which will be needed by any components wishing to show the selected colour.
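Before components can inject it, AppState needs registering with the DI container. A sketch of the registration, assuming the standard ConfigureServices method of each hosting model (pick the line matching your model):

```csharp
// Server-side Blazor: scoped, so each user's circuit gets its
// own AppState instance rather than sharing state between users.
services.AddScoped<AppState>();

// Client-side Blazor: singleton is fine, since the whole app
// runs in the browser for a single user anyway.
services.AddSingleton<AppState>();
```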

<!-- Red Component -->

@inject AppState AppState

<button @onclick="SelectColour">Select Red</button>

@code {

    void SelectColour()
    {
        AppState.SetColour("Red");
    }

}
<!-- Blue Component -->

@inject AppState AppState

<button @onclick="SelectColour">Select Blue</button>

@code {

    void SelectColour()
    {
        AppState.SetColour("Blue");
    }

}

Next we have two components, Red and Blue. These components just update the selected colour on the AppState via a button click.

@inject AppState AppState
@implements IDisposable

@AppState.SelectedColour

<RedComponent />
<BlueComponent />

@code {

    protected override void OnInitialized()
    {
        AppState.OnChange += StateHasChanged;
    }

    public void Dispose()
    {
        AppState.OnChange -= StateHasChanged;
    }

}

This last component ties everything together. It handles the OnChange event exposed by the AppState class. Whenever the selected colour is changed StateHasChanged will be invoked and the component will re-render with the new selected colour.

It’s important to remember to unsubscribe the component’s StateHasChanged method from the AppState’s OnChange event, otherwise we could introduce a memory leak. We can do this by implementing the IDisposable interface as per the example code above.

Summary

We’ve looked at 3 different ways to handle communication between components in Blazor.

We started off looking at the simple scenario of parent-child communication using EventCallbacks. We then looked at how we could use Cascading Values to coordinate communication among closely related, nested components. Finally, we looked at a larger, more application-wide communication method: state containers.

What’s your preferred method?

Getting Started with TypeScript for JSInterop in Blazor

One of the most exciting prospects of Blazor is the potential to remove the need for JavaScript. However, we are not there yet. In an earlier post, I pointed out that WebAssembly isn’t currently able to interact with the DOM or call browser APIs. I’m not even sure how server-side Blazor is going to move away from JavaScript or if it can.

So if we’re going to have to write JavaScript, then it would be great to get as close to our development experience with C# as we can. This is where we can leverage TypeScript.

TypeScript is a first class citizen in Visual Studio, one of the benefits of this is that any TypeScript you write in your Blazor project will be transpiled to JS for you automatically.

But what about TypeScript in a Razor Class Library project? Unfortunately, this doesn’t get compiled automatically. It turns out though that it’s not too difficult to get this working.

What is TypeScript?

Just before we continue, for those who aren’t familiar: TypeScript is a typed superset of JavaScript which compiles to plain JavaScript. It gives us the ability to use static typing, classes and interfaces. But the biggest benefit is that because it’s compiled, we get compile-time checks on our code, as opposed to writing plain JavaScript where errors may only show up at runtime.
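
To make that concrete, here’s a small example of the kind of checking TypeScript gives us. The function and interface names here are invented for illustration; they aren’t part of the project we’re about to build.

```typescript
// A typed function: the compiler enforces both the parameter type and
// the return type, so a bad call fails the build, not the user.
function greet(name: string): string {
  return `Hello, ${name}`;
}

// greet(42);  // compile error: 'number' is not assignable to 'string'
const message = greet("Blazor");

// Interfaces give us shape-checking too: an object missing 'text',
// or giving it the wrong type, is rejected at compile time.
interface PromptOptions {
  text: string;
  defaultValue: string;
}

function describePrompt(options: PromptOptions): string {
  return `${options.text} (default: ${options.defaultValue})`;
}

console.log(message);
console.log(describePrompt({ text: "Type anything", defaultValue: "hello" }));
```

None of these checks exist in plain JavaScript, where the bad call to greet would only surface at runtime.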

The Example

For this post we’re going to use the standard Razor Class Library template. You can create this via the dotnet CLI using the following command.

dotnet new razorclasslib

Inside this project there is a content folder with a file called exampleJsInterop.js, with the following contents.

// This is a JavaScript module that is loaded on demand. It can export any number of
// functions, and may import other JavaScript modules if required.

export function showPrompt(message) {
  return prompt(message, 'Type anything here');
}

We’re going to convert this file to TypeScript and then make a few changes to the project to make it compile when the project builds.

Converting to TypeScript

As I mentioned earlier, TypeScript is a superset of JavaScript, which means any valid JavaScript is valid TypeScript. So we could just give the exampleJsInterop file a .ts extension and we would be done. But that kind of defeats the point of using TypeScript.

So we’re going to rewrite the code to take advantage of some of the features TypeScript gives us.

namespace JSInteropWithTypeScript {

  class ExampleJsFunctions {
    public showPrompt(message: string): string {
      return prompt(message, 'Type anything here');
    }
  }

  export function Load(): void {
    window['exampleJsFunctions'] = new ExampleJsFunctions();
  }
}

JSInteropWithTypeScript.Load();

This is how things look once converted to TypeScript. Hopefully, this should look a lot more familiar to C# developers. We now have a namespace, a class and types. I’ll admit that types don’t really give us much here, as this code is going to be called by C#. But if the showPrompt method was going to be called by another TypeScript method, we would now benefit from compile-time checks.

Configuring the build

We now have our TypeScript file, so how can we get it to build with our project? The first thing we need to do is install the [Microsoft.TypeScript.MSBuild](https://www.nuget.org/packages/Microsoft.TypeScript.MSBuild) package from NuGet. You can do this either via the NuGet package manager.

Install-Package Microsoft.TypeScript.MSBuild

Or via the dotnet CLI.

dotnet add package Microsoft.TypeScript.MSBuild

Once installed, we need to add a tsconfig.json to the root of the Razor Class Library project. Below is an example of a basic config.

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es5",
    "sourceMap": true
  },
  "exclude": [
    "node_modules"
  ]
}

Checking the build

That should be all we need to be able to compile our TypeScript. We can now do a build, and if everything has gone to plan you should see an exampleJsInterop.js file and an exampleJsInterop.js.map file.

The .map file has been generated for us by the TypeScript compiler. Map files provide a mapping between the original TypeScript source file and the compiled JavaScript. This means we can debug the TypeScript version of our code in the browser instead of the compiled JavaScript version.

Summary

In this post, we’ve taken a first look at how we can use TypeScript with our Blazor libraries, as well as how we can configure our projects to compile our TypeScript files during a build.

There is a lot more that can be done with TypeScript in terms of configuration, but I wanted to provide a quick-start guide which should work for most interop cases in Blazor. Have you written much interop code so far? Let me know in the comments.

Building Components Manually via RenderTreeBuilder

We’re all very used to seeing Razor Components defined using Razor syntax. No surprises there; after all, they’re called Razor Components. But you can also skip Razor and build components manually using C# and Blazor’s RenderTreeBuilder.

What we’ll build

Let’s start by looking at what we are going to build. We’re going to replicate the following component using the pure C# approach.

<!-- Menu.razor -->

<nav class="menu">
    <ul>
        <li><NavLink href="/" Match="NavLinkMatch.All">Home</NavLink></li>
        <li><NavLink href="/contact">Contact</NavLink></li>
    </ul>
</nav>

This is a simple menu component that will render a couple of links, nothing fancy.

Right, let’s get started.

Scaffolding the component

We need to create a new class. Much the same as when creating a component using Razor, the name of the class is what we’ll use to reference the component in any markup. As we’re re-creating the menu component above, we’ll call our new class Menu. As this is a component, we also need to inherit from ComponentBase.

public class Menu : ComponentBase
{
}

That’s simple enough. The only thing we need to do now is to override the BuildRenderTree method from the ComponentBase class.

public class Menu : ComponentBase
{
    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
    }
}

In terms of the basic setup that is all we need. Everything to do with defining the markup the component produces will happen inside the BuildRenderTree method.

Defining the component

Now we’ve scaffolded the component, we need to define what it does. In order to do that we are going to use the RenderTreeBuilder. This class contains a set of methods which we can use to define everything the component does.

We’ll start by calling the base implementation; you need to do this, otherwise things go a little screwy. Then we’ll define the first line of our menu component.

protected override void BuildRenderTree(RenderTreeBuilder builder)
{
    base.BuildRenderTree(builder);
    builder.OpenElement(0, "nav");
    builder.AddAttribute(1, "class", "menu");
}

The code above translates into the following Razor markup.

<nav class="menu">

Like most of the methods we’re going to use on the RenderTreeBuilder, the OpenElement and AddAttribute methods create a RenderTreeFrame. A RenderTreeFrame is essentially a tiny piece of the UI. It’s these building blocks Blazor uses to render the final HTML output.

OK, let’s create the rest of the instructions for our menu component up to the first link.

protected override void BuildRenderTree(RenderTreeBuilder builder)
{
    base.BuildRenderTree(builder);
    builder.OpenElement(0, "nav");
    builder.AddAttribute(1, "class", "menu");
    
    builder.OpenElement(2, "ul");
    builder.OpenElement(3, "li");
    builder.OpenComponent<NavLink>(4);
    builder.AddAttribute(5, "href", "/");
    builder.AddAttribute(6, "Match", NavLinkMatch.All);
    builder.AddAttribute(7, "ChildContent", (RenderFragment)((builder2) => {
        builder2.AddContent(8, "Home");
    }));
    builder.CloseComponent();
    builder.CloseElement();
}

The code above now translates into the following Razor markup.

<nav class="menu">
    <ul>
        <li>
            <NavLink href="/" Match="NavLinkMatch.All">Home</NavLink>
        </li>

We’ve now had to create another component, NavLink. We do this using the OpenComponent method, specifying the type of the component we want. The other point of interest is the child content for the NavLink component.

Child content is always defined as a RenderFragment. This is just a delegate that writes its content to a RenderTreeBuilder. We’re using a lambda expression to build the child content, then passing it to the NavLink component as a parameter.

Let’s go ahead and write out the rest of the instructions to complete our menu component.

protected override void BuildRenderTree(RenderTreeBuilder builder)
{
    base.BuildRenderTree(builder);
    builder.OpenElement(0, "nav");
    builder.AddAttribute(1, "class", "menu");
    
    builder.OpenElement(2, "ul");
    builder.OpenElement(3, "li");
    builder.OpenComponent<NavLink>(4);
    builder.AddAttribute(5, "href", "/");
    builder.AddAttribute(6, "Match", NavLinkMatch.All);
    builder.AddAttribute(7, "ChildContent", (RenderFragment)((builder2) => {
        builder2.AddContent(8, "Home");
    }));
    builder.CloseComponent();
    builder.CloseElement();
    
    builder.OpenElement(9, "li");
    builder.OpenComponent<NavLink>(10);
    builder.AddAttribute(11, "href", "/contact");
    builder.AddAttribute(12, "ChildContent", (RenderFragment)((builder2) => {
        builder2.AddContent(13, "Contact");
    }));
    builder.CloseComponent();
    builder.CloseElement();
    builder.CloseElement();
    builder.CloseElement();
}

The code above now translates into the Razor markup we started with.

<nav class="menu">
    <ul>
        <li><NavLink href="/" Match="NavLinkMatch.All">Home</NavLink></li>
        <li><NavLink href="/contact">Contact</NavLink></li>
    </ul>
</nav>

We can now reference our C#-only component in any Razor markup, just as you would any other component.

<Menu />

Or even in another C# only component.

builder.OpenComponent<Menu>(0);
builder.CloseComponent();

Sequence Numbers

You may have noticed that most instructions we added when creating our component had a number associated with them. These are sequence numbers, and they are extremely important to understand when building components this way.

The sequence number is used when Blazor is calculating diffs. To quote Steve Sanderson.

Unlike .jsx files, .razor/.cshtml files are always compiled. This is potentially a great advantage for .razor, because we can use the compile step to inject information that makes things better or faster at runtime.

A key example of this are sequence numbers. These indicate to the runtime which outputs came from which distinct and ordered lines of code. The runtime uses this information to generate efficient tree diffs in linear time, which is far faster than is normally possible for a general tree diff algorithm.

There has been a lot of misunderstanding about how these numbers should be generated. It turns out most of us in the community were getting it wrong and dynamically generating these numbers using code like this.

var index = 0;

builder.OpenElement(index++, "div");

This led to Steve creating this Gist explaining why these sequence numbers should be hard-coded. I urge everyone to have a read of that Gist before embarking on creating components manually.

This leads us to the final part of this post…

Why build a component manually?

I think my personal opinion on this is that you probably shouldn’t. Well, certainly not as a default anyway.

As you can see in the example above, building components this way is quite verbose and harder to read. It’s also much easier to make a mistake: you’re responsible for ordering all the instructions and closing tags correctly, and if you get this wrong you won’t know until runtime, as the compiler can’t help you.

This is an advanced use of Blazor and most of the time it’s just not necessary. Dealing with the sequence numbers alone is a maintenance nightmare. I’ve used this technique when building the Blazored Menu library. But looking at it now, I fell into the trap of auto-generating sequence numbers, and I could have achieved the same result using just Razor.

Summary

This has been an interesting post, as my view on this approach has changed while writing it. I’m certainly not saying that this method should be avoided at all costs or that using it is bad practice, but I think this is an advanced technique for niche situations.

What are your thoughts? Let me know in the comments.

Using FluentValidation for Forms Validation in Blazor

Blazor now has built-in forms and validation. The default implementation uses data annotations and is a very similar experience to forms and validation in ASP.NET MVC applications. While it’s great to have this included out of the box, there are other popular validation libraries available, and it would be great to be able to use them in place of data annotations if we so choose.

FluentValidation is a popular alternative to data annotations with over 12 million downloads. So I thought it would be interesting to see how much work it would take to integrate FluentValidation with Blazor’s forms and validation system. All the code from this post is available on GitHub.

If you just want to use FluentValidation in your Blazor app and you’re not interested in the details, I have developed the code from this post into a NuGet package called Blazored FluentValidation. You can just install it and get on with writing your code!

Getting Setup

I’m going to start with a new client-side Blazor project but you can use server-side Blazor if you prefer. The sample code contains both project types.

First, we need to install the FluentValidation library from NuGet. You can use the package manager in Visual Studio for this, or if you prefer, you can use the dotnet CLI.

dotnet add package FluentValidation

We’re also going to need something to validate, so let’s create a simple Person class. This will define the various fields that will be available on the form.

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string EmailAddress { get; set; }
}

That’s it for the basic setup. Next, we will create the validation rules for our Person model.

Creating a model validator

FluentValidation works by creating a validator for each object you want to validate. In the validator you create validation rules for each property of the object using a fluent syntax.

Out of the box there are 20 predefined validators you can use covering most common validation checks such as not null, greater than or valid email. But if you need something that’s not covered you can also write your own custom validators.

To write a model validator you must create a class that inherits from AbstractValidator<T>. You then add all the validation rules for the model in the constructor.

This is the validator code for our Person class.

public class PersonValidator : AbstractValidator<Person>
{
    public PersonValidator()
    {
        RuleFor(p => p.Name).NotEmpty().WithMessage("You must enter a name");
        RuleFor(p => p.Name).MaximumLength(50).WithMessage("Name cannot be longer than 50 characters");
        RuleFor(p => p.Age).NotEmpty().WithMessage("Age must be greater than 0");
        RuleFor(p => p.Age).LessThan(150).WithMessage("Age cannot be greater than 150");
        RuleFor(p => p.EmailAddress).NotEmpty().WithMessage("You must enter an email address");
        RuleFor(p => p.EmailAddress).EmailAddress().WithMessage("You must provide a valid email address");
    }
}

In this instance there are no custom validators; we’re just using the built-in ones.

By using the NotEmpty validator, we’re making all the properties required. We’ve set a maximum length for the Name property. We’ve said no age can be greater than 150. And that the email must be in a valid format.

The WithMessage method allows us to define what the error message should be if that particular rule is not met.

Building a form validator component

We’ve now got all the groundwork done: FluentValidation is installed and we’ve set up a validator for our Person model. So how do we make this work with the forms and validation system in Blazor?

As it turns out, we only need to build a couple of things. The first is a new validator component to use in place of the DataAnnotationsValidator which comes as the default. Then we need to create an extension method for the EditContext which calls the validation logic from FluentValidation. Other than that, all the other form components will just work without any modification. That’s really cool.

We’ll start by building the new validator component to replace the default data annotations one. The purpose of the validator component is to hook up the validation mechanism with the form. I really like this approach, as you are able to change the way you perform validation in your app by simply swapping in a new validator component.

This is what our FluentValidation validator component looks like.

public class FluentValidationValidator : ComponentBase
{
    [CascadingParameter] EditContext CurrentEditContext { get; set; }

    protected override void OnInitialized()
    {
        if (CurrentEditContext == null)
        {
            throw new InvalidOperationException($"{nameof(FluentValidationValidator)} requires a cascading " +
                $"parameter of type {nameof(EditContext)}. For example, you can use {nameof(FluentValidationValidator)} " +
                $"inside an {nameof(EditForm)}.");
        }

        CurrentEditContext.AddFluentValidation();
    }
}

As you may have noticed, this is just a standard component inheriting from ComponentBase. It receives a CascadingParameter called CurrentEditContext which is passed down from the EditForm component. The component makes sure that this parameter is not null and then calls the AddFluentValidation method.

The AddFluentValidation method is the extension method we mentioned before, and we will be looking at that in a moment. But as you can see, this is a really simple component; its only job is to call that extension method on the EditContext.

Extending EditContext to use FluentValidation

The EditContext is the engine of forms validation in Blazor. It’s what’s responsible for executing validation as well as managing all the validation state.

Following the pattern used by the ASP.NET Core team for the default data annotations validation, we’re going to create a new extension method for EditContext which will tell it how to use FluentValidation.

public static class EditContextFluentValidationExtensions
{
    public static EditContext AddFluentValidation(this EditContext editContext)
    {
        if (editContext == null)
        {
            throw new ArgumentNullException(nameof(editContext));
        }

        var messages = new ValidationMessageStore(editContext);

        editContext.OnValidationRequested +=
            (sender, eventArgs) => ValidateModel((EditContext)sender, messages);

        editContext.OnFieldChanged +=
            (sender, eventArgs) => ValidateField(editContext, messages, eventArgs.FieldIdentifier);

        return editContext;
    }

    private static void ValidateModel(EditContext editContext, ValidationMessageStore messages)
    {
        var validator = GetValidatorForModel(editContext.Model);
        var validationResults = validator.Validate(editContext.Model);

        messages.Clear();
        foreach (var validationResult in validationResults.Errors)
        {
            messages.Add(editContext.Field(validationResult.PropertyName), validationResult.ErrorMessage);
        }

        editContext.NotifyValidationStateChanged();
    }

    private static void ValidateField(EditContext editContext, ValidationMessageStore messages, in FieldIdentifier fieldIdentifier)
    {
        var properties = new[] { fieldIdentifier.FieldName };
        var context = new ValidationContext(fieldIdentifier.Model, new PropertyChain(), new MemberNameValidatorSelector(properties));

        var validator = GetValidatorForModel(fieldIdentifier.Model);
        var validationResults = validator.Validate(context);

        messages.Clear(fieldIdentifier);
        messages.AddRange(fieldIdentifier, validationResults.Errors.Select(error => error.ErrorMessage));

        editContext.NotifyValidationStateChanged();
    }

    private static IValidator GetValidatorForModel(object model)
    {
        var abstractValidatorType = typeof(AbstractValidator<>).MakeGenericType(model.GetType());
        var modelValidatorType = Assembly.GetExecutingAssembly().GetTypes().FirstOrDefault(t => t.IsSubclassOf(abstractValidatorType));
        var modelValidatorInstance = (IValidator)Activator.CreateInstance(modelValidatorType);

        return modelValidatorInstance;
    }
}

There is a lot of code there so let’s break it all down.

Hooking up events

The AddFluentValidation method’s main job is to wire up the two events OnValidationRequested and OnFieldChanged.

editContext.OnValidationRequested += (sender, eventArgs) => ValidateModel((EditContext)sender, messages);

editContext.OnFieldChanged += (sender, eventArgs) => ValidateField(editContext, messages, eventArgs.FieldIdentifier);

OnValidationRequested is fired when validation is required for the whole model, for example, when attempting to submit the form. OnFieldChanged is fired when an individual field’s value is changed.

The other important thing this method does is create a new ValidationMessageStore associated with the current EditContext.

var messages = new ValidationMessageStore(editContext);

A ValidationMessageStore is where all the validation messages for a form’s fields are kept. It is used to decide whether the form is valid, based on whether it contains any validation messages after validation has been run.

Validating the model

Next up we have the ValidateModel method. This method is invoked when the OnValidationRequested event is triggered. The main trigger for this event is the user attempting to submit a form, so the whole model must be checked.

FluentValidation makes this really easy. All we have to do is call a method called Validate on the model validator. We get the model validator via the GetValidatorForModel method. We pass it the model we want a validator for and it uses a bit of reflection to create the correct instance. In our case, it will return an instance of the PersonValidator class we built earlier.

Once we have an instance of the validator, we can call the Validate method, passing in the model we want to validate, and it will give us a ValidationResult back.

var validator = GetValidatorForModel(editContext.Model);
var validationResults = validator.Validate(editContext.Model);

As we’re re-validating the form, we need to clear out any existing validation messages from the validation message store.

messages.Clear();

It’s then just a case of looping over the errors collection on the validation result and recording any errors into the validation message store.

foreach (var validationResult in validationResults.Errors)
{
    messages.Add(editContext.Field(validationResult.PropertyName), validationResult.ErrorMessage);
}

Finally, we call NotifyValidationStateChanged on the EditContext which tells the context that there has been a change in the validation state.

Validating individual fields

The last method, ValidateField, is invoked from the OnFieldChanged event. This allows us to validate a field whenever it’s been altered.

We start by creating a ValidationContext which allows us to specify the fields we want to validate.

var properties = new[] { fieldIdentifier.FieldName };
var context = new ValidationContext(fieldIdentifier.Model, new PropertyChain(), new MemberNameValidatorSelector(properties));

This is set up to only include the field which raised the event.

Just like before, we call GetValidatorForModel to get a validator instance, then pass the validation context into the Validate method; in this overload only the field we specified will be validated.

var validator = GetValidatorForModel(fieldIdentifier.Model);
var validationResults = validator.Validate(context);

We clear any existing validation messages from the validation message store, except this time we only do it for the field we are validating.

messages.Clear(fieldIdentifier);

If there are any error messages in the validation result, they are added to the validation message store.

messages.AddRange(fieldIdentifier, validationResults.Errors.Select(error => error.ErrorMessage));

Before finally calling NotifyValidationStateChanged, as we did in the previous method.

And that’s it! This is all we need to hook up FluentValidation to the built-in forms validation system in Blazor.

Sample Projects

If you want to see this code in action, I’ve created a repo with a client-side Blazor and a server-side Blazor sample. The validation code in both projects is completely identical; everything works exactly the same regardless of project type.

Summary

That brings this post to a close. I was really surprised at just how simple it was to replace the default data annotations validation with FluentValidation. I think this is yet again another great example of the team providing options out of the box but not locking you in.

I do want to say that I’m by no means an expert on FluentValidation, the code above seems to work for most scenarios I’ve run it through. But if you find any issues please let me know in the comments.

Using JavaScript Interop in Blazor

While WebAssembly has the potential to end our reliance on JavaScript, JavaScript is not going away anytime soon. There are still a lot of things WebAssembly just can’t do, most notably DOM manipulation. If you’re running server-side Blazor then you don’t even have WebAssembly as an option. So how do we handle this problem?

The answer is JavaScript interop. When we can’t do what we need using .NET code alone, we can use the IJSRuntime abstraction to make calls into JavaScript functions. We can even get JavaScript functions to make calls into our C# code.

JSRuntime.Static is gone

Before we go any further, I want to point out a recent change in the way JS interop works. Historically, developers could execute their calls to JavaScript using JSRuntime.Current. This was a static version of the JSRuntime and avoided the need to inject anything into components or services.

This worked fine when using Blazor WebAssembly, as the app was running in the local browser and there was no state shared with anyone else. However, when running Blazor Server this caused some serious problems. Because it was static, the behaviour was extremely unpredictable, so the team has now removed this static implementation and developers can only use an injected instance of IJSRuntime.

IJSRuntime

This abstraction is our gateway into the JavaScript world. It gives us two methods we can use to call JavaScript functions.

ValueTask<TValue> InvokeAsync<TValue>(string identifier, params object[] args);
ValueTask InvokeVoidAsync(string identifier, params object[] args);

The first is InvokeAsync, which we use when we expect the JavaScript function we’re calling to return something to us. It takes a string, identifier, which identifies the function to call. This identifier is relative to the window object, so if you wanted to call window.Blazored.LocalStorage.setItem you would pass in `Blazored.LocalStorage.setItem`. If you have any parameters you need to pass to the JavaScript function, you can do so using the second argument.

The second is InvokeVoidAsync and as you can probably tell, we use this when we want to call a JavaScript function that doesn’t return anything. Just as with InvokeAsync, the first argument is the function we want to call and the second allows us to pass in various arguments.

You’ll have noticed that both methods are async. This is important because if you want your code to work in both client-side and server-side Blazor then all JS interop calls must be asynchronous due to the SignalR connection used by server-side Blazor.

IJSInProcessRuntime

If you have client-side only scenarios where you need to invoke a JavaScript call synchronously, then you have the ability to downcast IJSRuntime to IJSInProcessRuntime. This interface offers us the same two methods, only this time they are synchronous.

T Invoke<T>(string identifier, params object[] args);
void InvokeVoid(string identifier, params object[] args);

I can’t stress this enough, only use IJSInProcessRuntime when using Blazor WebAssembly. This will not work if you’re using Blazor Server.

How to call a JavaScript function from C#

Now that we have covered the tools available to us, let’s look at an example of how we can use them to call a JavaScript function.

We’re going to set up a component which will interop with the following JavaScript function.

window.ShowAlert = (message) => {
    alert(message);
}

The code above is just wrapping a call to the JavaScript alert function allowing us to pass in a message to be displayed.

Making asynchronous calls

Let’s check out what the code looks like to call this function from a Razor Component.

@inject IJSRuntime jsRuntime

<input type="text" @bind="message" />
<button @onclick="ShowAlert">Show Alert</button>

@code {

    string message = "";

    private async Task ShowAlert()
    {
        await jsRuntime.InvokeVoidAsync("ShowAlert", message);
    }
}

Starting from the top, we’re requesting an instance of IJSRuntime from the DI container using the @inject directive. We’ve got an input which we can use to enter a message, then a button which triggers the interop call to the JS function.

Making synchronous calls (Blazor WebAssembly Only)

As I said earlier, you should always default to async calls wherever possible to make sure your code will run in both client and server scenarios. But if you have the need, and you know the code won’t be running on the server, then you can make sync calls.

Let’s look at the same example again, but this time we’ll make some changes to run the code synchronously.

@inject IJSRuntime jsRuntime

<input type="text" @bind="message" />
<button @onclick="ShowAlert">Show Alert</button>

@code {

    string message = "";

    private void ShowAlert()
    {
        ((IJSInProcessRuntime)jsRuntime).InvokeVoid("ShowAlert", message);
    }
}

As you can see, we’ve downcast the IJSRuntime to IJSInProcessRuntime which has given us access to the synchronous InvokeVoid method. Other than updating the ShowAlert method signature, we haven’t had to make any further changes to make our code run synchronously.

Once again, this will not work with Blazor Server. If you try to run the above code you will end up getting an InvalidCastException. Only use this method when you’re sure your code will execute client-side only.

How to call a C# method from JavaScript

Sometimes you need to have JavaScript functions make calls into your C# code. One example of this is when using JavaScript promises. The promise may resolve sometime after the initial call and you need to know the result.
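
To illustrate the timing problem, here’s a runnable sketch in TypeScript. The names (loadDataLater, onLoaded) are invented for illustration, and a plain callback stands in for the real DotNet.invokeMethodAsync call so the flow can run outside a browser.

```typescript
// The interop call returns immediately, but the promise settles later.
// In a real Blazor app, the body of onLoaded would be a call such as
// DotNet.invokeMethodAsync('MyApp', 'OnDataLoaded', result) - that is
// the "call back into C#" moment, replaced here by a plain function.
function loadDataLater(onLoaded: (result: string) => void): Promise<string> {
  return new Promise<string>(resolve =>
    setTimeout(() => resolve("data ready"), 10)
  ).then(result => {
    onLoaded(result); // fires only once the promise has resolved
    return result;
  });
}

// This log line runs roughly 10ms after loadDataLater itself has returned,
// which is exactly why the result has to be pushed back to C# via a callback.
loadDataLater(result => console.log(result));
```

The point is that by the time the promise resolves, the original C#-to-JavaScript call has long since completed, so JavaScript needs a way to invoke C# with the result.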

There are two options when calling C# code. The first is calling static methods and the second is calling instance methods.

I’m only going to show asynchronous examples here as that should always be the default but you can use the downcasting method described above to call synchronous methods if required.

Calling static methods

When calling static methods we can use either DotNet.invokeMethod or DotNet.invokeMethodAsync. Much like before, you should always use the async version wherever possible, as this will make the code compatible with both client and server scenarios. Let’s look at an example.

namespace JSInteropExamples
{
    public static class MessageProvider
    {
        [JSInvokable]
        public static Task GetHelloMessage()
        {
            var message = "Hello from C#";
            return Task.FromResult(message);
        }
    }
}

We’re going to call the GetHelloMessage method in the class above. Static methods called this way must be public and decorated with the [JSInvokable] attribute.

We’re going to call it from the following JavaScript function.

window.WriteCSharpMessageToConsole = () => {
    DotNet.invokeMethodAsync('JSInteropExamples', 'GetHelloMessage')
      .then(message => {
        console.log(message);
    });
}

We’re using the DotNet.invokeMethodAsync function which is provided by the Blazor framework. The first argument is the name of the assembly containing the method we want to call. The second argument is the method name. As the call is asynchronous it returns a promise, when the promise resolves we take the message, which comes from our C# code, and log it to the console.

@inject IJSRuntime jsRuntime

<button @onclick="WriteToConsole">Run</button>

@code {

    private async Task WriteToConsole()
    {
        await jsRuntime.InvokeVoidAsync("WriteCSharpMessageToConsole");
    }
}

When we execute the above component, clicking the Run button will result in the message “Hello from C#” being printed to the browser console.

Calling instance methods

It’s also possible to call instance methods from JavaScript. Let’s make a few tweaks to the previous example to see how we can use it.

window.WriteCSharpMessageToConsole = (dotnetHelper) => {
    dotnetHelper.invokeMethodAsync('GetHelloMessage')
        .then(message => console.log(message));
}

The JS function now takes a parameter, dotnetHelper. We can use this helper, specifically its invokeMethodAsync function, to call our C# method. As before, when the promise resolves the message passed back from C# is printed to the console.

@inject IJSRuntime jsRuntime

<button @onclick="WriteToConsole">Run</button>

@code {
    private async Task WriteToConsole()
    {
        await jsRuntime.InvokeVoidAsync("WriteCSharpMessageToConsole", DotNetObjectReference.Create(this));
    }
            
    [JSInvokable]
    public Task<string> GetHelloMessage()
    {
        var message = "Hello from a C# instance";
        return Task.FromResult(message);
    }
}

The static MessageProvider class is now gone and the GetHelloMessage method now lives on the component, still decorated with the [JSInvokable] attribute. The call to invoke the JS method is slightly different: we’re now passing in a DotNetObjectReference. This special type stops the value passed to it from being serialised to JSON and instead passes it as a reference.
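
One caveat worth noting, which isn’t shown in the example above: a DotNetObjectReference keeps the component alive until it’s disposed, so if you’re creating one it’s good practice to hold on to it and dispose of it with the component. A rough sketch of what that could look like (the component name MyComponent is just an example):

```razor
@inject IJSRuntime jsRuntime
@implements IDisposable

<button @onclick="WriteToConsole">Run</button>

@code {
    // Assumes this file is called MyComponent.razor (hypothetical name).
    private DotNetObjectReference<MyComponent> objRef;

    private async Task WriteToConsole()
    {
        // Create the reference once and reuse it across calls.
        if (objRef == null)
        {
            objRef = DotNetObjectReference.Create(this);
        }

        await jsRuntime.InvokeVoidAsync("WriteCSharpMessageToConsole", objRef);
    }

    [JSInvokable]
    public Task<string> GetHelloMessage()
        => Task.FromResult("Hello from a C# instance");

    // Dispose the reference with the component so it can be garbage collected.
    public void Dispose() => objRef?.Dispose();
}
```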

When we run the code above we should now see a message in the console saying “Hello from a C# instance”.

Summary

In this post, we’ve had a detailed look at JavaScript interop. We’ve covered how to make calls from C# into JavaScript functions as well as how to make calls from JavaScript into C# methods.

I think the big takeaway here is that async is king when it comes to interop. It should always be your default choice if you want to make your code compatible with both client-side and server-side Blazor.

Deploying Blazor Apps Using Azure Pipelines

In my previous post, I showed how you can use Azure pipelines to build your Blazor (or Razor Components) apps. This time, I’m going to show you how you can deploy a Blazor app to Azure storage using Azure pipelines.

One of the great things about Blazor applications is that, once published, they’re just static files. This means they can be hosted on a wide variety of platforms, as they don’t require .NET to be running on the server.

This opens up loads of hosting options which .NET developers have not been able to use before now. The one I’m going to show you in this post is Azure storage. Just to be clear, Azure storage is not a free option, but it’s a very low-cost one. I use Azure a lot, so keeping everything in one place is easier for me and I’m happy to pay a few pennies a month for that. If you’re looking for a completely free option, then GitHub Pages or Netlify would be worth checking out.

I’m going to assume you already have an Azure account, but if you don’t, you can sign up for one here.

Creating a Storage Account

We’re going to start by creating a new storage account; this is where our site is going to be deployed. Once you’re logged into the Azure portal, go to the storage accounts service.

From here, click Add.

We’re now presented with the create storage account screen.

If you have multiple subscriptions you will need to make sure the one selected is correct. You then need to do the same with the resource group. You also need to give the storage account a name. You can leave the defaults for everything else and press Review + create.

You’ll now be presented with a summary of what is going to be set up.

You just need to press Create and Azure will begin deploying your new storage account.

Configuring the Storage Account

After a minute or two your new storage account should be ready. If you go to the storage account service, as we did earlier, you should see your new account in the list. Click on it and you’ll be taken to the details view. You then need to click on the Static website link in the settings menu.

You should now see the static site configuration screen.

Click Enabled, then add index.html for both the index document name and the error document name. What this does is set up redirects so all requests get passed to the index.html document. We need this to happen so that Blazor’s router can manage all requests and load the correct components.

Once you’ve finished, you can click save.

After saving, you will see two extra fields: primary endpoint and secondary endpoint. These are the addresses you’ll use to load your site. Make a note of them; we’ll need them later to check the website is working once we’ve deployed it.

That’s it for the storage account. We can now head over to Azure pipelines and build our release pipeline.

Creating a Release Pipeline

From your Azure pipelines account, select your project and under Pipelines, select releases.

You then need to click New pipeline.

You should now see the new release pipeline screen with a menu on the right.

From the menu select Empty job template right at the top. You will now have the option to name the stage.

You can call this whatever you want, but it’s probably best to give it a meaningful name as I have above. Release pipelines can be made up of many stages. You could have stages for dev, QA and prod, for example. If you want to know more about stages in release pipelines, you can check out the docs.

Selecting an Artifact

Before we configure any tasks we need to tell the pipeline what we want to deploy. In the previous post we created a build artifact; this is where we configure the release pipeline to use it.

Click on the Add an artifact box in the pipeline.

In the modal, select Build then the name of the project containing your build pipeline from last time. Finally, select the source. Once you’re done, click Add.

Tasks

In the stages section of the pipeline, click on the link under your stage’s name.

You will then be taken to the Tasks screen.

From here click on the + on the agent job.

In the search box, type “azure file copy”. You should see the Azure File Copy task; click Add. The task should appear under the Agent job with some error text stating that some settings need attention. Click on the new task.

We now have to fill in the details highlighted in red. The first is the source. This one is important to get right; if we don’t, we’ll end up deploying the wrong files.

It’s important when building your Blazor app that you don’t zip the output after publishing. In the last post we set zipAfterPublish to false; now you can see why. We need to drill down through the published folders to the dist folder, and if we’d set zipAfterPublish to true, we wouldn’t be able to do this.

Once you’ve selected the dist folder, click OK.

Next, select your Azure subscription. You may also need to authorise your account at this point. If so, you’ll see a blue Authorize button. Just click it and follow the authorisation process.

The destination type needs to be set to Azure blob. The RM Storage Account is the name of the storage account we set up earlier. Finally, the container name should be $web; we saw this earlier when configuring the static site details in Azure storage.

That’s all the configuration done! To finish up, you can click on the pipeline’s name and call it something more meaningful.

Once you’re happy click Save.

Creating a Release

You should now be able to create a release using the new pipeline. The + Release button next to the Save button should be enabled; click it and you will see the Create a release modal appear.

Select the version; this will be the latest build from your build pipeline. Then click Create.

You can click on the link to Release-1 and after a few seconds you should hopefully see a successfully completed release.

Checking the Site

When we configured the static website settings for Azure storage earlier we were assigned two URLs. You can use either of them to test that the site is working. If all has gone to plan you should see your app running.

Summary

In this post, I showed how you can take a Blazor application built with Azure pipelines, create a release pipeline for it and then deploy it to Azure storage as a static website.

We’ve really only scratched the surface of what is possible with Azure pipelines. It’s an amazing tool to have at our disposal, and let’s not forget that it’s free as well. If you want to learn more, then head over to the Microsoft Docs site. They have loads of great information to check out.

Building Blazor Apps Using Azure Pipelines

This is the first of two posts on building and deploying Blazor apps using Azure Pipelines. In this first post, I’m going to show you how you can use Azure Pipelines to build your Blazor applications. In the second post, I will show you how to take your built application and deploy it to Azure Storage using release pipelines.

I would like to point out that the steps in this first post will also work for server-side Blazor applications.

What is Azure Pipelines

Azure Pipelines is part of the Azure DevOps services, formerly known as Visual Studio Team Services and, before that, Visual Studio Online. It’s a Continuous Integration (CI)/Continuous Deployment (CD) service which allows developers to build, test and deploy their code anywhere.

I’m going to presume you already have a Blazor app you want to build and that the code is hosted on GitHub. Don’t worry if you don’t use GitHub, however; Pipelines can integrate with many source control systems. You might just have to Google the specifics to connect yours.

Creating an Azure DevOps Account

If you don’t already have one, you will need to head over to dev.azure.com and create yourself a free account by clicking the “Start free” button.

You can either use an existing Microsoft account to sign up or you can create a new one. Just follow the steps and you will end up at the Azure DevOps dashboard.

Installing the Azure Pipelines app

The easiest way to connect your GitHub repo to your Azure Pipelines account is to use the Azure Pipelines app. Head back to GitHub, go to the Marketplace, and search for “azure pipelines”.

Select the Azure Pipelines app. You may see a slightly different screen from mine, as I already use Azure Pipelines for the Blazored repos, but you should see a button with either “Set up a plan” or “Set up a new plan”; click that.

You will then see pricing and setup options.

Click “Install it for free” and you will then see a summary of your order.

Click “Complete order and begin installation” and you will then be asked if you want to install the app to all your repos or a specific one.

I would suggest installing to all repositories unless you have repos using other CI offerings. Click “Install” and you will be redirected to your Azure DevOps account.

Creating an Azure DevOps Project

You now need to create a project. Azure DevOps uses projects as a way of organising work. You could have a single project that stores all of your build pipelines for multiple repos, or you can create a project for each repository. I use the second option, which I believe is what Microsoft recommends.

Click “Create Project”, give it a name, then click “Continue”.

We’re now asked to select which repo we want to build code from. Select your repo and Azure Pipelines will then inspect the code and give you a few options of how to configure your build pipeline.

Pipelines is pretty decent at getting this right, but if ASP.NET Core isn’t the recommended option, select it manually.

Configuring The Build

You should now be looking at your azure-pipelines.yml file. This is the file we are going to use to configure how our Blazor application is built. If you’ve never heard of YAML before, it originally stood for Yet Another Markup Language, though nowadays it’s a recursive acronym for YAML Ain’t Markup Language.

It allows us to define our build configuration as code. The big advantage of this approach is that the YAML gets checked into our repo and can then be versioned along with our code.

Let’s go over the default settings before we start making any changes.

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

variables:
  buildConfiguration: 'Release'

steps:
- script: dotnet build --configuration $(buildConfiguration)
  displayName: 'dotnet build $(buildConfiguration)'

At the top of the file we have some global settings.

Trigger specifies which branch should trigger a build to start. By default, this is set to the master branch, meaning any code checked into the master branch on GitHub will trigger a build. Also by default, any pull requests raised against master will trigger a build.

Pool specifies which virtual machine (build agent) should be used to build the application. Azure Pipelines offers hosted agents for Windows, Linux and even macOS.

Variables is a place you can define variables for the build. By default, there is a single variable buildConfiguration, which is set to Release. If you have any specific build variables you can add them here.

Steps is where the tasks needed to build our application are defined. There are a lot of built-in tasks and you can also create your own. By default, there is a single script task which calls the dotnet CLI to build the application.

One last thing to point out: YAML files are both whitespace- and case-sensitive, so be careful when editing them.

Configuring the Blazor build

Now that we understand what’s been generated by default, there are a few changes we need to make to get our Blazor app building.

First, we need to change the build agent. I’m not sure why, but I’ve not been able to get a Blazor app to build using the Linux agents. I always get an error saying “It was not possible to find any compatible framework version”. The Windows agents seem to work fine, though, so let’s make the change.

pool:
  vmImage: 'vs2017-win2016'

Next, we need to add in a new step, this needs to go before the existing script task.

steps:
- task: DotNetCoreInstaller@0
  displayName: 'Installing .NET Core SDK...'
  inputs:
    version: 3.0.100-preview6-012264

What this task does is install a specific version of the .NET Core SDK (if you’re having any issues with building, make sure you’ve updated the SDK version to the latest one). Microsoft keeps all its hosted agents up to date with the latest official SDK, but this does not include previews. As the current version of Blazor can only be built against the latest .NET Core 3 preview, we have to install it ourselves.

The next change is to the existing build task.

- script: dotnet build --configuration $(buildConfiguration) BuildAndDeploy/BuildAndDeploy.csproj
  displayName: 'Building $(buildConfiguration)...'

In my repo the .csproj file is in a directory called BuildAndDeploy, so I need to alter the CLI command so it knows where to find it. You will need to alter this according to your application’s structure.

Note: If you’re building a server-side Blazor application you will need to specify the .csproj file for the .Server project.

Finally, we need to add in two new tasks.

- task: DotNetCoreCLI@2
  displayName: 'Publishing App...'
  inputs:
    command: publish
    publishWebProjects: true
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: false

- task: PublishBuildArtifacts@1
  displayName: 'Publishing Build Artifacts...'

The first is a .NET Core CLI task calling the publish command. This will publish the app and output the result to $(Build.ArtifactStagingDirectory). This is just a predefined folder location on the build agent and is provided automatically by Azure Pipelines.

The second is a publish build artifacts task which publishes the result of the previous task, the build artifacts. It makes them available for download or to use in subsequent parts of a CI/CD pipeline. We’ll be using these in the next post.
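
For reference, putting all of the changes together, the finished azure-pipelines.yml looks roughly like this (the project path and SDK version are the ones from my repo, so adjust them for your own):

```yaml
trigger:
- master

pool:
  vmImage: 'vs2017-win2016'

variables:
  buildConfiguration: 'Release'

steps:
- task: DotNetCoreInstaller@0
  displayName: 'Installing .NET Core SDK...'
  inputs:
    version: 3.0.100-preview6-012264

- script: dotnet build --configuration $(buildConfiguration) BuildAndDeploy/BuildAndDeploy.csproj
  displayName: 'Building $(buildConfiguration)...'

- task: DotNetCoreCLI@2
  displayName: 'Publishing App...'
  inputs:
    command: publish
    publishWebProjects: true
    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: false

- task: PublishBuildArtifacts@1
  displayName: 'Publishing Build Artifacts...'
```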

Running the build

We now have our YAML file set up and ready to go. Click the “Save and Run” button. This will give you the option to add a commit message and to either commit the YAML file to your master branch or create a new branch. Going straight to master is probably fine, but the choice is yours.

Once you have committed your YAML file then a new build of your project will be started. If all goes to plan then you should have a successful build.

You can then see what’s been built by clicking on the “Artifacts” button in the top left and clicking on “drop”.

You can also download your built app by clicking “Download as zip” from the artifacts explorer.

Summary

That’s where we’ll leave things for this post. I have shown you how to use GitHub and Azure Pipelines to create an automated CI pipeline for your Blazor applications.

Next time we’ll take the build artifacts we’ve created from our build and use them in a release pipeline to deploy those files to Azure Storage.

As always if you have any questions please leave them in the comments below.

Understanding Cascading Values & Cascading Parameters

Cascading values and parameters are a way to pass a value from a component to all of its descendants without having to use traditional component parameters.

Blazor comes with a special component called CascadingValue. This component allows whatever value is passed to it to be cascaded down its component tree to all of its descendants. The descendant components can then choose to collect the value by declaring a property of the same type, decorated with the [CascadingParameter] attribute.

Basic Usage

You can set up a cascading value as follows.

<CascadingValue Value="@TheAnswer">
    <FooComponent></FooComponent>
</CascadingValue>

@code {
    int TheAnswer = 42;
}

As I said previously, FooComponent can make use of the value being cascaded down by declaring a property of the same type, decorated with the [CascadingParameter] attribute.

<h1>Foo Component</h1>

<p>The meaning of life is @MeaningOfLife.</p>

@code {
    [CascadingParameter] int MeaningOfLife { get; set; }
}

The MeaningOfLife property will automatically be populated with 42 when the component is rendered in the same way as a standard component [Parameter] property.

Just like component parameters, if a cascading value is changed the change will be passed down to all descendants. And any components using the value will be updated and automatically have StateHasChanged called.

Multiple Cascading Parameters

You may have noticed in the example above that there wasn’t any way of identifying the cascading value. The FooComponent just declares a property as a [CascadingParameter] and the value gets set. Which is fine when there is only one cascading parameter. But what happens when you have two, or three, or more?

There needs to be a way of identifying which one is which. As it happens, there are in fact two ways of identifying cascading parameters.

By Type

The first is provided by the framework and is based on types. Say we had two cascading values, one a string and one an int, and a single child component.

<CascadingValue Value="@FruitName">
    <CascadingValue Value="@FruitCount">
        <Fruit></Fruit>
    </CascadingValue>
</CascadingValue>

@code {
    string FruitName { get; set; } = "Apple";
    int FruitCount { get; set; } = 111;
}

The Fruit component declares a cascading parameter as follows.

<p>The fruit is: @Name</p>

@code {
    [CascadingParameter] string Name { get; set; }
}

Blazor will look at the type of the Name parameter and try and find a cascading value which matches. In this case, it will match FruitName and bind Name to its value.

You may be thinking, that’s great, but what happens if both cascading values have the same type?

In that situation, the framework is still going to match based on type, except it will use the closest ancestor to the component requesting the parameter. Modifying the previous example just a bit, we can see how this works.

<CascadingValue Value="@FirstFruit">
    <CascadingValue Value="@SecondFruit">
        <Fruit></Fruit>
    </CascadingValue>
</CascadingValue>

@code {
    string FirstFruit { get; set; } = "Apple";
    string SecondFruit { get; set; } = "Banana";
}

The Fruit component is the same as it was before.

<p>The fruit is: @Name</p>

@code {
    [CascadingParameter] string Name { get; set; }
}

This time round both cascading values are of type string. The framework is going to look for the closest ancestor of the Fruit component with a matching type. In this scenario, the matching value will be SecondFruit.

By Name

The second, and most reliable, way to identify cascading parameters is by name. When you create a cascading value you have the option to give it a name. Then, when a child component wants to use it, it can ask for that value specifically by name.

Going back to our previous example with the two fruits, we can name the two cascading values.

<CascadingValue Value="@FirstFruit" Name="FirstFruit">
    <CascadingValue Value="@SecondFruit" Name="SecondFruit">
        <Fruit></Fruit>
    </CascadingValue>
</CascadingValue>

@code {
    string FirstFruit { get; set; } = "Apple";
    string SecondFruit { get; set; } = "Banana";
}

The Fruit component can now be specific about which value it wants to use.

<p>The fruit is: @Name</p>

@code {
    [CascadingParameter(Name = "FirstFruit")] string Name { get; set; }
}

Performance

This may all sound good but, what about performance?

All these cascading values are active by default. What do I mean by active? Well, if a cascading value is changed, then the new value will be sent down the component tree and all components that use it will be updated. Therefore, Blazor has to keep a watch on the value continuously. This takes up resources and, in a large application, could end up causing performance issues.

But what if you know your cascading value will never change? It would be nice to be able to tell Blazor not to keep a watch on it and not take up those resources. Well, you can.

On the CascadingValue component there is an IsFixed parameter. It is set to false by default, but if you set it to true you are telling Blazor not to monitor the value for changes.

<CascadingValue Value="@Fruit" IsFixed="true">
    <Fruit></Fruit>
</CascadingValue>

@code {
    string Fruit { get; set; } = "Peach";
}

Now Fruit is a fixed value and the framework won’t use up any resources setting up change detection.

Updating Cascading Values

There has been a bit of confusion when it comes to updating cascading values. The important thing to understand is that updates only cascade down; you can’t update a value from a descendant.

For example, say we had two components FruitBowl and LunchBox, which both received a cascading value.

<CascadingValue Value="@Fruit">
    <FruitBowl></FruitBowl>
    <LunchBox></LunchBox>
</CascadingValue>

@code {
    string Fruit { get; set; } = "Kiwi";
}
<!-- FruitBowl Component -->

<p>Fruit bowl contains @FruitName</p>

<button @onclick="ChangeFruit">Change Fruit</button>

@code {
    [CascadingParameter] string FruitName { get; set; }

    private void ChangeFruit() 
    {
        FruitName = "Pineapple";
    }
}
<!-- LunchBox Component -->

<p>Lunch box contains @FruitName</p>

@code {
    [CascadingParameter] string FruitName { get; set; }
}

If we run this code the output, ignoring the button, would look like this.

Fruit bowl contains Kiwi
Lunch box contains Kiwi

If we click the Change Fruit button in the FruitBowl component, it will not trigger an update in the LunchBox component. The output would look like this.

Fruit bowl contains Pineapple
Lunch box contains Kiwi

If you need to update a cascading value from a descendant then you will need to choose a different mechanism to achieve it. I’ve got a couple of options to show you, the first is using events.

Using Events

Using the example above, we can modify it to use an event to trigger an update of the cascading value.

<CascadingValue Value="@Fruit">
    <FruitBowl OnFruitChange="ChangeFruit"></FruitBowl>
    <LunchBox></LunchBox>
</CascadingValue>

@code {
    private string Fruit { get; set; } = "Kiwi";

    private void ChangeFruit(string newFruit)
    {
        Fruit = newFruit;
        StateHasChanged();
    }
}
<!-- FruitBowl Component -->

<p>Fruit bowl contains @Fruit</p>

<button @onclick="ChangeFruit">Change Fruit</button>

@code {

    [CascadingParameter] string Fruit { get; set; }

    [Parameter] public Action<string> OnFruitChange { get; set; }

    private void ChangeFruit()
    {
        OnFruitChange?.Invoke("Pineapple");
    }

}
<!-- LunchBox Component -->

<p>Lunch box contains @Fruit</p>

@code {
    [CascadingParameter] string Fruit { get; set; }
}

Now when we run the code above we will get the same initial output as before.

Fruit bowl contains Kiwi
Lunch box contains Kiwi

But now when we click the Change Fruit button we will get the following.

Fruit bowl contains Pineapple
Lunch box contains Pineapple

Using Complex Types

Another option is to pass a complex type down instead of an individual property, a component instance for example. Descendant components can then perform actions against the instance using its methods and bind to its properties.

Let’s look at an example.

<!-- FruitDispenser Component -->

<CascadingValue Value="this">
    <FruitBowl></FruitBowl>
    <LunchBox></LunchBox>
</CascadingValue>

@code {
    public string Fruit { get; private set; } = "Kiwi";

    public void ChangeFruit(string newFruit)
    {
        Fruit = newFruit;
        StateHasChanged();
    }
}
<!-- FruitBowl Component -->

<p>Fruit bowl contains @FruitDispenser.Fruit</p>

<button @onclick="ChangeFruit">Change Fruit</button>

@code {

    [CascadingParameter] FruitDispenser FruitDispenser { get; set; }

    private void ChangeFruit()
    {
        FruitDispenser.ChangeFruit("Pineapple");
    }

}
<!-- LunchBox Component -->

<p>Lunch box contains @FruitDispenser.Fruit</p>

@code {
    [CascadingParameter] FruitDispenser FruitDispenser { get; set; }
}

Just as in the previous example using events, when we run the code above we will get this initial output.

Fruit bowl contains Kiwi
Lunch box contains Kiwi

And when we click the Change Fruit button we will continue to get the following.

Fruit bowl contains Pineapple
Lunch box contains Pineapple

As you can see, we have achieved the same result as using events, with a bit less code.

The question is, should we really be passing complex types around like this just to update a single property value?

Drawbacks and Trade-offs

Just like any tool, there are drawbacks and trade-offs, and cascading values are no different.

While it’s early days, I can see a couple of things which may end up becoming an issue when bigger, more real-world applications become common.

Overuse

I can see cascading values being overused quite easily. You could end up with apps which declare a load of cascading values in their main layouts, with every other component declaring and using them as well. I think this could lead to code that’s hard to understand and difficult to follow.

Time will tell with this; we won’t really know until bigger applications get built, so I guess we’ll have to wait and see.

Updating Values

We looked at a couple of ways of updating a cascading value from a descendant earlier. In the events version, it was a simple example and the component which updated the value was declared within the same markup.

But what if we wanted to trigger an update from a component deeper in the component tree that was declared in a different component? Let me show you an example.

<!-- Index.cshtml -->

<CascadingValue Value="@SomeValue">
    <ChildComponent></ChildComponent>
</CascadingValue>
    
@code {
    string SomeValue { get; set; } = "Initial Value";
}
<!-- ChildComponent.cshtml -->

<AnotherChildComponent></AnotherChildComponent>
<!-- AnotherChildComponent.cshtml -->

<p>@SomeValue</p>

@code {
    [CascadingParameter] string SomeValue { get; set; }
    
    [Parameter] public Action<string> OnSomeValueChanged { get; set; }

    private void ChangeValue()
    {
        OnSomeValueChanged?.Invoke("New Value");
    }
}

With the setup we have above, how do we handle raising the OnSomeValueChanged event from the AnotherChildComponent to the Index component?

The answer is that we would probably have to declare an intermediate event on the ChildComponent as well. So the whole thing would look something like this.

<!-- Index.cshtml -->

<CascadingValue Value="@SomeValue">
    <ChildComponent OnChildSomeValueChanged="@UpdateValue"></ChildComponent>
</CascadingValue>
    
@code {
    string SomeValue { get; set; } = "Initial Value";
    
    void UpdateValue(string newValue)
    {
        SomeValue = newValue;
        StateHasChanged();
    }
}
<!-- ChildComponent.cshtml -->

<AnotherChildComponent OnSomeValueChanged="ChangeValue"></AnotherChildComponent>

@code {    
    [Parameter] public Action<string> OnChildSomeValueChanged { get; set; }

    private void ChangeValue(string newValue)
    {
        OnChildSomeValueChanged?.Invoke(newValue);
    }
}
    
<!-- AnotherChildComponent.cshtml -->

<p>@SomeValue</p>

@code {
    [CascadingParameter] string SomeValue { get; set; }
    
    [Parameter] public Action<string> OnSomeValueChanged { get; set; }

    private void ChangeValue()
    {
        OnSomeValueChanged?.Invoke("New Value");
    }
}

This is not good, in my opinion, and if you find yourself going down this route, I would suggest considering a common service to manage things instead.

In fact, I think if you’re passing object instances around as per the other updating example, then you should probably ask yourself if a service might be a better option as well.
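
To give a rough idea of what I mean, here’s a minimal sketch of such a service (all names here are hypothetical). It would be registered in ConfigureServices, for example with services.AddScoped<AppState>(), and injected into any component that needs the value, no matter how deep in the tree it sits.

```csharp
// A simple state container service - an alternative to cascading values
// when descendants need to update shared state. All names are examples.
public class AppState
{
    public string SomeValue { get; private set; } = "Initial Value";

    // Components subscribe to this event and call StateHasChanged
    // in their handler so they re-render when the value changes.
    public event Action OnChange;

    public void UpdateSomeValue(string newValue)
    {
        SomeValue = newValue;
        OnChange?.Invoke();
    }
}
```

Any component, however deep, can then @inject AppState and call UpdateSomeValue, with no intermediate events needed. Just remember to unsubscribe from OnChange when the component is disposed.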

Summary

That brings us to the end of this post; I hope you’ve managed to learn something today. We’ve covered what cascading values and parameters are, some ways they can be used and some possible drawbacks to look out for.

What are your opinions on them? Have you found any positives or negatives I’ve not mentioned? If so, please leave a comment below and tell me about your experience.

Blazored Modal Released

Quick Update

Wow, didn’t January just fly by! I just wanted to give you all a quick update since my last post, Announcing Blazored and Blazored Toast.

We’ve had the first official preview release of server-side Blazor. This is really cool; I’ve been having a play around with it and will probably write a post about it very soon.

I’ve also been busy working like mad on developing Blazored. I’ve moved my old LocalStorage and Localisation packages over to the new Blazored org. Both of these packages are now available via NuGet as Blazored.LocalStorage and Blazored.Localisation. I’ll delist the old packages soon to avoid any confusion. I’ve also written an article for Telerik, Creating a Reusable, JavaScript-Free Blazor Modal. Plus, I’m currently working on a pretty big change for the blog, but you’ll get to find out more about that soon! :)

Blazored Modal is here

Please note: There have been several breaking changes since this post was written. Please check out https://modal.blazored.com/ or https://github.com/Blazored/Modal for the most up to date instructions for using the modal.

Introducing Blazored Modal. This package was inspired by the article I mentioned above, but I’ve extended the functionality and made a few tweaks and improvements. It’s the fourth package available from Blazored, and there are a lot more on the way. Anyway, I’m sure you’re all itching to try it out, so let me take you through getting set up and using Blazored Modal.

Getting Started

The first thing you will need to do is install the package from NuGet. You can do this in a number of ways.

In Visual Studio you can right click on Dependencies and click Manage NuGet Packages. Then just search for Blazored.Modal and install from there.

If you prefer you can also use the command line either via the Package Manager using the following command:

Install-Package Blazored.Modal

Or via the dotnet CLI using this command:

dotnet add package Blazored.Modal

Once you have the package installed, you need to do a few things to get it set up.

Register services

First, you need to add the Blazored Modal services. To do this just use the AddBlazoredModal extension method in your Startup.cs ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddBlazoredModal();
}

Add imports

Second, you must add a few lines to your root _ViewImports.cshtml.

@using Blazored.Modal
@using Blazored.Modal.Services

@addTagHelper *, Blazored.Modal

Add BlazoredModal component

The third and final step is to add the <BlazoredModal /> component to your app’s MainLayout.cshtml.

@inherits BlazorLayoutComponent

<BlazoredModal />

<!-- Other code omitted for brevity -->

If you are using the package in a Blazor Server app then you will need to add a link to the CSS in your _Host.cshtml file.

<link href="_content/Blazored.Modal/blazored-modal.css" rel="stylesheet"/>

That’s it! Everything is now setup and you can start to use the modal. Let’s look at that next.

Usage

The modal is triggered via the IModalService; you don’t interact directly with the modal component. The interface exposes two things: the Show method and the OnClose event.

There are two overloads of the Show method: one takes a title and the type of the component to display; the other takes the same two arguments plus a ModalParameters instance, and is used when you need to pass values to the modal.

You can attach a handler to the OnClose event if required. This will be triggered once the modal closes in case you wish to perform any actions, such as a data reload.
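To make the shape of that API concrete, here’s a small self-contained sketch. This is my reconstruction for illustration, not the library source, and ModalParameters is modelled here as a simple string-keyed bag; check the Blazored.Modal repo for the real definitions.

```csharp
using System;
using System.Collections.Generic;

// Sketch of ModalParameters as a simple string-keyed parameter bag
public class ModalParameters
{
    private readonly Dictionary<string, object> _parameters = new Dictionary<string, object>();

    public void Add(string name, object value) => _parameters[name] = value;

    public T Get<T>(string name) => (T)_parameters[name];
}

// Sketch of the IModalService surface described above
public interface IModalService
{
    // Raised when the modal closes
    event Action OnClose;

    // Overload 1: title and the type of the component to display
    void Show(string title, Type componentType);

    // Overload 2: as above, plus parameters for the component
    void Show(string title, Type componentType, ModalParameters parameters);
}
```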

Showing a modal with no parameters

If you wish to show a component which requires no parameters, you can call the first overload of the Show method.

For example, say I had a component called MovieList which displayed a list of movies, and I wanted to show it from a page component called Movies. I would need to inject the IModalService into the Movies component, then add a button which invokes the modal.

@page "/movies"
@inject IModalService Modal

<button @onclick="@(() => Modal.Show("Movie List", typeof(MovieList)))">Show Movies</button>

Showing a modal and passing parameters

While the above is great for simple scenarios, there is usually a need to pass some information to whatever component the modal is rendering.

Keeping with the movie theme, if I wanted to edit the details of a movie I might want to pass up the current instance of the movie or at least the ID of it. That would look something like this.

@page "/movies"
@inject IModalService Modal

<button @onclick="@(() => EditMovie(11))">Edit Movie</button>

@code {

    void EditMovie(int Id) 
    {
        var parameters = new ModalParameters();
        parameters.Add("MovieId", Id);
        Modal.Show("Edit Movie", typeof(EditMovie), parameters);
    }

}

// EditMovie Component

@code {

    [CascadingParameter] ModalParameters Parameters { get; set; }
    
    int MovieId { get; set; }

    protected override void OnInitialized()
    {
        MovieId = Parameters.Get<int>("MovieId");
        LoadMovie(MovieId);
    }
    
}

Reacting to modal closed

The last feature is the ability to react to the modal having closed. This is achieved by attaching a handler to the OnClose event of the ModalService. This is currently a little limited as no data can be passed back, but in a future update you will be able to pass data back via this event.

Following on from the previous example, if I wanted to refresh my main Movies component when the modal closes, then I could do this.

@page "/movies"
@inject IModalService Modal

<button @onclick="@(() => EditMovie(11))">Edit Movie</button>

@code {

    void EditMovie(int Id) 
    {
        var parameters = new ModalParameters();
        parameters.Add("MovieId", Id);
        Modal.OnClose += RefreshMovies;
        Modal.Show("Edit Movie", typeof(EditMovie), parameters);
    }

    void RefreshMovies()
    {
        // Reload movie data
        // Unregister from event
        Modal.OnClose -= RefreshMovies;
    }

}

Wrapping up

I hope you like Blazored Modal. This is obviously a very early version and I plan on adding more features over time, as with all the Blazored libraries.

If you have any questions or any ideas for things you would like to see then please get in touch on GitHub, Twitter or in the comments.

Announcing Blazored and Blazored Toast!

It’s been a busy few weeks but I’ve got a couple of things I’ve been working on that I’d like to share with you all…

Blazored GitHub Org

Firstly, let me introduce Blazored. This is a new org I’ve created on GitHub and is going to be a dedicated space for all the Blazor libraries I want to build and share with the community. It’s something that I’ve been wanting to do for over 6 months but things just kept getting in the way.

I’ve got quite a few ideas for things I want to do and I’ll obviously talk about them here and on Twitter as I work through them. But if you have any ideas for libraries or controls you would like to see then please get in touch. Obviously, all of the Blazored packages will be open source so please get involved on GitHub with any feature requests, PRs or bug reports you have.

I’ll also be moving over the BlazoredLocalStorage & BlazoredLocalisation packages into the Blazored namespace and into the GitHub & NuGet orgs in the near future so look out for that, but I’ll probably stick something on Twitter when I do it.

Blazored.Toast

My second announcement is the first official Blazored package, Blazored.Toast. In a recent post I wrote, Blazor Toast notifications using only C#, HTML and CSS, I gave an example of toast notifications for Blazor apps. It was a bit rough round the edges and very much an example rather than something you would potentially use.

But I’ve taken the idea and written something which I think is a lot closer to a production-ready package. It’s also still a 100% JavaScript-free implementation, which is going to be my main aim when building for Blazor. I’ve got quite a few enhancements in the pipeline for it already, but you can add it to your Blazor applications today.

Published on VSM: An Introduction to Templated Components in Blazor

I’ve just had an article, An Introduction to Templated Components in Blazor, published on Visual Studio Magazine. The article tells you all about templated components and gives you some ideas about how you can take advantage of them in your Blazor projects.

This is actually my third article for VSM, but I’ve not mentioned the others on here before; I’m not sure why. From now on, I’ll be adding a quick post whenever I have an article published elsewhere.

Here are the links to my other posts on VSM.

Blazor Toast Notifications using only C#, HTML and CSS

This post is part of the second annual C# advent. Two new posts are published every day between 1st December and 25th December.

In this post, I’m going to show you how to build toast notifications for your Blazor/Razor Components applications. By the time we’re done you’ll be able to show four different toasts depending on the level of importance (information, success, warning and error). The best bit: this will all be achieved without using a single line of JavaScript.

For those of you who are new to Blazor and want a bit more info first, you can check out some of my other posts:


All of the code in this post is available on my GitHub account.


Overview

Before we go any further I just want to give you a quick run down of the solution we will be building. We are going to create a component called Toast which will react to events invoked by a ToastService. The ToastService will be able to be injected into other components in the application. These components can then use it to issue toast messages. Make sense? I hope so, if not, it should all become clear shortly.

Prerequisites

For this post I’m going to be using JetBrains Rider, but you can use Visual Studio or Visual Studio Code instead. You will also need to have the latest .NET SDK installed.

Creating a new project (optional)

I’m going to start by creating a new stand-alone Blazor WebAssembly app. But feel free to use a different Blazor template or, if you wish, you can add the upcoming code to an existing project.

Building the Toast service

The first thing we need to do is create a new folder called Services and add a couple of bits to it. The first is an enum called ToastLevel, in which we need to define the four different types of toast as follows.

public enum ToastLevel
{
    Info,
    Success,
    Warning,
    Error
}

The second is a new class called ToastService with the following code.

using System.Timers;

public class ToastService : IDisposable
{
    public event Action<string, ToastLevel>? OnShow;
    public event Action? OnHide;
    private Timer? Countdown;

    public void ShowToast(string message, ToastLevel level)
    {
        OnShow?.Invoke(message, level);
        StartCountdown();
    }

    private void StartCountdown()
    {
        SetCountdown();

        if (Countdown!.Enabled)
        {
            Countdown.Stop();
            Countdown.Start();
        }
        else
        {
            Countdown!.Start();
        }
    }

    private void SetCountdown()
    {
        if (Countdown != null) return;
        
        Countdown = new Timer(5000);
        Countdown.Elapsed += HideToast;
        Countdown.AutoReset = false;
    }

    private void HideToast(object? source, ElapsedEventArgs args) 
        => OnHide?.Invoke();

    public void Dispose() 
        => Countdown?.Dispose();
}

The ToastService is going to be the glue that binds any component wanting to issue a toast with the toast component which will actually display it. It has a single public method, ShowToast(), which takes the string to be shown in the toast and the level of the toast.

The service also has two events, OnShow and OnHide, and a timer, Countdown. Our toast component will subscribe to the events and use them to show and hide itself. The timer is used internally by the service and is set at 5 seconds. When it elapses it invokes the OnHide event.

Building the Toast component

With the toast service sorted we now need to build the toast component. This will work with the service to get toasts on the screen.

Let’s start by creating a new component in the Shared folder called Toast.razor. At the top of the component, we’re going to inject the ToastService and make the component implement IDisposable.

@inject ToastService ToastService
@implements IDisposable

Then in the code block add the following logic.

@code {
    private string? _heading;
    private string? _message;
    private bool _isVisible;
    private string? _backgroundCssClass;
    private string? _iconCssClass;

    protected override void OnInitialized()
    {
        ToastService.OnShow += ShowToast;
        ToastService.OnHide += HideToast;
    }

    private void ShowToast(string message, ToastLevel level)
    {
        BuildToastSettings(level, message);
        _isVisible = true;
        StateHasChanged();
    }

    private void HideToast()
    {
        _isVisible = false;
        StateHasChanged();
    }
    
    private void BuildToastSettings(ToastLevel level, string message)
    {
        switch (level)
        {
            case ToastLevel.Info:
                _backgroundCssClass = $"bg-info";
                _iconCssClass = "info";
                _heading = "Info";
                break;
            case ToastLevel.Success:
                _backgroundCssClass = $"bg-success";
                _iconCssClass = "check";
                _heading = "Success";
                break;
            case ToastLevel.Warning:
                _backgroundCssClass = $"bg-warning";
                _iconCssClass = "exclamation";
                _heading = "Warning";
                break;
            case ToastLevel.Error:
                _backgroundCssClass = "bg-danger";
                _iconCssClass = "times";
                _heading = "Error";
                break;
            default:
                throw new ArgumentOutOfRangeException(nameof(level), level, null);
        }

        _message = message;
    }

    void IDisposable.Dispose()
    {
        ToastService.OnShow -= ShowToast;
        ToastService.OnHide -= HideToast;
    }
}

Hopefully the above makes sense but let’s walk through it just to be sure.

To start, we’re defining a few fields that will be used in the markup portion of the component.

Next, we’re overriding one of Blazor’s component lifecycle methods, OnInitialized (you can read about Blazor’s other lifecycle methods in this post). In here, we’re wiring up the events we defined in the ToastService to handlers in the component.

Then we have the event handlers, ShowToast and HideToast. ShowToast takes the message and the toast level and passes them to BuildToastSettings, which sets the various CSS class names, the heading and the message. The _isVisible field is then set to true and StateHasChanged is called. HideToast just sets _isVisible to false and calls StateHasChanged.

Quick note on StateHasChanged

You may be wondering what StateHasChanged is and why are we calling it? Let me explain.

A component usually needs to re-render when its state changes, for example when a property value updates. When the update comes from within the component itself, or via a value passed in using the [Parameter] attribute (i.e. something the component knows about and can monitor), a re-render is triggered automatically.

However, if an update to the component’s state comes from an external source, such as an event, this automatic process is bypassed and a manual call has to be made to let the component know something has changed. This is where StateHasChanged comes in.

In our case we’re updating the component’s values based on an external event, OnShow from the ToastService. This means we have to call StateHasChanged to let the component know it needs to re-render.

Now we have the component’s logic in place let’s move onto the markup.

<div class="cs-toast @(_isVisible ? "cs-toast-visible" : null) @_backgroundCssClass">
    <div class="cs-toast-icon">
        <i class="fa fa-@_iconCssClass" aria-hidden="true"></i>
    </div>
    <div class="cs-toast-body">
        <h5>@_heading</h5>
        <p>@_message</p>
    </div>
</div>

The markup defines a div that has a bit of logic on it to toggle the cs-toast-visible class based on the _isVisible field. We then have a div for the icon and a div for the body of the toast.

We also need some styling to go with our markup.

.cs-toast {
    display: none;
    padding: 1.5rem;
    color: #fff;
    z-index: 999;
    position: absolute;
    width: 25rem;
    top: 2rem;
    border-radius: 1rem;
    right: 2rem;
}

.cs-toast-icon {
    display: flex;
    flex-direction: column;
    justify-content: center;
    padding: 0 1rem;
    font-size: 2.5rem;
}

.cs-toast-body {
    display: flex;
    flex-direction: column;
    flex: 1;
    padding-left: 1rem;
}

.cs-toast-body p {
    margin-bottom: 0;
}

.cs-toast-visible {
    display: flex;
    flex-direction: row;
    animation: fadein 1.5s;
}

@keyframes fadein {
    from {
        opacity: 0;
    }

    to {
        opacity: 1;
    }
}

Putting everything together

We almost have a working toast component; we just need to wire up a couple of things and then we should be able to give it a test.

Registering with DI

We need to register our ToastService with Blazor’s DI container. This is done in Program.cs in the same way you would with any ASP.NET Core application.

builder.Services.AddScoped<ToastService>();

We’re registering the service as scoped, the reason for this is that it will give the correct behaviour in both Blazor WebAssembly and Blazor Server.

If you’re interested in how the various service lifetimes work in Blazor WebAssembly and Blazor Server applications checkout my imaginatively named post, Service lifetimes in Blazor.

Adding the Toast component to the main layout

We also need to add the Toast component into our MainLayout component as follows.

@inherits LayoutComponentBase

<Toast />

<div class="page">
    <div class="sidebar">
        <NavMenu/>
    </div>

    <main>
        <div class="top-row px-4">
            <a href="https://docs.microsoft.com/aspnet/" target="_blank">About</a>
        </div>

        <article class="content px-4">
            @Body
        </article>
    </main>
</div>

As well as a link to FontAwesome in the head tag of index.html.

<head>
    ...
    <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.6.1/css/all.css" integrity="sha384-gfdkjb5BdAXd+lj+gudLWI+BXq4IuLW5IT+brZEZsLFm++aCMlF1V92rMkPaX4PP" crossorigin="anonymous">
    ...
</head>

So we don’t have to use fully qualified names in our components when injecting the ToastService, we can add a @using directive to the _Imports.razor file in the root of the application.

@using BlazorToastNotifications.Services

Add toast calls to the index page

The last thing to do is to modify the Index component so we can show off our new toasts.

@page "/"
@inject ToastService toastService

<PageTitle>Index</PageTitle>

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

<button class="btn btn-info" @onclick="@(() => toastService.ShowToast("I'm an INFO message", ToastLevel.Info))">Info Toast</button>
<button class="btn btn-success" @onclick="@(() => toastService.ShowToast("I'm a SUCCESS message", ToastLevel.Success))">Success Toast</button>
<button class="btn btn-warning" @onclick="@(() => toastService.ShowToast("I'm a WARNING message", ToastLevel.Warning))">Warning Toast</button>
<button class="btn btn-danger" @onclick="@(() => toastService.ShowToast("I'm an ERROR message", ToastLevel.Error))">Error Toast</button>

The finished result!

With all that in place we should now be able to spin up our app and click each of the buttons to see the four different toast messages.

Summary

I hope you have enjoyed reading this post and if you’re new to Blazor I hope I’ve piqued your interest and inspired you to find out more. Here are a few links that are worth checking out.

I think it’s really exciting to see how much is already achievable in Blazor. Being able to create notifications like this using just HTML, CSS and C# without having to write a single line of JavaScript is just fantastic. And things are only going to get better as WebAssembly and .NET Core runtimes continue to develop.

All the code for this post can be found on GitHub. I will also be packaging this all up into a NuGet package in the next couple of days so you can just install it into your Blazor projects.

Simple Localisation in Blazor

Firstly, let me apologise for the lack of blog posts in November, I was busy getting married and enjoying some time away with the wife! But I’m now all recharged and normal service should now be resumed.

In this post, I’m going to show you a way to set the current culture in your Blazor apps based on the user’s browser. Just to be clear, this is going to be for client-side only apps. I’ve actually bundled this up as a NuGet package to make things a bit easier if you want to do the same in your app.

As a little disclaimer, I’ve not had a vast amount of experience with localising applications, I’ve mainly worked on in-house systems, so if I’m missing anything please let me know in the comments. But after doing a bit of testing this method seems to work pretty well.

Blazor and (lack of) localisation

For those of you who are not aware, Blazor (client-side) does not currently have a built-in mechanism for handling localisation. This is not Blazor’s fault, however; it is due to a missing timezone implementation in the Mono WASM runtime, which is being tracked here. This means that Blazor applications have no current culture, and calling something like DateTime.Now will return UTC regardless of the user’s settings.

When developing client-side applications this is obviously a bit of an issue and can cause a lot of confusion for the user. Take a date such as 01/02/2018. Is that the 1st February 2018 or 2nd January 2018? Well that depends where in the world you are. Here in the UK it would be the 1st February. But in the US it would be 2nd January.
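To make the ambiguity concrete, here’s a small stand-alone sketch (not from the original post) formatting the same date under both cultures, using the ICU-based formatting of modern .NET:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // The same DateTime renders differently depending on the culture
        var date = new DateTime(2018, 2, 1);
        Console.WriteLine(date.ToString("d", new CultureInfo("en-GB"))); // 01/02/2018
        Console.WriteLine(date.ToString("d", new CultureInfo("en-US"))); // 2/1/2018
    }
}
```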

After a bit of messing about this is what I’ve come up with.

Finding the users locale

First I need to know the locale of the user. This is usually expressed in the form of a culture code, which looks like this: en-GB. This code represents English (United Kingdom). It turns out browsers actually expose a few different ways to get this information.

Like most things JavaScript, not all of these options return the same thing or what you might expect. My initial attempt at solving this problem was to use Intl.DateTimeFormat().resolvedOptions().locale.

This seemed to work fine; however, I run a Mac and swap between macOS and Windows 10 running on Parallels. I noticed that on macOS this returned en-GB as I was expecting, but on Windows 10 it returned en-US. I checked all my settings and everything appeared to be correctly set to UK culture, which made me concerned that this approach wasn’t working correctly.

After a bit of Googling and reading I decided to go with a combination of the other options. Most people seem to agree this gives the most accurate result in the majority of situations. It looks like this.

getBrowserLocale: function () {
    return (navigator.languages && navigator.languages.length) ? navigator.languages[0] : navigator.userLanguage || navigator.language || navigator.browserLanguage || 'en';
}

Setting CurrentCulture in Blazor

Now I have a way of getting the user’s locale, I just need a way of setting CultureInfo.CurrentCulture in Blazor.

I started by doing the following in the Main method of Program.cs just to see if it worked.

public static void Main(string[] args)
{
    CultureInfo.CurrentCulture = new CultureInfo("en-GB");
    CreateHostBuilder(args).Build().Run();
}

I then just printed out DateTime.Now on the default Blazor template homepage component.

IT WORKED!

Before

After

That’s great, but it needs to be set dynamically based on the value from the browser, not hardcoded in Program.Main. It also needs to be set before anything tries to render any UI. The Startup.Configure method seemed the best bet, so I created an extension method on IBlazorApplicationBuilder. In order to make this work I had to make the call synchronous by downcasting JSRuntime to IJSInProcessRuntime. The end product looks like this.

public static void UseBrowserLocalisation(this IBlazorApplicationBuilder app)
{
    var browserLocale = ((IJSInProcessRuntime)JSRuntime.Current).Invoke<string>("blazoredLocalisation.getBrowserLocale");
    var culture = new CultureInfo(browserLocale);

    CultureInfo.CurrentCulture = culture;
    CultureInfo.CurrentUICulture = culture;
}

Wiring it up

All that’s left to do is add a call to UseBrowserLocalisation in Startup.Configure, and I should now have culture info in Blazor! Just to be sure, I printed the current culture information on the home component as well, as you can see in the screenshot below.

public void Configure(IBlazorApplicationBuilder app)
{
    app.UseBrowserLocalisation();
    app.AddComponent<App>("app");
}

Conclusion

In this post I’ve shown a way to set the current culture in a client-side Blazor application based on settings in the user’s browser. As it turns out, it wasn’t too bad; as usual, the code can be found on my GitHub. I’ve also packaged it all up as a NuGet package, which you can install from the NuGet package manager or using the following dotnet CLI command.

dotnet add package Blazored.Localisation

If you have any questions or have found a massive hole in my code please use the comments below or feel free to raise a PR on GitHub.

Service Lifetimes in Blazor

If you’ve had previous experience with ASP.NET Core apps you may have used the built-in dependency injection system. For those who haven’t you can check out the Microsoft Docs site for more info.

When registering services with the service container you must specify the lifetime of the service instance. You can specify one of three options: singleton, scoped, or transient.

Singleton services are created once and the same instance is used to fulfil every request for the lifetime of the application.

Scoped services are created once per request. Within a request you will always get the same instance of the service across the application.

Transient services provide a new instance of the service whenever they are requested. Given a single request, if two classes needed an instance of a transient service they would each receive a different instance.

In this post, I want to show some slight differences in behaviour with dependency injection lifetimes in client-side Blazor and server-side Blazor. I’m going to create a client-side and a server-side Blazor app. In each one I’m going to create the following 3 interfaces and classes, one scoped to each of the 3 lifetimes above.

public interface ISingletonService 
{
    Guid ServiceId { get; set; }
}

public interface IScopedService 
{
    Guid ServiceId { get; set; }
}

public interface ITransientService 
{
    Guid ServiceId { get; set; }
}

public class SingletonService : ISingletonService
{
    public Guid ServiceId { get; set; }

    public SingletonService()
    {
        ServiceId = Guid.NewGuid();
    }
}

public class ScopedService : IScopedService
{
    public Guid ServiceId { get; set; }

    public ScopedService()
    {
        ServiceId = Guid.NewGuid();
    }
}

public class TransientService : ITransientService
{
    public Guid ServiceId { get; set; }

    public TransientService()
    {
        ServiceId = Guid.NewGuid();
    }
}

As you can see, they all expose a ServiceId property which is set in the constructor. I’m going to display it for each service on two different pages in each app, the home page and the counter page. I’m then going to do four tests and record the values for each one.

  1. Load the home page.
  2. Navigate to the counter page.
  3. Perform a full page refresh.
  4. Open the application in a new tab.

By doing this we should be able to get a clear understanding of how each of the lifetimes behave.

Let’s get started.

Blazor

I’ve created a new Blazor app and added each of the services to the services container with the appropriate lifetime scopes. I’ve then changed the Index and Counter components to display the ServiceId from each instance.
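The registration code isn’t shown in the post; assuming the standard Microsoft.Extensions.DependencyInjection extension methods, it would look something like this:

```csharp
// Registering one service per lifetime (the exact location depends on the
// template: Startup.ConfigureServices or Program.cs)
services.AddSingleton<ISingletonService, SingletonService>();
services.AddScoped<IScopedService, ScopedService>();
services.AddTransient<ITransientService, TransientService>();
```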

@page "/"
@inject ISingletonService singletonService
@inject IScopedService scopedService
@inject ITransientService transientService

<h1>Blazor Service Lifetimes</h1>

<SurveyPrompt Title="How is Blazor working for you?" />

<div class="row">
    <div class="col-md-12">
        <ul>
            <li>Singleton Service - @singletonService.ServiceId</li>
            <li>Scoped Service - @scopedService.ServiceId</li>
            <li>Transient Service - @transientService.ServiceId</li>
        </ul>
    </div>
</div>
@page "/counter"
@inject ISingletonService singletonService
@inject IScopedService scopedService
@inject ITransientService transientService

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

<hr />

<div class="row">
    <div class="col-md-12">
        <ul>
            <li>Singleton Service - @singletonService.ServiceId</li>
            <li>Scoped Service - @scopedService.ServiceId</li>
            <li>Transient Service - @transientService.ServiceId</li>
        </ul>
    </div>
</div>

@code {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

Let’s see what happens when we start up the app.

Load home page

Reload the page

Open app in new incognito tab

Ok, that’s a lot of GUIDs! Let’s work our way through the results.

At first glance things look pretty normal in the first two tests. The singleton and transient services are behaving as expected, but the ID for the scoped service is the same on both pages. This is because Blazor doesn’t have the concept of a scoped lifetime, at least not currently; scoped simply acts the same as singleton.

In the final two tests we can see we are getting totally different results for each service. This is because Blazor is running on the client only and a full page refresh or opening the app in a new tab creates a new instance of the application.

Now let’s take a look at server-side Blazor.

Server-side Blazor

Once again, I’ve started with a fresh new app and registered all the services with the service container. I’ve also updated the Index and Counter components to display the service IDs, the same way I did for the Blazor app.

Let’s run the same tests and see what we get.

Load home page

Reload the page

Open app in new incognito tab

Again, at first glance things look the same as the Blazor app. We are getting the same singleton instance across both pages and different transient instances. We are also getting the same scoped instance across pages just as we saw previously.

I guess this isn’t really a surprise, after all, client-side and server-side Blazor are just different hosting models for the same framework. But look at the last two tests, the singleton service is the same as it was for the first two tests but the scoped services are different.

Unlike client-side Blazor, server-side Blazor lives on the server, well, almost. When a user loads a server-side Blazor application, a SignalR connection is established between the client and the server. Scoped services are scoped to this connection. This means that the user will continue to receive the same service instance for the duration of their session, as it’s all considered part of the same request.

This explains why we are getting a different scoped instance for the last two tests, each test is creating a new request hence a new scoped service instance.

Wrapping up

I hope you have a better understanding of how service scopes work in both client-side and server-side Blazor.

From our testing we now know that in Blazor apps there are actually only 2 service lifetimes, singleton and transient. And that we can count on these scopes behaving as expected while running the app in a single session.

With server-side Blazor, we saw all three service lifetimes are available; however, scoped instances behaved a bit differently. The scoped service lived much longer than a scoped service in, say, a traditional MVC application.

Building a blogging app with Blazor: Adding Authentication

Last time I added editing and deleting to the blogging app, this finished off the admin functions. In this final post I’m going to add authentication to protect those admin functions. Let’s get started.

The Server

For the purposes of this demo app I’m going to add basic authentication using JSON Web Tokens. The majority of the server code is inspired by this blog series by Jon Hilton. Just to be clear though, in a real app you will need a more robust way of authenticating usernames and passwords. But as our focus is more on the Blazor side of things, this will be fine.

I’m going to start by adding an appsettings.json file to the project with the following key-value pairs.

{
    "JwtSecurityKey": "RANDOM_KEY_MUST_NOT_BE_SHARED",
    "JwtIssuer": "https://localhost",
    "JwtExpiryInDays": 14
}

Then I’ll add the following to Startup.cs.

public IConfiguration Configuration { get; }

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
    Configuration = builder.Build();
}

This code loads the key-value pairs from appsettings.json into an IConfiguration instance so they are available for use within the app.

Next I’m going to add a LoginController with a Login method; this is where the Blazor client will submit the username and password.

public class LoginController : Controller
{
    private readonly IConfiguration _configuration;
    
    public LoginController(IConfiguration configuration)
    {
        _configuration = configuration;
    }
    
    [HttpPost(Urls.Login)]
    public IActionResult Login([FromBody] LoginDetails login)
    {
        if (login.Username == "admin" && login.Password == "SuperSecretPassword")
        {
            var claims = new[]
            {
                new Claim(ClaimTypes.Name, login.Username)
            };
            var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_configuration["JwtSecurityKey"]));
            var creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);
            var expiry = DateTime.Now.AddDays(Convert.ToInt32(_configuration["JwtExpiryInDays"]));
            var token = new JwtSecurityToken(
                _configuration["JwtIssuer"],
                _configuration["JwtIssuer"],
                claims,
                expires: expiry,
                signingCredentials: creds
            );
            
            return Ok(new { token = new JwtSecurityTokenHandler().WriteToken(token) });
        }
        
        return BadRequest("Username and password are invalid.");
    }
}

Once again, hard-coding usernames and passwords this way is not good. In a real app I’d be checking against something like Azure AD.

I’m not going to go into detail about what this code is doing. As I mentioned before, you can read all about it in Jon Hilton’s blog. But to summarise: if the username and password match, a valid JWT will be generated and returned to the caller. Otherwise, the caller will receive a 400 Bad Request.
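For anyone curious about what the client actually receives, a JWT is just three base64url-encoded segments in the form header.payload.signature. The following is a self-contained sketch using a hypothetical, unsigned token (not one produced by the controller above) to show the shape:

```csharp
using System;
using System.Text;

class JwtPeek
{
    // Base64url uses '-' and '_' instead of '+' and '/', and drops the padding,
    // so restore standard base64 before decoding.
    internal static string Decode(string segment)
    {
        var s = segment.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4)
        {
            case 2: s += "=="; break;
            case 3: s += "="; break;
        }
        return Encoding.UTF8.GetString(Convert.FromBase64String(s));
    }

    static void Main()
    {
        // Hypothetical token: the header and payload are readable by anyone;
        // only the third segment (the signature) proves they weren't tampered with.
        var token = "eyJhbGciOiJIUzI1NiJ9.eyJuYW1lIjoiYWRtaW4ifQ.c2ln";
        var parts = token.Split('.');

        Console.WriteLine(Decode(parts[0])); // {"alg":"HS256"}
        Console.WriteLine(Decode(parts[1])); // {"name":"admin"}
    }
}
```

This is also why the JwtSecurityKey must never be shared: the payload is only signed, not encrypted.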

I need to add a couple of items to the shared project, the LoginDetails class and a new login route to the Urls class.

public class LoginDetails
{
    public string Username { get; set; }
    public string Password { get; set; }
}
public const string Login = "api/login";

Back in the Startup.cs I need to enable authentication and specifically JWT bearer authentication. First I’ll add the following to the ConfigureServices method above the call to register the MVC services.

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options => 
        {
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateIssuer = true,
                ValidateAudience = true,
                ValidateLifetime = true,
                ValidateIssuerSigningKey = true,
                ValidIssuer = Configuration["JwtIssuer"],
                ValidAudience = Configuration["JwtIssuer"],
                IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["JwtSecurityKey"]))
            };
        });

Then I’ll add the following to the Configure method just above the app.UseMvc statement.

app.UseAuthentication();

I’ve now got my app set up to use JWT bearer authentication. All that’s left to do is add the [Authorize] attribute above any endpoints that require an authorised user. In the BlogPostsController, those are the AddBlogPost, UpdateBlogPost and DeleteBlogPost methods.

That concludes the changes needed in the server project, now let’s move on to the client.

The Client

I’m going to start by adding a NuGet package called Blazored.LocalStorage. This is a simple library I built to provide access to the browser’s local storage APIs from Blazor. I’m going to be using it to store the JWT that comes back from the server after a successful login.

Next is the AppState class. This will contain the log in and log out methods as well as track whether the user is logged in. This class will use the library above to save the auth token to local storage.

public class AppState
{
    private readonly HttpClient _httpClient;
    private readonly ILocalStorage _localStorage;

    public bool IsLoggedIn { get; private set; }

    public AppState(HttpClient httpClient,
                    ILocalStorage localStorage)
    {
        _httpClient = httpClient;
        _localStorage = localStorage;
    }

    public async Task Login(LoginDetails loginDetails)
    {
        var response = await _httpClient.PostAsync(Urls.Login, new StringContent(Json.Serialize(loginDetails), Encoding.UTF8, "application/json"));

        if (response.IsSuccessStatusCode)
        {
            await SaveToken(response);
            await SetAuthorizationHeader();

            IsLoggedIn = true;
        }
    }

    public async Task Logout()
    {
        await _localStorage.RemoveItem("authToken");

        // Clear the bearer token so subsequent requests are anonymous
        _httpClient.DefaultRequestHeaders.Authorization = null;
        IsLoggedIn = false;
    }

    private async Task SaveToken(HttpResponseMessage response)
    {
        var responseContent = await response.Content.ReadAsStringAsync();
        var jwt = Json.Deserialize<JwToken>(responseContent);

        await _localStorage.SetItem("authToken", jwt.Token);
    }

    private async Task SetAuthorizationHeader()
    {
        if (!_httpClient.DefaultRequestHeaders.Contains("Authorization"))
        {
            var token = await _localStorage.GetItem<string>("authToken");
            _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
        }
    }
}

The code is pretty straightforward; the main work is done in the Login method. It posts the login details to the API and, if the call succeeds, saves the token to local storage, applies the authorization header to the HttpClient, and marks the user as logged in.

I need to register AppState with the DI container. This is done in Startup.cs in the ConfigureServices method. I’ll also add the services for BlazoredLocalStorage while I’m here as I forgot to do this earlier.

public void ConfigureServices(IServiceCollection services)
{
    services.AddLocalStorage();
    services.AddSingleton<AppState>();
}

I can now track the logged in state of a user and I’ve got the ability to perform log in and log out requests. Now I’m going to make some changes to the MainLayout component to make use of some of that functionality.

First, I’m going to add a model class as currently I’ve only got the component markup defined. Up till now there hasn’t been any logic required in this component. But as I’ve said previously, I prefer to keep the logic separated from the markup as much as possible.

public class MainLayoutModel : BlazorLayoutComponent
{
    [Inject] protected AppState AppState { get; set; }

    protected async Task Logout()
    {
        await AppState.Logout();
    }
}

Notice I’m inheriting from BlazorLayoutComponent here not the usual BlazorComponent. This is because BlazorLayoutComponent exposes a RenderFragment called Body which holds the content to be rendered in the layout.

Now I have access to AppState and I have a log out method on my layout component. I’m going to make some changes to the main menu.

<ul class="navbar-nav ml-auto">
    <li class="nav-item">
        <NavLink href="/" Match="NavLinkMatch.All">Home</NavLink>
    </li>
    @if (AppState.IsLoggedIn)
    {
    <li class="nav-item">
        <NavLink href="/addpost">Add Post</NavLink>
    </li>
    <li class="nav-item">
        <button class="logout" @onclick=@Logout>Log Out</button>
    </li>
    }
    else
    {
    <li class="nav-item">
        <NavLink href="/login">Log In</NavLink>
    </li>
    }
</ul>

As you can see, I’ve added a check to see if the user is logged in; if they are, I show the link to add a post. I’m also showing a button which allows the user to log out if they wish.

If the user isn’t logged in then I’m showing a log in link which will send them to the log in component I’m going to build next.

I’m going to add a new folder called Login to the Features folder. Then add the following class, Login.cshtml.cs.

public class LoginModel : BlazorComponent
{
    [Inject] private AppState _appState { get; set; }
    [Inject] private IUriHelper _uriHelper { get; set; }
    
    protected LoginDetails LoginDetails { get; set; } = new LoginDetails();
    protected bool ShowLoginFailed { get; set; }

    protected async Task Login()
    {
        await _appState.Login(LoginDetails);

        if (_appState.IsLoggedIn)
        {
            _uriHelper.NavigateTo("/");
        }
        else
        {
            ShowLoginFailed = true;
        }
    }
}

Then I’m going to add the component markup, Login.cshtml.

@page "/login" 

@layout MainLayout
@inherits LoginModel

<WdHeader Heading="WordDaze" SubHeading="Please Enter Your Login Details"></WdHeader>

<div class="container">
    <div class="row">
        <div class="col-md-4 offset-md-4">
            <div class="editor">
                @if (ShowLoginFailed)
                {
                    <div class="alert alert-danger">
                        Login attempt failed.
                    </div>
                }
                <input type="text" @bind=@LoginDetails.Username placeholder="Username" class="form-control" />
                <input type="password" @bind=@LoginDetails.Password placeholder="Password" class="form-control" />
                <button class="btn btn-primary" @onclick="@Login">Login</button>
            </div>
        </div>
    </div>
</div>

The component renders a simple form with username and password inputs. When a user attempts to log in, a call is made to the Login method on the AppState class. Once this completes, the IsLoggedIn property is checked on AppState. If it’s true, the user is redirected to the home page and will now see the links available to authenticated users. If it’s false, a message is displayed stating the login attempt failed.

At this point things are looking pretty good! You should be able to fire up the app, log in, submit a post, and edit and delete it. Pretty cool, but there are a couple of little things I want to tidy up before I call it a day.

Currently a non-authenticated user can go directly to the add post component, and while they won’t be able to post anything, this feels a bit clunky. I can check whether the user is logged in from the OnInitAsync method; if they’re not, I can simply redirect them back to the home page. That should work nicely.

[Inject] private AppState _appState { get; set; }

protected override async Task OnInitAsync()
{
    if (!_appState.IsLoggedIn) 
    {
        _uriHelper.NavigateTo("/");
    }

    if (!string.IsNullOrEmpty(PostId))
    {
        await LoadPost();
    }
}

That’s much better. The only other issue I have is the hard-coded author name. Now that I have an authenticated user, it would be good to use their name on posts. So I’m going to remove the hard-coded value from the SavePost method on the PostEditorModel class. Then, back in the server project, I’m going to update the AddBlogPost method on the BlogPostsController to the following.

[Authorize]
[HttpPost(Urls.AddBlogPost)]
public IActionResult AddBlogPost([FromBody]BlogPost newBlogPost)
{
    newBlogPost.Author = Request.HttpContext.User.Identity.Name;
    var savedBlogPost = _blogPostService.AddBlogPost(newBlogPost);

    return Created(new Uri(Urls.BlogPost.Replace("{id}", savedBlogPost.Id.ToString()), UriKind.Relative), savedBlogPost);
}

All done.

Wrapping up

In this post I’ve covered adding authentication to my blogging application. A user can now log in and add new posts and edit or delete existing ones. This has been achieved by the use of JSON web tokens.

This post also marks the end of this series on building a blogging app with Blazor. I hope you’ve enjoyed reading them and have found something useful. As always, if you have any questions or comment then please let me know below. You can find all the code for this series on my GitHub.

Building a blogging app with Blazor: Editing & Deleting Posts

To recap my last post, I added the ability to add a blog post. I wanted the writing experience to be clean and efficient so I added Markdown support. I also removed the hard-coded data I’d been using. In this post I’m going to add the ability to edit and delete posts. Let’s get started.

The Server

I’m going to start by adding two new methods to the BlogPostService: UpdateBlogPost and DeleteBlogPost.

public void UpdateBlogPost(int postId, string updatedPost, string updateTitle)
{
    var originalBlogPost = _blogPosts.Find(x => x.Id == postId);

    originalBlogPost.Post = updatedPost;
    originalBlogPost.Title = updateTitle;
}

public void DeleteBlogPost(int postId) 
{
    var blogPost = _blogPosts.Find(x => x.Id == postId);

    _blogPosts.Remove(blogPost);
}

With these in place I have something to call from my controller. Before I add new endpoints though I’m going to add the routes in the Urls class in the shared project.

public const string UpdateBlogPost = "api/blogpost/{id}";
public const string DeleteBlogPost = "api/blogpost/{id}";

I know they are the same, but if in the future I wanted to adjust the routes independently, I could. Now the routes are taken care of, I need to add two new endpoints to my API controller.

[HttpPut(Urls.UpdateBlogPost)]
public IActionResult UpdateBlogPost(int id, [FromBody]BlogPost updatedBlogPost)
{
    _blogPostService.UpdateBlogPost(id, updatedBlogPost.Post, updatedBlogPost.Title);

    return Ok();
}

[HttpDelete(Urls.DeleteBlogPost)]
public IActionResult DeleteBlogPost(int id)
{
    _blogPostService.DeleteBlogPost(id);

    return Ok();
}
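As a side note, the reason the shared constants keep the {id} token is that the client builds concrete URLs from them with a simple String.Replace. A quick self-contained sketch of the pattern used later in this post:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Mirrors the constant in the shared Urls class.
        const string UpdateBlogPost = "api/blogpost/{id}";

        // The client substitutes the {id} token before issuing the request.
        var url = UpdateBlogPost.Replace("{id}", "7");
        Console.WriteLine(url); // api/blogpost/7
    }
}
```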

That’s all I need on the server, time to get going on the client-side.

The Client

The first thing I want to do is correct a bug which I spotted after writing the last post. I’d been getting some weird issues with routing on the client. If I tried to view a blog post directly by typing the URL in the address bar, nothing would load. Also, when clicking the home link from, say, the add post page, the entire app would hard refresh.

After a bit of checking I realised that when I implemented the theme I forgot to add the <base /> tag to the <head> section. This is used by Blazor as the base URI for requests and for the router to understand what routes to handle. I just need the following to sort the issue.

<head>

    ...

    <title>WordDaze - Blazor Powered Blogging App</title>

    <base href="/" />

    ...

</head>

Now that’s fixed, I can get going with the changes for editing and deleting.

Editing Posts

For the purpose of this demo app I just want the ability to edit the title and the post itself. I’ve already got the UI for this in the AddPost component. In fact, with a few small upgrades it should serve for both adding and editing posts.

I’m going to start by changing its name to be a better representation of its responsibilities. PostEditor seems a better fit to me.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Blazor;
using Microsoft.AspNetCore.Blazor.Components;
using Microsoft.AspNetCore.Blazor.Services;
using Microsoft.JSInterop;
using WordDaze.Shared;

namespace WordDaze.Client.Features.PostEditor
{
    public class PostEditorModel : BlazorComponent
    {
        [Inject] private HttpClient _httpClient { get; set; }
        [Inject] private IUriHelper _uriHelper { get; set; }

        [Parameter] protected string PostId { get; set; }

        protected string Post { get; set; }
        protected string Title { get; set; }
        protected int CharacterCount { get; set; }
        protected BlogPost ExistingBlogPost { get; set; } = new BlogPost();
        protected bool IsEdit => !string.IsNullOrEmpty(PostId);

        protected ElementRef editor;

        protected override async Task OnInitAsync()
        {
            if (!string.IsNullOrEmpty(PostId))
            {
                await LoadPost();
            }
        }

        public async Task UpdateCharacterCount() => CharacterCount = await JSRuntime.Current.InvokeAsync<int>("wordDaze.getCharacterCount", editor);

        public async Task SavePost() 
        {
            var newPost = new BlogPost() {
                Title = Title,
                Author = "Joe Bloggs",
                Post = Post,
                Posted = DateTime.Now
            };

            var savedPost = await _httpClient.PostJsonAsync<BlogPost>(Urls.AddBlogPost, newPost);

            _uriHelper.NavigateTo($"viewpost/{savedPost.Id}");
        }

        public async Task UpdatePost() 
        {
            await _httpClient.PutJsonAsync(Urls.UpdateBlogPost.Replace("{id}", PostId), ExistingBlogPost);

            _uriHelper.NavigateTo($"viewpost/{ExistingBlogPost.Id}");
        }

        private async Task LoadPost() 
        {
            ExistingBlogPost = await _httpClient.GetJsonAsync<BlogPost>(Urls.BlogPost.Replace("{id}", PostId));
            CharacterCount = ExistingBlogPost.Post.Length;
        }
    }
}

Let me break down the changes above. I’ve added a PostId parameter. This will be used if the component is in edit mode; it will be populated from the PostId in the URL.

I’ve added an ExistingBlogPost property which will be populated by LoadPost when editing. I’ve also added an IsEdit property which I will use to show and hide UI in the component.

Finally, I’ve added an UpdatePost method which makes the call to the update API endpoint I built earlier.

Now for the component.

@page "/addpost"
@page "/editpost/{PostId}"

@layout MainLayout
@inherits PostEditorModel

@if (IsEdit)
{
    <WdHeader Heading="WordDaze" SubHeading="Edit Post"></WdHeader>
}
else 
{
    <WdHeader Heading="WordDaze" SubHeading="Add Post"></WdHeader>
}

<div class="container">
    <div class="row">
        <div class="col-md-12">
            @if (IsEdit)
            {
            <div class="editor">
                <input @bind=@ExistingBlogPost.Title placeholder="Title" class="form-control" />
                <textarea @ref="editor" @bind=@ExistingBlogPost.Post @onkeyup="@UpdateCharacterCount" placeholder="Write your post (Supports Markdown)" rows="25"></textarea>
                <div class="character-count text-black-50 float-left">@CharacterCount Characters</div>
                <button class="btn btn-primary float-right" @onclick="@UpdatePost">Update</button>
            </div>
            }
            else
            {
            <div class="editor">
                <input @bind=@Title placeholder="Title" class="form-control" />
                <textarea @ref="editor" @bind=@Post @onkeyup="@UpdateCharacterCount" placeholder="Write your post (Supports Markdown)" rows="25"></textarea>
                <div class="character-count text-black-50 float-left">@CharacterCount Characters</div>
                <button class="btn btn-primary float-right" @onclick="@SavePost">Post</button>
            </div>
            }
        </div>
    </div>
</div>

The first thing I’ve done is declared two @page directives. In Blazor, components can be accessed via multiple routes. In this case the component will be in edit mode if accessed via a route such as /editpost/1. But it will be in add mode if accessed from a route of /addpost.

I’ve used the IsEdit property to show different UI depending on which mode the component is in. If in edit mode, I’m binding the controls to the ExistingBlogPost property. In add mode, I’m binding to the original Title and Post properties.

Delete Post

With all the changes done to the PostEditor component I just need to add a delete button and have it call a method on the component model and I should be done.

public async Task DeletePost() 
{
    await _httpClient.DeleteAsync(Urls.DeleteBlogPost.Replace("{id}", ExistingBlogPost.Id.ToString()));

    _uriHelper.NavigateTo("/");
}
@if (IsEdit)
{
<div class="editor">
    <input @bind=@ExistingBlogPost.Title placeholder="Title" class="form-control" />
    <textarea @ref="editor" @bind=@ExistingBlogPost.Post @onkeyup="@UpdateCharacterCount" placeholder="Write your post (Supports Markdown)" rows="25"></textarea>
    <div class="character-count text-black-50 float-left">@CharacterCount Characters</div>
    <button class="btn btn-primary float-right" @onclick="@DeletePost">Delete</button>
    <button class="btn btn-primary float-right" @onclick="@UpdatePost">Update</button>
</div>
}

One last thing

I nearly forgot, I need a link to edit the post. On the ViewPost component I’m going to add a NavLink component in a new row under where I currently display the blog post.

<div class="row">
    <div class="col-md-12">
        <NavLink class="btn btn-primary float-right" href="@($"/editpost/{BlogPost.Id}")">Edit</NavLink>
    </div>
</div>

Wrapping up

With that, the penultimate post of this series comes to an end. In the last post I’m going to be putting all the add, edit and delete functionality behind a login. Some may argue I should have done that first but that would be boring.

Enjoying things so far? If you have any questions or suggestions then please let me know in the comments below. As always all the source code to accompany this series is available on GitHub.

Building a blogging app with Blazor: Add Post

In the last post, I added the ability to view a blog post. But the blog data was just coming from a hard-coded list. So in this post I’m going to add the ability to write a new post. And just like any good blogging platform, I want to be able to write using Markdown. Let’s get started.

The Server

Up till now I have been using a hard-coded list of blog posts, but I want to make things a bit more real world. The first thing I am going to do is create a new class in the server project called BlogPostService.cs with the following code.

using System;
using System.Collections.Generic;
using System.Linq;
using WordDaze.Shared;

namespace WordDaze.Server
{
    public class BlogPostService
    {
        private List<BlogPost> _blogPosts;

        public BlogPostService()
        {
            _blogPosts = new List<BlogPost>();
        }

        public List<BlogPost> GetBlogPosts() 
        {
            return _blogPosts;
        }

        public BlogPost GetBlogPost(int id) 
        {
            return _blogPosts.SingleOrDefault(x => x.Id == id);
        }

        public BlogPost AddBlogPost(BlogPost newBlogPost)
        {
            newBlogPost.Id = _blogPosts.Count + 1;
            _blogPosts.Add(newBlogPost);

            return newBlogPost;
        }
    }
}

Then I’m going to register this new service with the DI container in Startup.cs.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddResponseCompression(options =>
    {
        options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(new[]
        {
            MediaTypeNames.Application.Octet,
            WasmMediaTypeNames.Application.Wasm,
        });
    });

    services.AddSingleton(typeof(BlogPostService));
}

Finally, I’m going to rework the BlogPostsController.

using WordDaze.Shared;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Collections.Generic;
using System.Linq;

namespace WordDaze.Server.Controllers
{
    public class BlogPostsController : Controller
    {
        private readonly BlogPostService _blogPostService;

        public BlogPostsController(BlogPostService blogPostService)
        {
            _blogPostService = blogPostService;
        }

        [HttpGet(Urls.BlogPosts)]
        public IActionResult GetBlogPosts()
        {
            return Ok(_blogPostService.GetBlogPosts());
        }

        [HttpGet(Urls.BlogPost)]
        public IActionResult GetBlogPostById(int id)
        {
            var blogPost = _blogPostService.GetBlogPost(id);

            if (blogPost == null)
                return NotFound();

            return Ok(blogPost);
        }
    }
}

With the changes above, I’ve removed the hard-coded list of blog posts and replaced it with a new service which will be responsible for managing blog posts going forward. As this is a demo app, I’ve scoped the BlogPostService as a singleton and I’m using a simple list as a persistence mechanism. But in a real-world app the service could be backed by Entity Framework or some other ORM/data access mechanism.

I can now move on to adding the new endpoint for adding a blog post. First I’ll add the route to the Urls class in the shared project.

public const string AddBlogPost = "api/blogposts";

Then the following method to the BlogPostsController.

[HttpPost(Urls.AddBlogPost)]
public IActionResult AddBlogPost([FromBody]BlogPost newBlogPost)
{
    var savedBlogPost = _blogPostService.AddBlogPost(newBlogPost);

    return Created(new Uri(Urls.BlogPost.Replace("{id}", savedBlogPost.Id.ToString()), UriKind.Relative), savedBlogPost);
}

I now have an endpoint which I can post my new blog posts to. I think that will do for the server side of things. Let’s move on to the client.

The Client

The first thing I’m going to do is create a folder called AddPost in the Features folder. I’m then going to create a file called AddPost.cshtml.cs.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Blazor;
using Microsoft.AspNetCore.Blazor.Components;
using Microsoft.AspNetCore.Blazor.Services;
using WordDaze.Shared;

namespace WordDaze.Client.Features.AddPost
{
    public class AddPostModel : BlazorComponent
    {
        [Inject] private HttpClient _httpClient { get; set; }
        [Inject] private IUriHelper _uriHelper { get; set; }

        protected string Post { get; set; }
        protected string Title { get; set; }

        public async Task SavePost() 
        {
            var newPost = new BlogPost() {
                Title = Title,
                Author = "Joe Bloggs",
                Post = Post,
                Posted = DateTime.Now
            };

            var savedPost = await _httpClient.PostJsonAsync<BlogPost>(Urls.AddBlogPost, newPost);

            _uriHelper.NavigateTo($"viewpost/{savedPost.Id}");
        }
    }
}

And, as usual, a file called AddPost.cshtml.

@page "/addpost"
@layout MainLayout
@inherits AddPostModel

<WdHeader Heading="WordDaze" SubHeading="Add Post"></WdHeader>

<div class="container">
    <div class="row">
        <div class="col-md-12">
            <div class="editor">
                <input @bind=@Title placeholder="Title" class="form-control" />
                <textarea @bind=@Post placeholder="Write your post" rows="25"></textarea>
                <button class="btn btn-primary float-right" @onclick="@SavePost">Post</button>
            </div>
        </div>
    </div>
</div>

That is the AddPost component in place. So what’s going on here?

In the model class I’ve added a single method which is responsible for posting the new blog post back to the server. Once this is done it redirects the user to view the post using the IUriHelper. This helper is provided by Blazor and allows navigation to be performed programmatically via Blazor’s router.

The AddPost component itself is pretty straightforward. I’ve added an input for the title and bound it to the Title property on the model. And I’ve added a textarea which is bound to the Post property.

Spicing things up

While this is all very functional, it’s all a bit boring. Plain text just isn’t very exciting after all. As I said at the start, I want to be able to write posts using Markdown. It would also be quite nice to know exactly how many characters I’ve written. Let’s start with that.

Character Counter

In order to output a live character count I’m going to use some JS interop. The reason for this is that Blazor’s bind directive uses JavaScript’s onchange event under the covers, and this event doesn’t fire until the element loses focus. So if I want to see my character count change as I type, I’m going to have to resort to interop.


NOTE: Just before finishing this post I was discussing this in the Blazor Gitter chat, and there is in fact a way to handle it without resorting to Blazor’s JSRuntime API. It’s a little ugly and I wouldn’t recommend it, but I’ve added it here for reference. Continue reading to see the JS interop version.

<div class="editor">
    <input @bind=@Title placeholder="Title" class="form-control" />
    <textarea @bind=@Post @onkeyup="this.dispatchEvent(new Event('change', { 'bubbles': true }));" placeholder="Write your post (Supports Markdown)" rows="25"></textarea>
    <div class="float-left">@(Post?.Length ?? 0) Characters</div>
    <button class="btn btn-primary float-right" @onclick="@SavePost">Post</button>
</div>

The first step is to add a new js folder in the wwwroot and then a new file called site.js.

window.wordDaze = {
    getCharacterCount: function(element) {
        return element.value.length;
    }
}

I have written a very simple function that takes a reference to an element and returns the length of its value property. In line with the JS interop changes introduced in Blazor 0.5.0, I’m attaching my function to the global window object under a wordDaze scope.

With my JS file in place I just need to reference it in the index.html file.

<body>

    <app>Loading...</app>

    <script src="_framework/blazor.webassembly.js"></script>
    <script src="js/site.js"></script>
</body>

The last step is to make some changes in the AddPost component.

<div class="editor">
    <input @bind=@Title placeholder="Title" class="form-control" />
    <textarea @ref="editor" @bind=@Post @onkeyup="@UpdateCharacterCount" placeholder="Write your post (Supports Markdown)" rows="25"></textarea>
    <div class="float-left">@CharacterCount Characters</div>
    <button class="btn btn-primary float-right" @onclick="@SavePost">Post</button>
</div>

I’ve added a @ref attribute to the textarea. This captures a reference to the control which I can use when I call the getCharacterCount function. I’m also now binding to the onkeyup event, which will call a method called UpdateCharacterCount that I’m going to add to the model class in a second. I’ve also added a new div to display the character count. Now I’m going to make the changes to the model class.

protected int CharacterCount { get; set; }
protected ElementRef editor;

public async Task UpdateCharacterCount() => CharacterCount = await JSRuntime.Current.InvokeAsync<int>("wordDaze.getCharacterCount", editor);

I’ve added a new property for the character count, as well as a field to hold the reference to the textarea we saw previously. Finally, I’ve added the UpdateCharacterCount method which calls the JS function I created earlier.

That’s it, I now have a working character counter.

Markdown Support

To wrap things up I am going to add markdown support for my posts. I’m not actually going to make any changes in the AddPost component to achieve this. All the work needs to be done when displaying the post. So I’m going to be making some changes to the ViewPost component.

The first thing I’m going to need is something to parse any Markdown in my posts into valid HTML. One of the more popular .NET libraries for this is Markdig. As it’s .NET Standard compatible it should work just fine, so I’m going to install it into the client project.

With Markdig in place I’m going to make a slight change to the LoadBlogPost method on the ViewPost component.

private async Task LoadBlogPost() 
{
    BlogPost = await _httpClient.GetJsonAsync<BlogPost>(Urls.BlogPost.Replace("{id}", PostId));
    BlogPost.Post = Markdown.ToHtml(BlogPost.Post);
}

After getting the blog post from the API, I’m running the post through Markdig in order to convert any markdown to HTML. I now need to make one last change in the ViewPost component.

@((MarkupString)BlogPost.Post)

I’m casting the blog post to a MarkupString. This is another feature added in Blazor 0.5.0 and allows the rendering of raw HTML.

BE WARNED: Rendering raw HTML is a security risk!
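One way to reduce that risk, purely as a sketch and not something used in this app, is to run the generated HTML through a sanitizer before rendering it. The HtmlSanitizer NuGet package is one option; the namespace and method names below reflect its API at the time of writing, so treat them as an assumption and check the package docs.

```csharp
using Ganss.XSS; // dotnet add package HtmlSanitizer (assumed package and namespace)

class SanitizeDemo
{
    static void Main()
    {
        var sanitizer = new HtmlSanitizer();

        // Dangerous markup such as script tags is stripped out,
        // while normal formatting elements are kept.
        var safe = sanitizer.Sanitize("<em>hi</em><script>alert('xss')</script>");
        System.Console.WriteLine(safe);
    }
}
```

Sanitizing after the markdown conversion, just before the MarkupString cast, would keep the rendering path itself unchanged.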

That should be it; it’s now possible to write a blog post using markdown. Oh! I almost forgot, I haven’t added a link to be able to get to the add post page.

<ul class="navbar-nav ml-auto">
    <li class="nav-item">
        <NavLink href="/">Home</NavLink>
    </li>
    <li class="nav-item">
        <NavLink href="/addpost">Add Post</NavLink>
    </li>
</ul>

That’s everything. I have omitted a couple of things from this post which are in the repo on GitHub. Things such as CSS styles and a couple of superficial tweaks to some code. I’ve not talked about them in the post as they’re not really that interesting or relevant but feel free to check out the repo.

Wrapping up

In this post I have covered adding the ability to write new blog posts using markdown. I’ve also added a live character counter using JS interop, as well as a non-interop version. Finally, I’ve used the Markdig library and Blazor’s new MarkupString to output an HTML representation of the post. Next time I’m going to look at editing posts.

Enjoying things so far? If you have any questions or suggestions then please let me know in the comments below. As always all the source code to accompany this series is available on GitHub.

Building a blogging app with Blazor: View Post

In the previous post, I built the home page of the app. I added a WebAPI endpoint which returned some hard-coded blog posts, then built a couple of Blazor components to get that data and display it.

In this post I’m going to build the ability to view a blog post. Following a similar format to last time, I’ll add a new endpoint which returns a blog post, then move over to the Blazor side.

The Server

First off, I’m going to add a new constant to the Urls class in the shared project. This will be the route for the new endpoint.

public const string BlogPost = "api/blogposts/{id}";

With that in place, I’m going to add the following code to the BlogPostsController.

[HttpGet(Urls.BlogPost)]
public IActionResult GetBlogPostById(int id) 
{
    var blogPost = _blogPosts.SingleOrDefault(x => x.Id == id);

    if (blogPost == null)
        return NotFound();

    return Ok(blogPost);
}

All I’m doing here is attempting to find a blog post with the id specified. If I can’t find one I’m returning a 404 not found, otherwise I’m returning the blog post.

The Client

With the API in place, I’m going to add a new folder called ViewPost in the Features folder. I’m then going to add a new file, ViewPost.cshtml, with the following code.

@page "/viewpost/{postId}"
@layout MainLayout
@inherits ViewPostModel

<header class="masthead" style="background-image: url('/img/home-bg.jpg')">
    <div class="overlay"></div>
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-md-10 mx-auto">
                <div class="post-heading">
                    <h1>@BlogPost.Title</h1>
                    <span class="meta">Posted by <a href="#">@BlogPost.Author</a> on @BlogPost.Posted.ToShortDateString()</span>
                </div>
            </div>
        </div>
    </div>
</header>

<article>
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-md-10 mx-auto">
                @BlogPost.Post
            </div>
        </div>
    </div>
</article>

First, I’m defining the route for this component using the @page directive. I’ve then added a route parameter into the template to capture the ID of the post to view. I’m then rendering some pieces of metadata from the blog post in the <header> element. Finally, I’m outputting the actual blog post data.

As with the home component, the heavy lifting is being done in the ViewPostModel class, which this view is inheriting from. Let’s take a look at that.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Blazor;
using Microsoft.AspNetCore.Blazor.Components;
using WordDaze.Shared;

namespace WordDaze.Client.Features.ViewPost
{
    public class ViewPostModel : BlazorComponent 
    {
        [Inject] private HttpClient _httpClient { get; set; }

        [Parameter] protected string PostId { get; set; }

        protected BlogPost BlogPost { get; set; } = new BlogPost();

        protected override async Task OnInitAsync()
        {
            await LoadBlogPost();
        }

        private async Task LoadBlogPost() 
        {
            BlogPost = await _httpClient.GetJsonAsync<BlogPost>(Urls.BlogPost.Replace("{id}", PostId));
        }
    }
}

Similar to the home component in the previous post, I’m injecting an HttpClient and overriding the OnInitAsync method, where I make my call to the API and load the specified blog post.

I’ve added a PostId property which matches the route parameter I defined in the route template above. As with all component arguments in Blazor, they must be decorated with the [Parameter] attribute and be non-public. I then use this value in the LoadBlogPost method to call the API.

Time to refactor

I’ve noticed I’ve duplicated some markup. Both the Home and ViewPost components are using almost exactly the same code for the header. This is a great opportunity to refactor that code into a shared component.

using System;
using Microsoft.AspNetCore.Blazor.Components;

namespace WordDaze.Client.Shared
{
    public class WdHeaderModel : BlazorComponent
    {
        [Parameter] protected string Heading { get; set; }
        [Parameter] protected string SubHeading { get; set; }
        [Parameter] protected string Author { get; set; }
        [Parameter] protected DateTime PostedDate { get; set; }
    }
}

I’m starting by creating the model class for my new shared header component. As you can see it’s pretty simple. I’ve identified the different arguments I need the component to take and declared them as properties, remembering to decorate them with the parameter attribute.

@inherits WdHeaderModel

<header class="masthead" style="background-image: url('/img/home-bg.jpg')">
    <div class="overlay"></div>
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-md-10 mx-auto">
            <div class="@((Author != null && PostedDate != default(DateTime)) ? "post-heading" : "site-heading" )">
                <h1>@Heading</h1>
                @if (SubHeading != null)
                {
                <span class="subheading">@SubHeading</span>
                }
                @if (Author != null && PostedDate != default(DateTime))
                {
                <span class="meta">Posted by <a href="#">@Author</a> on @PostedDate.ToShortDateString()</span>
                }
            </div>
            </div>
        </div>
    </div>
</header>

I've replaced the hard-coded values with Razor code which will print the values of the component's arguments. I noted that there are two combinations of arguments I'm going to need to pass into this new component: on the Home component I need to pass a heading and a sub heading, while on the ViewPost component I need to pass a heading, an author and a posted date. With this in mind I've added some checks so I don't try to print out anything that may not have a value.

If you're wondering why I've called the component WdHeader, it's so I don't collide with the HTML `<header>` element. Currently with Blazor you can't give your components the same name as existing HTML elements but differentiate them by case, as you can in libraries like React. This is something the team have said they will look at implementing, but it's currently not a priority.

Now that I have the new header component, I just need to refactor the Home and ViewPost components to use it.

@page "/"
@layout MainLayout
@inherits HomeModel

<WdHeader Heading="WordDaze" SubHeading="A Blazor Powered Blogging App"></WdHeader>

<div class="container">
    <div class="row">
        <div class="col-lg-8 col-md-10 mx-auto">
            @foreach (var post in blogPosts)
            {
                <BlogPostPreview BlogPost=@post></BlogPostPreview>
                <hr />
            }
        </div>
    </div>
</div>

@page "/viewpost/{postId}"
@layout MainLayout
@inherits ViewPostModel

<WdHeader Heading=@BlogPost.Title Author=@BlogPost.Author PostedDate=@BlogPost.Posted></WdHeader>

<article>
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-md-10 mx-auto">
                @BlogPost.Post
            </div>
        </div>
    </div>
</article>

Oh! I almost forgot, I also need to add the WordDaze.Client.Shared namespace to the _ViewImports.cshtml file. This will save me having to add using statements to the Home and ViewPost components.

The finished article now looks like this.

Wrapping Up

I’m going to wrap things up there. I’m happy with the progress so far; the app is now able to list blog posts and to view specific posts. Everything is still very basic at the moment, but don’t worry, I’ll be adding a bit more flair in the next few posts. Next time I’m going to add the functionality for writing a blog post.

Enjoying things so far? If you have any questions or suggestions then please let me know in the comments below. As always all the source code to accompany this series is available on GitHub.

Building a blogging app with Blazor: Listing Posts

In part 1 of the series I did a load of setup work, getting the solution and project structures in place. I also added a theme to the site to make things look a bit prettier.

In this post, I’m going to work on the home page of the app. I’ll create the first endpoint on the API and get the Blazor app making requests to it. By the end of this post, the app will be able to show a list of blog posts on the home screen.

The Server

I’m going to start by creating the API to return the list of blog posts. The first thing I want to do is define a class to represent a blog post. In the shared project I’m going to create a new class called BlogPost.cs and add the following code.

using System;

namespace WordDaze.Shared
{
    public class BlogPost
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Author { get; set; }
        public DateTime Posted { get; set; }
        public string Post { get; set; }
        public string PostSummary 
        { 
            get {
                if (Post.Length > 50)
                    return Post.Substring(0, 50);

                return Post;
            }
        }
    }
}

One of the great things about Blazor is I can now reference this class from both my Server and Client projects. No more code duplication, how cool is that!

Now I’m going to define the endpoint which will return the list of blog posts. I’m going to create a new controller in the server project, called BlogPostsController.cs, with the following code.

using WordDaze.Shared;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Collections.Generic;

namespace WordDaze.Server.Controllers
{
    public class BlogPostsController : Controller
    {
        private List<BlogPost> _blogPosts { get; set; } = new List<BlogPost> {
            new BlogPost {
                Id = 1,
                Title = "If only C# worked in the browser",
                Post = "Lorem ipsum dolor sit amet...",
                Author = "Joe Bloggs",
                Posted = DateTime.Now.AddDays(-30)
            },
            new BlogPost { 
                Id = 2, 
                Title = "400th JS Framework released", 
                Post = "Lorem ipsum dolor sit amet...",
                Author = "Joe Bloggs",
                Posted = DateTime.Now.AddDays(-25)
            },
            new BlogPost { 
                Id = 3, 
                Title = "WebAssembly FTW", 
                Post = "Lorem ipsum dolor sit amet...",
                Author = "Joe Bloggs",
                Posted = DateTime.Now.AddDays(-20)
            },
            new BlogPost { 
                Id = 4, 
                Title = "Blazor is Awesome!", 
                Post = "Lorem ipsum dolor sit amet...",
                Author = "Joe Bloggs",
                Posted = DateTime.Now.AddDays(-15)
            },
            new BlogPost { 
                Id = 5, 
                Title = "Your first Blazor App", 
                Post = "Lorem ipsum dolor sit amet...",
                Author = "Joe Bloggs",
                Posted = DateTime.Now.AddDays(-10)
            },
        };

        [HttpGet(Urls.BlogPosts)]
        public IActionResult BlogPosts()
        {
            return Ok(_blogPosts);
        }
    }
}

There is nothing special here. I’ve just added a single endpoint which will return some hard-coded test data. I will replace this test data with real data in due course, but this will do for now.

The only thing I will point out is how I’ve defined the route for this endpoint: I’m taking advantage of the code sharing between client and server once again and defining my API routes in a Urls class in the Shared project.

namespace WordDaze.Shared
{
    public static class Urls
    {
        public const string BlogPosts = "api/blogposts";
    }
}

I like this as it gets rid of magic strings which I’m not a great fan of. Plus, if I change my URLs for any reason I can do it in one place.

The Client

The first thing I want to do is to add a new file in the Home feature called Home.cshtml.cs. Now this may seem a little odd so let me explain.

When creating a Blazor component, both markup and logic go into the same cshtml file. While there is nothing particularly wrong with this, I’ve spent a lot of time using both MVC and Angular. Because of this I’ve really grown to like the separation of the template, or view, from the logic behind it.

There have been many conversations around creating Blazor components as partial classes to allow a code-behind file. This would allow the view code and the logic to be separated; however, this is not currently possible. But there is an @inherits directive available which does allow this separation to be achieved now.

The idea is to have two files for each component. One file contains the view template, the other contains the C# logic. The view then inherits from the logic class via the @inherits directive.

In the new Home.cshtml.cs file I’m going to add the following code.

using Microsoft.AspNetCore.Blazor.Components;
using Microsoft.AspNetCore.Blazor.Services;
using Microsoft.AspNetCore.Blazor;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using WordDaze.Shared;
using System.Linq;

namespace WordDaze.Client.Features.Home
{
    public class HomeModel : BlazorComponent 
    {
        [Inject] private HttpClient _httpClient { get; set; }
        
        protected List<BlogPost> blogPosts { get; set; } = new List<BlogPost>();

        protected override async Task OnInitAsync() 
        {
            await LoadBlogPosts();
        }

        private async Task LoadBlogPosts() 
        {
            var blogPostsResponse = await _httpClient.GetJsonAsync<List<BlogPost>>(Urls.BlogPosts);
            blogPosts = blogPostsResponse.OrderByDescending(p => p.Posted).ToList();
        }
    }
}

I’m overriding the OnInitAsync method and calling LoadBlogPosts. This is using the injected HttpClient to call the API endpoint I created earlier. As in the controller I’m getting the URL from the shared Urls class. I’m then ordering the posts so the newest posts show first. That’s it for the HomeModel.

Next I want to adjust the markup in the Home.cshtml file.

@page "/"
@layout MainLayout
@inherits HomeModel

<header class="masthead" style="background-image: url('img/home-bg.jpg')">
    <div class="overlay"></div>
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-md-10 mx-auto">
            <div class="site-heading">
                <h1>WordDaze</h1>
                <span class="subheading">A Blazor Powered Blogging App</span>
            </div>
            </div>
        </div>
    </div>
</header>

<div class="container">
    <div class="row">
        <div class="col-lg-8 col-md-10 mx-auto">
            @foreach (var post in blogPosts)
            {
                <BlogPostPreview BlogPost=@post></BlogPostPreview>
                <hr />
            }
        </div>
    </div>
</div>

I’ve now added the @inherits directive as I explained earlier. I’ve also added a foreach to display each of the blog posts that were returned from the API. I want to encapsulate the markup for a blog post preview as I may want to use it elsewhere in the future, so I’m going to create a BlogPostPreview component.

using Microsoft.AspNetCore.Blazor.Components;
using WordDaze.Shared;

namespace WordDaze.Client.Features.Home
{
    public class BlogPostPreviewModel : BlazorComponent 
    {
        [Parameter] protected BlogPost blogPost { get; set; }
    }
}
@inherits BlogPostPreviewModel

<div class="post-preview">
  <NavLink href="@($"viewpost/{blogPost.Id}")">
    <h2 class="post-title">
      @blogPost.Title
    </h2>
    <h3 class="post-subtitle">
      @blogPost.PostSummary
    </h3>
  </NavLink>
  <p class="post-meta">Posted by
    <NavLink href="/">@blogPost.Author</NavLink>
    on @blogPost.Posted</p>
</div>

Following the same format as the Home component, the model class is very simple and just defines a property marked with Blazor’s Parameter attribute. If you are new to Blazor, any property on a component which is populated from a parent must be decorated with this attribute and be either private or protected. If I’m honest, I’m still trying to understand why this is better than just having public properties. But anyway, this is the way things are.

The view file then just defines the template for a blog post summary and outputs the various bits of blog post information where required.

With that in place I can now spin up the app by running the following command in the server project directory.

dotnet run

I now have a home page that is displaying a list of blog posts.

Wrapping Up

I think that’s some good progress and the app can show a list of blog posts on the home page, albeit hard-coded for now. In the next instalment I’ll move onto displaying individual blog posts.

Enjoying things so far? If you have any questions or suggestions then please let me know in the comments below. As always all the source code to accompany this series is available on GitHub.

Building a blogging app with Blazor: Getting Setup

In this series I’m going to be building a simple blogging platform using Blazor. I’ll be using the client-side configuration of Blazor for these posts, and as I’m working on a Mac I’m going to use VS Code as much as possible. While it would be great to stick to it the whole time, I have found that it’s sometimes necessary to spin up Windows and Visual Studio when debugging.

All the code is also available on my GitHub and I’m going to try and keep a branch for each post so those that are interested can see how the code develops.

Getting Setup

If you’re new to Blazor development then you will need to get a few things set up first.

Once you have the latest SDK installed you can install the Blazor project templates using the following command at your terminal of choice.

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

You can then run

dotnet new

You should then see the Blazor templates in the list.

For this project I’m going to use the Blazor (hosted in ASP.NET Core) template. Next, I’ll show you how I structure my solutions.
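For reference, with the templates of this period installed, creating the hosted flavour looked something like the following. The blazorhosted short name is my recollection of the template pack's naming, so verify it against the output of dotnet new before relying on it.

```shell
# Create a new hosted Blazor app (client + server + shared projects),
# assuming the 0.x-era template pack is installed.
dotnet new blazorhosted -o WordDaze
```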

Solution Structure

When I’m working on projects I pretty much always use the same structure and it looks like this.

I don’t have anything groundbreaking to tell you about this structure. It seems to be fairly common in the projects I follow on GitHub and I just like it.

For anyone new to Blazor, the client, server and shared projects are created by the Blazor template. They are pretty self-explanatory, but the server project is an ASP.NET Core WebAPI configured with the UseBlazor<T>() middleware, which serves the client project on startup. The shared project is referenced by both of the other projects and is a place to put shared files such as DTOs.

Project Structure (Client)

For this project I’m going to be using the following structure for my Blazor project.

I’m a big fan of feature folders when structuring projects. I find it scales well and makes finding items pretty easy. As an example in the screenshot above, I have refactored the files that come in the template to this structure.

Theme

Just a quick word on theming. I’ll obviously be focusing on the Blazor aspect during these posts. But I just thought I’d quickly mention that I will be using a theme called Clean Blog from startbootstrap.com for styling. I’m not going to talk through integrating it with the Blazor template as there is nothing special there. But you can view the code on the getting-setup branch on GitHub.

Wrapping Up

That’s it for the first instalment. In the next post I’m going to start getting the API in place and then get the home page showing a list of blog posts.

Enjoying things so far? If you have any questions or suggestions then please let me know in the comments below. As always all the source code to accompany this series is available on GitHub.

Introduction to Server-side Blazor

Please Note: As of .NET Core Preview 8, server-side Blazor is now known as a Blazor Server App. Client-side Blazor is known as a Blazor WebAssembly App.

It has been a significant couple of weeks in the Blazor world. First there was the 0.5.0 release which gave us server-side Blazor. Then in the ASP.NET community stand up on 7th August, Dan Roth told us that the server-side Blazor model would be part of .NET Core 3.0.

Today I’m going to cover what server-side Blazor is. I’ll talk a bit about Blazor’s architecture, then move on to how the server-side model works along with its pros and cons, and finish off by talking about how it fits in with the client-side model and how things will progress going forward.

Let’s get started.

What is Server-side Blazor?

I’m going to start by stating the obvious, server-side Blazor executes on the server. Now I’m sure anyone who’s used MVC will be asking themselves

So what’s the difference between server-side Blazor and MVC?!

Let me tell you a bit more about server-side Blazor and hopefully I can answer that question.

For starters I’ve told you a bit of a half truth, server-side Blazor doesn’t actually run completely on the server.

The great thing about Blazor’s architecture is its flexibility. It has the ability to separate the execution of a Blazor application from the rendering process. This opens up a mass of possibilities for application development.

For example, Blazor can be run in a Web Worker thread with events coming in from the UI thread and Blazor pushing back UI updates. This would allow Blazor to be used for developing Progressive Web Applications, or PWAs. Another option would be to develop desktop apps using Blazor with Electron. In fact, the Blazor team already have a working demo of a Blazor Electron app which you can go and play with right now.

Now that we know a bit more about Blazor’s architecture, let me explain how server-side Blazor actually works.

When you make the initial request your browser will download an index.html file and a small JavaScript file called blazor.server.js. Blazor then establishes a connection between your browser and the server using SignalR.

The server will execute the component logic server-side, producing HTML. This is then compared, on the server, to a snapshot of the DOM currently on the client. From this comparison, the changes required to make the DOMs match are produced. These changes are then packaged up and sent down to the browser via the SignalR connection. Once at the browser, the HTML changes are unpackaged and applied to the DOM.

When you interact with the application, say by clicking a button, that event is packaged up and sent back to the server via the same connection, where it will be processed and the resulting DOM changes sent back to the browser to be rendered.

Server-side Model Benefits

I’ve touched on a few of the benefits already, but let me go through them all properly.

.NET Core APIs

Because the application is running server-side, you have access to all of the .NET Core APIs. If you were looking to convert an MVC application, for example, everything you use in the MVC app should be available to you in Blazor.

SPA Feel

As I’ve mentioned previously, you will get that SPA feel with your server-side app. There will be no unnecessary page refreshes and the app will have a very rich, interactive feel.

Much Smaller Downloads

In comparison to the client-side version, server-side Blazor has a much smaller download. This leads to significantly quicker startup times for your application.

More Clients

Carrying on from the previous benefit, the smaller download, combined with the fact that server-side Blazor doesn’t require WebAssembly, means your app can be consumed by a wider range of clients. And as the main processing is done server-side, lower-powered devices can be targeted as well.

Development Experience

The current development experience with client-side Blazor is still very much a work in progress. While it’s improving with every release, you still can’t hit F5 in Visual Studio and get the normal debugging experience we’re all used to. That’s not the case with server-side apps: you get access to all the great developer tools you’re used to.

Single Codebase

Depending on how you architect it, it’s possible to write your Blazor app once, regardless of which rendering model you intend to use, server-side or client-side (the app must avoid using synchronous JS interop, for example). This is great for those who are more interested in full client-side Blazor: they can start developing apps with the server-side model, then switch over to the client-side model at a later date. And switching can take seconds, as it’s just a small change to how the app is bootstrapped.

No such thing as a free lunch

As with everything in life there are trade-offs. Server-side Blazor, at first glance, looks to be the best of both worlds. You can have all the benefits of executing your app server-side with all the nice UX of a single page app. While that is all true you are also going to have to weigh up the cost.

Online Only

Because server-side Blazor needs the server to do the actual work, if it’s not available your app will stop working. This may not be a biggie depending on what you need your application to do. If you are converting an MVC app for example you already have this limitation. But if you are looking to convert something like an Angular app then you are going to be losing something.

Latency

While SignalR is very efficient there is still a lot of chattiness with the server-side implementation. This can lead to a slightly sluggish feel at times with server-side Blazor apps. Every time a user interacts with your application a round trip has to be made to the server to process the interaction. This may return DOM updates which have to be processed and applied client-side.

Scalability

The server has to manage the connections to every client currently using your application. On top of this, it is also responsible for keeping track of the state of each of those clients. We don’t yet know how well server-side Blazor will scale for heavily used applications. This is something to keep in mind when planning your next app and considering server-side Blazor.

Persisting App State

A current shortcoming, as of 0.5.1, is that if there has been no communication between the server and client, the SignalR connection can be lost, at which point the app will break. If this happens, any app state will also be lost. Currently there is no nice story for developers in terms of maintaining app state; you must find your own way of managing things until the team put something in place.

What does this mean for client-side Blazor

For now it will continue to be an experimental project at Microsoft. But just to be clear:

The goal of the Blazor team is still to deliver the client-side model, that has not changed and will continue to be pursued.

The simple fact is that the client-side model relies not only on WebAssembly but also on the efforts of the Mono team and their WASM .NET runtime. While progress is being made extremely quickly, it’s not quite there yet: AOT is not an option, debugging is extremely limited, performance needs to be improved, download sizes are too big, etc.

The server-side model gives Microsoft a chance to get Blazor out there to people almost immediately. And as it runs on good old .NET Core it’s also got a solid base. It’s also important to remember that due to the Blazor architecture, with its separation of app execution from rendering, any developments made will benefit both models. So client-side is not going to get left behind.

Wrapping up

I’m going to leave it there for now. I hope I have managed to give you a good overview of what server-side Blazor is as well as some of the pros and cons. If you have any questions then please post them in the comments and I will do my best to answer them.

Oh, and coming back to the question right at the start of this post: what is the difference between server-side Blazor and MVC?

I think my answer is, a lot.

Global Error Handling in ASP.NET Core MVC

When generating a new MVC app the ExceptionHandler middleware is provided by default. This will catch any unhandled exceptions within your application and allow you to redirect to a specified route if you so choose. But what about other non-success status codes? Errors such as 404s will not be captured by this middleware as no exception was thrown.

To handle these types of errors you will need to use the StatusCodePages middleware. In this post I’m going to cover how to setup an MVC application to handle both exceptions as well as non-success status codes.

Handling Exceptions

I’m going to start here as the majority of the work is already done by the out of the box template. When you create a new MVC application you will get a Startup.cs with a Configure method which looks like this.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();
    app.UseCookiePolicy();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The line that is important here is app.UseExceptionHandler("/Home/Error"). This statement registers the ExceptionHandler middleware, which will direct the user to the /Home/Error route whenever an unhandled exception occurs.

All I’m going to do is make a small change so the line reads as follows.

app.UseExceptionHandler("/Error/500");

With that small change in place the next thing I’m going to do is add a new controller, ErrorsController, which is going to handle all the errors from the application.

public class ErrorsController : Controller
{
    [Route("Error/500")]
    public IActionResult Error500()
    {
        return View();
    }
}

NOTE: Do not add HTTP method attributes to the error action method. Using explicit verbs can stop some errors reaching the method.

I’m using attribute routing here as it’s my personal preference; feel free to use the default route templates if you prefer.

Next, I want to be able to get some decent information about what went wrong in my application so I can log it, email it, or run any other logic I deem necessary. To do this, I’m going to add the following line to my Error500 action. Note: You will also need to import the Microsoft.AspNetCore.Diagnostics namespace.

var exceptionData = HttpContext.Features.Get<IExceptionHandlerPathFeature>();

When the ExceptionHandler middleware runs it sets an item on the request's Features collection called IExceptionHandlerPathFeature. This is one of two features added; the other is IExceptionHandlerFeature. Both contain a property called Error which holds the details of the exception, but IExceptionHandlerPathFeature also contains the path from which the exception was thrown. For that reason I would always recommend using IExceptionHandlerPathFeature.

Now that I have some information about what went wrong, I want to do something with it. For the sake of this post I’m just going to add some details to the ViewBag so I can show them on a view. However, in a real application I would most likely want to log them and then show the user a friendlier screen.

[Route("Error/500")]
public IActionResult Error500()
{
    var exceptionFeature = HttpContext.Features.Get<IExceptionHandlerPathFeature>();

    if (exceptionFeature != null)
    {
        ViewBag.ErrorMessage = exceptionFeature.Error.Message;
        ViewBag.RouteOfException = exceptionFeature.Path;
    }

    return View();
}

I can now handle any unhandled exceptions my application throws, then print out the details. Next I want to deal with non-exception based issues, such as 404s or any other non-success status code my app may produce.

Non-success Status Codes

The StatusCodePages middleware deals with any status codes returned by the app that are between 400 and 599 and don’t have a body. There are three different extensions for the middleware available.

UseStatusCodePages

This is the simplest extension. When this is added to the pipeline any status code produced which matches the criteria above will be intercepted and a simple text response will be returned to the caller. Below is an example of what would be returned if a request was made for a page that didn’t exist.

Status Code: 404; Not Found

While this may have its uses, in reality you are probably going to want to do something a bit more sophisticated.
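For reference, registering this basic version is a single call in the Configure method. This is a minimal sketch reusing the Startup class from earlier in the post:

```cs
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Returns the plain text response shown above for any
    // 400-599 status code that has no response body.
    app.UseStatusCodePages();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
```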

UseStatusCodePagesWithRedirects

This extension will allow you to configure a user friendly error page rather than just the plain text option above.

This extension and the next are extremely similar in how they work, except for one key difference: this one redirects the response to the error page location, and in doing so the original error response is lost. The caller will see a 200 status code from the loading of the error page, not the status code which actually triggered the error.

Now this may not matter to you, but it is technically wrong, as you will be returning a success status code when there was actually an error. You will have to decide if this is OK for your use case.
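If the redirect behaviour does suit your use case, registration looks like this. A sketch; the /Error/{0} route matches the ErrorsController used throughout this post, with {0} receiving the status code:

```cs
// Issues a 302 redirect to e.g. /Error/404. The browser then receives
// a 200 from the error page itself, so the original status code is lost.
app.UseStatusCodePagesWithRedirects("/Error/{0}");
```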

UseStatusCodePagesWithReExecute

This is the configuration I will be using; it’s also the one I would suggest for most cases. The middleware will pick up any matching status codes being returned and then re-execute the pipeline, so when the user friendly error page is returned the correct error status code is returned as well.

I’m going to add the following line underneath the ExceptionHandler middleware from earlier.

app.UseStatusCodePagesWithReExecute("/Error/{0}");

I’ve used the {0} placeholder when defining my Error route. This will be populated with the status code which triggered the middleware. I can then pick this up in an action on the ErrorsController.

[Route("Error/{statusCode}")]
public IActionResult HandleErrorCode(int statusCode)
{
    var statusCodeData = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();

    switch (statusCode)
    {
        case 404:
            ViewBag.ErrorMessage = "Sorry the page you requested could not be found";
            ViewBag.RouteOfException = statusCodeData.OriginalPath;
            break;
        case 500:
            ViewBag.ErrorMessage = "Sorry something went wrong on the server";
            ViewBag.RouteOfException = statusCodeData.OriginalPath;
            break;
    }

    return View();
}

Much like with the ExceptionHandler middleware, the StatusCodePages middleware populates an object to give a bit more information about what’s happened. This time I can request an IStatusCodeReExecuteFeature from the current request's Features collection. With this I can access three properties: OriginalPath, OriginalPathBase and OriginalQueryString.

This gives me access to the route that triggered the status code, along with any querystring data that may be relevant. Again, for the purposes of this post I am just setting some ViewBag data to pass down to the view, but in a real-world application I would be logging this information somewhere.

In the example above I have defined a single action to handle all status codes. But you could quite easily define different actions for different status codes. If, for example, you wanted to have a dedicated endpoint for 404 status codes you could define it as follows:

[Route("Error/404")]
public IActionResult HandlePageNotFound()
{
    ...
}
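As a sketch, the body of such a dedicated action could reuse the same feature as before (the message text here is just illustrative):

```cs
[Route("Error/404")]
public IActionResult HandlePageNotFound()
{
    // Same feature as the combined handler, but scoped to 404s only.
    var statusCodeData = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();

    ViewBag.ErrorMessage = "Sorry, the page you requested could not be found";
    ViewBag.RouteOfException = statusCodeData?.OriginalPath;

    return View();
}
```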

Wrapping up

In this post I’ve gone over a couple of pieces of middleware you can use for global error handling in your ASP.NET Core MVC applications. As always if you have any questions please ask away in the comments and I will do my best to answer them.

It's been a while + Blazored Local Storage v0.3.0 Released

It has been far too long since I have written anything and it’s time for that to change! I have been busy with the day job and have struggled to find the time or, if I’m honest, the motivation to sit down and write blog posts.

I’ve also really struggled with “writer’s block”, which probably sounds really stupid as there are literally thousands of things to write about when it comes to software development. Writing doesn’t come very easily to me, and a blog post with a 3-4 minute read time can take me 4+ hours to write. That becomes a rather daunting prospect when trying to come up with new ideas and find the time to write about them alongside a full-time job.

Anyway, these are all ultimately excuses and as the saying goes nothing worth having comes easy.

So on that note…

Blazored Local Storage v0.3

Along with blogging I’ve also not been able to do much Blazor stuff for the last month or two. But I’m hopefully now going to be able to get back into things again.

I’ve started by updating BlazoredLocalStorage to version 0.3.0. This update brings an upgrade to Blazor 0.5.1, along with a rewrite of the underlying JavaScript to match Blazor's new interop APIs.

If you want to give it a go then all you need to do is run the following command from the Package Manager Console.

Install-Package BlazoredLocalStorage -Version 0.3.0

Or if you are using the dotnet cli then you will need to use this command.

dotnet add package BlazoredLocalStorage --version 0.3.0

Once you have the package installed you will need to add the following line to the ConfigureServices method of your startup.cs.

services.AddLocalStorage();

This will register the LocalStorage service with the DI container and from there you can inject ILocalStorage into your components or services.
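As a sketch of consuming the service from a component — the method names below are illustrative only, so check the GitHub repo for the exact v0.3.0 API:

```html
@inject ILocalStorage localStorage

@code {

    protected override async Task OnInitAsync()
    {
        // Illustrative calls; all APIs are async as of this release.
        await localStorage.SetItem("name", "Chris");
        var name = await localStorage.GetItem<string>("name");
    }

}
```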

It’s worth noting that as of this release all of the APIs are now async. Once again, this is to fit with Blazor's new interop model. I’ve also included a really basic sample application in the GitHub repo. If you have any issues or problems please let me know on GitHub.

Creating Blazor Component Libraries

What are component libraries?

With the release of Blazor 0.2.0 came the ability to create component libraries. Component libraries are how you can share components or JS interop functionality between projects. You can also generate nuget packages from a component library and share via nuget.org.

This is achieved with the introduction of a new project template, Blazor Library. However, at this time the template can only be generated via the dotnet CLI and not via Add > New Project in Visual Studio.

I jumped on this feature as soon as Blazor 0.2.0 was released and created my first ever nuget package, Blazored.LocalStorage. This is a small, simple library which gives access to the browser's local storage API in your Blazor apps. So if you need access to local storage in your Blazor app, please give it a go. If you’re interested in the source code, it’s on my GitHub.

Now I’ve shamelessly plugged my one and only nuget package, let’s look at how to build a reusable component. As an example, we’re going to create a simple in-memory list component, which will look like this when we’re done.

Full source code can be found on the blazor-component-libraries repo on my GitHub.

Getting Setup

If you need to get Blazor installed on your system, please check out my earlier post here which covers how to get Blazor installed.

The first thing we need to do is create a new blazor project.

Once that is done, open your command line of choice and navigate to the solution directory.

From here we are going to create a new directory for our component library project to live, then move into it.

mkdir SharedComponents
cd SharedComponents

Then run the following command using the dotnet CLI tool.

dotnet new blazorlib

This command is going to create us a Blazor Library project using the name of the folder we created above, SharedComponents. If you want to specify a different name you can use the -n yourprojectname option. To see all available options run dotnet new --help.

If you get an error running the above command, check you have the latest Blazor templates installed by running the following command.

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

Now we’ve created our component library, we just need to add it to our solution, which I’m going to do back in Visual Studio, but you could use the dotnet CLI if you wish.

Once that has been added your solution should look something like this.

Creating the Simple List Component

Before we start creating our shared component first delete Component1.cshtml and ExampleJsInterop.cs as well as everything in the content folder.

Now we have a clean base, let’s start by adding a new Razor View. Right-click on the project and select Add > New Item (or Ctrl+Shift+A). Call it SimpleList.cshtml.

Be sure to select Razor View and not Razor Page as a Razor page has a code behind which will not work with Blazor.

You can then add the following code to SimpleList.cshtml.

<div class="simple-list">

    <div class="simple-list-header">
        <h1>Simple List</h1>

        <input bind="@newItem" placeholder="Enter item to add to list" />
        <button onclick="@AddItem">Add Item</button>
    </div>

    <div class="simple-list-list">

        @if (!listItems.Any())
        {
            <p>You have no items in your list</p>
        }
        else
        {
            <ul>
                @foreach (var item in listItems)
                {
                    <li>@item</li>
                }
            </ul>
        }
    </div>

</div>


@functions {

    private List<string> listItems = new List<string>();
    private string newItem;

    private void AddItem()
    {
        if (string.IsNullOrEmpty(newItem))
            return;

        listItems.Add(newItem);
        newItem = "";
    }

}

Now we have defined our component it would be good if we could ship it with a default look and feel. In order to achieve this we are going to add a CSS file called simple-list.css to the content folder of our SharedComponents project.

The content folder in a Blazor library project is similar to the wwwroot folder in a Blazor project. It is where static assets such as JavaScript, CSS or images should be placed. When your shared library is consumed, whatever is placed in this folder will be included in the consuming Blazor project's index.html file.

In our case it means when we use the SimpleList component in the main Blazor project, our simple-list.css file will be injected into the index.html automatically for us.

Once you’ve created the simple-list.css in the content folder paste in the following css… I hope you like orange!

.simple-list {
    border: 1px solid #bdbdbd;
    width: 450px;
    margin: 0 auto;
}

.simple-list-header {
    padding: 20px;
    background: #ff9800;
    filter: progid:DXImageTransform.Microsoft.gradient(GradientType=0,startColorstr='#ff9800',endColorstr='#e65100');
    background: -webkit-linear-gradient(305deg,#ff9800 0%,#e65100 100%);
    background: linear-gradient(145deg,#ff9800 0%,#e65100 100%);
}

    .simple-list-header h1 {
        margin: 0 0 20px 0;
        font-size: 24px;
        font-weight: bold;
        text-align: center;
        color: #ffffff;
    }

.simple-list-header {
    text-align: center;
}

    .simple-list-header input {
        padding: 5px;
        width: 300px;
    }

    .simple-list-header button {
        padding: 5px;
    }

.simple-list-list {
    padding: 20px;
}

.simple-list-list p {
    text-align: center;
    margin-bottom: 0;
    font-style: italic;
    color: #616161;
}

.simple-list-list ul {
    margin: 0;
    padding: 0;
    list-style: none;
}

    .simple-list-list ul li:first-of-type {
        border-top: 1px solid #e0e0e0;
    }

    .simple-list-list ul li {
        padding: 10px 0;
        border-bottom: 1px solid #e0e0e0;
    }

That’s it, we’ve now created our component and styled it. So how do we consume it?

Using Shared Components

As we didn’t do this earlier go ahead and add a project reference to the SharedComponents project from your Blazor UI project. Once you’ve done this open the _ViewImports.cshtml file in the root of the Blazor UI project and add the following line.

@addTagHelper *, SharedComponents

What this line is doing is making the components in our SharedComponents project available to the components in this project, similar to a using statement.

With the tag helper in place all we have to do is add our component to the index.cshtml.

<SimpleList></SimpleList>

You can now run the solution and you should see something like this.

Congratulations! You have successfully created your first shared component.

Wrapping up

This was obviously a very simple example of a reusable component, but the power of this feature is plain to see. It will allow a whole ecosystem of Blazor specific packages to grow. I’m sure in time there will be packages available for lots of things such as Bootstrap, Telerik controls, Material UI, etc…

If you’ve got any questions please ask them in the comments. What would you like to see available as a Blazor component library?

Blazor Bites - Layouts

When developing an application you typically want it to maintain a consistent look and feel. Back in ye old days of web development we had static HTML files and common elements, such as headers and footers, would have to be copied between files. And a change to the layout would mean updating every file in the site. As time moved on we had includes with Classic ASP, master pages with WebForms and Razor has the concept of Layouts. All these things made defining and maintaining site wide look and feel much easier.

So how does Blazor handle layouts?

I’m glad you asked - Layout components.

Layout components are very similar to a normal Blazor component except for one difference: they inherit from LayoutComponentBase. This class has a single property, Body. This property must be specified in the Razor markup of the component, as this is where the content of the layout will be rendered.

Example

```html
@inherits LayoutComponentBase

<SiteHeader />

<div class="content">
    @Body
</div>
```

The example above is a simple layout component. I have also referenced another component, SiteHeader, inside the layout just to show that that is possible as well. In fact you can do anything with a layout component you can do with a regular one, such as inject services or use data binding. They can even be defined in pure C# if you like.

Loading a component into a layout

In order to load a component into a certain layout you must define which layout it is to use. This is done using the @layout directive. Then when the component is requested its content is loaded into the layout component at the point the @Body tag is defined.

If you are writing a component in pure C# and you wish to have it use a layout component, then you can decorate it with the [LayoutAttribute], as this is what the compiler converts the @layout directive to anyway.

Example

```html
@layout MainLayout
@page "/"

...
```


Example

```cs
// Using pure C#

[Layout(typeof(MainLayout))]
public class Overview : ComponentBase
{
    ...
}
```

Nesting layouts

At times you may want to nest layouts. You may wish to have a slightly different layout between certain areas of your site for example, while keeping a consistent header and footer. This is all supported with Blazor layout components.

Example

```html
@layout AccountLayout
@page "/account/expenses"

...
```


```html
<!-- AccountLayout -->

@layout MainLayout
@inherits LayoutComponentBase

<nav>
    <NavLink href="/account/incomes">
        Income
    </NavLink>
    <NavLink href="/account/expenses">
        Expenses
    </NavLink>
</nav>

<section>
    @Body
</section>
```

```html
<!-- MainLayout -->

@inherits LayoutComponentBase

<SiteHeader />

<div class='container-fluid'>
    <div class='row'>
        <div class='col-sm-12 padding-top-15'>
            @Body
        </div>
    </div>
</div>
```

In the code examples above, the final output would be the ExpensesComponent rendered in the AccountLayout's @Body tag, which would then be rendered in the MainLayout's @Body tag.

Summary

In this post, we started by understanding layout components in Blazor. We then looked at how to load a component into a specific layout, and finished by looking at nesting layouts.

Blazor Bites - Routing

Blazor comes with a router out of the box, similar to other SPA applications such as Angular. While the Blazor router is much simpler than those found in other frameworks, you can already build useful multi-page apps.

How does the router in Blazor actually work?

Blazor's router is just another component and implements IComponent the same as any other. Currently it takes an AppAssembly parameter, which is the assembly for the Blazor application.

The router uses the provided AppAssembly to find the component which matches the URL of the current request, then it loads that component.

Route Templates

In Blazor, you define routes using route templates. You can define a route template by adding the @page directive to the top of a component.

@page "/home"

<h1>Hello World</h1>

The above component would be loaded when the user navigated to www.mydomain.com/home.

If you are defining your component as a pure C# class then you can specify its route template by decorating it with the route attribute, [RouteAttribute("/home")]. Which is ultimately what the @page directive gets compiled to.
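For illustration, a pure C# version of the component above might look like this. A sketch using RenderTreeBuilder (from the Microsoft.AspNetCore.Components.Rendering namespace); the class name is an assumption:

```cs
[Route("/home")]
public class Home : ComponentBase
{
    // Builds the same markup as the Razor version: <h1>Hello World</h1>
    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        builder.OpenElement(0, "h1");
        builder.AddContent(1, "Hello World");
        builder.CloseElement();
    }
}
```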

It’s also valid to specify multiple route templates for a component. You can achieve this by defining multiple @page directives or [RouteAttribute]s.

@page "/"
@page "/home"

<h1>Hello World</h1>

Route Parameters

The example above is fine for simple routes, but what if you need to pass some data via the URI? For example, you have a product component that needs a product id to load that product's information. That is where route parameters come in.

When defining a route template you can use curly brackets to include a parameter, @page "/products/{ProductId}". This parameter is then assigned to a property of the same name on the component.

@page "/products/{ProductId}"
@inject IProductService ProductService

...

@code {

    [Parameter] public int ProductId { get; set; }
    
    private Product product;
    
    protected override async Task OnInitializedAsync()
    {
        product = await ProductService.GetProduct(ProductId);
    }

}

Similar to routes in ASP.NET Core, you can also define route constraints on route parameters. In the example above, you may wish to enforce that ProductId is an int. To achieve this you could change the route template as follows: @page "/products/{ProductId:int}".

If someone then tried to navigate to this component with a URI like /products/foo, the router would not match the route with the above component. The currently supported types for enforcement are: bool, datetime, decimal, double, float, guid, int and long.

Linking Pages

There are three ways to link pages: regular a tags, the NavLink component, and programmatic navigation.

To use traditional a tags you just have to specify the relative URI to the page you wish to link to. You will need a base tag defined in your index.html, but if you are using one of the Blazor templates this is already done for you. Blazor's router will then automatically handle navigation for you without causing any postbacks.

The next option is to use the NavLink component provided by Blazor. It takes an href as a parameter which it then uses to render a standard a tag. But what's really useful is that when the current URI matches the href it will add an active class to the link.

<!-- Defining link -->
<NavLink href="/home">
    Home
</NavLink>

<!-- Rendered link not matching current URI -->
<a class="null" href="/home">Home</a>

<!-- Rendered link matching current URI -->
<a class="active" href="/home">Home</a>

You can also define a Match parameter on a NavLink which tells the component how to decide if the current URI matches the href. There are currently two options available. All and Prefix.

All is the default and tells the NavLink component to apply the active class only when the whole URI matches. The second option is Prefix and this tells the NavLink component to apply the active class when the prefix of the current URI matches.

This is useful when you have a menu with sub sections where you may wish to apply styling to the section link and the currently active sub-section link.

<NavLink href="/expenses" Match=NavLinkMatch.Prefix>Expenses</NavLink>
<NavLink href="/expenses/shopping" Match=NavLinkMatch.All>Shopping</NavLink>
<NavLink href="/expenses/bills" Match=NavLinkMatch.All>Bills</NavLink>
<NavLink href="/expenses/groceries" Match=NavLinkMatch.All>Groceries</NavLink>

In this example, when the /expenses/bills URI was requested both the Expenses and Bills links would have the active class applied.

The final way is to navigate programmatically. To do this you will need to use NavigationManager. This helper contains a few handy methods, but the one we’re interested in is NavigateTo. This method takes a string, which is the URI to navigate to, then performs the navigation.

NavigationManager.NavigateTo("/home");

In order to use it you will need to inject it into your component or service. To inject into a component you can either use the @inject directive, or the [Inject] attribute.

@inject NavigationManager LocalNavigationManager

// C# only component
public class MyComponent : ComponentBase
{
    [Inject]
    protected NavigationManager LocalNavigationManager { get; set; }
    
    ...
}

If you need to use it from somewhere other than a component such as a service then you must use constructor injection.

public class MyService 
{
    private NavigationManager _navigationManager;
    
    public MyService(NavigationManager navigationManager)
    {
        _navigationManager = navigationManager;
    }
    
    ...
}

Summary

In this post, we started by understanding how Blazor's router works. We then moved on to route templates and route parameters. We finished off with how to link pages together using either anchor tags, the NavLink component or NavigationManager.

Blazor Bites - JavaScript Interop

It’s awesome we can now use C# in the browser with Blazor. But unfortunately we can’t do everything with it, yet. Currently, WebAssembly isn’t able to directly access the DOM API, which means that Blazor isn’t able to either.

So how do we manage this?… The answer is JavaScript interop.

Part of Blazor is implemented in, and lives in, the JavaScript world. It is through this interface that Blazor is able to manipulate the DOM to render the UI and to hook into various DOM events. It’s also how developers can register and call their own JavaScript functions from C# code.

Calling JavaScript functions from C#

To call a JavaScript function from C# it must be registered on the window object. Let’s look at how we can call the following function from C#. It is just a simple alert which will display whatever message is passed to it.

window.ShowAlert = (message) => {
    alert(message);
}

In order to make this call we need to inject an IJSRuntime into our C# code. In the example below, I’ve set up a basic component which has an input so we can enter a message, and a button which will invoke the JavaScript call.

@inject IJSRuntime jsRuntime

<input type="text" @bind="message" />
<button @onclick="ShowAlert">Show Alert</button>

@code {

    string message = "";

    private async Task ShowAlert()
    {
        await jsRuntime.InvokeAsync<object>("ShowAlert", message);
    }
}

Calling C# methods from JavaScript

What about calling into C# from JavaScript? Yes, that is also possible, and necessary. For example, if you are calling an asynchronous JavaScript function you will need it to call back into your code when the operation completes.

In order for a C# method to be called from JavaScript it must be decorated with the [JSInvokable] attribute, for example.

namespace JSInteropExamples
{
    public static class MessageProvider
    {
        [JSInvokable]
        public static Task GetHelloMessage()
        {
            var message = "Hello from C#";
            return Task.FromResult(message);
        }
    }
}

We can then call this from JavaScript using DotNet.invokeMethodAsync in the following way.

window.WriteCSharpMessageToConsole = () => {
    DotNet.invokeMethodAsync('JSInteropExamples', 'GetHelloMessage')
      .then(message => {
        console.log(message);
    });
}

Summary

In this post, we looked at what JavaScript interop is as well as why we need it. We then looked at an example of how to call a JavaScript function from C# using IJSRuntime. Finally, we looked at an example of how to call a static C# method from JavaScript.

JavaScript interop can be a complex aspect of developing with Blazor. I have written an in-depth post covering using JavaScript interop with Blazor, and would really recommend giving it a read before getting stuck in with code.

Blazor Bites - Data Binding & Events

Data Binding

Displaying data in Blazor is really simple and if you have any experience with Razor then this should all be very familiar to you. If not, don’t worry it’s really easy to pick up.

One-way binding

This is used for printing values in your views. These values could be strings such as a title in a <h1> tag or items in a list. But it can also be used to dynamically inject values into the HTML such as the name of a CSS class on an element.

Example

```html
<h1>@Title</h1>

<p>Welcome to my one-way binding example. Here's a list of colours.</p>

<ul>
    @foreach (var colour in Colours)
    {
        <li>@colour</li>
    }
</ul>

@if (DateTime.Now.DayOfWeek == DayOfWeek.Saturday || DateTime.Now.DayOfWeek == DayOfWeek.Sunday)
{
    <p class="@weekendFontStyle">It's the weekend!</p>
}
else
{
    <p>It's @DateTime.Now.DayOfWeek.ToString()</p>
}

@code {

    private string Title { get; set; } = "Welcome to one-way binding";
    private List<string> Colours { get; set; } = new List<string> {
        "Red", "Blue", "Green", "Yellow"
    };
    private string weekendFontStyle { get; set; } = "party-time";

}
```


Two-way Binding

You know how to print out values to a view, but what if you would like a user to be able to update those values? This is where two-way binding comes in.

Blazor uses a directive called @bind to achieve this. You can see an example of the @bind directive in the next section.

Formatting Bindings

If you wish to format a bound value you can use the @bind:format attribute. At the moment this is limited to just the DateTime type. With it you can provide a format string to specify how .NET values should be bound to attribute values.

Example

```html
<h1>Record your current favourite colour</h1>

<p>
    <input @bind="Name" />
</p>

<p>
    <select @bind="FavouriteColour">
        <option>Red</option>
        <option>Blue</option>
        <option>Green</option>
        <option>Yellow</option>
    </select>
</p>

<p>
    <input @bind="DateRecorded" @bind:format="dd/MM/yyyy" />
</p>

<hr />

<p>Hi @Name</p>
<p>Your favourite colour on @DateRecorded was @FavouriteColour</p>

@code {

    public string Name { get; set; }
    public string FavouriteColour { get; set; }

    private DateTime dateRecorded = DateTime.Now;
    public string DateRecorded
    {
        get => dateRecorded.ToShortDateString();
        set => DateTime.TryParse(value, out dateRecorded);
    }

}
```

Events

In Blazor, we can access virtually any event you would be able to use in JavaScript. The syntax looks like this, @on[eventname], and it is used as an attribute in your markup.

This attribute takes a delegate which is registered with the event. For example, <input @onkeyup="HandleKeyUp" /> will call the HandleKeyUp method whenever the onkeyup event is triggered. In the HandleKeyUp method you are able to accept a KeyboardEventArgs, which gives you access to the details of the event.

There are specific args available for most events, such as MouseEventArgs for mouse events. To see a complete list it's worth checking out this file in the Blazor source code.

You can also define your own events, for example, when doing component to component communication but I’ll cover that in a future post.

Example

```html
<input @onkeypress="LogKeyPressed" />

@code {

    private List<string> keyLog { get; set; } = new List<string>();

    void LogKeyPressed(KeyboardEventArgs eventArgs)
    {
        keyLog.Add(eventArgs.Key);
    }

}
```


Example

```html
<!-- Using a lambda -->

<button @onclick="@(e => buttonClicks++)">Click me</button>

<p>You clicked the button @buttonClicks times</p>

@code {

    private int buttonClicks { get; set; }

}
```

Summary

In this post, we’ve taken a look at one-way and two-way data binding. We also saw how we can format certain bind values. We then tackled events in Blazor and how you can use event args to gain extra data about the event.

Blazor Bites - Component Lifecycle Methods

Component Lifecycle Methods

When you create a component in Blazor it should derive from ComponentBase. There are two reasons for this.

First, ComponentBase implements IComponent, and Blazor uses this interface to locate components throughout your project; it doesn’t rely on a folder convention. Second, ComponentBase contains important lifecycle methods. Let’s take a look at what those are.

OnInitialized() / OnInitializedAsync()

Once the component has received its initial parameters from its parent in the render tree, the OnInitialized and OnInitializedAsync methods are called.

OnInitialized is called first, then OnInitializedAsync. Any asynchronous operations, which require the component to re-render once they complete, should be placed in the OnInitializedAsync method.

Both of these methods will only fire once in the component's lifecycle, as opposed to the other lifecycle methods which fire every time the component is re-rendered.

Example

```html
<h1>@Title</h1>
<p>This component was rendered at @TimeRendered</p>

@code {

    private string Title { get; set; }
    private string TimeRendered { get; set; }

    protected override void OnInitialized()
    {
        Title = "Hello World";
        TimeRendered = DateTime.Now.ToShortTimeString();
    }

}
```


Example```html
@inject IBudgetService BudgetService 

<h1>View Expenses Async</h1>

@if (Expenses == null)
{
    <p>Loading...</p>  
}
else
{
    <table>
        @foreach (var expense in expenses) 
        {
            <tr>
                <td>
                    @expense.Description
                </td>
                <td>
                    @expense.Amount
                </td>
            </tr>
        }
    </table>
}

@code {

    private Expense[] expenses;

    protected override async Task OnInitializedAsync()
    {
        expenses = await BudgetService.GetExpensesAsync();
    }

}
```

OnParametersSet() / OnParametersSetAsync()

The OnParametersSet and OnParametersSetAsync methods are called when a component is first initialised and each time new or updated parameters are received from the parent in the render tree.

Example

```html

<!-- Parent Component -->

<button @onclick="@IncrementCounter">Increment</button>

@code {

int CounterValue = 0;

void IncrementCounter()
{
    CounterValue += 1;
}

}
```


```html
<!-- Child Component -->

@counterOutput

@code {

    [Parameter] public int Counter { get; set; }

    private string counterOutput;

    protected override void OnParametersSet()
    {
        counterOutput = Counter.ToString() + " and counting...";
    }
    
}
```

OnAfterRender / OnAfterRenderAsync

The OnAfterRender and OnAfterRenderAsync methods are called after each render of the component. By the time they are called, all element and component references are populated. This is also the place to put any JavaScript interop calls when using server-side Blazor.

This means that if you need to perform an action which requires the elements of the component to be rendered in the DOM, such as attaching an event listener, these methods are where you can do it. Another great use for these lifecycle methods is JavaScript library initialisation, which requires DOM elements to be in place to work.
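As a minimal sketch of that idea, assuming a hypothetical initialiseChart JavaScript function registered on window, a component might initialise a charting library once its root element exists in the DOM:

```html
@inject IJSRuntime JSRuntime

<div id="chart"></div>

@code {
    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        // The chart div is guaranteed to be in the DOM at this point.
        // Guard on firstRender so the library is only initialised once.
        if (firstRender)
        {
            await JSRuntime.InvokeAsync<object>("initialiseChart", "chart");
        }
    }
}
```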

Misc

While the next couple of methods aren’t part of a component's lifecycle, they are very closely related to it, so I wanted to briefly cover them. These methods are ShouldRender and StateHasChanged.

ShouldRender()

This method returns a boolean indicating whether a component's UI should be rendered. However, it’s only consulted after the initial render of a component, meaning that even if it returns false the component will still render once. This is mentioned by Steve Sanderson in this GitHub issue.
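As a minimal sketch, a component could override ShouldRender to suppress intermediate re-renders while a long-running operation is in progress:

```html
@code {
    private bool suppressRender;

    // Returning false skips re-renders (after the initial render).
    protected override bool ShouldRender() => !suppressRender;

    private async Task LoadDataAsync()
    {
        suppressRender = true;   // pause re-rendering while we work
        await Task.Delay(1000);  // stand-in for real async work
        suppressRender = false;  // the next render will go ahead
    }
}
```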

StateHasChanged()

This method notifies the component that its state has changed and queues a re-render. It’s called automatically after any lifecycle method and can also be invoked manually to trigger a re-render.

The re-render it queues respects the value returned from ShouldRender. However, as mentioned above, ShouldRender is only consulted after the component has rendered for the first time.
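For example, here is a sketch of triggering a re-render manually from a timer callback. Because the callback runs outside Blazor's synchronisation context, StateHasChanged is wrapped in InvokeAsync:

```html
<p>Current time: @currentTime</p>

@code {
    private string currentTime = DateTime.Now.ToLongTimeString();
    private System.Threading.Timer timer;

    protected override void OnInitialized()
    {
        // A timer callback is not a lifecycle method, so Blazor must be
        // told explicitly that the component's state has changed.
        timer = new System.Threading.Timer(_ =>
        {
            currentTime = DateTime.Now.ToLongTimeString();
            InvokeAsync(StateHasChanged);
        }, null, dueTime: 0, period: 1000);
    }
}
```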

Summary

In this post, we’ve covered the various lifecycle methods available in Blazor. These are OnInitialized/OnInitializedAsync, OnParametersSet/OnParametersSetAsync and OnAfterRender/OnAfterRenderAsync.

We also had a brief look at two methods which are closely related to the Blazor component lifecycle, ShouldRender and StateHasChanged.

Blazor Bites - Creating Components

Creating Components

Like all modern front-end frameworks, Blazor has components at its core. Blazor uses a combination of C#, Razor and HTML to create components. There are four styles for creating components in Blazor.

One thing to note: regardless of which method you choose, all components ultimately end up as classes. If you’re interested to see what these classes look like, you can view them in the obj\Debug\netstandard2.1\Razor directory of any Blazor project.

Inline

This is the simplest way to create a component in Blazor and is what you will find bundled with the starter project. It is just a single .razor file which the Blazor compiler turns into a C# class at build time.

When using an inline style you add your view markup and logic all in the same file. Logic is separated by using a Razor codeblock as shown in the example below.

<!-- HelloWorld.razor -->

<h1>@Title</h1>

@code {
    const string Title = "Hello World - Inline";
}

Code behind with base class

With this style of component view markup and logic are separated into different files. This is achieved using the @inherits directive.

Using the @inherits directive instructs Blazor to derive the class generated from the Razor view from the class specified in the directive. The code behind class specified with the directive must itself derive from ComponentBase, which provides all the base functionality for components in Blazor.

A widely used convention when naming the base class file, shown in the example below, is to name it the same as the view file but with .cs appended. Visual Studio will then nest the files in the Solution Explorer, which helps keep things tidy and emphasises the relationship between the files.

// HelloWorld.razor.cs

public class HelloWorldBase : ComponentBase
{
    public const string Title = "Hello World - Code Behind";
}
<!-- HelloWorld.razor -->

@inherits HelloWorldBase

<h1>@Title</h1>

One thing to remember with this style is that the base class cannot have the same name as the view. As all razor views get compiled down to classes, if both the razor view and base class were named the same it would cause a compile time error.

Code behind with partial class

Partial classes have been supported for a long time now with Blazor components, although this was not always the case. If you want to separate your code block from your markup but you’re not keen on the base class approach, then this is the option for you.

The setup is very similar to the previous base class approach except the code-behind class can have the same name, it just needs to be marked with the partial keyword.

// HelloWorld.razor.cs

public partial class HelloWorld : ComponentBase
{
    public const string Title = "Hello World - Code Behind";
}
<!-- HelloWorld.razor -->

<h1>@Title</h1>

Class only

The final style for building components in Blazor is to use only a class. Ultimately, all components end up as classes anyway, so using this style could be seen as cutting out the middle man.

However, I would strongly recommend against this style for reasons I cover in this post.

// HelloWorld.cs

public class HelloWorld : ComponentBase
{
    public const string Title = "Hello World - Class Only";

    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        builder.OpenElement(1, "h1");
        builder.AddContent(2, Title);
        builder.CloseElement();
    }
}

As I stated before, the ComponentBase class contains all the base functionality for components. One of the methods it contains is BuildRenderTree(), which I have overridden in the code above. I have then used the RenderTreeBuilder instance to programmatically define my component's view. At runtime the component will output the following HTML.

<h1>Hello World - Class Only</h1>

Summary

In this post, I’ve covered four styles for defining components in Blazor. The first was to create components using just a single .razor file. The second was a code behind style with a base class containing the logic and a .razor view containing the markup. The third was a code behind using a partial class. The fourth was to manually build the component using just a C# class.

Blazor Bites - Creating a New Project

Prerequisites

Depending on which development experience you prefer, you can get up and running with Blazor using either Visual Studio 2019 or Visual Studio Code. Regardless of which you choose you must install the latest .NET Core 3 SDK.

You will also need to install the latest templates via the dotnet CLI.

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

Visual Studio

Start by downloading the latest Visual Studio 2019. Make sure to select the ASP.NET and Web Development workload during installation.

Visual Studio Code

If you’d prefer, you can use VS Code for development, which now has Blazor support. You can create and run a new project via the dotnet CLI. Run dotnet new to see a list of the available project types. The two types for Blazor are shown in the image below.

Creating a new project with Visual Studio

Once you’ve installed all the prerequisites, open up Visual Studio 2019 and create a new project. Select Blazor App from the list.

Give your application a name and click Create. You will then see the following screen where you can choose the type of Blazor app you want.

The two options are Blazor Server App and Blazor WebAssembly App.

Blazor Server App

This project type is for running Blazor on the server on the full ASP.NET Core runtime. It uses a SignalR connection to handle user interactions. You can read more about Blazor Server Apps in a post I published here.

Blazor WebAssembly App (Preview)

This option creates a Blazor app which runs on WebAssembly entirely in the client's browser. This is a stand-alone project type with no backing API. If you require a backing API, select the ASP.NET Core Hosted checkbox in the options panel on the right.

The API project is configured to serve the Blazor WebAssembly App as well as act as a backing API.

Once you’ve selected the project type you’re interested in click Create and after a few seconds you will have a new project ready to go.

You can run the project and you should see the basic starter app after a few seconds.

Summary

In this post we’ve covered how to set up our Blazor development tools using either Visual Studio 2019 or Visual Studio Code. We’ve also covered how to create a new project, either using the dotnet CLI or via Visual Studio. Finally, we looked at the different project types available to us for Blazor.

What is Blazor and why is it so exciting?

I’m just going to say it right from the start, .NET running in the browser. No plugins, no add-ons, no weird transpilation voodoo. THIS IS NOT Silverlight. Just .NET running in the browser.

If you’re a .NET developer who’s even remotely interested in web development, I should firmly have your attention. Now, what if I told you it’s not just an idea, but a reality, and you can go and try it for yourself right now? Say hello to Blazor.

A Word On WebAssembly

Before I go into detail about what the Blazor project is, first let’s cover WebAssembly. If you’ve never heard of WebAssembly or know very little then I recommend checking out the official site for more in depth info. I’m just going to give the 50,000 foot view.

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.

The above description comes from the official WebAssembly site. In summary, high-level languages can be compiled down to WebAssembly and run in the browser at native speeds. This is a really exciting advance for web development: many traditionally server-based languages can now potentially be run in the browser.

What’s more, WebAssembly is now supported by all the major browsers.

But what about legacy browsers? I hear you cry.

No problem, WebAssembly has you covered: it will gracefully fall back to JavaScript on older browsers and run as normal.

The reason I wanted to make you aware of WebAssembly is because this amazing piece of work is what makes the Blazor project possible.

What is Blazor?

Blazor is a .NET web framework which runs in the browser. Think Angular or React, but powered by C# and Razor. Developers create Blazor applications using a mixture of C#, Razor and HTML. These applications execute .NET assemblies using the Mono .NET runtime, implemented via WebAssembly. Blazor uses the latest web standards and requires no additional plugins or add-ons to run; this is not another Silverlight.

Blazor's History

Blazor started off as a personal project created by Steve Sanderson of Microsoft. He showed it off at NDC Oslo in 2017. This first version was built upon an interpreted .NET CIL (Common Intermediate Language) runtime called DotNetAnywhere. While the features were limited, the potential of Blazor was obvious right away.

Since this demo, Microsoft have added Blazor to their ASP.NET GitHub organisation as an experimental project. This means Microsoft are investigating both the demand and the technical issues around delivering Blazor as a full product before committing. At the time of writing the repo has just over 3300 stars and 17 contributors.

As part of the official adoption, Blazor has been rewritten from scratch by the ASP.NET team. DotNetAnywhere has been replaced with Mono, which offers a much more advanced and fully featured .NET runtime. Mono has been part of Microsoft since 2016 and is their official runtime for client platforms. Powering frameworks such as Xamarin and Unity, it makes a lot of sense for it to also power Blazor.

How does all this actually work?

As I mentioned, Blazor is a web framework similar to Angular or React. Blazor applications don’t contain the actual .NET runtime; that’s what Mono brings to the party. Mono aims to run in two modes via WebAssembly: interpreted and ahead-of-time (AOT) compiled.

Interpreted Mode

In this mode the Mono runtime is compiled down to WebAssembly, which can then be loaded and executed by the browser. Blazor application dlls, which are just standard .NET dlls, can then be loaded and executed by the Mono runtime.

Interpreted mode seems likely to be the quickest in terms of development speed as reloading application dlls is very quick.

AOT Mode

In Ahead-of-Time mode things are slightly different. The Blazor application is compiled directly to WebAssembly at build time. Parts of the Mono runtime are still loaded by the browser to handle low-level operations such as garbage collection, but essentially the application is executed as regular WebAssembly.

AOT mode has the potential to be the better option for production, as it would allow IL stripping, a .NET equivalent to tree shaking. However, this is all still being assessed.

What can I do with Blazor today?

Blazor is currently in the very early stages of its development. While this is a pre-alpha preview there is already a lot you can do with Blazor.

On 22nd March 2018 the first public preview was released. On 17th April 2018 the second public preview was released.

It is possible to create components, use dependency injection, do basic component navigation, use JavaScript interop, get IntelliSense, and maybe a few other bits but I haven’t had long to play with it so far. So already you can try out the framework and get a feel for what it’s going to be like to develop applications with it.

I think another point to highlight is how stable things feel even at this early stage. I installed all the required parts and had a running application in under 10 minutes. Had I not needed to install the latest Visual Studio preview, it would have been going in less than 5. That's really impressive!

What’s next for Blazor?

The groundwork is already well underway to make it a fully featured web framework which will support all the features you would expect. Taken from the official repo:

And as I said many of these features are already available in the framework and can be tried out today.

Wrapping up

As you can probably tell I’m pretty excited about this project. I’ve been a .NET developer for 15 years and I always thought it was a pipe dream to use C# client side. But it looks like with Blazor it’s actually happening. There is still a long way to go before we see a production capable version, after all it’s still only an experimental project. But I really think this is the start of something special.

Just thinking of being able to write C# client-side as well as server-side is making me smile. Having had more than a few heated conversations in recent times regarding which JS framework we should be using, I, for one, welcome the thought of C# everywhere. I would like to clarify that I don’t see this as the death of JS, and I’m not a JS hater, but I do welcome the competition which Blazor will bring. Plus, trying to keep up with everything that’s going on in the JS world is becoming a full-time job!

Anyway, if you want to get involved then head over to the GitHub repo, there is also a Gitter chat to talk about the project as well. The team are looking for as much feedback as possible and the GitHub issue tracker is already looking pretty lively with suggestions and feature requests.

I’ve built a demo app which I’m developing as a way of showing what Blazor can do as well as to use in future posts, you can find it on my GitHub. I’m also going to keep it updated with each Blazor release.

You can also read my Blazor Bites series, which is a set of bite-size posts about the various features of Blazor.

Unit Testing ILogger in ASP.NET Core

I’ve been creating a new template solution for our ASP.NET Core projects. As I was writing some tests for an API controller, I hit a problem with mocking the ILogger<T> interface. So I thought I would write a quick blog post about what I found, mainly so I won’t forget in the future!

I had a setup similar to the following code.

public class CatalogueController : Controller
{
    private readonly ILogger<CatalogueController> _logger;
    private readonly ICatalogueService _catalogueService;

    public CatalogueController(ILogger<CatalogueController> logger, ICatalogueService catalogueService)
    {
        _logger = logger;
        _catalogueService = catalogueService;
    }

    [HttpGet("api/catalogue")]
    public async Task<IActionResult> GetActiveStockItemsAsync()
    {
        try
        {
            var stockItems = await _catalogueService.GetActiveStockItemsAsync();

            return Ok(stockItems);
        }
        catch (Exception exception)
        {
            _logger.LogError(exception, "Error returning active stock catalogue items");

            return new StatusCodeResult((int)HttpStatusCode.InternalServerError);
        }
    }
}

public class CatalogueControllerTests
{
    private readonly IFixture _fixture;
    private readonly Mock<ILogger<CatalogueController>> _mockLogger;
    private readonly Mock<ICatalogueService> _mockCatalogueService;
    private readonly CatalogueController _catalogueController; 

    public CatalogueControllerTests()
    {
        _fixture = new Fixture();

        _mockLogger = new Mock<ILogger<CatalogueController>>();
        _mockCatalogueService = new Mock<ICatalogueService>();
        _catalogueController = new CatalogueController(_mockLogger.Object, _mockCatalogueService.Object);
    }

    [Fact]
    public async Task GetActiveStockItems_LogsErrorAndReturnsInternalServerError_When_ErrorOnServer()
    {
        // Arrange
        _mockCatalogueService.Setup(x => x.GetActiveStockItemsAsync()).Throws<Exception>();

        // Act
        var result = await _catalogueController.GetActiveStockItemsAsync();

        // Assert
        _mockLogger.Verify(x => x.LogError("Error returning active stock catalogue items", It.IsAny<Exception>()), Times.Once);
        var errorResult = Assert.IsType<StatusCodeResult>(result);
        Assert.Equal(500, errorResult.StatusCode);
    }
}

When I ran my test expecting it to pass I got the following error.

Message: System.NotSupportedException : Invalid verify on an extension method: x => x.LogError("Error returning active stock catalogue items", new[] { It.IsAny<Exception>() })

It turns out LogInformation, LogDebug, LogError, LogCritical and LogTrace are all extension methods. After a quick Google, I came across this issue on GitHub with an explanation from Brennan Conroy as to why the ILogger interface is so limited.

Right now the ILogger interface is very small and neat, if you want to make a logger you only need to implement 3 methods, why would we want to force someone to implement the 24 extra extension methods everytime they inherit from ILogger?
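For reference, the interface really is that small. Paraphrasing the Microsoft.Extensions.Logging source, a logger only has to implement three members, and every LogInformation/LogError style method is an extension that funnels into Log:

```
public interface ILogger
{
    // The single method that all of the LogInformation, LogError, etc.
    // extension methods ultimately call.
    void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception exception, Func<TState, Exception, string> formatter);

    // Whether messages at the given level will be logged at all.
    bool IsEnabled(LogLevel logLevel);

    // Begins a logical operation scope.
    IDisposable BeginScope<TState>(TState state);
}
```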

Solution 1

There is a method on the ILogger interface which you can verify against, ILogger.Log. Ultimately all the extension methods call this log method. So a quick change to the verify code in my unit test and I had a working test.

Unit Test - Solution 1

public class CatalogueControllerTests
{
    private readonly IFixture _fixture;
    private readonly Mock<ILogger<CatalogueController>> _mockLogger;
    private readonly Mock<ICatalogueService> _mockCatalogueService;
    private readonly CatalogueController _catalogueController; 

    public CatalogueControllerTests()
    {
        _fixture = new Fixture();

        _mockLogger = new Mock<ILogger<CatalogueController>>();
        _mockCatalogueService = new Mock<ICatalogueService>();
        _catalogueController = new CatalogueController(_mockLogger.Object, _mockCatalogueService.Object);
    }

    [Fact]
    public async Task GetActiveStockItems_LogsErrorAndReturnsInternalServerError_When_ErrorOnServer()
    {
        // Arrange
        _mockCatalogueService.Setup(x => x.GetActiveStockItemsAsync()).Throws<Exception>();

        // Act
        var result = await _catalogueController.GetActiveStockItemsAsync();

        // Assert
        _mockLogger.Verify(x => x.Log(LogLevel.Error, It.IsAny<EventId>(), It.IsAny<FormattedLogValues>(), It.IsAny<Exception>(), It.IsAny<Func<object, Exception, string>>()), Times.Once);
        var errorResult = Assert.IsType<StatusCodeResult>(result);
        Assert.Equal(500, errorResult.StatusCode);
    }
}

Solution 2

Another way to solve the problem I found on the same GitHub issue from Steve Smith. He’s written a blog post about the issues of unit testing the ILogger and suggests creating an adapter for the default ILogger.

public interface ILoggerAdapter<T>
{
    void LogInformation(string message);
    void LogError(Exception ex, string message, params object[] args);
    ...
}
public class LoggerAdapter<T> : ILoggerAdapter<T>
{
    private readonly ILogger<T> _logger;
 
    public LoggerAdapter(ILogger<T> logger)
    {
        _logger = logger;
    }
 
    public void LogError(Exception ex, string message, params object[] args)
    {
        _logger.LogError(ex, message, args);
    }
 
    public void LogInformation(string message)
    {
        _logger.LogInformation(message);
    }
    
    ...
}

The LoggerAdapter will need to be added to DI as well.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddSingleton(typeof(ILoggerAdapter<>), typeof(LoggerAdapter<>));
}

With this in place I could then change my code as follows.

public class CatalogueController : Controller
{
    private readonly ILoggerAdapter<CatalogueController> _logger;
    private readonly ICatalogueService _catalogueService;

    public CatalogueController(ILoggerAdapter<CatalogueController> logger, ICatalogueService catalogueService)
    {
        _logger = logger;
        _catalogueService = catalogueService;
    }

    [HttpGet("api/catalogue")]
    public async Task<IActionResult> GetActiveStockItemsAsync()
    {
        try
        {
            var stockItems = await _catalogueService.GetActiveStockItemsAsync();

            return Ok(stockItems);
        }
        catch (Exception exception)
        {
            _logger.LogError(exception, "Error returning active stock catalogue items");

            return new StatusCodeResult((int)HttpStatusCode.InternalServerError);
        }
    }
}

public class CatalogueControllerTests
{
    private readonly IFixture _fixture;
    private readonly Mock<ILoggerAdapter<CatalogueController>> _mockLogger;
    private readonly Mock<ICatalogueService> _mockCatalogueService;
    private readonly CatalogueController _catalogueController; 

    public CatalogueControllerTests()
    {
        _fixture = new Fixture();

        _mockLogger = new Mock<ILoggerAdapter<CatalogueController>>();
        _mockCatalogueService = new Mock<ICatalogueService>();
        _catalogueController = new CatalogueController(_mockLogger.Object, _mockCatalogueService.Object);
    }

    [Fact]
    public async Task GetActiveStockItems_LogsErrorAndReturnsInternalServerError_When_ErrorOnServer()
    {
        // Arrange
        _mockCatalogueService.Setup(x => x.GetActiveStockItemsAsync()).Throws<Exception>();

        // Act
        var result = await _catalogueController.GetActiveStockItemsAsync();

        // Assert
        _mockLogger.Verify(x => x.LogError(It.IsAny<Exception>(), "Error returning active stock catalogue items"), Times.Once);
        var errorResult = Assert.IsType<StatusCodeResult>(result);
        Assert.Equal(500, errorResult.StatusCode);
    }
}

Wrapping up

To summarise, there are a couple of ways you can go about unit testing when ILogger is involved.

The first is to verify against the Log method, the downside here is that it may not seem very obvious why you are doing it this way. The second option is to wrap the logger with your own implementation. This allows you to mock and verify methods as normal. But the downside is having to write the extra code to achieve it.

Do you know of any other ways to test the ILogger? If so let me know in the comments.

Test Driven Development: In Practice

This is part two of a series:


Previously I talked about the basics of Test Driven Development, or TDD, as well as some of the rules around the practice. While it’s important to get the theory, let’s face facts, it can be a little dry. So in this post I want to take you through some practical TDD.

The Scope

I’m going to take you through developing a simple message service for an app. As the app could be online or offline the service will have to meet the following acceptance criteria.

The Setup

I will be coding this using Visual Studio 2017 for Mac with XUnit and Moq. While there may be some syntax differences between testing frameworks the techniques will be the same.

For those of you unfamiliar with Moq, it is a library for creating mock objects in .NET. By using Moq I don’t have to create hard-coded test classes myself; I’ve found doing that soon leads to test suites which are hard to maintain, difficult to understand and hard to change. If you haven’t tried using it in your tests, I highly recommend it.

Let’s get cracking!

Solution Setup

To start I’m going to create a new solution, then add a class library and a unit test project. While I’m here I’ll add a project reference between the unit test project and the class library. It should look something like this.

The First Test

As I wrote in the What Is Test Driven Development post.

Tests should be documentation for what the code does

Looking at the first acceptance criterion, if there is a network connection then send the message directly, I’m going to need a mechanism for checking the state of the network connection. This certainly isn’t the responsibility of the message service; it should be handled by a dedicated class. So I’m going to delegate this work to a NetworkInformation class which will expose the state of the connection.

I’m just going to create a new test class called MessageServiceTests, as I prefer to append “Tests” to the end of the name of the class I’m testing. It will hold the tests for the MessageService class. Then I'll add the following code.

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
}

Red Phase

I’m now in a red phase, there is no such thing as an INetworkInformation.

When I identify something as an external responsibility, I will mock it out in this way and fill in the detail later. It means I can get on and write my class without having to dive off and build the NetworkInformation class first. More importantly, it’s good design. My message service is depending on an abstraction from the start, and I can design that abstraction before having to worry about creating its concrete implementation.

Now I will write just enough code to get back to a green phase.

In the PracticalTDD.Application project, I’m going to create a folder called Network, then a new interface inside called INetworkInformation. To finish, I’ll add this using statement to the test class.

using PracticalTDD.Application.Network;

I should now be back to green phase with no compile issues. On with the test…

Green Phase

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();    
}

Red Phase

I need a mechanism for sending directly. I tend to prefer wrapping the HttpClient, so I’ve gone with an IHttpProvider which will do just that.

Much the same as above I don’t want this class to have to worry about how the message will get sent. So I’m putting that responsibility behind an interface and moving on.

In the application project I’ll create a Http folder with an IHttpProvider interface inside it. Then I'll add the following using statement to the test class.

using PracticalTDD.Application.Http;

Green Phase

Next I’m going to set up my mock NetworkInformation class. I need something to call in order to check if there is a network connection; I think a property called HasConnection will work nicely. This will return a boolean indicating the current state of the connection.

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);
}

Red Phase

As I want to send via HTTP when there is a network connection, I have set the mock to return true from the HasConnection property. But of course, I don’t have that property, as I only created an empty interface.

I’ll switch over to the INetworkInformation interface and create the property.

bool HasConnection { get; }

I should now be able to compile again.

Green Phase

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);
    
    var message = new Message();
}

Red Phase

Obviously I need a message to send. For now I’ll create a blank class called Message in the root of the Application project. Then add the using statement.

using PracticalTDD.Application;

Green Phase

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);
    
    var message = new Message();
    var messageService = new MessageService(mockNetworkInformation.Object, mockHttpProvider.Object);
}

Red Phase

This should be the last part of the arrange step. I now need to create the actual message service class.

In the Application project, I’ll add a new class called MessageService with the following code.

readonly IHttpProvider httpProvider;
readonly INetworkInformation networkInformation;

public MessageService(INetworkInformation networkInformation, IHttpProvider httpProvider)
{
    this.networkInformation = networkInformation;
    this.httpProvider = httpProvider;
}

Green Phase

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);
    
    var message = new Message();
    var messageService = new MessageService(mockNetworkInformation.Object, mockHttpProvider.Object);
    
    // Act
    messageService.Send(message);
}

Red Phase

I haven’t created a Send method yet on the MessageService class, I’ll add that in now.

public void Send(Message message)
{    
}

Green Phase

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);
    
    var message = new Message();
    var messageService = new MessageService(mockNetworkInformation.Object, mockHttpProvider.Object);
    
    // Act
    messageService.Send(message);
    
    // Assert
    mockHttpProvider.Verify(x => x.Send(message), Times.Once);
}

Red Phase

I now need to add the Send method to the IHttpProvider interface.

void Send(Message message);

Green Phase

I should now have the first test completed and compiling.

I’ll now run the test and make sure it fails, which will put me back into a red phase.

Red Phase

Now I need to write the code to make the test pass. I’m going to check if there is a connection and, if there is, call Send on the HttpProvider.

if (_networkInformation.HasConnection)
    _httpProvider.Send(message);

Green Phase

I now have a passing test!

It may seem like I’ve done a lot for what is essentially a few lines of production code. But I’ve already been able to make a couple of important design decisions. I now have an interface for checking the network state. I also have an interface to manage HTTP calls. With these pieces now in place I can start to write more test code before hitting a red phase.

Most importantly though, I’m starting with something that fails which I then make pass. For me this adds a lot of security to what I’m developing.

Let’s carry on with the next test.

Refactor

I don’t feel there is anything worth refactoring at this point in time. I’m only one test in and I don’t have any duplication yet. So I’m going to move onto the next test.

The Second Test

This test will cover the second acceptance criterion: if there is no network connection, then put the message on the message queue. My plan is to pass the message to a MessageQueue when there is no network connection.

[Fact]
public void SendsMessageToQueue_When_NetworkConnectionIsNotAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(false);
    
    var message = new Message();
    
    var mockMessageQueue = new Mock<IMessageQueue>();
}

Red Phase

I have a compile error: I need to create the IMessageQueue interface. I’m going to add a new folder to the Application project called MessageQueue and create an IMessageQueue interface inside.

As I mentioned in part one of this series, in practice it is not always best to rigidly stick to the 3 Laws. In the last test I did, but from here on I’m going to be a bit less rigid.

So, while I’m here I’m going to add the following method to IMessageQueue.

void Add(Message message);

Then go back to the test and add the following using statement.

using PracticalTDD.Application.MessageQueue;

Green Phase

[Fact]
public void SendsMessageToQueue_When_NetworkConnectionIsNotAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(false);
    
    var message = new Message();
    
    var mockMessageQueue = new Mock<IMessageQueue>();
    
    var messageService = new MessageService(mockNetworkInformation.Object, mockHttpProvider.Object, mockMessageQueue.Object);
}

Red Phase

I now need to alter the MessageService class to take an IMessageQueue in its constructor.

readonly IMessageQueue _messageQueue;

public MessageService(INetworkInformation networkInformation, IHttpProvider httpProvider, IMessageQueue messageQueue)
{
    _networkInformation = networkInformation;
    _httpProvider = httpProvider;
    _messageQueue = messageQueue;
}

I’ve changed the signature of the constructor and by doing so I’ve broken the first test. I need to update that test to have a mockMessageQueue and to pass that to the MessageService constructor.

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();
    var mockMessageQueue = new Mock<IMessageQueue>();
    
    mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);
    
    var message = new Message();
    var messageService = new MessageService(mockNetworkInformation.Object, mockHttpProvider.Object, mockMessageQueue.Object);
    
    // Act
    messageService.Send(message);
    
    // Assert
    mockHttpProvider.Verify(x => x.Send(message), Times.Once);
}

Green Phase

[Fact]
public void SendsMessageToQueue_When_NetworkConnectionIsNotAvailable()
{
    // Arrange
    var mockNetworkInformation = new Mock<INetworkInformation>();
    var mockHttpProvider = new Mock<IHttpProvider>();

    mockNetworkInformation.Setup(x => x.HasConnection).Returns(false);

    var message = new Message();

    var mockMessageQueue = new Mock<IMessageQueue>();

    var messageService = new MessageService(mockNetworkInformation.Object, mockHttpProvider.Object, mockMessageQueue.Object);

    // Act
    messageService.Send(message);

    // Assert
    mockMessageQueue.Verify(x => x.Add(message), Times.Once);
}

With those changes I should now be able to compile and run both tests…

The first test is passing once again, the second is failing as expected.

Red Phase

In the MessageService I just need to add an else to the if statement, then call the Add method on the _messageQueue.

if (_networkInformation.HasConnection)
    _httpProvider.Send(message);
else
    _messageQueue.Add(message);

Green Phase

I now have both tests passing. Also, did you notice how much quicker I got through the second test?

Refactor

I have started to duplicate some code. I have multiple places where I am initialising the INetworkInformation and IHttpProvider mock objects. The same goes for the Message class and the MessageService class.

With xUnit, a new instance of the test class is created for every test, so I can put common setup code into the constructor. Also, if I implement the IDisposable interface on the test class, I can add any clean-up code in the Dispose method. If you are using NUnit for your testing, there are the [SetUp] and [TearDown] attributes; placing these on methods in your test class will have them called before and after each test, which achieves the same result.
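For anyone on NUnit, the equivalence looks roughly like this (a minimal sketch; the class shape is illustrative):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class MessageServiceTests
{
    private Mock<INetworkInformation> _mockNetworkInformation;

    // Runs before each test, equivalent to the xUnit constructor.
    [SetUp]
    public void SetUp()
    {
        _mockNetworkInformation = new Mock<INetworkInformation>();
    }

    // Runs after each test, equivalent to Dispose in xUnit.
    [TearDown]
    public void TearDown()
    {
        _mockNetworkInformation = null;
    }
}
```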

Mocks

I’m going to add three private fields to the test class, one for each of the mocks. Then I’m going to instantiate each of them in the constructor. Finally, I’m going to null each instance in the Dispose method; possibly a bit unnecessary, but I like to be safe.

private Mock<INetworkInformation> _mockNetworkInformation;
private Mock<IHttpProvider> _mockHttpProvider;
private Mock<IMessageQueue> _mockMessageQueue;

public MessageServiceTests()
{
    _mockNetworkInformation = new Mock<INetworkInformation>();
    _mockHttpProvider = new Mock<IHttpProvider>();
    _mockMessageQueue = new Mock<IMessageQueue>();
}

public void Dispose()
{
    _mockNetworkInformation = null;
    _mockHttpProvider = null;
    _mockMessageQueue = null;
}

The next step is to refactor all the methods to use the new class fields instead of the old method instances.

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    _mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);

    var message = new Message();

    var messageService = new MessageService(_mockNetworkInformation.Object, _mockHttpProvider.Object, _mockMessageQueue.Object);

    // Act
    messageService.Send(message);

    // Assert
    _mockHttpProvider.Verify(x => x.Send(message), Times.Once);
}

[Fact]
public void SendsMessageToQueue_When_NetworkConnectionIsNotAvailable()
{
    // Arrange
    _mockNetworkInformation.Setup(x => x.HasConnection).Returns(false);

    var message = new Message();

    var messageService = new MessageService(_mockNetworkInformation.Object, _mockHttpProvider.Object, _mockMessageQueue.Object);

    // Act
    messageService.Send(message);

    // Assert
    _mockMessageQueue.Verify(x => x.Add(message), Times.Once);
}

I’m now going to run my tests to make sure that everything is still green after my refactor…

All green.

Message instances

There are several repetitions of the Message class being instantiated. I’m going to do exactly the same as I did for the mocks I’ve just cleaned up.

private Message _message;

public MessageServiceTests()
{
    _mockNetworkInformation = new Mock<INetworkInformation>();
    _mockHttpProvider = new Mock<IHttpProvider>();
    _mockMessageQueue = new Mock<IMessageQueue>();

    _message = new Message();
}

[Fact]
public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    _mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);

    var messageService = new MessageService(_mockNetworkInformation.Object, _mockHttpProvider.Object, _mockMessageQueue.Object);

    // Act
    messageService.Send(_message);

    // Assert
    _mockHttpProvider.Verify(x => x.Send(_message), Times.Once);
}

[Fact]
public void SendsMessageToQueue_When_NetworkConnectionIsNotAvailable()
{
    // Arrange
    _mockNetworkInformation.Setup(x => x.HasConnection).Returns(false);

    var messageService = new MessageService(_mockNetworkInformation.Object, _mockHttpProvider.Object, _mockMessageQueue.Object);

    // Act
    messageService.Send(_message);

    // Assert
    _mockMessageQueue.Verify(x => x.Add(_message), Times.Once);
}

As before, I’m now going to run all my tests to check that everything is still working…

All green.

Message Service instances

The creation of the MessageService is repeated in every test as well. This is the last piece of duplication that I want to factor out. Again, I will just repeat what I did for the mocks and the message class above.

public class MessageServiceTests : IDisposable
{
    private MessageService _messageService;
    private Mock<INetworkInformation> _mockNetworkInformation;
    private Mock<IHttpProvider> _mockHttpProvider;
    private Mock<IMessageQueue> _mockMessageQueue;
    private Message _message;

    public MessageServiceTests()
    {
        _mockNetworkInformation = new Mock<INetworkInformation>();
        _mockHttpProvider = new Mock<IHttpProvider>();
        _mockMessageQueue = new Mock<IMessageQueue>();

        _message = new Message();

        _messageService = new MessageService(_mockNetworkInformation.Object, _mockHttpProvider.Object, _mockMessageQueue.Object);
    }

    [Fact]
    public void SendsMessageViaHttp_When_NetworkConnectionIsAvailable()
    {
        // Arrange
        _mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);

        // Act
        _messageService.Send(_message);

        // Assert
        _mockHttpProvider.Verify(x => x.Send(_message), Times.Once);
    }

    [Fact]
    public void SendsMessageToQueue_When_NetworkConnectionIsNotAvailable()
    {
        // Arrange
        _mockNetworkInformation.Setup(x => x.HasConnection).Returns(false);

        // Act
        _messageService.Send(_message);

        // Assert
        _mockMessageQueue.Verify(x => x.Add(_message), Times.Once);
    }

    public void Dispose()
    {
        _mockNetworkInformation = null;
        _mockHttpProvider = null;
        _mockMessageQueue = null;
        _messageService = null;
    }
}

Now to run the tests…

All green.

Now that looks a lot better. The test class looks very clean and much more maintainable. The refactor step is so important in TDD. Once you have something that works, always go back and try to make it better.

The Third Test

Finally, I’ll cover the third acceptance criterion: if sending the message directly, make sure the message queue has been cleared first. This is going to be pretty simple. I need to call a method that will send all messages currently held on the queue before I call the Send method on the HttpProvider.

[Fact]
public void ClearsMessageQueueBeforeSendingViaHttp_When_NetworkConnectionIsAvailable()
{
    // Arrange
    var sendAllMessagesCallTime = DateTime.MinValue;
    var sendCallTime = DateTime.MinValue;

    _mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);

    _mockMessageQueue.Setup(x => x.SendAllMessages())
                     .Callback(() => sendAllMessagesCallTime = DateTime.Now);

    _mockHttpProvider.Setup(x => x.Send(_message))
                     .Callback(() => sendCallTime = DateTime.Now);

    // Act
    _messageService.Send(_message);

    // Assert
    _mockMessageQueue.Verify(x => x.SendAllMessages(), Times.Once);
    _mockHttpProvider.Verify(x => x.Send(_message), Times.Once);
    Assert.True(sendCallTime > sendAllMessagesCallTime);
}

In order to check which method was called first, I have used the callback feature in Moq to record the time each method was called, then compared the two to check that the SendAllMessages method was called first. To be honest I’m not over the moon about this code. While it does prove one was called before the other, it just doesn’t feel like a great way to do it. But at this point in time I don’t know of a better way to test for something like this. So if anyone can suggest a better way then please let me know in the comments.
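One option that avoids comparing timestamps (DateTime.Now has limited resolution, so two calls in quick succession can record identical times and fail the strict comparison) is to record the order of the calls in a list. A sketch, using the same mocks:

```csharp
// Record the order each mocked method is called in, then assert on the order.
var callOrder = new List<string>();

_mockNetworkInformation.Setup(x => x.HasConnection).Returns(true);

_mockMessageQueue.Setup(x => x.SendAllMessages())
                 .Callback(() => callOrder.Add("SendAllMessages"));

_mockHttpProvider.Setup(x => x.Send(_message))
                 .Callback(() => callOrder.Add("Send"));

// Act
_messageService.Send(_message);

// Assert: SendAllMessages must have been recorded before Send.
Assert.Equal(new[] { "SendAllMessages", "Send" }, callOrder);
```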

Red Phase

I now have a couple of compile errors. The SendAllMessages method doesn’t exist. So I’ll add that to the interface now.

void SendAllMessages();

I can run the tests again and now I have a failing test. To get this test to pass I’ll alter the Send method on the message service.

public void Send(Message message)
{
    if (_networkInformation.HasConnection)
    {
        _messageQueue.SendAllMessages();
        _httpProvider.Send(message);
    }
    else
        _messageQueue.Add(message);
}

Green Phase

After running the tests I’m looking at all green.

Refactor

With the refactor I did after test two, I don’t feel there is anything much to improve in the test class. The message service itself is really simple so there isn’t much refactoring to do there either. So at this point I’m going to declare my MessageService complete.

Wrapping Up

In this post I have gone through a practical example of TDD, albeit an extremely simple example. This has been a very challenging post to write but I hope you will find something useful in here.

Once again I want to reiterate the importance of the refactor step. In my experience far too many developers don’t treat their test suites with the same care they do their production code. But it’s so important to do this. That way you keep your test suites maintainable, which means you’re more likely to keep writing tests.

When tests become a burden they stop being run, then stop being written.

I’m still learning every day with TDD. So if you have any improvements or suggestions then please let me know in the comments below.

Do you have any methods which make TDD more effective?

Unit Testing with the HttpClient

There has been a lot of discussion about how, and even if, the HttpClient class is testable. And it very much is.

So I wanted to write a quick post giving you three options that you can use when you need to write tests involving the HttpClient.

Let’s assume we have a simple class which gets a list of songs from an API. I’ll use this as the example class we wish to test.

public class SongService 
{
    private readonly HttpClient _httpClient;

    public SongService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<List<Song>> GetSongs() 
    {
        var requestUri = "http://api.songs.com/songs";
        var response = await _httpClient.GetAsync(requestUri);

        var responseData = await response.Content.ReadAsStringAsync();
        var songs = JsonConvert.DeserializeObject<List<Song>>(responseData);

        return songs;
    }
}

1. Wrapping HttpClient

Wrapping the HttpClient will give us the ability to have an interface. In turn, we can update the SongService class to expect that interface.

This will give us two benefits. Firstly the SongService will be improved as it will be depending on an interface instead of a concrete class. Secondly, we can then mock that interface to make the SongService easier to test.

IHttpProvider.cs

public interface IHttpProvider
{
    Task<HttpResponseMessage> GetAsync(string requestUri);
    Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content);
    Task<HttpResponseMessage> PutAsync(string requestUri, HttpContent content);
    Task<HttpResponseMessage> DeleteAsync(string requestUri);
}

HttpProvider.cs

public class HttpProvider : IHttpProvider
{
    private readonly HttpClient _httpClient;

    public HttpProvider(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public Task<HttpResponseMessage> GetAsync(string requestUri)
    {
        return _httpClient.GetAsync(requestUri);
    }

    public Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content)
    {
        return _httpClient.PostAsync(requestUri, content);
    }

    public Task<HttpResponseMessage> PutAsync(string requestUri, HttpContent content)
    {
        return _httpClient.PutAsync(requestUri, content);
    }

    public Task<HttpResponseMessage> DeleteAsync(string requestUri)
    {
        return _httpClient.DeleteAsync(requestUri);
    }
}

I’ve wrapped the four main actions of the HttpClient, but you may want to expose more or less in your implementation. For example, if you only need the GetAsync method then just do the following.

IHttpProvider.cs

public interface IHttpProvider
{
    Task<HttpResponseMessage> GetAsync(string requestUri);
}

HttpProvider.cs

public class HttpProvider : IHttpProvider
{
    private readonly HttpClient _httpClient;

    public HttpProvider(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public Task<HttpResponseMessage> GetAsync(string requestUri)
    {
        return _httpClient.GetAsync(requestUri);
    }
}

Now we can change the SongService to accept our new interface like so.

SongService.cs

public class SongService 
{
    private readonly IHttpProvider _httpProvider;

    public SongService(IHttpProvider httpProvider)
    {
        _httpProvider = httpProvider;
    }

    public async Task<List<Song>> GetSongsAsync() 
    {
        var requestUri = "http://api.songs.com/songs";
        var response = await _httpProvider.GetAsync(requestUri);

        var responseData = await response.Content.ReadAsStringAsync();
        var songs = JsonConvert.DeserializeObject<List<Song>>(responseData);

        return songs;
    }
}

We can now test the SongService really easily as we have an interface we can mock.
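As a quick sketch of what that might look like with Moq and xUnit (the test name and the canned JSON are illustrative, not from the original code):

```csharp
[Fact]
public async Task ReturnsSongs_When_ApiCallSucceeds()
{
    // Arrange: mock IHttpProvider to return a canned JSON response.
    var mockHttpProvider = new Mock<IHttpProvider>();
    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("[{\"Title\":\"Song 1\"}]")
    };

    mockHttpProvider.Setup(x => x.GetAsync(It.IsAny<string>()))
                    .ReturnsAsync(response);

    var songService = new SongService(mockHttpProvider.Object);

    // Act
    var songs = await songService.GetSongsAsync();

    // Assert
    Assert.Single(songs);
}
```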

2. Fake HttpMessageHandler

The HttpClient itself is actually an abstraction layer. It is the HttpMessageHandler which determines how to send requests.

The good news, from a testing point of view, is that the HttpClient has a constructor which takes a single argument of type HttpMessageHandler. This means we can create a fake handler, set it up how we want, and pass it in for testing.

FakeHttpMessageHandler.cs

public class FakeHttpMessageHandler : HttpMessageHandler
{
    public virtual HttpResponseMessage Send(HttpRequestMessage request)
    {
        // Configure this method however you wish for your testing needs.
        return new HttpResponseMessage(HttpStatusCode.OK);
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Task.FromResult(Send(request));
    }
}

AwesomeUnitTests.cs

[Test]
public void AwesomeUnitTest1()
{
    // Arrange
    var fakeHttpMessageHandler = new Mock<FakeHttpMessageHandler> { CallBase = true };

    fakeHttpMessageHandler.Setup(x => x.Send(It.IsAny<HttpRequestMessage>()))
                          .Returns(new HttpResponseMessage(HttpStatusCode.OK));

    var fakeHttpClient = new HttpClient(fakeHttpMessageHandler.Object);

    // Act
    ...

    // Assert
    ...
}

Using this method we could test the SongService without needing to change any of the code. However, we are stuck maintaining our FakeHttpMessageHandler class.

3. Mocking HttpMessageHandler

I have seen this method used a few times but only using the Moq framework.

Moq has an extension method, Protected(), which you can access by importing the Moq.Protected namespace. The Protected extension method allows you to mock non-public protected members.

This is useful to us because the HttpMessageHandler’s SendAsync method has a scope of protected internal, which means we can do the following.

AwesomeUnitTests.cs

[Test]
public async Task AwesomeUnitTest2()
{
    // Arrange
    var mockHandler = new Mock<HttpMessageHandler>();

    mockHandler.Protected()
               .Setup<Task<HttpResponseMessage>>("SendAsync", ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
               .ReturnsAsync(new HttpResponseMessage(HttpStatusCode.OK));
    // Act
    ...
    
    // Assert
    ...
}

Once again, we would now be able to test the SongService without having to make any code changes. But this time we are not left with an extra test class.

The slight downside to this method is that we have to use a magic string to declare the method we want to mock. But this could easily be improved by using a string constant instead.

Conclusion

So, there are three different ways which you can test a class consuming the HttpClient. Hopefully at least one of them will be useful to you.

What’s your preferred method of dealing with the HttpClient in unit tests? Also, if you have any questions or comments please leave them below.

What is Test Driven Development

This is part one of a series.


At the start of my career I had a bit of a love-hate relationship with unit testing. Several times I’d swing from “they are totally indispensable” to “well, I’ll do them if I get time”.

As I’ve matured in my career I’ve been saved more than once by what seemed like a random test failure. Now I couldn’t consider a piece of code production level without it being fully tested.

Currently I am fortunate enough to lead a team of great devs. And the subject of unit testing via TDD has been a topic of discussion a number of times. So I thought it was probably time for me to write a blog post on the subject.

Test Driven Development

To start with, for anyone who’s unsure of what TDD is: it is the practice of writing tests before writing any production code, allowing the tests to shape the design and implementation of said production code. As TDD has developed over the years, three rules have been formed to guide developers. They are known as the Three Laws of TDD.

The Three Laws of TDD

  1. You must write a failing unit test before you can write any production code.
  2. You must not write more of a test than is sufficient to fail, or fail to compile.
  3. You must not write more production code than is sufficient to make the currently failing unit test pass.

These laws, when followed correctly, create a cycle of development. This cycle is known as Red-Green-Refactor. Devs practicing TDD will usually find themselves completing a cycle every 30 seconds or so.

Red Phase

The red phase of TDD is where you write your test. An important point to understand is that a compile error completes the red phase, as stated in law 2.

Once you have either a compile error or a failing test, the red phase ends and you move onto the green phase.

Green Phase

The green phase involves writing just enough to make your test code compile or pass.

If you are just writing enough code to fix a compile error, then you will move back to the red phase and continue writing your test. If, however, you now have a passing test, you can move into the third phase: refactor.

Refactor Phase

This phase of the cycle is not always necessary, but in my experience it is crucial and you should find yourself here often.

In this phase you are free to refactor anything to do with your current tests/production code. I would use this as an opportunity to extract duplicated production code out into a private method for example.

But the important point is you must keep all your tests passing. If after a refactor you have a failing test, then your refactor has broken your logic somewhere and now you must fix it, taking you right back to the red phase. And round and round you go.

My Experience Using The Three Laws

What I’ve written above is what you will find pretty easily from any Google search on TDD. But I think what made TDD click for me were the experiences of people who had done it for a prolonged period of time. So I wanted to share some of my experiences while practicing TDD.

Law 1

I have found in reality it doesn’t make sense to stick to this rule in all circumstances.

It is just not practical to write unit tests when setting up a new application. I want to be clear here: I’m talking about the scaffolding of an application, doing the basic plumbing. Once this is done I will move into my Red-Green-Refactor cycle.

Another good example is something like a Data Transfer Object. It makes no sense to write tests for a simple class like a DTO.

Law 2

When I’ve initially taught TDD to other developers, I follow Law 2 to the letter. But after a certain point I like to share with them what I’m about to share with you.

I will write a whole test even if it has a compile error. Once I have finished, I will fix the compile error by writing the minimum code possible. If the test is then able to run, I will confirm my red phase by running it, then move to writing the code to make the test pass.

This is only a small bend of the rule in my opinion and saves a bit of flicking between files to stub methods or properties. Obviously with modern tooling it is becoming extremely trivial to stub new methods. But I wanted to mention it.

Law 3

On the whole I will stick to Law 3. Every now and then I will write a bit of error checking code, a null check for example. Which is technically more code than I could have written to make the test pass.

I think that on the whole this is fair enough and in the spirit of the Law. I feel checking that a value is present is part of the minimum code I can write.

My Approach To Writing Unit Tests

When working with one of the members of my team, he told me of his frustration trying to write his tests. He found it difficult to think of what to test and struggled with the naming of the tests. This quickly led to frustration and he stopped his TDD.

My advice to him was something I learned from Uncle Bob.

Think of your tests as documentation.

That was my light bulb moment when learning TDD, realising that my tests were a list of things my class would do. Once I thought in that way thinking of tests became much easier.

For example, if I was developing a class which managed outgoings in a monthly budget app, I would probably write tests along the following lines.
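Something like this (the names here are hypothetical, purely to illustrate the documentation style):

```csharp
// Hypothetical documentation-style test names for an OutgoingsManager class.
[Fact] public void AddsOutgoing_When_OutgoingIsValid() { }
[Fact] public void DoesNotAddOutgoing_When_AmountIsNegative() { }
[Fact] public void ReturnsTotalOutgoings_For_CurrentMonth() { }
[Fact] public void RemovesOutgoing_When_OutgoingExists() { }
```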

As I just mentioned, Uncle Bob recommends thinking of your tests as documentation. Reading the test names alone paints a good picture of what the OutgoingsManager class is doing.

TDD Advantages

There are many advantages to using TDD but I wanted to pull out a couple that really stand out to me.

Single Responsibility

There are many things we as developers should be striving for when writing code. But one of the big ones for me is Single Responsibility.

With TDD I find mixed concerns stick out like a sore thumb. Usually because my tests become harder to write. Or my test/production code is getting bulky. This is the sign for me that I need to rethink my design.

The result: the code I write seems simpler and easier to understand. I have more classes, but they all do one thing and do it well. This adds up to a better quality code base and easier development going forward.

Better abstractions

Too often I have seen untested code in code bases I’ve worked on. When I’ve asked why, I’ve been told that the code is untestable or wasn’t important enough to test. While there are rare occasions where code genuinely can’t be tested, on the whole it’s either bad code or bad abstractions that are the cause.

An example of this I’ve bumped into more than once is DateTime.Now. I once saw unit tests that would only pass on certain days of the week due to a direct use of DateTime.Now.DayOfWeek.

If this code had been developed using TDD this most likely would not have happened… Well, that’s probably a bit over optimistic. But I would hope it would have had less chance of happening.

The reason I say this is that I find TDD makes me think more about abstractions. In order to write my tests I need to use abstractions so I can mock objects. This gives me less chance of creating brittle tests.

A simple wrapper around DateTime.Now would have cured that issue and only takes seconds to do.
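Such a wrapper might look like this (a minimal sketch; the interface and class names are hypothetical):

```csharp
// A thin, mockable abstraction over DateTime.Now (hypothetical names).
public interface IDateTimeProvider
{
    DateTime Now { get; }
}

public class SystemDateTimeProvider : IDateTimeProvider
{
    public DateTime Now => DateTime.Now;
}

// Code that depends on the interface can be tested with any fixed date,
// so tests no longer pass or fail depending on the day of the week.
public class ReportScheduler
{
    private readonly IDateTimeProvider _dateTime;

    public ReportScheduler(IDateTimeProvider dateTime)
    {
        _dateTime = dateTime;
    }

    public bool IsReportDay() => _dateTime.Now.DayOfWeek == DayOfWeek.Monday;
}
```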

TDD Disadvantages

While TDD can be a massive benefit, it is not a silver bullet. As with all things in software development, it is just a tool. And there is no point using a hammer when you need a drill.

Complex Problems

I have found that trying to practice TDD on complex problems, where I haven’t managed to fully understand the problem yet, can be horrendous.

In these situations I’ve found that TDD can be much more of a hindrance. The tests will not cover all requirements, mainly because you might not have worked them all out at this point. TDD just ends up slowing me down.

I may go back to TDD once I have gotten things straight in my head. But sometimes this is just not possible.

Legacy Code

When working with legacy code, TDD can simply not work. This is usually due to how that legacy code has been written; if it has not been written with testing in mind, you can soon find yourself coming unstuck.

Obviously you could rewrite that code to make it testable but that will all depend on your deadlines or story scope.

Additional Time

Once experienced in TDD, I think it can be argued that the time taken is not hugely different from a normal code-then-test cycle. But it usually does take longer to write code using TDD.

Conclusion

That’s where I’m going to end things for this post.

I’m going to follow this up with a more hands-on post, where I will take you through the techniques I’ve talked about here. I’ll also talk about some of the tools I use to make life a bit easier.

I hope you’ve enjoyed reading this, until next time…

Dockerising an ASP.NET Core application

Docker is a technology which allows applications to be packaged up and run in containers. In this post we will go through how to achieve this with an ASP.NET Core application.

All code from this post is available on my GitHub account.

Step 1 - Getting Setup

I am going to be doing this from a Mac. But you can easily use Windows or Linux if you prefer.

If you don’t have it already, you will need to install Docker onto your machine.

I have added a couple of links below to install guides on the Docker site. They will walk you through the process and get you all setup and ready to go.

Step 2 (Optional) - Create an ASP.NET Core App

You may be looking to Dockerise an existing ASP.NET Core application, in which case just skip ahead to the next step.

In true Blue Peter fashion (apologies if you have no idea what that means, but it was a TV show I watched when I was a kid) here is a post I prepared earlier.

Once you have created your application we can move onto the next step.

Step 3 - Add a .dockerignore

Acting much the same as a .gitignore file, any files or directories listed will be ignored by Docker.

In the root of your application add a .dockerignore file and paste in the following.

bin/
obj/
node_modules/

If you are adding this to an existing project, feel free to add any additional items you wish to omit.

Step 4 - Add a dockerfile

The dockerfile is a bit like a blueprint. In it we are going to tell Docker how to build the image for our application. Once we have an image we can then start a container.

In the root folder of your application, create a new file called dockerfile (note there is no file extension). Then open it up in your favourite editor and paste in the following code.

FROM microsoft/aspnetcore-build AS builder
WORKDIR /source

COPY *.csproj ./
RUN dotnet restore

COPY . .
RUN dotnet publish --output /app/ --configuration Release

FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT ["dotnet", "YourApp.dll"]

Let’s work through this and see what’s going on.

Stage 1 - Building

FROM microsoft/aspnetcore-build AS builder
WORKDIR /source

The first line is telling docker to use the aspnetcore-build image which is created by Microsoft. This image is specifically for building and publishing an ASP.NET application.

The second line is setting the current working directory. It is probably worth noting that if this directory doesn't exist then Docker will create it.

COPY *.csproj ./
RUN dotnet restore

In these two lines we are telling Docker to copy any file ending in .csproj into the working directory we specified above, then execute the restore command from the dotnet CLI. Copying just the project file first lets Docker cache the restored packages as a layer, so the restore only re-runs when the .csproj changes.

COPY . .
RUN dotnet publish --output /app/ --configuration Release

Here we are copying over the remaining files of our project and then building everything using the Release configuration. This will be the application that we end up serving when we start our container.

Stage 2 - Serving

FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT ["dotnet", "YourApp.dll"]

We now switch to a different base image. This time we are using the aspnetcore image from Microsoft. This image is used for running compiled applications, which is what we have just done in stage 1.

The working directory is set to /app. And the compiled app we built in stage 1 is copied over into the current working directory.

Finally, the last command tells Docker how to start our app using the .NET Core runtime.
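A quick note for anyone following along later: the microsoft/aspnetcore-build and microsoft/aspnetcore images used above were later retired when Microsoft's images moved to their own container registry. A sketch of the same two-stage build against the newer image names (the exact tags are an assumption; check what's current for your SDK version):

```dockerfile
# Build stage - uses the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS builder
WORKDIR /source

COPY *.csproj ./
RUN dotnet restore

COPY . .
RUN dotnet publish --output /app/ --configuration Release

# Runtime stage - uses the smaller ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT ["dotnet", "YourApp.dll"]
```

The structure is identical; only the image names and tags change.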

Step 5 - Building the Image

Now we have a blueprint for our container, we need to build it into an image. Once the image is built we can start a container from it.

Run the following command in a terminal at the root of your application.

docker build -t yourapp .

Docker will work through each instruction in the dockerfile, printing its progress as it goes.

Once that is complete you can run the docker images command. You should see your new image in the list.

Step 6 - Start a Container

The last step is to spin up a container with our application running inside.

docker run -p 5000:80 yourapp

We are telling Docker to start a new container from the image we just built, and to bind port 5000 on the host to port 80 inside the container.

You should then be able to open your browser and go to http://localhost:5000 and view your application!
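In practice you will often want the container running in the background rather than tied to your terminal. A rough sketch using standard docker CLI flags (the container and image names are just the ones from this post):

```shell
# Run detached (-d) with a friendly name, mapping host port 5000 to container port 80
docker run -d --name yourapp -p 5000:80 yourapp

# Tail the application logs
docker logs -f yourapp

# Stop and remove the container when you're done
docker stop yourapp
docker rm yourapp
```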

Conclusion

In summary, we have taken an ASP.NET Core application and made it compile and run inside a Docker container, all in under 30 minutes. Not bad if I do say so myself.

Once again, all the code I have used in this post is on my GitHub account.

I hope you have found this post useful. If you have any questions or feedback please leave a comment below. You can also find me on Twitter.

Until next time.

ASP.NET to ASP.NET Core 2 - 5 things to know

When I first heard about ASP.NET Core I couldn’t wait to try it. But for me, v1 and v1.1 just didn’t have enough APIs.

Welcome ASP.NET Core 2!

The ASP.NET Core 1 framework had around 14-16 thousand APIs available. With ASP.NET Core 2 that number is now in the region of 36 thousand.

With that I have started to move over some of my personal projects and wanted to jot down some of the biggest changes from ASP.NET.

Dependency Injection is baked in

When building any kind of large scalable application, dependency injection (DI) should be in the mix. With ASP.NET Core, Microsoft has taken away the pain of selecting and configuring a DI container. But don't worry, you can still plug in and configure 3rd party implementations.

What do I need to do in order to use this fantastic feature? I hear you cry. Check out the code snippet below.

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<ITransientService, TransientService>();
    services.AddScoped<IScopedService, ScopedService>();
    services.AddSingleton<ISingletonService, SingletonService>();
}

That’s it!

The only effort is in selecting the lifetime for the service being registered.

This is achieved by using either the AddTransient, AddScoped or AddSingleton methods.
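Consuming a registered service is just as painless: ASP.NET Core resolves it for you via constructor injection. A minimal sketch, assuming the ITransientService registered above (the controller and method names here are hypothetical):

```csharp
public class GreetingsController : Controller
{
    private readonly ITransientService _service;

    // The framework resolves ITransientService from the container
    // and passes it in automatically when the controller is created.
    public GreetingsController(ITransientService service)
    {
        _service = service;
    }

    public IActionResult Index()
    {
        // GetGreeting() is a made-up method on the hypothetical service.
        return Ok(_service.GetGreeting());
    }
}
```

No factories, no service locator calls; just declare what you need in the constructor.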

New csproj file

The .csproj file has been overhauled in ASP.NET Core 2. It's been greatly simplified from the .csproj file we are all used to in ASP.NET projects.

The biggest difference has come from the change to a folder based project structure. Files now no longer have to be explicitly included in the solution. I’m really loving this as I would hate to know the number of merge conflicts I have had to resolve due to new files being added to a project.

Microsoft has removed the GUID-based references to other projects in the solution. This makes the file much more readable. You can now edit the project file on the fly, without having to unload the project and then reload it.

Razor Pages

Think MVC view but without a controller.

To define a razor page use the @page directive at the top of the page.

@page

<h1>Page Title</h1>

<div>
 ...
</div>

Here I've defined a razor page using the @page directive I mentioned before. This directive turns the page into an MVC action, which means the razor page will handle requests directly without the need for a controller.

It’s important to note that the @page directive must be the first thing declared on a page. Once declared it will affect any razor constructs used on the page.
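The directive can also carry a route template: anything after @page is treated as one, constraints included. A quick sketch, assuming a file at pages/product.cshtml (the page and model names are made up):

```cshtml
@page "{id:int}"
@model ProductModel

@* A request to /product/42 matches; /product/abc fails
   the :int constraint and the page is not matched. *@
<h1>Viewing product @Model.Id</h1>
```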

A step further

The above example is a very simple use case for razor pages. A step further is to use a code-behind file.

I’m sure some of you are thinking what I did… WebForms!!

But don’t worry, there is no ViewState rearing its ugly head. Let me show you an example.

(pages/HelloWorld.cshtml.cs)

using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace AspNetCoreExamples.RazorPages
{
    public class HelloWorldModel : PageModel
    {
        public string Heading { get; private set; } = "Hello World";
    }
}
(pages/HelloWorld.cshtml)

@page
@using AspNetCoreExamples.RazorPages
@model HelloWorldModel

<h1>@Model.Heading</h1>

There is a convention with naming which I’ve demonstrated above. The PageModel file has the same name as the Razor Page but with .cs on the end.

Routing

By default the runtime will look for Razor Pages in a folder called pages. From that starting point URLs will reflect the folder structure.

/pages/index.cshtml is returned when requesting / or /index.

/pages/hello-world/index.cshtml is returned when requesting /hello-world or /hello-world/index.

You get the idea.

There is a lot more to Razor Pages which I’ll cover in a future post but I’ll leave it here for now.

Global.asax is gone

ASP.NET applications are bootstrapped using the Global.asax; routes, filters and bundles are some of the common items registered in this file. Then came OWIN, which changed the way an application is bootstrapped. It introduced a startup.cs file where middleware could be plugged in and configured, in an attempt to decouple the application from the server.

In ASP.NET Core the global.asax file has been removed completely. While there is still a startup.cs file, the dependency on OWIN has also been removed. Bootstrapping is now the responsibility of the program.cs file. Just as in a console application, program.cs contains a Main method, and this is what is used to load the startup.cs file.

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

Storing App Settings

In ASP.NET it is common to place certain application settings in the <appSettings> section of the web.config. This is no longer the case in ASP.NET Core.

ASP.NET Core can load configuration data from any file. By default a new application is set up to use a file called appsettings.json, which will have something like this in it.

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  }
}

To add your own settings into the file you simply create a new object.

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "Api": {
    "BaseUrl": "http://domain.com/api"
  }
}

In order to use a setting, the settings file first needs to be loaded in Startup.cs.

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

You can then access any setting inside it as follows.

var baseUrl = Configuration.GetSection("Api")["BaseUrl"];
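String-based lookups work, but the options pattern gives you strongly typed access instead. A minimal sketch, binding the Api section above (the ApiOptions and ApiClient classes are ones you would define yourself):

```csharp
using Microsoft.Extensions.Options;

public class ApiOptions
{
    public string BaseUrl { get; set; }
}

// In Startup.ConfigureServices, bind the section to the options class:
// services.Configure<ApiOptions>(Configuration.GetSection("Api"));

// Then inject IOptions<ApiOptions> wherever the setting is needed:
public class ApiClient
{
    private readonly string _baseUrl;

    public ApiClient(IOptions<ApiOptions> options)
    {
        _baseUrl = options.Value.BaseUrl;
    }
}
```

Typos in section names now surface as a null property rather than being scattered across string lookups, and the consuming class never touches IConfiguration directly.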

How I Dockerised my blog

Docker is a fantastic tool and great if you’re running a VPS like me. However a common question is… What if I want to be able to run multiple sites from a single VPS?

Most websites running in containers listen on port 80 by default. Only one container can be bound to that port on the host at a time, so what's the answer?

Well, unless you want to access your websites using addresses such as www.example.com:1234, a reverse proxy is the answer.

Nginx will be our reverse proxy. It will take all incoming requests and route them to the correct container based on the VIRTUAL_HOST environment variable set on each one.

Prerequisites

You will need to install Docker and Docker Compose. Both have guides which I have linked below:

NOTE: The Docker install guide is for Ubuntu but there are guides available for other Linux flavors as well as Windows and Mac.

Docker Networks

Docker creates three default networks upon install. They are bridge, none and host. You can view the available Docker networks at any time using the command docker network ls.

If you would like to know more about Docker networks, please check out the official docs.

Creating a Docker Network

The first thing we are going to do is create a new Docker network for all of the containers to connect on. Type the command docker network create nginx-proxy.

When we start any containers we will be explicitly connecting them to this network. If we don’t they won’t be able to connect to each other, which would be bad.
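Containers can be attached to the network either at start time or after the fact. A quick sketch of both (the container name here is hypothetical):

```shell
# Attach at start time
docker run -d --name myapp --network nginx-proxy myapp

# Or connect an already-running container
docker network connect nginx-proxy myapp

# Verify which containers are attached to the network
docker network inspect nginx-proxy
```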

The three amigos: nginx, docker-gen & LetsEncrypt

In this day and age we should all be running SSL enabled sites. And with the great work of LetsEncrypt there is no excuse not to go SSL.

With this in mind we are going to bring together three great tools, not only to allow our sites to run over HTTPS, but also to make it happen automatically.

To do this we will use Docker Compose, which, for those not aware, is a tool for defining and running multi-container solutions. We are going to use it to run these 3 images:

  1. nginx
  2. docker-gen
  3. letsencrypt-nginx-proxy-companion

Step 1

The first thing we are going to do is create some folders. I have set up the following structure, but feel free to define your own.

   apps/
    - nginx-proxy/
      - files/
      - config/   

The files folder is going to hold the nginx config files that will be generated by docker-gen when a new container is registered, as well as the certificates generated by Let's Encrypt.

Why do we need this, you may ask? As we are running containers now, all files within them live and die with the container. We will be linking a directory in our container with the one above in order to persist our data files; we define these links as volumes. If we didn't do this, all our configuration files would be lost whenever we upgraded our images.

Step 2

Now we have all our folders in place we need to add a few files.

Starting in the apps/nginx-proxy/config/ folder run the following command:

curl https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl > nginx.tmpl

This is going to download the latest version of the nginx.tmpl, which is used to generate our nginx configs when registering new containers.

Next up is to create a docker-compose.yml in the same directory and paste in the following code:

version: '3'
services:
  nginx:
    image: nginx
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /apps/nginx-proxy/files/conf.d:/etc/nginx/conf.d
      - /apps/nginx-proxy/files/vhost.d:/etc/nginx/vhost.d
      - /apps/nginx-proxy/files/html:/usr/share/nginx/html
      - /apps/nginx-proxy/files/certs:/etc/nginx/certs:ro

  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - /apps/nginx-proxy/files/conf.d:/etc/nginx/conf.d
      - /apps/nginx-proxy/files/vhost.d:/etc/nginx/vhost.d
      - /apps/nginx-proxy/files/html:/usr/share/nginx/html
      - /apps/nginx-proxy/files/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - /apps/nginx-proxy/files/conf.d:/etc/nginx/conf.d
      - /apps/nginx-proxy/files/vhost.d:/etc/nginx/vhost.d
      - /apps/nginx-proxy/files/html:/usr/share/nginx/html
      - /apps/nginx-proxy/files/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"

networks:
  default:
    external:
      name: nginx-proxy

Step 3

With all the configuration files in place, we should be able to fire up our new Docker application using this command: docker-compose up -d

If all is well you should be able to run docker ps and see the 3 containers running.

Registering containers

Congratulations! You now have a reverse proxy all ready to go. But how do we register new containers with it?

I am going to show you how to register a new Ghost container, as that is what I use for my blog. But the general principles should apply to most containerised applications.

Step 4

To start I’m going to add a couple of new folders to the previous structure.

   apps/
    - nginx-proxy/
      - files/
      - config/ 
    - blog/
      - content/

I'm then going to add a new file, blog.yml, and paste in the following code.

version: '3'

services:

   blog:
     image: ghost
     container_name: blog
     restart: unless-stopped
     expose:
           - "443"
     volumes:
       - /apps/blog/content:/var/lib/ghost/content
     environment:
       VIRTUAL_HOST: myblog.com,www.myblog.com
       VIRTUAL_PORT: 2368
       LETSENCRYPT_HOST: myblog.com,www.myblog.com
       LETSENCRYPT_EMAIL: [email protected]

networks:
    default:
       external:
         name: nginx-proxy

Much like before, data needs to be persisted outside of the container, and as before we achieve this by defining volumes.

I have defined a few environment variables, and these are the key to the automatic registration with our proxy.

VIRTUAL_HOST - Our docker-gen container will watch for containers started with this variable set and will automatically generate a configuration for them in nginx.

VIRTUAL_PORT - This port will be registered with nginx so it knows what port to connect to the container on.

LETSENCRYPT_HOST & LETSENCRYPT_EMAIL - These two variables are used by the jrcs/letsencrypt-nginx-proxy-companion container to either create or renew the SSL certificate for the container.

You’ll also notice that once again I have defined the Docker network that the container should connect on.

Step 5

All that's left to do is run docker-compose up -d followed by docker ps, and you should now see a ghost container running along with our previous 3 containers.

You should now be able to open a browser, type in the blog container's address, and view your shiny new dockerised blog, or whatever container you started.

Summing up

In 5 steps we have created a reverse proxy that can auto-register new containers with SSL, and we have proven it by registering our first container. Not bad if I do say so myself.

I've included links to all the projects used throughout this post. I did just want to mention docker-compose-letsencrypt-nginx-proxy-companion by Evert Ramos. This was very helpful in getting my setup working, and while I have modified it a bit for my own uses, the bulk of the credit should go to him.

Creating an ASP.NET Core app on a Mac

Coming from a Windows background I took the plunge recently and swapped to a Mac. This has given me the perfect opportunity to get into .NET Core! In this post I’m going to take you through getting everything you need to get setup and create your first app.

Prerequisites

Let's just go over a few things to get us started.

.Net Core

Now we have chosen an editor, the next thing we need to do is install .Net Core itself.

Regardless of which editor you choose, this step will install the required command line tools and project templates to get started. Head over here to download the SDK and follow the installation steps.

NodeJS & NPM

Finally we need to install NodeJS and NPM, which can be found here. We need them both so we can install Yeoman, Bower and any other packages in the future. Once installed, run the following command.

npm install -g yo bower

Just for reference, Yeoman is a template generator and will be used to scaffold our .Net Core app.

Creating the Application with Yeoman

Now we need to install the ASP.Net Core template generator. This will be used by Yeoman to generate our application.

npm install -g generator-aspnet

For anyone unsure, the -g flag specifies that this package will be installed globally.

All that's left before we create our first app is to make a directory to put it in. I'm just going to create a directory in my Home folder and then move into it using the following commands.

mkdir hello-world
cd hello-world

The moment of truth is here, let’s fire the command to generate our app.

yo aspnet

You should be presented with some text asking if you want to submit some anonymous usage data, answer either yes or no. Next you’ll be asked what type of application you want to create, select Web Application Basic [without Membership and Authorisation].

Then choose which UI framework you want to use. I went with Semantic UI purely because I've not used it before, but choose whichever framework you feel comfortable with.

Finally give your application a name, I’ve gone with the classic Hello World.

Once you have completed the above steps, Yeoman will go off and create the app, and you should be left with something like the screenshot below.

Building & Running the App

Yeoman leaves us with some commands to run in order to get our application going. Let's work through those and hopefully we should be able to view our shiny new app.

cd HelloWorld - Changes to the new application folder created by Yeoman.

dotnet restore - Restores all Nuget packages required by the application.

dotnet build - This is an optional step as the run command does a build anyway.

dotnet run - As mentioned above this builds everything then spins up a server on localhost port 5000 in order to view the app.

If all has gone according to plan you should see the following in the terminal.

Open your favourite browser and navigate to http://localhost:5000 and you should be looking at your new ASP.Net Core Application.

Wrapping Things Up

So there you have it, a working ASP.Net Core application on Mac! I hope you found this post useful.

HTTPS with Nginx and Let's Encrypt

I’ve been wanting to get started on this blog for a while now but I wasn’t sure what to kick things off with. Then I realised that setting this blog up has been pretty interesting. So why not start with a post about that…

The Problem

When I set up this blog I had an issue with not being able to redirect my www subdomain to non-www over HTTPS. I kept receiving a Not Secure error from Chrome stating that I had no valid certificate.

My SSL certificate is provided by the awesome Let's Encrypt service. And after some Googling I realised that when I had set up the blog using Ghost's CLI, it had only created a certificate for the non-www version of my domain.

After further Googling I found out that I would need a second SSL certificate for my www subdomain to allow safe redirect to just codedaze.io.

The Solution

To start off I installed Git; you need this to clone Let's Encrypt from their official repo on GitHub.

sudo apt-get install git

Once that’s finished then you need to clone the repo using the following command.

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Finally, you need to head over to the new directory.

cd /opt/letsencrypt

Creating a new SSL certificate

Let’s Encrypt will perform a series of Domain Validations to check the authenticity of your domain, once these are satisfied then your new SSL certificate will be issued.

Run the following command to generate your certificates. You can use the -d parameter to add additional domains that you wish to generate certificates for as per the example below.

sudo -H ./letsencrypt-auto certonly --standalone -d example.com -d www.example.com

But as I already had the non-www certificate I just ran this version of the command.

sudo -H ./letsencrypt-auto certonly --standalone -d www.codedaze.io

You will be asked to provide your email address so you can be contacted in emergencies or when your certificate is due to expire. So enter this when prompted.

Next you will be asked to agree to the terms of service and then if you wish to submit your email address for some data analysis and marketing. I leave this one up to you.

Hopefully the next thing you will see is something like the following:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/www.example.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/www.example.com/privkey.pem
   Your cert will expire on 2017-12-31. To obtain a new or tweaked
   version of this certificate in the future, simply run
   letsencrypt-auto again. To non-interactively renew *all* of your
   certificates, run "letsencrypt-auto renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Congratulations! You now have your shiny new SSL certificate(s). So how do you get Nginx to play ball?
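Before moving on, it's worth noting that these certificates only last 90 days, so the renew command mentioned in the output above is worth automating. A sketch of a cron entry (the path mirrors the install location used earlier; adjust to taste):

```shell
# m h dom mon dow  command
# Attempt renewal twice a day; renew only touches certificates that are
# close to expiry, then nginx is reloaded to pick up the new files.
0 3,15 * * * /opt/letsencrypt/letsencrypt-auto renew --quiet && systemctl reload nginx
```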

Nginx Configuration

Most of my Nginx configuration was done by the Ghost CLI when I set up my blog. I have added a couple of redirects from www to non-www, for both the HTTP and HTTPS variants.

In order to fix my issue with the redirect from http://www.codedaze.io to https://chrissainty.com, all I had to do was go into the server block I had configured for the HTTPS redirect from www.codedaze.io to codedaze.io and add the locations of the new certificates I had generated above.

server {
	listen 443 ssl http2;
	listen [::]:443 ssl http2;

	server_name www.codedaze.io;
	return 301 https://chrissainty.com$request_uri;

	ssl_certificate /etc/letsencrypt/live/www.codedaze.io/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/www.codedaze.io/privkey.pem;
	include /var/www/codedaze.io/system/files/ssl-params.conf;
}
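For completeness, the plain-HTTP variant needs its own server block too, otherwise requests to http://www.codedaze.io would never reach HTTPS at all. A sketch of what that redirect looks like:

```nginx
server {
	listen 80;
	listen [::]:80;

	server_name www.codedaze.io;
	return 301 https://chrissainty.com$request_uri;
}
```

No certificate directives are needed here since the connection is plain HTTP; the 301 hands the browser over to the HTTPS server block above.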

Conclusion

You may have noticed by now that I am very new to Nginx and Linux in general, so I'm not presenting this as a perfect setup. I'm sure I will soon be writing a post showing how horrible this configuration is.

But for now I’m so happy with myself for getting this issue sorted. I’m also really happy that it has inspired me to write about it. Something which I really want to do much more of.

So in conclusion, if this helps you in any way then great; if you find it totally useless then please accept my apologies. But this is my first blog post and I'm pretty damn happy with myself for getting it done, and hopefully it's the start of many.