Building a Static File Server in ASP.NET Core RC2 with the CLI

The Pitch

The fine folks over at Microsoft released ASP.NET Core RC2 this week, a release that dramatically changed a lot of things under the hood and introduced the new dotnet CLI, a command line interface that works on Windows, OSX and Linux. You can find installers for your platform at dot.net.

With this release, most of the major interfaces and APIs are locked in and will most likely not change between now and RTM, so it felt like a good time to start digging in and learning the new platform. You can, of course, install the Visual Studio tooling and use File > New Project to generate everything for you and start working with the new MVC/Entity Framework bits right away. But I am looking for a deeper understanding of how these things fit together, so I set out to build a simple static file server using only the CLI and a text editor. I also wanted to do this completely in OSX.

The Setup

To get a development environment set up, you first need to install the platform. On OSX, this involves installing OpenSSL followed by the .NET Core SDK via a pkg package. Detailed instructions can be found here.

You will also need a text editor; you are free to use any you like, but Visual Studio Code has some nice built-in features that make working with C# and .NET Core similar to what you may be used to in Visual Studio on Windows. I found that, while a bit buggy, the Insiders build offered the best overall experience.

Once everything is installed, you can verify your environment by firing up bash and issuing the following commands:

dotnet --info will display details about the version of the CLI and your environment like this.

dotnet info output

code-insiders -v will display the version of Visual Studio Code like this.

code version output

Note: These validation steps work similarly on Linux and Windows (via PowerShell).

Creating a Package

Now that the development environment is set up, we can create the first code asset. We can do this fairly simply via the dotnet CLI. In bash, execute the following commands:

mkdir static_server

cd static_server

dotnet new

dotnet new output

This will create a bare-minimum .NET Core “Hello World” console project. At this level the project consists of two files: Program.cs and project.json. project.json is a metadata file describing the project and its dependencies. Program.cs is our buildable code asset containing the executable functionality.

project.json

At a minimum, project.json has three bits of needed information: the version of the project, the project dependencies, and the framework the project is intended to execute on. Because this is a console application, the buildOptions section is also needed to instruct the compiler to emit an entry point.

The only dependency the package currently has is Microsoft.NETCore.App, a metapackage that references a ton of other smaller packages. Think of this as the stdlib. We can even declare explicitly that it is a platform dependency, which will affect how the application is published. Platform dependencies are assumed to already be available in the deployment environment, so the deployment package does not need to carry them.

 
{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    }
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Program.cs

Program.cs doesn’t really need much explanation. This is the simplest hello world program that all C# developers have seen millions of times: a class called Program with a static Main method that writes “Hello World!” to the console.

 
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Executing the Package

To execute the package there are two steps:

1. Restore Packages

The project has a single dependency that needs to be pulled onto the system in order to compile the package. This can be accomplished using the CLI in bash:

dotnet restore will inspect project.json and resolve all of the dependent packages.

dotnet restore output

Note: If this is the first time you have run a restore on your machine, your output will be considerably larger than what is displayed in the screenshot above. The Microsoft.NETCore.App metapackage has dependencies on a large number of other packages that will be downloaded from NuGet.org and cached on your local machine.

2. Compile and Run

To run the project you need to build the package and execute it. This can be accomplished in two separate steps using the build and run commands, or in one using just the run command, which will perform the build for you if needed.
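If you want to see the two-step version explicitly, dotnet build will compile the package on its own:

dotnet build

The single run command then builds (if needed) and executes in one shot: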

dotnet run

dotnet run

Finally!

Let’s pause here and take a moment to celebrate .NET executing on OSX.

dotnet executing on OSX

Making the Package Respond to HTTP Requests

At this point we have a console application, but what we really want is a web application that responds to HTTP requests. To do this we will take a dependency on the Kestrel web server NuGet package.

1. Update project.json

Simply add the Kestrel dependency to the project.json.

 
{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

2. Run a Restore

dotnet restore will pull all the needed packages from NuGet.org.

dotnet restore

3. Update Program.cs

Now we need to wire up Kestrel in our application and tell it how to respond to HTTP requests.

 
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {      
            app.Run(async ctx => await ctx.Response.WriteAsync($"Hello World!"));
        }        
        
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }
}

The entry point to the application has changed to use a builder to wire up Kestrel and tell it to begin listening for requests. At a minimum, you have to tell the builder to use Kestrel and which startup class to use.

Note: In this example, I have combined my program’s entry point and its startup class into a single type. This is a convenience for the purposes of this article so I don’t have to introduce multiple code files. Typically this would be two separate code files, Program.cs and Startup.cs; Visual Studio will set it up that way by default.
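For reference, the split version would look roughly like this (a sketch along the lines of what the Visual Studio template produces, not its exact output):

 
// Program.cs - the entry point only
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }
}

// Startup.cs - the request pipeline configuration
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            // Respond to every request with a fixed string.
            app.Run(async ctx => await ctx.Response.WriteAsync("Hello World!"));
        }
    }
}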

The Configure method is called automatically by the web host bootstrapping code and is the point where you tell the application how to handle requests. This example responds to all requests with “Hello World!”.

4. Compile and Run

Executing dotnet run from bash now starts the application listening on port 5000.

dotnet run
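If you want to check it from a second terminal, curl will do; Kestrel is listening on http://localhost:5000 in this setup:

curl http://localhost:5000

which should print Hello World! right back at you.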

Making the Package Serve Up Files

Now that we have an application that responds to HTTP requests, let’s get it to serve up files from the file system. To do this we will take another dependency, this time on the StaticFiles NuGet package.

1. Update project.json and Run a Restore

Add the following dependency declaration to your project.json file.

 
{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final",
    "Microsoft.AspNetCore.StaticFiles": "1.0.0-rc2-final" 
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

dotnet restore will pull all the needed packages from NuGet.org.

2. Create a wwwroot directory with an index.html

mkdir wwwroot will create the directory.

touch wwwroot/index.html will create an empty index.html file to serve up.

3. Add Some Boilerplate HTML

Add the following to index.html.

 
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Hello World!</title>
    </head>
    <body>
        <p>Hello World!</p>
    </body>
</html>

4. Update Program.cs

Now we need to wire up our application to serve static files.

 
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {      
            app.UseStaticFiles();
        }        
        
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseKestrel()
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }
}

There are two modifications here. First, the Configure method now calls the UseStaticFiles extension method on the IApplicationBuilder interface. This tells our application to use the static files middleware to respond to HTTP requests. Second is the addition of the call to UseContentRoot in the WebHostBuilder chain. This tells the application its root directory, which is where it will look for the wwwroot folder we created. Without this line, our application would simply serve up 404s because it wouldn’t know where wwwroot is located.
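As an aside, the static files middleware is not tied to wwwroot. A minimal sketch, assuming a using for Microsoft.Extensions.FileProviders (the assets folder name and /assets prefix are just examples for illustration), the Configure method could additionally serve another folder like this:

 
public void Configure(IApplicationBuilder app)
{
    // Serve wwwroot at the site root as before.
    app.UseStaticFiles();

    // Also serve the ./assets folder under the /assets URL prefix.
    // Requires: using System.IO; and using Microsoft.Extensions.FileProviders;
    app.UseStaticFiles(new StaticFileOptions
    {
        FileProvider = new PhysicalFileProvider(
            Path.Combine(Directory.GetCurrentDirectory(), "assets")),
        RequestPath = "/assets"
    });
}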

5. Compile and Run

Executing dotnet run from bash now starts the application listening on port 5000. Any file we add to the wwwroot directory will now be served up.

Note: At this point we have to explicitly request the file we want by name. Hitting the root of our site will return a 404.
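For example, with the app still running on the default port:

curl http://localhost:5000/index.html

returns the boilerplate HTML, while

curl http://localhost:5000/

comes back empty-handed with a 404.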

curl localhost

WAT?!

The 404 may at first seem strange, but with ASP.NET Core everything must be explicitly opted in. We can wire up default documents by adding a single line to our Configure method.

 
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.UseDefaultFiles();    
            app.UseStaticFiles();
        }        
        
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseKestrel()
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }
}

And now we have default docs being served up.

curl localhost

Note: The order of operations is important here. If we were to reverse the calls, the static files middleware would respond with a 404 before the default files middleware had a chance to inspect the request and insert a default file into the response. This holds true for all middleware wired up in the Configure method.

With RC2 you can use the UseFileServer method to add both default documents and file serving at once, allowing you to sidestep this possible configuration issue. UseFileServer takes an optional parameter, enableDirectoryBrowsing, that is false by default. Enabling this flag will serve up HTML listings of the files available in each directory that does not have a default document.
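In its simplest form, that one call replaces the two lines we just added:

 
public void Configure(IApplicationBuilder app)
{
    // Wires up default documents and static file serving together,
    // already in the correct order; directory browsing stays off.
    app.UseFileServer();
}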

If you run the application built up to this point with directory browsing enabled, you will get an error.

Unhandled Exception: System.InvalidOperationException: Unable to resolve service for type ‘System.Text.Encodings.Web.HtmlEncoder’ while attempting to activate ‘Microsoft.AspNetCore.StaticFiles.DirectoryBrowserMiddleware’.

This error is generated because we have instructed our application to use the DirectoryBrowserMiddleware, but we failed to make available all of the dependent services the middleware needs.

To fix this we need to modify the application and hook into the dependency injection bootstrap method, ConfigureServices, adding a call to AddDirectoryBrowser to ensure that all required dependencies for directory browsing are registered for injection.

 
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDirectoryBrowser();
        }
        
        public void Configure(IApplicationBuilder app)
        {
            app.UseFileServer(enableDirectoryBrowsing: true);
        }        
        
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseKestrel()
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }
}

Summary

In this post, I have shown how to create a simple static file server with ASP.NET Core RC2 from the command line on OSX, using nothing but a text editor. This application will run on Windows, OSX and Linux. The complete source for this article can be found here.


Dependency Could Not Be Resolved Error in Visual Studio 2015 RTM

Microsoft dropped the RTM (release to manufacturing) version of Visual Studio 2015 on Monday with great fanfare. I, like many other .NET developers, hopped on MSDN, downloaded my favorite flavor and installed it immediately. I was pretty happy to see that I could open my current project, compile and run without a single change to the solution or project files. There also appears to be a dramatic increase in the performance of the text editors. Yay!

Yesterday, I decided to play around with the shipped bits of ASP.NET vNext. It is obviously still in its beta phase, but I figured I should be able to whip up a little app to get my feet wet. A quick Google search dropped me on a tutorial that I started working through.

When adding the Microsoft.AspNet.Diagnostics package to project.json, I started getting weird errors. It only seemed to happen when I was working in a new Empty Web project and attempting to add packages via the project.json or the NuGet Package Manager. I could create a new Web Application and the Diagnostics package was added with no issue.

Here are the symptoms:

Action: Create a new Empty Web project and attempt to add the Microsoft.AspNet.Diagnostics package to the project.json.

Result: The Error List pane will display the build error “Dependency Microsoft.AspNet.Diagnostics >= 1.0.0-beta5 could not be resolved”.

Action: In PowerShell, navigate to the solution root and run a dnu restore. Note: You may need to run dnvm use 1.0.0-beta5 first.

Result: A large stack dump with the following error: “System.Net.WebException: The remote server returned an error: (407) Proxy Authentication Required.”

I am currently working at a large state institution, and all of our traffic goes through a proxy. Apparently pass-through authentication is not being handled properly somewhere deep in the bowels of beta5. After much googling and issue-report reading, I discovered a thread, which I have long since lost, that suggested a fix.

The solution was fairly simple. For each installed runtime in your .dnx folder, you need to add a dnx.exe.config to enable proxy settings. You should then be able to pull packages with no issues.
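I no longer have the exact file that gets pulled down below, but the relevant bit is just the standard .NET proxy configuration, something along these lines:

 
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.net>
    <!-- Use the system proxy and pass the current user's credentials through -->
    <defaultProxy enabled="true" useDefaultCredentials="true">
      <proxy usesystemdefault="true" />
    </defaultProxy>
  </system.net>
</configuration>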

Here is a little PowerShell script that will automate the process for you.

 
$source = "http://git.io/vYLij";
Get-ChildItem "~\.dnx\runtimes" -Recurse -Force -Filter "dnx.exe" | 
	%{ $_.DirectoryName } |
	Get-Unique |
	ForEach { Invoke-WebRequest $source -OutFile "$_\dnx.exe.config" }

A Non-Trivial Example of MediatR Usage

I have been using Jimmy Bogard’s MediatR library on my current project for the last few months. I first read about the project on his blog, which has a series of posts including his explanation of the benefits of using the Mediator pattern.

I took the leap and gave the library a try. I was quickly impressed by how clean and simple my controllers became, which makes them really easy to test. I fell into a pattern of separating actions that change state in my system from queries of the current state, a CQRS-lite kind of pattern. I also liked the concept of the Mediator Pipeline described in Jimmy’s blog post. It creates a nice separation of concerns where I can focus on one bit of command processing at a time and compose the entire process at runtime.

Let’s take a look at a full example taken directly from my current application. The application is a pseudo-multi-tenant financial tracking system intended to track funds on behalf of residents of various institutions. These institutions belong to a greater organizational hierarchy. For an institution to use the system, it first has to be added to the appropriate place in that hierarchy.

The data entry form looks like this:

organization editor

The form allows a user to specify various details about the organization, including an Active Directory group containing the organization’s users, a name, and an abbreviation, all of which must be unique. There are some simple client-side validations that make AJAX calls to ensure the Active Directory info actually exists in Active Directory. The form also allows a user to select which features are enabled for the organization; I intend for this list to grow as features are added throughout the life of the project.

The controller handler for the post back of this form looks like this:

 
// POST: Organizations/Create
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(OrganizationEditorForm form)
{
    Logger.Trace("Create::Post::{0}", form.ParentOrganizationId);

    if (ModelState.IsValid)
    {
        var command = new AddOrEditOrganizationCommand(form, ModelState);
        var result = await mediator.SendAsync(command);

        if (result.IsSuccess)
            return RedirectToAction("Index", new { Id = result.Result });

        ModelState.AddModelError("", result.Result.ToString());
    }

    return View(form);
}

I lean on MVC model binding to hydrate a form view model and do basic data-attribute validation. If that validation fails, I simply redisplay the form with the ModelState errors shown. This just re-checks the client-side validation on the server.

Next, I construct an AddOrEditOrganizationCommand containing my form view model and the current ModelState. This allows me to use the ModelState to attach errors to the form for server-side-only validations. The command object is fairly simple, just containing the bits of data needed to perform the work.

public class AddOrEditOrganizationCommand : IAsyncRequest<ICommandResult>
{
    public OrganizationEditorForm Editor { get; set; }
    public ModelStateDictionary ModelState { get; set; }

    public AddOrEditOrganizationCommand(OrganizationEditorForm editor,
    	ModelStateDictionary modelState)
    {
        Editor = editor;
        ModelState = modelState;
    }
}

The command is sent using a mediator reference and a result is retrieved. My result types (SuccessResult and FailureResult) are based on a simple interface:

public interface ICommandResult
{
    bool IsSuccess { get; }
    bool IsFailure { get; }
    object Result { get; set; }
}

If the result is a success, I redirect the user to the newly created organization. On failure I add any message passed back by the failure result to ModelState and redisplay the form.
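The concrete result types are nothing fancy; a minimal sketch of SuccessResult and FailureResult (simplified from what I actually have) looks like this:

public class SuccessResult : ICommandResult
{
    public SuccessResult(object result) { Result = result; }

    public bool IsSuccess { get { return true; } }
    public bool IsFailure { get { return false; } }
    public object Result { get; set; }
}

public class FailureResult : ICommandResult
{
    public FailureResult(object result) { Result = result; }

    public bool IsSuccess { get { return false; } }
    public bool IsFailure { get { return true; } }
    public object Result { get; set; }
}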

Now for the various handlers that are needed to process this command. The first step is to do any server-side validation of my form before attempting to change state in the database. I do this with a pre-processor called OrganizationEditorFormValidatorHandler.

public class OrganizationEditorFormValidatorHandler : CommandValidator<AddOrEditOrganizationCommand>
    {
        private readonly ApplicationDbContext context;

        public OrganizationEditorFormValidatorHandler(ApplicationDbContext context)
        {
            this.context = context;
            Validators = new Action<AddOrEditOrganizationCommand>[]
            {
                EnsureNameIsUnique, EnsureGroupIsUnique, EnsureAbbreviationIsUnique
            };
        }

        public void EnsureNameIsUnique(AddOrEditOrganizationCommand message)
        {
            Logger.Trace("EnsureNameIsUnique::{0}", message.Editor.Name);

            var isUnique = !context.Organizations
                .Where(o => o.Id != message.Editor.OrganizationId)
                .Any(o => o.Name.Equals(message.Editor.Name,
                		StringComparison.InvariantCultureIgnoreCase));

            if(isUnique)
                return;

            message.ModelState.AddModelError("Name", 
            		"The organization name ({0}) is in use by another organization."
                    .FormatWith(message.Editor.Name));
        }

        public void EnsureGroupIsUnique(AddOrEditOrganizationCommand message)
        {
            Logger.Trace("EnsureGroupIsUnique::{0}", message.Editor.GroupName);

            var isUnique = !context.Organizations
                .Where(o => o.Id != message.Editor.OrganizationId)
                .Any(o => o.GroupName.Equals(message.Editor.GroupName,
                		StringComparison.InvariantCultureIgnoreCase));

            if (isUnique)
                return;

            message.ModelState.AddModelError("Group", 
            	"The Active Directory Group ({0}) is in use by another organization."
                    .FormatWith(message.Editor.GroupName));
        }

        public void EnsureAbbreviationIsUnique(AddOrEditOrganizationCommand message)
        {
            Logger.Trace("EnsureAbbreviationIsUnique::{0}",
            		message.Editor.Abbreviation);

            var isUnique = !context.Organizations
                .Where(o => o.Id != message.Editor.OrganizationId)
                .Any(o => o.Abbreviation.Equals(message.Editor.Abbreviation,
                		StringComparison.InvariantCultureIgnoreCase));

            if (isUnique)
                return;

            message.ModelState.AddModelError("Abbreviation", 
            		"The Abbreviation ({0}) is in use by another organization."
                        .FormatWith(message.Editor.Name));
        }
    }

The CommandValidator<T> type contains simple helper methods for iterating over the defined validation methods and executing them. Each validation method performs its specific logic and adds a ModelState error on failure. All server-side validation happens here.
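CommandValidator<T> itself is just a small base class. A minimal sketch, with the caveat that the pre-request handler interface here follows the pipeline pattern from Jimmy's post and lives in my project rather than in MediatR:

using System;
using System.Threading.Tasks;

// Pipeline-style pre-request handler interface (project-defined, not MediatR's).
public interface IAsyncPreRequestHandler<in TRequest>
{
    Task Handle(TRequest request);
}

public abstract class CommandValidator<TCommand> : IAsyncPreRequestHandler<TCommand>
{
    // Concrete validators assign their validation methods here.
    protected Action<TCommand>[] Validators { get; set; }

    public Task Handle(TCommand command)
    {
        // Run each validation method; each one attaches its own
        // ModelState errors when it finds a problem.
        foreach (var validate in Validators)
            validate(command);

        return Task.FromResult(0);
    }
}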

Next up is the actual command handler that changes state in the database. Because adding and updating an organization are so similar, I handle both actions with a single handler.

public class AddOrEditOrganizationCommandHandler : IAsyncRequestHandler<AddOrEditOrganizationCommand, ICommandResult>
    {
        public ILogger Logger { get; set; }

        private readonly ApplicationDbContext context;

        public AddOrEditOrganizationCommandHandler(ApplicationDbContext context)
        {
            this.context = context;
        }

        public async Task<ICommandResult> Handle(AddOrEditOrganizationCommand message)
        {
            Logger.Trace("Handle");

            if (message.ModelState.NotValid())
                return new FailureResult("Validation Failed");

            if (message.Editor.OrganizationId.HasValue)
                return await Edit(message);
            
            return await Add(message);
        }


        private async Task<ICommandResult> Add(AddOrEditOrganizationCommand message)
        {
            Logger.Trace("Add");

            var organization = message.Editor.BuildOrganiation(context);
            
            context.Organizations.Add(organization);

            await context.SaveChangesAsync();

            Logger.Information("Add::Success Id:{0}", organization.Id);

            return new SuccessResult(organization.Id);
        }

        private async Task<ICommandResult> Edit(AddOrEditOrganizationCommand message)
        {
            Logger.Trace("Edit::{0}", message.Editor.OrganizationId);

            var organization = context.Organizations
            		.Find(message.Editor.OrganizationId);
                    
            message.Editor.UpdateOrganization(organization);

            await context.SaveChangesAsync();

            Logger.Information("Edit::Success Id:{0}", organization.Id);

            return new SuccessResult(organization.Id);
        }
    }

The main Handle method of this handler is fairly simple. It checks the validation state of the form and returns a failure result if the previous validation step failed. It then determines whether we are adding or updating the organization and delegates that work to the specific methods.

The add method has the editor build a new organization entity and saves it to the database. A success result is returned with the id of the saved entity.

The edit method loads the entity, has the editor update it and saves changes to the database. A success result is returned with the id here as well.

While not obvious from the code above, I have yet to do anything with the features that are enabled for the organization. I wanted to separate the logic of processing the organization from the handling of associating the enabled features with that organization.

So I have a post-processor handler called UpdateOrganizationFeaturesPostHandler to provide that functionality.

public class UpdateOrganizationFeaturesPostHandler : IAsyncPostRequestHandler<AddOrEditOrganizationCommand, ICommandResult>
    {
        public ILogger Logger { get; set; }

        private readonly ApplicationDbContext context;

        public UpdateOrganizationFeaturesPostHandler(ApplicationDbContext context)
        {
            this.context = context;
        }

        public async Task Handle(AddOrEditOrganizationCommand command, 
        	ICommandResult result)
        {
            Logger.Trace("Handle");

            if (result.IsFailure)
                return;
            
            var organization = await context.Organizations
                                        .Include(o => o.Features)
                                        .FirstAsync(o => o.Id == (int) result.Result);

            
            
            var enabledFeatures = command.Editor.EnabledFeatures
                                    .Select(int.Parse).ToArray();

            //disable features
            organization.Features
                .Where(f => !enabledFeatures.Contains(f.Id))
                .ToArray()
                .ForEach(f => organization.Features.Remove(f));
                    
            //enable features    
            context.Features
                .Where(f => enabledFeatures.Contains(f.Id))
                .ToArray()
                .ForEach(organization.Features.Add);

            await context.SaveChangesAsync();
        }
    }

This handler does nothing if the command result is a failure. On success it loads the organization, including its currently enabled feature set, which will be empty for a new organization and may contain some features for an updated one.

Next I pull the enabled features out of the editor form and modify the organization's associated features, removing disabled features and adding enabled ones.

These handlers represent the entire pipeline of handling the AddOrEditOrganizationCommand so far in my system.

Here are some lessons learned from my usage of MediatR. It decomposes processes nicely into tiny, easily testable chunks, but I have encountered some weirdness in specific cases. This weirdness is most definitely on my end and not the library's. 8)

One, when an editor form has database-retrieved lookup lists, I have not found a clean way of rehydrating them for redisplay of the form. At the moment I am handling that in a custom model binder, and that just feels dirty.

My form get controller method looks like this:

// GET: Organizations/Create/{1}
public async Task<ActionResult> Create(int? id)
{
    Logger.Trace("Create::Get::{0}", id);

    var query = new OrganizationEditorFormQuery(parentOrganizationId: id);
    var form = await mediator.SendAsync(query);
    return View(form);
}

And the model binder like this:

[ModelBinderType(typeof(OrganizationEditorForm))]
public class OrganizationEditorFormModelBinder : DefaultModelBinder
{
    public ILogger Logger { get; set; }

    private readonly ApplicationDbContext dbContext;

    public OrganizationEditorFormModelBinder(ApplicationDbContext dbContext)
    {
        this.dbContext = dbContext;
    }

    public override object BindModel(ControllerContext controllerContext,
    	ModelBindingContext bindingContext)
    {
        Logger.Trace("BindModel");

        var form = base.BindModel(controllerContext, bindingContext)
                .CastOrDefault<OrganizationEditorForm>();

        if (form.ParentOrganizationId.HasValue)
            form.ParentOrganization = dbContext.Organizations
                .FirstOrDefault(o => o.Id == form.ParentOrganizationId);

        return form;

    }
}

Both ensure that a parent organization is a part of the editor form. I need to come up with a better way of doing this that works for both the initial load of the form and the post-back redisplay of the form. As the forms become more complex, this becomes more of an issue. I may need to start doing some asynchronous callbacks from the client to get lookup list values; the problem goes away at that point, I think.

Two, MediatR has a concept of notifications where you can fire and forget almost event-like information out to parties that care about a specific change. In my system, I am eventually going to need to take some action when a feature is enabled or disabled.
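To make that concrete, a notification for the feature case would look roughly like this (FeatureEnabledNotification and its handler are hypothetical names; only the interfaces come from MediatR):

using System.Threading.Tasks;
using MediatR;

// Hypothetical notification describing the change.
public class FeatureEnabledNotification : IAsyncNotification
{
    public int OrganizationId { get; set; }
    public int FeatureId { get; set; }
}

// Hypothetical handler; any number of these can react to the same notification.
public class FeatureEnabledNotificationHandler : IAsyncNotificationHandler<FeatureEnabledNotification>
{
    public Task Handle(FeatureEnabledNotification notification)
    {
        // Kick off whatever provisioning the newly enabled feature requires.
        return Task.FromResult(0);
    }
}

The post handler above could then publish one of these after saving, which is exactly where my hesitation below comes in.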

I am not sure if a MediatR notification is the right way to approach this. Taking a dependency on my mediator inside a handler feels dirty to me, but it's just a feeling. I need to reach out to Jimmy and see what his thoughts are on such a strategy. Am I creating a tangled nightmare by allowing handlers to issue notifications or even other commands? I am just not sure about that yet.

At the end of the day, I like what MediatR has done to my codebase. My controllers are nice and skinny and my handlers are tight and focused. That alone makes the pattern/library worth looking into.
