Tuesday, October 18, 2011

Applying SRP to WebForms

Most applications based on ASP.Net WebForms fall foul of good OO design practices because of the page life cycle and the plethora of events exposed by the many web controls. One of the key principles of good OO design is the Single Responsibility Principle (SRP). I often find that this is either completely ignored or not applied enough when an application is based on ASP.Net WebForms. SRP states that every object should have a single responsibility and that responsibility should be encapsulated by its class.

With WebForms, business logic is written in the many event handlers which are part of the WebForms model. This is a hangover from the Visual Basic days when true client-server applications were being built. Here the UI was a procedural wrapper over a set of Stored Procedures.

SRP forces you to ask if the code you are writing belongs in that class. If it does not, then a new class is needed for the job. Following this practice leads to a well-factored code base, full of objects each doing one job. In the case of the WebForm, it is now only responsible for building the UI and handling the HTTP request and response. This avoids complex and overly long WebForms that are hard to understand and difficult to debug.

Violating SRP

In this example, the Page is a form gathering contact details from the visitor. The visitor could have arrived from a marketing campaign. The tracking codes for the campaign are stored in a cookie as a comma-separated list of key-value pairs. The four values extracted from the cookie must be included when the form is submitted to the process which writes to the database.

A common implementation is to extract the values from the Cookie in the page load event and store them in a field on the form. When the event fires to save the form, the values are passed to the method which writes to the database.

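A sketch of what that typically looks like (the post's original listing is not reproduced here; the cookie name, field names and the SaveContactDetails call are all hypothetical):

    public partial class ContactForm : System.Web.UI.Page
    {
        // Values captured from the marketing cookie, held until the form is saved
        private string _source;
        private string _campaign;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Cookie parsing mixed directly into the page
            var cookie = Request.Cookies["marketing"];
            if (cookie == null) return;

            foreach (var pair in cookie.Value.Split(','))
            {
                var parts = pair.Split('=');
                if (parts.Length != 2) continue;
                if (parts[0] == "source") _source = parts[1];
                if (parts[0] == "campaign") _campaign = parts[1];
                // ...and so on for the other two values
            }
        }

        protected void SaveButton_Click(object sender, EventArgs e)
        {
            // The cookie values travel with the rest of the form data
            SaveContactDetails(NameTextBox.Text, EmailTextBox.Text, _source, _campaign);
        }
    }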

I feel this method of working is poor. Sure, it will work. You can extract the values and send them to the database. However, the Page should only be responsible for managing the incoming Request, the outgoing Response and building the UI. Also, what if there are many forms on the site which have to capture this information? The code will be duplicated in many places, causing problems if the name of the cookie changes. Instead I prefer to hand the task of capturing data from the cookie to a couple of classes which encapsulate the process and return a single object for the marketing data.

Applying SRP


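A sketch of the slimmed-down code-behind (member and control names are again assumptions rather than the post's original listing):

    public partial class ContactForm : System.Web.UI.Page
    {
        private IMarketingTracker _tracker;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Hand the cookies to the builder and keep hold of the result
            _tracker = new MarketingTrackerBuilder().Build(Request.Cookies);
        }

        protected void SaveButton_Click(object sender, EventArgs e)
        {
            // The tracker is passed on with the rest of the form data
            SaveContactDetails(NameTextBox.Text, EmailTextBox.Text, _tracker);
        }
    }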

All that this page is responsible for is passing the CookieCollection to the MarketingTrackerBuilder object. It then stores the result in a private field to be passed on to the database when the form is submitted.

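A sketch of that class and its interface (the four property names are assumptions; the original listing is not shown):

    public interface IMarketingTracker
    {
        string Source { get; }
        string Medium { get; }
        string Campaign { get; }
        string Keyword { get; }
    }

    // A simple DTO holding the four campaign values from the cookie
    public class MarketingTracker : IMarketingTracker
    {
        public string Source { get; set; }
        public string Medium { get; set; }
        public string Campaign { get; set; }
        public string Keyword { get; set; }
    }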

This class is essentially a DTO. It has no other job than to store the four pieces of information about the campaign which brought the visitor to the website. It also implements an interface. We will see why that is useful later.

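A sketch of the builder, assuming the usual System.Web, System.Linq and System.Collections.Generic usings; the cookie name, the key names and the exact parsing are guesses based on the description that follows:

    public class MarketingTrackerBuilder
    {
        // The cookie name is defined in one place only
        private const string CookieName = "marketing";

        public IMarketingTracker Build(HttpCookieCollection cookies)
        {
            var cookie = cookies[CookieName];
            if (cookie == null || string.IsNullOrEmpty(cookie.Value))
            {
                return new NullMarketingTracker();
            }

            // The cookie value is a comma separated list of key=value pairs
            var values = cookie.Value.Split(',')
                .Select(pair => pair.Split('='))
                .Where(parts => parts.Length == 2)
                .ToDictionary(parts => parts[0], parts => parts[1]);

            return new MarketingTracker
            {
                Source = ValueOrEmpty(values, "source"),
                Medium = ValueOrEmpty(values, "medium"),
                Campaign = ValueOrEmpty(values, "campaign"),
                Keyword = ValueOrEmpty(values, "keyword")
            };
        }

        private static string ValueOrEmpty(IDictionary<string, string> values, string key)
        {
            return values.ContainsKey(key) ? values[key] : string.Empty;
        }
    }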

Most of the work is being done in the Builder. The class knows how to extract the fields from the cookie. It is also where the name of the cookie is defined. Keeping this information here means that if anything to do with reading the cookie changes, it will only change here. Often this kind of code is scattered around many WebForm pages. Then a change to the implementation requires a search and replace on the entire code base.

When the Build method is called it first checks that the cookie exists. If the check fails it returns an instance of a NullMarketingTracker.

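And the Null Object itself might be nothing more than:

    // Returned by the builder when the marketing cookie is missing
    public class NullMarketingTracker : IMarketingTracker
    {
        public string Source { get { return string.Empty; } }
        public string Medium { get { return string.Empty; } }
        public string Campaign { get { return string.Empty; } }
        public string Keyword { get { return string.Empty; } }
    }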

The NullMarketingTracker object is the reason for using the IMarketingTracker interface. We are now free to substitute the type returned as long as we code to the interface. If you review the code you can see that all references to the MarketingTracker have used the IMarketingTracker interface.

Now when values are written to the database, there is no need to check for null strings first.

Summary

The Single Responsibility Principle is a great way to think about structuring code. By applying this to the WebForms Page object I decided that its only responsibility is to deal with the incoming request and the outgoing response. By further applying it to the code which captures the cookie data, the final design is well structured and easily maintained. If the implementation changes then the change will not ripple through the code base.

I find that this is a good way to work and a great way to keep the code-behind files readable and manageable.

Friday, October 07, 2011

Removing ignored files from a git repository

When I am using TFS, Visual Studio manages the files which should not be committed. So when I create a git repository I often forget to add the .gitignore file. The first reminder I get about my oversight is when I see all the DLLs being added during the first commit.

Today I decided to find out how to clean up the repository. First I added this .gitignore file to my repository:

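The exact file is not reproduced here, but a typical .gitignore for a Visual Studio solution looks something like this:

    bin/
    obj/
    *.user
    *.suo
    _ReSharper*/
    packages/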

Then I searched the internet. The first hit from Google was this post by Aral Balkan. The content and the comments provided me with all the information I needed to manage the git repository.

Searching and cleaning the repository

An instance of a git repository can be thought of as an isolated file system. As such, commands can be run against it in the same way as on a normal file system.
The first command I needed was git ls-files, which works in the same way as ls. The command git ls-files -i -X .gitignore lists all the files in the repository which would have been excluded had I remembered to set the .gitignore.
Removing a file from git is done using git rm. As git is a versioned file system, there is the file on disk and a reference to that file in the index. The command git rm --cached will remove the reference from the index but leave the file on disk.

A script to do that

Manually removing each file from the index would take some time. It would also go against all of my computing instincts. The job needs a script.

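The original script is not shown, but a minimal version of the loop described below might be (assuming file names without spaces):

    for file in $(git ls-files -i -X .gitignore); do
        git rm --cached "$file"
    done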

Here I simply loop round the results from git ls-files, sending each one to git rm. I am sure there are many ways to achieve the same result but this method worked well for me. I am using git bash on Windows.

Wednesday, October 05, 2011

The wonderful backbone.js

I recently gave a presentation on backbone.js at the Brighton Alt.net meeting. During this talk I demonstrated how Backbone.js can be used to organise JavaScript code into manageable layers. Its Models and Collections manage the storing and retrieving of data. Views provide a mechanism for arranging the UI into manageable chunks. It also has an event bus which helps reduce coupling between functions. Altogether, backbone brings order to the often chaotic world of client-side development.

For the demonstration, I made a shopping list application which is available on github. Included is a web service which is used to manage the shopping list. You will need to install node.js to run the web service.

A traditional view of MVC

When first looking at backbone, I was thinking of an MVC framework in terms of the ASP.Net implementation. Here, the framework does not impose anything upon the model. The model is full of classes to capture the state and the behaviour of the system. For my shopping list it would contain types for an item, the list class, the price of the item and the state of the item. All of these would have methods which capture the behaviour. This model consists of a lot of small classes working together to define the system.

The controller is responsible for incoming requests. It will validate the request and then process it. If it is a query it will gather the required data and return it. If it is a command it will find a handler and update the model.

When complete it will load the correct view passing in the state required to render it.

The view uses the passed-in data to create the representation requested by the client. Typically this will be an HTML page. The view is also where we tend to think of the client in a server-based MVC framework, mainly because this is where we put all the client-side JavaScript.

Breaking with tradition

In backbone.js, the model object is very simple. It does not model the behaviour of a system accurately; in fact, there may only be one model object. Therefore, it is not a system for building fully featured domain models.

What it does is apply the MVC pattern to browser development. Models, collections and views work together to create a wall. A wall which keeps all the AJAX code for dealing with data on one side, and all the code for building and rendering DOM elements on the other side. Without this boundary it is easy for JavaScript applications to have the same function calling a web service and updating the DOM. Over time this will lead to a system which is hard to maintain. By making a very clear separation between persistence code and UI code, backbone.js helps us to write better JavaScript.

Coding the data side

The first thing I did to find out how backbone can help my development was to create a model and a collection and point them at my web service.

I created a model object to represent an item in my shopping list:

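The original listing is not reproduced here; a minimal sketch that follows the description below, with the /products endpoint and the name and state attributes assumed, might be:

    // 1. The model for a single shopping list item. Pointing it at the
    //    products collection lets it sync through the collection's URL.
    var ShoppingItem = Backbone.Model.extend({
        initialize: function () {
            this.collection = products;
        },
        defaults: {
            name: "",
            state: "To buy"
        }
    });

    // 2. The collection of shopping items: the web service endpoint and
    //    the model type it holds
    var ShoppingList = Backbone.Collection.extend({
        url: "/products",
        model: ShoppingItem
    });

    // 3. A new instance of the collection to work with from the console
    var products = new ShoppingList();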

There are three things happening above:
  1. I have created a model called ShoppingItem. This is told to use the Products collection in the constructor. It is also given some default values to be used by new instances.
  2. Here I create the collection of shopping items. In this simple demo I only have to set the endpoint for my web service and set the model object for the collection.
  3. Finally, I create a new instance of the collection.

The page itself has no real content, just a title. By using the console window in Firebug I can create, edit and delete items in my shopping list.

Here I can create a new item and when the save method is called, backbone sends a POST request to the service, creating the item.


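A hedged example of that console session (the attribute values are made up; each line is typed once the previous request has completed):

    // Create and save a new item: backbone sends a POST to the service
    var coffee = new ShoppingItem({ name: "Coffee" });
    coffee.save();

    // Change the state and save again: backbone sends a PUT
    coffee.save({ state: "Bought" });

    // Remove the item: backbone sends a DELETE with the item's id
    coffee.destroy();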

Running this code in the browser will show backbone first POSTing the new item to the service, then issuing a PUT to update the state, and finally a DELETE with the id to remove it. Internally backbone uses either jQuery or Zepto for communication.

Collections

In backbone, a Model has to belong to a collection; in fact, it is a rare application where a single entity exists in isolation. Here is the Collection for my shopping list:


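Something like this, repeating the collection from earlier with the toobuy method added (the state value is an assumption):

    var ShoppingList = Backbone.Collection.extend({
        model: ShoppingItem,
        url: "/products",

        // Every item in the list that still needs to be bought
        toobuy: function () {
            return this.filter(function (item) {
                return item.get("state") === "To buy";
            });
        }
    });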

A very simple collection: it is told what type of Model it holds and the URL of the web service to persist the objects to. The model object will use this URL when communicating with the web service. Finally, it has a method called toobuy which returns a list of all the items in the collection where the state is “To buy”.

Summary

In this post I have created a shopping list in JavaScript. There is enough code here to run the application from the browser console where I can create, update and delete items from my list.

This highlights one of the first advantages of using backbone.js. I have concentrated on how my application will interact with the service before creating any UI components.

Look at the backbone.js site for more information and a growing list of examples.

Wednesday, September 14, 2011

Extending the JavaScript Array type

Here is how I created some expressive code by extending the basic types in JavaScript. This example extends the Array type whose content is often filtered or transposed. Through the use of function chaining complex operations can be very expressive and concise. I find this a great way to write code which is easy to follow.

I wanted to create a cross domain cookie based on the current domain of a page, and I was provided with a list of sub domains where this should apply. Interestingly, this list included the names ‘uk’ and ‘cn’ which are also top level domain names.

Here are some example domains:
  • www.keithbloom.co.uk
  • landingpage.keithbloom.co.uk
  • uk.keithbloom.com
  • test.keithbloom.com

Here is a list of sub domains which can be removed:
  • www
  • landingpage
  • uk

Extending the Array

Arrays seemed the obvious choice to me. The domain string can be split on the full stop to create an array and the list of safe sub domains to be removed is already an array. Taking the first example, I end up with the following two arrays:

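Roughly like this (a sketch rather than the original listing):

    var domainNameParts = "www.keithbloom.co.uk".split(".");
    // ["www", "keithbloom", "co", "uk"]

    var subDomains = ["www", "landingpage", "uk"];
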
Now I wish to remove from domainNameParts any items which also appear in subDomains and return the result.

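One way the subtract function might look (a sketch; here the top level domains are protected by never touching the final element):

    var subtract = function (input, mask) {
        // Stop before the last element so the top level domain survives,
        // even when it matches a sub domain name such as "uk"
        for (var i = 0; i < input.length - 1; i++) {
            for (var j = 0; j < mask.length; j++) {
                if (input[i] === mask[j]) {
                    input.splice(i, 1);
                    i--; // re-check the element that has shifted into this slot
                    break;
                }
            }
        }
        return input;
    };
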
The subtract function loops over the input and compares each element with each element in the mask array. If it finds a match, the array splice function will remove it. It stops before it reaches the end of the input array though, to avoid removing any of the top level domains. Otherwise my example would return keithbloom.co - useless.

I now have a working function which can be used to create the domain for my cookie:

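Used something like this (a sketch):

    var cookieDomain = "." + subtract(domainNameParts, subDomains).join(".");
    // ".keithbloom.co.uk"
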
In the example I use the array function join to re-build my string and append a leading period to it. (For cross domain cookies to work, they must start with a period).

I found this cumbersome though and wanted a more expressive method. Fortunately, JavaScript is a dynamic language so its internal types can be extended (a technique also known as Monkey Patching). I can add my subtract function to the Array object’s prototype:

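A sketch of the same logic moved onto the prototype:

    Array.prototype.subtract = function (mask) {
        for (var i = 0; i < this.length - 1; i++) {
            for (var j = 0; j < mask.length; j++) {
                if (this[i] === mask[j]) {
                    this.splice(i, 1);
                    i--;
                    break;
                }
            }
        }
        return this;
    };
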
I now have a more expressive way to create my domain:

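Along these lines (using document.domain for the current page's domain):

    var cookieDomain = "." + document.domain.split(".").subtract(subDomains).join(".");
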
The final statement fits on one line. More importantly though, it is concise and reads like a sentence. Code which is readable is code which is more maintainable.

Pitfalls

This technique is a great way to extend the language and provide an expressive method for writing code. It can be dangerous though. As JavaScript runs as part of a web page, there could be other scripts also running on that page. I may find that one of those scripts is also adding a subtract function to the Array prototype. If this is a script I have access to, I can rename it. If it is an external script I may have to use a new name.

One way to avoid this is to prefix a namespace to my function:

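For example, with a hypothetical kb prefix delegating to the function defined earlier:

    // The prefix makes a clash with another script far less likely
    Array.prototype.kbSubtract = function (mask) {
        return subtract(this, mask);
    };
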
Summary

Through the extension of the basic types in JavaScript it is possible to create expressive code. The Array type lends itself to this technique as arrays are often used as lists which we wish to manipulate in some way. Care must be taken though, as we are changing behaviour for every script running on the page.

Friday, August 12, 2011

Reading list

Books have improved my knowledge about programming, creating user interfaces, and how software has a life after the first deliverable. I have also found there are many awful books which are a waste of time and money. This is a shame, as I believe a good book will convey a topic in more depth than a series of blog posts or examples on the Internet. Here is my list of books which I value and can recommend to you.

Programming Pearls, 2nd Edition, Jon Bentley

Originally a series of essays for the ACM, these form a superb book on the subject of how to write software. Most of the examples are in C, and the analysis of how malloc works may no longer be relevant. For me though, this added to the interest as I got to think again about how memory is managed and how data structures are implemented.

Where Bentley excels is in demonstrating how a problem can be thought through and analysed. The "Back of the envelope" chapter describes how to estimate the volume of water which flows down the Mississippi river. This is a master class in lateral thinking.

A theme that runs through the examples is the creation of test harnesses to prove that the program being developed works. It is refreshing to see automated testing being focused on in a book which by computing standards is now considered a classic.


The Mythical Man Month and Other Essays on Software Engineering, Frederick Brooks

Based on his experience of working on IBM’s OS/360 in the 1960s, Frederick Brooks argues against the idea that adding more developers to a team will accelerate the production of code. He demonstrates how new developers have to learn the code base and in fact decelerate development time as experienced people stop writing code to teach them.

This is a great read for anyone who works as a developer, as Brooks’ experiences with punch cards and rooms full of documentation for one system are still relevant now.
It is an essential read, though, for any manager of a business who employs software developers within their company.


Refactoring, Martin Fowler

This is the book which has most improved my understanding of object orientated coding. Before reading this book I was unsure about changing working code no matter what state it was in.

Using a series of refactorings Martin Fowler shows how the design and quality of a code base can be improved by making many small changes. Changes which alter the code but not the behaviour of the system. This is made possible by having a good collection of tests that assert how the code being changed behaves.

This is a book which I return to often. It is a book which has had a profound impact on software development. Most of the patterns described are now built in to development tools like ReSharper and CodeRush.


Agile Web Development with Rails 1.2, Dave Thomas

The first section of this book shows how to build simple web applications and in doing so it introduces key aspects of the Rails framework. The second section is a detailed look at the framework with chapters dedicated to ActiveRecord, ActionView, ActionController and ActionMailer.

This is the book I am currently reading as I am creating a Rails app in my spare time. It is woefully out of date as Rails is now at version 3 (with 3.1 soon to be finalised). So I read all the examples wondering what has changed.

Design Patterns, Gamma, Helm, Johnson, Vlissides

A classic book about object orientated design and one of the first books to present a series of patterns for writing code. Based around a case study for building a document editor, the patterns are split into three groups: Creational Patterns, Structural Patterns and Behavioural Patterns.

Most of the code examples are in C++ and a few are in Smalltalk, and whilst I only have distant memories of C++, I found the code examples interesting and readable.
Some of the patterns in this book are now considered anti-patterns (Singleton and maybe Template Method) but most are well worth understanding. What these patterns also provide is a vocabulary for developers to use when discussing code. Often a solution to a problem can be articulated by citing one of these patterns.

JavaScript: The Good Parts, Douglas Crockford

What Refactoring did for my knowledge of statically typed, object orientated programming, “JavaScript: The Good Parts” did for my knowledge of dynamic, prototype-based programming.

Douglas Crockford believes that some parts of the language are great, some are bad and the rest are just ugly. Most of the book is spent explaining how the good parts can be used to form an expressive and flexible language. The remainder highlights the bad and the ugly which, if avoided, make the good parts even better.

This book is so rich in content and so terse that I read it three times. I now understand the power of closures in JavaScript and how best to construct objects which are secure and extensible.


Patterns of Enterprise Application Architecture, Martin Fowler

PoEAA follows on from Martin Fowler’s Refactoring: here he has assembled a set of patterns for writing software where the code base is organised into layers of responsibility. The most common types of layer are the data layer and the presentation (or user interface) layer.

Once again I am impressed by the way that Martin Fowler manages to formalise patterns in software engineering, and by the impact that he has on the frameworks that I use. I read this book soon after using NHibernate, the .Net Object Relational Mapping tool, and it felt like I was reading the specification for NHibernate. The same is true for Active Record in Ruby on Rails, and for many of the Model View Controller (MVC) frameworks that exist. I must add that I do not think Martin Fowler was the first to discover these patterns. For example, Trygve Reenskaug created the MVC pattern while working at Xerox PARC. But what Martin Fowler has is the ability to collate and present the patterns so they become accessible and readable to all. He also draws upon the experience of many, so the patterns are applicable to the time.


The Little Schemer, 4th Edition, Friedman and Felleisen

The Little Schemer is the most unique and challenging book on programming that I have ever read. But then it is about recursion, a topic which can twist even the nimblest brain.

Scheme is a dialect of LISP so it is a language where all data structures are lists and functions are also data. The Little Schemer builds up through its narrative “Ten Commandments” for writing idiomatic and valid Scheme programs. At first this is easy to follow as the recursion is shallow and mainly focused upon creating functions and safely processing the lists. The later chapters are much harder as the recursion gets deeper and functions start generating functions. This builds to the final masterpiece, the applicative-order Y combinator.

I enjoyed this book. It was challenging, more challenging than Dante’s Divine Comedy. However, it opened my mind to a world of functional programming that I am just starting to explore. I will be downloading a Scheme implementation at some point so I can work through the code and further my understanding.


Domain Specific Languages, Martin Fowler

The final book on this list is from Martin Fowler and is his most recent. It has his usual style of a detailed example demonstrating the application of the patterns that follow. The topic this time is how to write Domain Specific Languages (DSLs). The focus is on how DSLs can help to configure complex applications, like the main example called “Miss Grant's Controller”. This is a complicated state machine which can be configured to open a door only when the correct sequence of doors has been opened and lights switched on.

This book is a small study in computer language design. It covers lexing, syntactic analysis, the specification of grammars using BNF and the role of Abstract Syntax Trees, to name but a few topics. As I have not previously studied language design or the writing of compilers, this was a great introduction to the topic.

For me the best chapters came towards the end, where Martin Fowler presents some alternative models of computation. They are alternative because they are not the imperative model, which is the most common. They relate to DSLs because such models are often harder to configure and their operation cannot be immediately understood just by reading the code, so DSLs are a very useful tool for simplifying how these models are programmed. Of the four presented I was especially interested in the “Decision Table” and the “Production Rules” models, as both of these solve problems I often encounter at work.


Designing with web standards, Jeffrey Zeldman

This is one of the few books I have read that completely changed how I thought about working. Prior to reading Designing with web standards I created HTML pages using tables to lay out the page. I remember being very pleased with a site I made for Comet: I managed to make an image of a vacuum cleaner break the grid just as the designer had planned. Doing that needed four nested tables and the image had to be cut into several pieces. It was hard work, and it was wrong, as I found out when I read Designing with web standards. I then understood the idea that the HTML is a document, and that this document is a description of the content. The CSS is the presentation and the JavaScript adds any extra frills if the client supports it.

I am sure of one thing: back in 2003, when Jeffrey Zeldman published this book, I was not the only person making web sites this way. But we all soon stopped. I have read other books since which have helped me to understand more of the detail, but it is this book which changed my thinking on the topic.

Tuesday, March 08, 2011

Adding the sequence number to a LINQ query

LINQ queries are a powerful way to keep your code expressive and, by using deferred execution, they are quick. But what if you need the value of the index which LINQ used whilst building the projection? I had this issue and found the solution was to use the Select overload which accepts a Func<TSource, int, TResult> for the selector.

With a for loop this is simple to accomplish as the index value is available in each iteration as it is controlling the loop:
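
A hedged sketch, assuming a small sample array since the original data is not shown:

    var lines = new[] { "First line", "Second line", "Third line" };

    for (var i = 0; i < lines.Length; i++)
    {
        Console.WriteLine("{0} {1}", i, lines[i]);
    }
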
This code prints this to the console:

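(for the sample lines assumed above)

    0 First line
    1 Second line
    2 Third line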

Creating a LINQ query with the index in the projection is not as obvious. My first attempt was to use the Count() method of the line parameter in the projection.

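My first attempt looked something like this (using the same assumed sample data):

    var numbered = lines.Select(line => string.Format("{0} {1}", line.Count(), line));

    foreach (var line in numbered)
    {
        Console.WriteLine(line);
    }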

But when the output was written to the console the problem became apparent:

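(again for the sample lines assumed above)

    10 First line
    11 Second line
    10 Third line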

In the projection, line.Count() is returning the string length of each line in the array. First attempts are always a good way to discover how something could work.

Fortunately the LINQ Select method has two overloads. They both iterate over an IEnumerable<T>, but the delegate used for the selector differs. The code above uses the first overload, which takes a Func<TSource, TResult>. The second overload expects a Func<TSource, int, TResult>; here, the int parameter is assigned the current value of the index in the sequence.

The first code example can now be changed to something more expressive:

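A sketch using the second overload:

    var numbered = lines.Select((line, index) => string.Format("{0} {1}", index, line));

    foreach (var line in numbered)
    {
        Console.WriteLine(line);
    }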

Running this code displays the following in the console:

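(for the same sample lines)

    0 First line
    1 Second line
    2 Third line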

Many of the LINQ methods have this overload for the selector. Using it I have been able to continue using LINQ to specify how I want to transform the array. This means my code is more expressive. Plus, the LINQ query is quick as the projection will only be populated when the foreach loop is executed to display the result.

Saturday, February 12, 2011

IIS application pools and domain identities

Follow these steps to specify a non-standard identity for an IIS application pool. For this example I will use the account domain\WebUser.
  1. In Administrative Tools open the Local Security Policy program and find the Log on as a service policy under Local Policies, User Rights Assignment. Open its properties and add the user domain\WebUser.
  2. Open Windows Explorer and go to C:\Windows\Temp. Open the folder's Sharing and Security dialog and add the user on the Security tab, granting enough rights to read and write files.
  3. Open a command prompt and change to C:\Windows\Microsoft.NET\Framework\v2.0.50727. Run aspnet_regiis.exe -GA domain\WebUser.
  4. In IIS, open the properties of the application pool and go to the Identity tab. Click Configurable and enter the username and password.

Monday, January 10, 2011

IIS 6 and the HTTP 401.3 error

I love it when I find a new tool to use; I love it even more when it is really useful and saves me hours of work. Recently, I had the opportunity to try out ProcMon. This is what happened.

Our test web server started returning HTTP 401.3 errors. The cause was quick to find: the permissions on the root website folder had been changed and the IIS accounts were missing. So the fix appeared simple: re-apply the permissions and they would cascade all the way down the tree. I added the local IUSER account but it failed to fix the problem. I spent several hours with the MSDN documents making sure I had the correct users and groups applied, but to no avail. I could not find a way to return the server to normal operation.

Finding the problem

The next day I resolved to find the problem properly, with no more hacking around throwing users at a dialog box. For help I turned to Process Monitor (ProcMon), part of the SysInternals suite of tools. ProcMon is a superb tool for these situations. It collects all activity on the machine, showing a list of file, registry and network operations. Importantly for me, it also records the result of each operation.

I fired it up, attempted to load a web page from my browser, and then stopped the trace. Tracing all the activity on a server produces a metric ton of data; a one-minute trace on my PC generates ~300,000 events. For this reason ProcMon has good filtering: you can pick from a list of events and limit by a text value. I chose to filter the list by Result, only showing those which returned ACCESS DENIED.



With the filter applied there was only one event in the list: the IUSER account was trying to access the file from my browser request. Upon checking the permissions on the actual file I found that they were different to those of the parent; all of the IIS accounts were missing. I forced the permissions down the tree and IIS started serving pages again.

Not just any tool but the right tool

ProcMon is the star here; without it I would have found the problem, but only with a lot of guesswork and a great deal of time. With ProcMon I could see exactly what was happening when IIS tried to serve the page. Being able to see what happens at the core of a system is essential to fault finding, and having the right tool saves an enormous amount of time.