Scenario Based Testing

As mentioned in my previous post I am trying to experiment with more resilient ways to perform test driven development. In my ideal world, not changing functionality means tests do not need to be changed.

Recap On Unit Testing Shortcomings

Unit testing too often breaks with refactors unless the refactoring only takes place inside a single class. More often than not you start writing code to do something in one class and then later realize you need the same logic in another class. The proper way to handle this is to extract the logic into a common dependency (whether it is a new dependency or existing one), but if your tests are scoped to a single class (with all dependencies mocked) this becomes a breaking change requiring all of your mocks to be touched up (or new ones added) in order for your test to confirm that nothing has changed.

The problem is that you are forced to change your tests to verify that functionality hasn’t changed, but if you are changing your tests you can’t be 100% sure the functionality is unchanged. The test is no longer exactly the same. All you can be 100% sure about is that the updated test works with the updated code, and then you pray that no regressions come out of it.

Expanding the scope of what you consider a unit for test purposes helps alleviate this, but that only works well if you have a complete understanding of both the code and the business domain models, so that each unit becomes a domain. This is hard to do until a product is out and in active use, because it is rare that the business or engineering sides have a full grasp of the product before it is in customers’ hands. The product always changes after the MVP.

So What’s a Solution?

Scenario based testing is a test pattern I am playing with that tests the end user boundaries of a system. At the end of the day what I care about as a developer is that end users (whether they are people or applications) can perform actions against the system and the correct processes are run.

This means that if I am using a web based API and my system allows users to edit a blog post, my tests will verify that sending the correct data to my API and then retrieving that post back will return the post with the correct fields edited in the correct way. I don’t care about testing that it’s stored in the database in a specific way, or a specific class is used to process the logic on the post, I only care about the end result. If the results of an action never leave the system, does it really matter if it occurs?

So It’s An Integration Test?

Some of you are probably shrugging your shoulders thinking I’m talking about integration tests. While these can somewhat be considered integration tests, I try not to call them that, as all external components are mocked out.

Most integration tests involve writing to actual external data stores (like SQL) in order to verify that it is interacting with those systems properly. This is still necessary, even with scenario testing, because you can never be sure that your SQL queries are correct, that your redis locks are done properly, etc…

The purpose of this style of testing though is to codify the expected functionality of your system with as much area coverage as possible, lowering the needed scope of the required integration testing to just those external components.

Mocking all external systems also has the side benefit of being able to run tests extremely quickly and in parallel.

What Do These Tests Look Like?

Another goal for me in my testing is that it’s easy to read and understand what is going on, so if a developer breaks a test he has an understanding of the scope of functionality that is no longer working as expected.

If we take the previous example of editing a blog post, a test may look like this:

[Fact]
public void Can_Edit_Blog_Post()
{
	var date = new DateTimeOffset(2016, 1, 2, 3, 4, 5, TimeSpan.Zero);
	var date2 = new DateTimeOffset(2016, 1, 3, 0, 0, 0, TimeSpan.Zero);

	var originalPost = new PostDetails
	{
		Title = "My New Title",
		Content = "My Awesome Post"
	};

	var updatedPost = new PostDetails
	{
		Title = "Edited Title",
		Content = "Edtted Content"
	}

	TestContext.SetDate(date)
		.AsAuthoringUser()
		.SubmitNewPostApiRequest(originalPost)
		.ExecuteAction(context => updatedPost.Id = context.GetLatestPostId())
		.SetDate(date2)
		.SubmitEditPostApiRequest(updatedPost.Id, updatedPost);

	var post = TestContext.GetPost(updatedPost.Id);
	post.Should().NotBeNull();
	post.Title.Should().Be(updatedPost.Title);
	post.Content.Should().Be(updatedPost.Content);
	post.LastEditedDate.Should().Be(date2);
}

In this you can clearly see the functional intent of the test. As a developer you are able to completely change around the organization of the code without having any impact on the passing or failing of this test, as long as it is functionally the same as it was before your refactor.

If this looks like magic, that’s because it requires some up-front development cost. The TestContext class mentioned above is a class that holds details about the current test, including IoC functionality and other information it needs to track to perform tests.

In my current code base utilizing this pattern, the TestContext class is initialized in the test class itself, and has all the logic for how to set up IoC, what it needs to mock, and what events in the system it might need to tie into in order to be able to perform verifications.

All the methods shown on the TestContext object are merely extension methods that utilize the test context to make it easy to perform common actions. So for example, the SetDate(date) call would get the mocked date provider implementation from IoC and force it to return a specific date (to allow freezing time).
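As a minimal sketch (the IDateProvider interface and the GetMock helper are assumptions for illustration, and the mock setup uses Moq), SetDate might look like:

public static class TestContextExtensions
{
    // Illustrative sketch: IDateProvider and TestContext.GetMock<T>() are assumed
    public static TestContext SetDate(this TestContext context, DateTimeOffset date)
    {
        var dateProvider = context.GetMock<IDateProvider>();
        dateProvider.Setup(x => x.Now).Returns(date);

        // Returning the context is what enables the fluent chaining shown above
        return context;
    }
}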

`SubmitNewPostApiRequest()` could be something that either calls your API using a fake browser or calls your API controller directly, depending on what framework you are using. The current code base I use this for uses Nancy for its API layer, which provides tools that allow sending actual POST or GET requests to the API for testing purposes.
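As a rough sketch using Nancy.Testing’s Browser class (the /posts route and the GetBrowser helper are assumptions):

public static TestContext SubmitNewPostApiRequest(this TestContext context, PostDetails post)
{
    // GetBrowser() is an assumed helper returning a configured Nancy.Testing.Browser
    var browser = context.GetBrowser();

    // Sends a real POST request through Nancy's testing pipeline
    browser.Post("/posts", with => with.JsonBody(post));

    return context;
}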

What Are The Cons?

This approach isn’t perfect, however. Since we are only testing the system boundaries, the developer needs a decent grasp of the underlying architecture to diagnose test failures. If a test starts failing it may not be immediately clear why, and time can be wasted figuring that out.

This does force the developer to write smaller pieces of code in between running tests, so they can keep the logical scope of the changes down in case of unexpected test failures. This also requires tests to be fast so developers are not discouraged from running tests often.

All in all, I am enjoying writing tests in this style (so far at least). It has let me do some pretty complicated refactors as the application’s scope broadens without me worrying about breaking existing functionality.

The Frustrations of Unit Testing

Test driven development has proven itself to be a good thing for me over the years. It has helped me to not only codify my design decisions, but it has also helped keep me focused on the next iterative steps I need to create the larger change in the system that I originally set out to do. It has also helped keep regressions from occurring.

Unit testing is one integral part of test driven development, as it allows you to take a unit of code and make sure it does what it is supposed to.

What Is a Unit?

What qualifies as a unit of code differs from person to person. Most “conventional wisdom” of the internet will lead you to believe that in object oriented programming languages, a unit is a single class. However, there is no reason that this should be the case, as you can have functional units in your code base that involve multiple classes, or even a single function, depending on the full scope of what is being performed (and how the logic is related). The reality of the situation, though, is that as you start involving more classes in your functional unit, your tests start taking on a profile that looks more like integration tests than unit tests. This complicates the effort required to set up a test, and tends to make me gravitate back to smaller units, which usually ends up back at a class as a single testable unit.

Unit testing at the class level is also considered good because it makes it much quicker to figure out which code is causing a test to break, as the possible area for a bug is isolated to a single class (which should be relatively small anyway).

However, as time goes on I have started disliking testing with classes as the boundary of the test.

An Example

Let’s say we are writing code that contains the business logic for updating a post on a blog. A minimal version would contain several business processes, such as:

  • Verify the logged in user has access to the post
  • Get the current version of the post (with categories that have already been added to the post)
  • Get the list of categories known by the system
  • Modify the post with the new details
  • Save the post to the data store

Assume each bullet point is its own class or function that already has tests. I now need to test the class that is combining each of these components into a cohesive pipeline that performs the correct operations under the correct conditions in the correct order.

So let’s say I want to create a test that a user with access to a post can modify the title of a post. At the outset it sounds like a simple case of inputs and outputs.

Unfortunately, it gets a lot more complicated than that. Since I just want to focus on the functionality of my EditPost() function (since all the other functionality should already be tested and I just need to compose the pieces together), I need to create mocks so that I don’t have to test the implementation details of checking if a user has access to a post, retrieving a post object from the data store, etc.

So for this test I create my first mock that says that when I pass my user id and post id to the authorization function, it should always return true.

The next mock I need to create says that when I pass a post id to the post retrieval function, it returns to me a post object with some pre-defined information.

Another mock I need to create says that when I call on the function to retrieve all blogging categories, it returns to me a static list of categories.

Yet another mock I need to create says that when I call the function to save the post to the data store, it siphons off the parameters I passed into it for verification.
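Put together with a mocking library like Moq, that setup might look something like this (all the interface, type, and variable names here are illustrative assumptions, not from a real code base):

var authorizer = new Mock<IPostAuthorizer>();
authorizer.Setup(x => x.UserCanEditPost(userId, postId)).Returns(true);

var postRepository = new Mock<IPostRepository>();
postRepository.Setup(x => x.GetPost(postId)).Returns(existingPost);

var categoryRepository = new Mock<ICategoryRepository>();
categoryRepository.Setup(x => x.GetAllCategories()).Returns(allCategories);

// Siphon off whatever gets saved so the test can inspect it afterwards
Post savedPost = null;
postRepository.Setup(x => x.SavePost(It.IsAny<Post>()))
              .Callback<Post>(post => savedPost = post);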

Now that I have those mocks set up, I can pass them into my edit post function (via dependency injection), call it with the new title, and verify the parameters I siphoned off are what I expect them to be. If I am following the TDD methodology I now have a failing test, so I can fill in the function’s implementation and get a nice passing test.

You continue on creating more tests around this function (and all the mocks that come with it) to test authorization failures, post-doesn’t-exist conditions, and other specific editing conditions (e.g. the creation date must always remain the old value on published posts, the edit date is automatically set to the current date/time), etc.

So What’s the Problem?

So a few days later you realize that always pulling down a post with its full content and categories is too much. Sometimes you only need titles, sometimes titles and categories, sometimes titles and content without categories. So you split them out into different functions that return different pieces of the blog post (or maybe you have different functions for blogPostWithContent, blogPostWithCategories, etc.).

Or maybe you realize you need to add some more parameters to the authorization method due to changing requirements of what information is needed to know if someone is authorized to edit a post in some conditions that may or may not be related to the tests at hand.

Now you have to change every test of your edit post function to use the new mock(s) instead of the previous ones. However, when I wrote the original test all I really wanted to test was that an authorized user could change the title of a post, yet all my changes are related to the implementation details of how that is accomplished.

The tests are now telling me that I have a code smell, and that my tests on the logical flow of the process are too intertwined with the tests surrounding how I’m composing all my functional units. The correct way to fix this problem is to move all the business logic of how to modify a post object into its own pure function. This function would take in a data store post object, the change operations to perform on it, and the categories listing. It would then output the resulting post that needs to be saved to the data store.
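A hedged sketch of that pure function (Post, PostChanges, and Category are assumed types, not ones defined in this post):

public static Post ApplyEdits(Post storedPost, PostChanges changes, IReadOnlyList<Category> knownCategories, DateTimeOffset now)
{
    // Pure logic only: no data store access, no authorization checks
    return new Post
    {
        Id = storedPost.Id,
        Title = changes.Title ?? storedPost.Title,
        Content = changes.Content ?? storedPost.Content,
        Categories = knownCategories.Where(c => changes.CategoryIds.Contains(c.Id)).ToList(),
        CreationDate = storedPost.CreationDate, // published posts keep their original creation date
        LastEditedDate = now                    // edit date comes from a caller-supplied clock
    };
}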

We can then have tests on the logic of editing a post on the new function and keep the tests on the original function focused solely on how all the functions are composed together. Of course we have to completely modify the previous tests now to reflect this refactoring effort, and since the old tests are no longer valid we have to be careful not to create regressions, as we are completely changing old tests and adding completely new ones.

The question is, was this refactor worth it? At an academic level it was because it gave us more pure functions that can be tested independently of each other. Each functional unit is very small and does only one thing in a well tested way.

However, at a practical level it thoroughly expands the code base by adding new data structures, classes, and layers in the application. These additional pieces of code need to be maintained and navigated through when bugs or changing requirements come down the pipeline. Additional unit tests have to then be created to verify the composition of these functions are done in the correct and expected manner.

Another practical issue that brings this to a wall is that you are usually working with a team that has various skill and code quality concern levels, combined with the need to get a product and/or feature out to customers so it can generate revenue and be iterated on further.

It also increases the need for more thorough integration tests. An example of this is if the function that actually performs the edit requires the post to have a non-null creation date, or is only allowed to edit non-published posts. This function will have unit tests surrounding it to make sure it surfaces the correct error when it is passed data that it considers bad, but there is no way to automatically know with unit tests alone that this new requirement is not satisfied by callers of the function; this error will only be caught when the system is running as a holistic unit.

So what’s the solution? There really is no good solution. Integration tests are just as important as unit tests in my opinion, but having a comprehensive set of unit and integration tests that easily survive refactoring is extremely time consuming and may not be practical to the business objectives at hand.

Integration tests by themselves can usually withstand implementation level refactorings, but they are usually much slower to perform, in some cases are completely unable to be run in parallel, require more infrastructure and planning to setup properly (both on a code and IT level), and they make it harder to pinpoint exactly where a bug is occurring when a test suddenly fails.

When a Decimal does not equal a Decimal

It all started with a simple bug ticket in Trello that said “When I select 0.125 from the drop down, save, then enter the edit screen again the drop down is set to 0”.

Seemed pretty simple. The clock said 5:45pm and I figured “this should be easy to knock out”. An hour later I was pulling my hair out.

What?

Here’s a good example of the offending code:

public class FooRepository
{
	public decimal GetFooValue() 
	{
		return 0.1250m;
	}
}

public class FooModel
{
	public decimal Value { get; set; }
}

public class FooController
{
	public ActionResult Index()
	{
		var repository = new FooRepository();
		var model = new FooModel { Value = repository.GetFooValue() };
		return View(model);
	}
}

Then in the view:

@Html.DropDownListFor(x => x.Value, new SelectList(new[] {0m, 0.125m, 0.25m}))

Every time I displayed this view the drop down was always on the first item and it was driving me nuts. After playing around in Linqpad I came across a clue:

0.250m.ToString(); // Outputs "0.250"

I stared for a while and finally noticed the trailing zero in the displayed output. I then made sure I wasn’t totally crazy so I tried:

0.250m.ToString() == 0.25m.ToString() // outputs False

I looked in my real database where the value was coming from, and since the column is a decimal(18,4), Entity Framework brings it back with 4 decimal places, which means it includes the trailing zeros. Now it makes sense why Asp.Net MVC’s helpers can’t figure out which item is selected, as it seems like a fair assumption that they call ToString() and do comparisons based on that.

While trying to figure out a good solution I came across this StackOverflow answer which had the following extension method to normalize a decimal (which removes any trailing zeros):

public static decimal Normalize(this decimal value)
{
    return value/1.000000000000000000000000000000000m;
}
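The division forces the value to be rescaled, which should drop the trailing zero:

0.1250m.Normalize().ToString(); // outputs "0.125"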

After normalizing my model’s value, the select list worked perfectly as I expected and the correct values were picked in the drop down list.

Of course this is a clear hack based on implementation details, and I reverted this fix. There’s no telling if or when a new version of the .Net framework may change this behavior, and from what I’ve read a lot of the internal details of how decimals work are different in Mono, so this hack does not play nicely there (supposedly in Mono, 0.1250m.ToString() does not display trailing zeros).

The proper way to resolve this situation is to force the drop down list to do a numerical equality check to determine which item should be selected, by manually creating a list of SelectListItem objects instead of using the SelectList() constructor.
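A minimal sketch of that approach in the view (assuming the same three option values as before):

@{
    // Build the items manually so selection is decided by decimal equality,
    // not by comparing ToString() results
    var options = new[] { 0m, 0.125m, 0.25m }
        .Select(v => new SelectListItem
        {
            Text = v.ToString(),
            Value = v.ToString(),
            Selected = v == Model.Value // 0.1250m == 0.125m is true numerically
        })
        .ToList();
}

@Html.DropDownListFor(x => x.Value, options)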

Why?

So even though I knew what was going wrong, I was interested in why. That StackOverflow answer pointed me to the MSDN documentation for the Decimal.GetBits() method, which contains the specification for how a decimal is actually constructed. Internally a decimal is made up of 4 integers: 3 representing the low, middle, and high bits of the value, and the last containing information on the power-of-10 exponent for the value. Combined, these can give exact decimal values (within the specified range and decimal places allowed).

So to start with I tried the decimal of 1m and printed out the bit representation:

new[] {
	Convert.ToString(decimal.GetBits(1m)[0], 2),
	Convert.ToString(decimal.GetBits(1m)[1], 2),
	Convert.ToString(decimal.GetBits(1m)[2], 2),
	Convert.ToString(decimal.GetBits(1m)[3], 2),
}

// 0000000000000000000000000000001
// 0000000000000000000000000000000
// 0000000000000000000000000000000
// 0000000000000000000000000000000

That didn’t show anything unexpected. It shows the low bits containing just a 1 and all other bits containing zeros. Next I tried 1.0m:

new[] {
	Convert.ToString(decimal.GetBits(1.0m)[0], 2),
	Convert.ToString(decimal.GetBits(1.0m)[1], 2),
	Convert.ToString(decimal.GetBits(1.0m)[2], 2),
	Convert.ToString(decimal.GetBits(1.0m)[3], 2),
}

// 0000000000000000000000000001010
// 0000000000000000000000000000000
// 0000000000000000000000000000000
// 0000000000000010000000000000000

This parses the core bits to:
lo: 10
mid: 0
high: 0

So how does it convert a 10 into a 1? Looking back at the MSDN documentation, the first 16 bits of that last integer are always zero (as they are in this case), the next eight bits (bits 16-23) hold the exponent, and the final bit holds the sign. Here bits 16-23 equal 1 and the sign bit is zero, giving us an exponent of positive one. To get the final value we take the value of the low + mid + high bits combined (10) and divide it by 10 to the power of 1. This gives us a value of exactly 1.
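Since that layout is documented, you can pull the fields out of the flags integer directly:

int flags = decimal.GetBits(1.0m)[3];
int exponent = (flags >> 16) & 0xFF;           // 1 for 1.0m
bool isNegative = (flags & int.MinValue) != 0; // the sign lives in the top bit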

If we look at our 0.125m example we get:

new[] {
	Convert.ToString(decimal.GetBits(0.125m)[0], 2),
	Convert.ToString(decimal.GetBits(0.125m)[1], 2),
	Convert.ToString(decimal.GetBits(0.125m)[2], 2),
	Convert.ToString(decimal.GetBits(0.125m)[3], 2),
}

// 0000000000000000000000001111101
// 0000000000000000000000000000000
// 0000000000000000000000000000000
// 0000000000000110000000000000000

Like before, this is taking a value of 125 (125 + 0 + 0) and dividing it by 10 to the positive 3 exponent, which gives us 0.125. If we instead use 0.1250m we get:

new[] {
	Convert.ToString(decimal.GetBits(0.1250m)[0], 2),
	Convert.ToString(decimal.GetBits(0.1250m)[1], 2),
	Convert.ToString(decimal.GetBits(0.1250m)[2], 2),
	Convert.ToString(decimal.GetBits(0.1250m)[3], 2),
}

// 0000000000000000000010011100010
// 0000000000000000000000000000000
// 0000000000000000000000000000000
// 0000000000001000000000000000000

This represents 1,250 (1,250 + 0 + 0) divided by 10 to the positive 4 exponent.

So now it’s clear that when you create a new decimal it essentially keeps track of the exact representation it was originally written with, including trailing zeros, and it is only pieced together into its fully realized value on a ToString() call.

Making .Net Regular Expressions Easier To Create and Maintain

Recently I had the idea of a log parsing and analysis system that would allow users (via a web interface) to define their own regular expressions to parse event logs in different ways. The main purpose of this is trend analysis. Logstash is an open source project that does something similar, but it relies on configuration files for specifying regular expressions (and doesn’t have other unrelated features I am envisioning).

One issue I have with regular expressions is they can be very hard to create and maintain as they increase in complexity. This difficulty works against you if you are trying to create a system where different people with different qualifications should be able to define how certain logs are parsed.

While looking at Logstash, I was led to a project that it uses called Grok, which essentially allows you to define aliases for regular expressions and even chain aliases together for more complicated regular expressions. For example, if you needed a regular expression that included checking for an IP address, you can write %{IP} instead of (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9]). This makes regular expressions much easier to create and read later on.

The problem with Grok is that it is written in C as a standalone application. This makes it hard to use in .Net applications without calling out to the shell, and even then it is cumbersome to use with dynamic aliases due to having to reconfigure config files on the fly.

For those reasons, I created an open source library I call RapidRegex. The first part of this library is the freedom to generate regular expression aliases by creating instances of the RegexAlias class. This gives you full flexibility over how you store and edit regular expression aliases, in any way your application sees fit, whether it’s in a web form or in a flat file. It does, however, come with a class that helps form RegexAlias structures from basic configuration files.

As a simple example, let’s look at the regular expression outlined earlier for IP addresses. With RapidRegex, you can create an alias for it and later convert aliased regular expressions into .net regular expressions. An example of this can be seen with the following code:

            var alias = new RegexAlias
            {
                Name = "IPAddress",
                RegexPattern = @"\b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b"
            };

            var resolver = new RegexAliasResolver(new[] { alias });

            const string pattern = "connection from %{IPAddress}";
            var regexPattern = resolver.ResolveToRegex(pattern);
            // Resolved pattern becomes "connection from \b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b"
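            // At this point regexPattern is an ordinary .Net regular expression,
            // so it can be passed straight to the Regex class
            // (requires System.Text.RegularExpressions):
            var isMatch = Regex.IsMatch("connection from 192.168.0.1", regexPattern);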

What makes this even more powerful is the fact that you can chain aliases together. For instance, an IPv4 address can be defined as 4 sets of valid IP address bytes. So we can accomplish the same as above with:

            var alias = new RegexAlias
            {
                Name = "IPDigit",
                RegexPattern = @"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
            };

            var alias2 = new RegexAlias
            {
                Name = "IPAddress",
                RegexPattern = @"%{IPDigit}\.%{IPDigit}\.%{IPDigit}\.%{IPDigit}"
            };

            var resolver = new RegexAliasResolver(new[] { alias, alias2 });

            const string pattern = "connection from %{IPAddress}";
            var regexPattern = resolver.ResolveToRegex(pattern);

            // The resolved pattern becomes "connection from \b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b"

The project can be found on Github and is licensed under the LGPL license.

Creating a Single Threaded Multi-User TCP Server In .Net

In my random desire to build a simple IRC server I decided the first step was to build a TCP server that can read data and send data simultaneously to multiple users. When someone first starts to write a networked server they quickly realize it’s not as easy as they had hoped due to all the network operations being done via blocking method calls. Before .Net 4.5, handling multiple socket connections meant having to put the socket handling calls into alternate threads, either by AsyncCallback calls or manually loading the method calls into the thread pool. These then require thread syncing when data needs to get back to the main thread so that the server’s business logic can run on the socket events.

With the introduction of the Async/Await keywords in .Net 4.5, it has become much easier to run your socket operations asynchronously while never leaving the main application thread. If you are not familiar with the Async/Await keywords it would probably be a good thing to read Stephen Cleary’s tutorial.

So here is how I went about making a single threaded TCP server that can handle multiple clients. This may not be the best way, but it did seem to be the most successful attempt I had for this.

Network Clients

The first thing I needed was a class to handle already connected clients. I needed the class to be able to take an already established connection and have it be able to receive network messages (and pass them to the business logic for processing) as well as sending messages back to the client. It also needs to determine when a connection has been dropped (either gracefully or not). Most importantly, these actions should not block the general server flow. As long as the server’s business logic processor is not actually processing a network message, it should not be waiting on a networked client.

Since both receiving messages from the client and a client being disconnected are important aspects that the underlying server should be aware of, I defined the following delegates:

    public delegate void MessageReceivedDelegate(NetworkClient client, string message);

    public delegate void ClientDisconnectedDelegate(NetworkClient client);

I then created the skeleton of the NetworkClient like so:

    public class NetworkClient
    {
        private readonly TcpClient _socket;
        private NetworkStream _networkStream;
        private readonly int _id;

        public bool IsActive { get; set; }
        public int Id { get { return _id; } }
        public TcpClient Socket { get { return _socket; } }

        public event MessageReceivedDelegate MessageReceived;
        public event ClientDisconnectedDelegate ClientDisconnected;

        public NetworkClient(TcpClient socket, int id)
        {
            _socket = socket;
            _id = id;
        }

        private void MarkAsDisconnected()
        {
            IsActive = false;
            if (ClientDisconnected != null)
                ClientDisconnected(this);
        }
     }

This handles creating the client (not the connection, but the class itself) and provides a helper method for handling when a client has been disconnected. Now we need to listen for incoming data for this TCP client. Since reading TCP input is blocking, we need to perform this asynchronously with the following code:

        public async Task ReceiveInput()
        {
            IsActive = true;
            _networkStream = _socket.GetStream();

            using (var reader = new StreamReader(_networkStream))
            {
                while (IsActive)
                {
                    try
                    {
                        var content = await reader.ReadLineAsync();

                        // If content is null, that means the connection has been gracefully disconnected
                        if (content == null)
                        {
                            MarkAsDisconnected();
                            return;
                        }

                        if (MessageReceived != null)
                            MessageReceived(this, content);
                    }

                    // If the tcp connection is ungracefully disconnected, it will throw an exception
                    catch (IOException)
                    {
                        MarkAsDisconnected();
                        return;
                    }
                }
            }
        }

This method essentially loops until the network client is no longer active, awaiting a full line of data from the client. If a null response is returned or an IOException occurs, the connection has been disconnected and I need to mark the client as such. By awaiting incoming data, we ensure we do not block the application flow while waiting for data to come down the pipe. This method returns a Task so that we can check for unhandled exceptions in the main server.

Next we need to be able to asynchronously send data to the user. I do this with the following method:

        public async Task SendLine(string line)
        {
            if (!IsActive)
                return;

            try
            {
                // Don't use a using statement as we do not want the stream closed
                //    after the write is completed
                var writer = new StreamWriter(_networkStream);
                await writer.WriteLineAsync(line);
                writer.Flush();
            }
            catch (IOException)
            {
                // socket closed
                MarkAsDisconnected();
            }
        }

I do this asynchronously so we don’t risk blocking the entire server while waiting for the whole TCP process to finish.

Server Infrastructure

Now that we have a fully working class to handle networked clients, we need to create the server infrastructure. We need some way to:

  • Accept new clients on a specific IP address and port
  • Turn those clients into NetworkClient instances
  • Process commands coming from network clients
  • Handle client disconnections

This begins with the following class skeleton:

    public class Server
    {
        private readonly TcpListener _listener;
        private readonly List<NetworkClient> _networkClients;
        private readonly List<KeyValuePair<Task, NetworkClient>> _networkClientReceiveInputTasks;
        private Task _clientListenTask;

        public bool IsRunning { get; private set; }

        public Exception ClientListenTaskException
        {
            get { return _clientListenTask.Exception; }
        }

        public Server(IPAddress ip, int port)
        {
            _listener = new TcpListener(ip, port); 
            _networkClients = new List<NetworkClient>();
            _networkClientReceiveInputTasks = new List<KeyValuePair<Task, NetworkClient>>();
        }
    }

The _networkClientReceiveInputTasks list will be used to check for exceptions while listening for input from a client. The client listen task will be used to reference the asynchronous task that listens for new client connections, and will be used to check for unhandled exceptions being thrown. Everything else is to get data ready for actually running the server.
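As a sketch of how those saved tasks might be used (this helper is an illustrative addition, not part of the original server), a periodic check could surface faulted receive tasks:

        private void CheckForFaultedReceiveTasks()
        {
            // Illustrative only: report any client receive task that died with an exception
            foreach (var pair in _networkClientReceiveInputTasks)
            {
                if (pair.Key.IsFaulted)
                    Console.WriteLine("Client {0} receive task faulted: {1}", pair.Value.Id, pair.Key.Exception);
            }
        }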

We need to consider what to do when a command is received from a client. To get up and running quickly we are just going to relay the incoming data out to all other clients, via the following method:

        private async void ProcessClientCommand(NetworkClient client, string command)
        {
            Console.WriteLine("Client {0} wrote: {1}", client.Id, command);

            foreach (var netClient in _networkClients)
                if (netClient.IsActive)
                    await netClient.SendLine(command);
        }

Now we need to handle the NetworkClient.ClientDisconnected event. In this case all we want to do is close the network socket and remove the client from our list.

        private void ClientDisconnected(NetworkClient client)
        {
            client.IsActive = false;
            client.Socket.Close();

            if (_networkClients.Contains(client))
                _networkClients.Remove(client);

            Console.WriteLine("Client {0} disconnected", client.Id);
        }

The next thing we need is to figure out what we want to do when a client is connected. When a client connects we need to create a NetworkClient instance for them, assign them an identification number (for internal use only), hook into the NetworkClient’s events, and start listening for input from that client. This can be accomplished with the following method:

        private void ClientConnected(TcpClient client, int clientNumber)
        {
            var netClient = new NetworkClient(client, clientNumber);
            netClient.MessageReceived += ProcessClientCommand;
            netClient.ClientDisconnected += ClientDisconnected;

            // Save the resulting task from ReceiveInput so we can check
            //   for any unhandled exceptions that may have occurred
            _networkClientReceiveInputTasks.Add(new KeyValuePair<Task, NetworkClient>(netClient.ReceiveInput(),
                                                                                      netClient));

            _networkClients.Add(netClient);
            Console.WriteLine("Client {0} Connected", clientNumber);
        }

This will take a TcpClient, create a new NetworkClient for it, tie up the events and start receiving input.

We now have everything we need to handle information from a client; we just need to actually accept incoming client connections. This of course needs to be done asynchronously so we do not block the server flow while waiting for a new connection.

        private async Task ListenForClients()
        {
            var numClients = 0;
            while (IsRunning)
            {
                var tcpClient = await _listener.AcceptTcpClientAsync();
                ClientConnected(tcpClient, numClients);
                numClients++;
            }

            _listener.Stop();
        }

        public void Run()
        {
            _listener.Start();
            IsRunning = true;

            _clientListenTask = ListenForClients();
        }

That’s pretty much all the code that is needed. Now all you have to do is add the server calls to your main method:

         static void Main(string[] args)
         {
             var server = new Server(IPAddress.Any, 9001);
             server.Run();

             while (server.IsRunning)
             {
                 Thread.Sleep(100);
             }
         }

We need the while() loop because the main functionality of the server runs asynchronously; otherwise the program would immediately exit. Now if you run the server you will be able to connect multiple telnet sessions to each other and pass messages back and forth.
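For example, with the server listening on port 9001, open two or more terminals and run:

telnet localhost 9001

Anything typed in one session will be relayed to every connected session.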

Threading

The problem with our server so far is that it is not operating in a single thread. This means that once you have a lot of clients connecting, sending messages, etc., you will run into syncing issues (especially once you start adding real business logic to the mix).

Outside of WPF and WinForms applications, async/await continuations run on the thread pool, not on the main thread. This means you cannot 100% predict which await operations will run on which threads (you can read more about this at this MSDN blog).

If you want proof of this on your sample server, you can add the following code everywhere you have any other Console.WriteLine() call:

Console.WriteLine("Thread Id: {0}", Thread.CurrentThread.ManagedThreadId);

If you also add this to your Main() method and run the application you will notice multiple thread ids being displayed in the console.

Async and await utilize the SynchronizationContext, which controls how (and on what threads) the different continuations are run. Based on that reference, I created the following SynchronizationContext implementation.

    public class SingleThreadSynchronizationContext  : SynchronizationContext
    {
        private readonly Queue<Action> _messagesToProcess = new Queue<Action>();
        private readonly object _syncHandle = new object();
        private bool _isRunning = true;

        public override void Send(SendOrPostCallback codeToRun, object state)
        {
            throw new NotImplementedException();
        }

        public override void Post(SendOrPostCallback codeToRun, object state)
        {
            lock (_syncHandle)
            {
                _messagesToProcess.Enqueue(() => codeToRun(state));
                SignalContinue();
            }
        }

        public void RunMessagePump()
        {
            while (CanContinue())
            {
                Action nextToRun = GrabItem();
                nextToRun();
            }
        }

        private Action GrabItem()
        {
            lock (_syncHandle)
            {
                while (CanContinue() && _messagesToProcess.Count == 0)
                {
                    Monitor.Wait(_syncHandle);
                }
                return _messagesToProcess.Dequeue();
            }
        }

        private bool CanContinue()
        {
            lock (_syncHandle)
            {
                return _isRunning;
            }
        }

        public void Cancel()
        {
            lock (_syncHandle)
            {
                _isRunning = false;
                SignalContinue();
            }
        }

        private void SignalContinue()
        {
            Monitor.Pulse(_syncHandle);
        }
    }

I then had to update my program’s Main method to utilize the context.

         static void Main(string[] args)
         {
             var ctx = new SingleThreadSynchronizationContext();
             SynchronizationContext.SetSynchronizationContext(ctx);

             Console.WriteLine("Main Thread: {0}", Thread.CurrentThread.ManagedThreadId);
             var server = new Server(IPAddress.Any, 9001);
             server.Run();

             ctx.RunMessagePump();
         }

Now if you run the application and send some commands to the server you will see everything running asynchronously on a single thread.

Conclusion

This may not be the best approach, and a single-threaded TCP server is probably not the most efficient in a production environment, but it does give me a good baseline to work with to expand out its capabilities.

Mimicking Html.BeginForm() to reduce html div duplication in Asp.Net MVC sites

Recently I have been trying to find ways to reduce how much HTML I have to duplicate in my views, and how much I have to remember about which css classes to give each set of divs. The problem is that the HTML my views require doesn’t fit in the main layout, because content still comes after it and some elements are optional, and it doesn’t fit well in partial views due to how customized the HTML inside the area is.

As an example of the HTML I was dealing with, here’s one of my views:

<div class="grid1 floatLeft"> 
    <div class="lineSeperater"> 
        <div class="pageInfoBox"> 
            @using (Html.BeginForm(MVC.JobSearch.Edit())) 
            {
                @Html.HiddenFor(x => x.Id)
                
                <div class="grid3 marginBottom_10 marginAuto floatLeft"> 
                    <h3 class="floatLeft">@(isNewJobSearch ? "Start Job Search" : "Edit Job Search")</h3> 
                </div> 
                
                <div class="grid3 marginBottom_10 marginAuto floatLeft">
                    <div class="floatLeft">
                        <p>
                            Displayed page summary
                        </p>    
                    </div>
                </div>
                
                <div class="grid3 marginBottom_10 marginAuto floatleft">
                    <div class="floatLeft infoSpan">
                        @Html.ValidationSummary()
                    </div>
                </div>
                
                <div class="grid3 marginBottom_10 floatLeft"> 
                    <div class="floatLeft"><p class="greyHighlight">Title:</p>
                        <div class="infoSpan">@Html.TextBoxFor(x => x.Name, new { @class = "info" })</div>
                    </div> 
                </div> 

                <div class="grid3 marginBottom_10 floatLeft"> 
                    <div class="floatLeft"><p class="greyHighlight">Description:</p>
                        <div class="infoSpan">@Html.TextAreaFor(x => x.Description, new { @class = "textAreaInfo" })</div>
                    </div> 
                </div> 

                <div class="grid3 marginBottom_20 floatLeft"> 
                    <div class="submitBTN "><input type="submit" value="Save" /></div>                    
                </div> 
            }

            <div class="clear"></div> 
        </div> 
    </div> 
</div> 

<!-- More HTML Here -->

I started thinking about how I could increase my code re-use to make this easier to develop and maintain. While looking over the view my eyes gravitated towards the Html.BeginForm() call, and I realized the most logical approach was to utilize using statements. So after looking at the implementation of Html.BeginForm() for guidance (thanks to dotPeek), I came up with the following class to write the first few divs automatically.

    public class PageInfoBoxWriter : IDisposable
    {
        protected ViewContext _viewContext;
        protected bool _includesSeparator;

        public PageInfoBoxWriter(ViewContext context, bool includeSeparator)
        {
            if (context == null)
                throw new ArgumentNullException("context");

            _viewContext = context;
            _includesSeparator = includeSeparator;

            // Write the html
            _viewContext.Writer.Write("<div class=\"grid1 floatLeft\">");

            if (_includesSeparator) _viewContext.Writer.Write("<div class=\"lineSeperater\">");

            _viewContext.Writer.Write("<div class=\"pageInfoBox\">");

        }

        public void Dispose()
        {
            _viewContext.Writer.Write("<div class=\"clear\"></div></div></div>");
            if (_includesSeparator) _viewContext.Writer.Write("</div>");
        }
    }

I then created an Html helper to use this class:

    public static class LayoutHelpers
    {
        public static PageInfoBoxWriter PageInfoBox(this HtmlHelper html, bool includeSeparator)
        {
            return new PageInfoBoxWriter(html.ViewContext, includeSeparator);
        }
    }
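The other helpers used in the final view below (Html.OuterRow(), Html.FormField(), Html.FormButtonArea()) follow the same disposable pattern. As a hedged sketch, an OuterRow writer might look like this (the css classes are taken from the original view above):

    public class OuterRowWriter : IDisposable
    {
        private readonly ViewContext _viewContext;

        public OuterRowWriter(ViewContext context)
        {
            _viewContext = context;

            // Writes the outer row div that was repeated throughout the original view
            _viewContext.Writer.Write("<div class=\"grid3 marginBottom_10 floatLeft\">");
        }

        public void Dispose()
        {
            _viewContext.Writer.Write("</div>");
        }
    }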

The PageInfoBoxWriter writes the beginning divs upon creation and the ending div tags upon disposal. After following suit with the other elements of my page and creating more disposable layout classes and Html helpers, I now have the following view:

@using (Html.PageInfoBox(false))
{
    using (Html.BeginForm(MVC.JobSearch.Edit())) 
    {
        @Html.HiddenFor(x => x.Id)

        using (Html.OuterRow())
        {
            <h3 class="floatLeft">@(isNewJobSearch ? "Start Job Search" : "Edit Job Search")</h3> 
        }

        using (Html.OuterRow())
        {
            <div class="floatLeft">
                <p>
                    Displayed page summary
                </p>    
            </div>
        }

        using (Html.OuterRow())
        {
            <div class="floatLeft infoSpan">
                @Html.ValidationSummary()
            </div>
        }

        using (Html.OuterRow())
        {
            Html.FormField("Title:", Html.TextBoxFor(x => x.Name, new { @class = "info" }));
        }

        using (Html.OuterRow())
        {
            Html.FormField("Description:", @Html.TextAreaFor(x => x.Description, new { @class = "textAreaInfo" }));
        }

        using (Html.OuterRow())
        {
            using(Html.FormButtonArea())
            {
                <input type="submit" value="Save" />
            }
        }
    }
}

<!-- other html here -->

Now I have a MUCH more maintainable view that even gives me intellisense support, so I don’t have to worry about remembering how css classes are capitalized, how they are named, what order they need to be in, etc.

Testing OAuth APIs Without Coding

While working with the LinkedIn API, I started becoming frustrated in testing my API calls. The core reason was that I couldn’t just form my URL in the web browser due to OAuth. In order to test my API calls I would have to write code to perform the call, test it out, and use the VS debugger to retrieve the result to make sure the XML it’s returning is what I expect, etc. It resulted in a lot of wasted time, and I finally got fed up.

Introducing the ExtApi Tester

I developed a Windows application to make testing API calls, especially API calls that require OAuth, much simpler. I call it the ExtApi Tester.

The code can be found on GitHub. I’ve already gotten some good use out of it, and hopefully it helps someone else with their dev experience.

The code also includes an API for making it easier to call web APIs from code, but it requires some further refinement to handle the DotNetOpenAuth authorization process.

DotNetOpenAuth, OAuth, and MVC For Dummies

I recently was trying to understand OAuth so that I could utilize the LinkedIn API in my Asp.Net MVC application. The LinkedIn API has a pretty good summary of the OAuth process. However, the more I looked at it (and other documentation on OAuth), the more confused I got about implementing OAuth myself. So I went looking for a C# library to help me out, and found DotNetOpenAuth. Unfortunately, the DotNetOpenAuth website is horribly designed without any real tutorials. After searching around the internet I was able to piece some things together, and hopefully this will help someone figure this out quicker than I was able to.

Requesting User Authorization

The first step of authorizing with OAuth and DotNetOpenAuth is to redirect to the OAuth provider’s authorization page, so the user can grant your application access to perform queries/service calls on their behalf. DotNetOpenAuth needs several pieces of information to begin this process. The first is a ServiceProviderDescription object that contains the provider’s URL for retrieving the request token, the URL for retrieving the access token, the URL for requesting user authorization, the OAuth protocol version to use, and details of the tamper protection used to encode the OAuth signature. An example of creating the provider description for connecting to LinkedIn is:

        private ServiceProviderDescription GetServiceDescription()
        {
            return new ServiceProviderDescription
            {
                AccessTokenEndpoint = new MessageReceivingEndpoint("https://api.linkedin.com/uas/oauth/accessToken", HttpDeliveryMethods.PostRequest),
                RequestTokenEndpoint = new MessageReceivingEndpoint("https://api.linkedin.com/uas/oauth/requestToken", HttpDeliveryMethods.PostRequest),
                UserAuthorizationEndpoint = new MessageReceivingEndpoint("https://www.linkedin.com/uas/oauth/authorize", HttpDeliveryMethods.PostRequest),
                TamperProtectionElements = new ITamperProtectionChannelBindingElement[] { new HmacSha1SigningBindingElement() },
                ProtocolVersion = ProtocolVersion.V10a
            };
        }

The next thing that DotNetOpenAuth requires is a token manager: a class which DotNetOpenAuth utilizes to store and retrieve the consumer key, consumer secret, and the token secret for a given access key. Since how you store the user access tokens and token secrets will vary from project to project, DotNetOpenAuth assumes you will create your own token storage and retrieval mechanism by implementing the IConsumerTokenManager interface.

For testing, I looked online for an in memory token manager class, and found the following code:

    public class InMemoryTokenManager : IConsumerTokenManager, IOpenIdOAuthTokenManager
    {
        private Dictionary<string, string> tokensAndSecrets = new Dictionary<string, string>();

        public InMemoryTokenManager(string consumerKey, string consumerSecret)
        {
            if (String.IsNullOrEmpty(consumerKey))
            {
                throw new ArgumentNullException("consumerKey");
            }

            this.ConsumerKey = consumerKey;
            this.ConsumerSecret = consumerSecret;
        }

        public string ConsumerKey { get; private set; }

        public string ConsumerSecret { get; private set; }

        #region ITokenManager Members

        public string GetConsumerSecret(string consumerKey)
        {
            if (consumerKey == this.ConsumerKey)
            {
                return this.ConsumerSecret;
            }
            else
            {
                throw new ArgumentException("Unrecognized consumer key.", "consumerKey");
            }
        }

        public string GetTokenSecret(string token)
        {
            return this.tokensAndSecrets[token];
        }

        public void StoreNewRequestToken(UnauthorizedTokenRequest request, ITokenSecretContainingMessage response)
        {
            this.tokensAndSecrets[response.Token] = response.TokenSecret;
        }

        public void ExpireRequestTokenAndStoreNewAccessToken(string consumerKey, string requestToken, string accessToken, string accessTokenSecret)
        {
            this.tokensAndSecrets.Remove(requestToken);
            this.tokensAndSecrets[accessToken] = accessTokenSecret;
        }

        /// <summary>
        /// Classifies a token as a request token or an access token.
        /// </summary>
        /// <param name="token">The token to classify.</param>
        /// <returns>Request or Access token, or invalid if the token is not recognized.</returns>
        public TokenType GetTokenType(string token)
        {
            throw new NotImplementedException();
        }

        #endregion

        #region IOpenIdOAuthTokenManager Members

        public void StoreOpenIdAuthorizedRequestToken(string consumerKey, AuthorizationApprovedResponse authorization)
        {
            this.tokensAndSecrets[authorization.RequestToken] = string.Empty;
        }

        #endregion
    }

Now that we have a token manager class to use and a service description, we can begin the authorization process. This can be accomplished with the following code:

        public ActionResult StartOAuth()
        {
            var serviceProvider = GetServiceDescription();
            var consumer = new WebConsumer(serviceProvider, _tokenManager);

            // Url to redirect to
            var authUrl = new Uri(Request.Url.Scheme + "://" + Request.Url.Authority + "/Home/OAuthCallBack");

            // request access
            consumer.Channel.Send(consumer.PrepareRequestUserAuthorization(authUrl, null, null));

            // This will not get hit!
            return null;
        }

This sets up the DotNetOpenAuth consumer object to use our in-memory token manager and our previously defined service description object. We then form the URL we want the service provider to redirect to after the user grants your application access. Finally we tell the consumer to send the user authorization request. The Send() method ends the execution of the Asp.Net page, so no code after the Send() call will run. The user will then see the authorization page on the service provider’s website, where they can allow or deny access for your application.

Receiving the OAuth CallBack

Once the user logs into the service provider and grants your application authorization, the service provider will redirect to the callback URL specified in the previous code. The redirect includes the OAuth token and secret, which need to be processed by DotNetOpenAuth. The following code processes the auth token and stores it, along with the secret, in the token manager:

        public ActionResult OAuthCallback()
        {
            // Process result from the service provider
            var serviceProvider = GetServiceDescription();
            var consumer = new WebConsumer(serviceProvider, _tokenManager);
            var accessTokenResponse = consumer.ProcessUserAuthorization();

            // If we didn't have an access token response, this wasn't called by the service provider
            if (accessTokenResponse == null)
                return RedirectToAction("Index");

            // Extract the access token
            string accessToken = accessTokenResponse.AccessToken;

            ViewBag.Token = accessToken;
            ViewBag.Secret = _tokenManager.GetTokenSecret(accessToken);
            return View();
        }

Perform A Request Using OAuth Credentials

Now that we have the user’s authorization details we can perform API queries. In order to query the API though, we need to sign our requests with a combination of the user’s access token and our consumer key. Since we retrieved the user’s access token in the previous code, you need to figure out a way to store that somewhere, either in the user’s record in the database, in a cookie, or any other way you can quickly get at it again without requiring the user to constantly re-auth.

In order to use that access token to call a service provider API function, you can form a prepared HttpWebRequest by calling the PrepareAuthorizedRequest() method on the WebConsumer class. The following is an example how to use an access token to query the LinkedIn API.

        public ActionResult Test2()
        {
            // Process result from linked in
            var LiServiceProvider = GetServiceDescription();
            var linkedIn = new WebConsumer(LiServiceProvider, _tokenManager);
            var accessToken = GetAccessTokenForUser();

            // Retrieve the user's profile information
            var endpoint = new MessageReceivingEndpoint("http://api.linkedin.com/v1/people/~", HttpDeliveryMethods.GetRequest);
            var request = linkedIn.PrepareAuthorizedRequest(endpoint, accessToken);
            var response = request.GetResponse();
            ViewBag.Result = (new StreamReader(response.GetResponseStream())).ReadToEnd();

            return View();
        }

And now, if the user has authenticated, going to /Home/Test2 will correctly access LinkedIn on behalf of the user!

Update: For those who are looking for a tool to help test API calls prior to having to write them down formally in code, please see my Testing Oauth APIs Without Coding article!

From C# To Java: Events

I have been developing in C# for the last 3-4 years, and before that I was primarily a php coder. During one semester of my freshman year of college I did a little Java, but not much, and I have not done any since. I have decided to work on an N-Tier application in Java to give me a good project under my belt and show that I can use Java (and not just say to people that I can learn Java).

The problem with going from C# to Java is that the two languages are very similar. This is a problem because, while I can pick up the language itself pretty easily, most “getting started” guides and tutorials are too beginner-level to really be helpful. However, jumping straight into something like a Spring framework tutorial or other higher-level tutorials can be pretty overwhelming, as they all assume a general knowledge of Java concepts that are not immediately obvious from a C# to Java language guide. Therefore, I decided to write a series covering the conceptual differences and stumbling blocks I find on my journey through implementing a Java application.

Events in C#

The first conceptual hurdle I came across was dealing with events. In C#, events are pretty easy to deal with, as an event can be described as a collection of methods that all conform to a single delegate’s method signature. An example of this, based on a good tutorial on Code Project, is:

using System;

namespace wildert
{
    public class Metronome
    {
        public event TickHandler Tick;
        public EventArgs e = null;
        public delegate void TickHandler(Metronome m, EventArgs e);

        public void Start()
        {
            while (true)
            {
                System.Threading.Thread.Sleep(3000);
                if (Tick != null)
                {
                    Tick(this, e);
                }
            }
        }
    }

    class Test
    {
        static void Main()
        {
            Metronome m = new Metronome();
            m.Tick += HeardIt;
            m.Start();
        }

        // Must be static so it can be subscribed from the static Main method
        private static void HeardIt(Metronome m, EventArgs e)
        {
            System.Console.WriteLine("HEARD IT");
        }
    }
}

This essentially adds the HeardIt method to the Metronome.Tick event, so whenever the event is fired, it calls the Test.HeardIt() method (along with any other methods attached to the event). This is pretty straightforward in my (biased) opinion.

Events In Java

I started reading some pages on the net about how events are handled in Java. The first few articles I came across were kind of confusing and all over the place. I felt like I almost understood how it’s done, but I was missing one crucial piece of information that would bind it all together for me. After reading a few more articles, I finally had my “Ah ha!” moment, and it all clicked.

The confusion was due to the fact that, unlike in C#, there is no event keyword in Java. In fact, and this is the piece I was missing, there is essentially no such thing as an event in Java. In Java you fake events and event handling by using a standard set of naming conventions and fully utilizing object-oriented programming.

Using events in Java is really just a matter of defining an interface that contains a method to handle your event (this interface is known as the listener interface). You then implement this interface in the class that you want to handle the event (the listener class). In the class that you wish to “fire off” the event from, you maintain a collection of instances that implement the listener interface, and you provide methods so listener classes can pass in instances of themselves to be added to or removed from the collection. Finally, firing off the event is merely a matter of going through the collection of listeners and calling the listener interface’s method on each one. It’s essentially a pure-OOP solution masquerading as an event handling system.

To see this in action, let’s create the equivalent of the C# event example in Java.

package com.kalldrexx.app;

import java.util.Date;
import java.util.Enumeration;
import java.util.Vector;

// Listener interface
// (In a real project each public class below would live in its own file.)
public interface MetronomeEvent {
    void Tick(Date tickDate);
}

// Listener implementation
public class MainApp implements MetronomeEvent {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        MainApp app = new MainApp();
        EventFiringSource source = new EventFiringSource();
        source.addMetronomeEventListener(app); // Register the MainApp instance as a listener
        source.Start();
    }

    public void Tick(Date tickDate)
    {
        // Output the tick date here
    }
}

// Event source
public class EventFiringSource {

    // Our collection of classes that are subscribed as listeners of our event
    protected Vector _listeners;

    // Method for listener classes to register themselves
    public void addMetronomeEventListener(MetronomeEvent listener)
    {
        if (_listeners == null)
            _listeners = new Vector();

        _listeners.addElement(listener);
    }

    // "Fires" the event by calling Tick() on every registered listener
    protected void fireMetronomeEvent()
    {
        if (_listeners != null && !_listeners.isEmpty())
        {
            Enumeration listeners = _listeners.elements();
            while (listeners.hasMoreElements())
            {
                MetronomeEvent listener = (MetronomeEvent)listeners.nextElement();
                listener.Tick(new Date());
            }
        }
    }

    public void Start()
    {
        fireMetronomeEvent();
    }
}

When the app starts (and enters MainApp’s main() method), it creates the event source and registers a MainApp instance as a listener for the metronome event. When the event source starts, it “fires” the event by walking through all registered listeners that implement the MetronomeEvent interface and calling each one’s Tick() method.

No special magic, just pure object-oriented programming!

My Adventures With RavenDB – Getting Distinct List Items

I decided to play around with the RavenDB database system. I wanted to see how quickly I could get a Raven database up and running, and I was very impressed with how easy it was; for the most part it was just a matter of storing my records and using Linq to query for them.

No Select Manys

The only issue I came across was that Raven’s Linq implementation does not support the SelectMany() method. From discussions, this is because the Linq queries are run against Lucene, and since Lucene stores data flat it is impossible to look for data inside a list.

The query I was trying to implement dealt with the following two data structures:

    public class LogRecord
    {
        public Guid SessionId { get; set; }
        public IList<LogField> Fields { get; set; }
        public int RecordNumber { get; set; }
    }

    public class LogField
    {
        public string FieldName { get; set; }
        public string StringValue { get; set; }
        public DateTime? DateValue { get; set; }
    }
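For reference, getting records like these into Raven was just the standard document-session pattern; here is a minimal sketch, assuming an already-initialized document store (the names and values are illustrative only):

    // Minimal sketch, assuming "documentStore" is an already-initialized
    // IDocumentStore; the record contents here are illustrative only.
    using (var ravenSession = documentStore.OpenSession())
    {
        ravenSession.Store(new LogRecord
        {
            SessionId = Guid.NewGuid(),
            RecordNumber = 1,
            Fields = new List<LogField>
            {
                new LogField { FieldName = "Message", StringValue = "Started" }
            }
        });

        ravenSession.SaveChanges();
    }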

What I needed to do was to retrieve a distinct list of all FieldName values for a given SessionId value. Normally with Linq I would use the following code:

return ravenSession.Query<LogRecord>()
		.Where(x => x.SessionId == sessionId)
		.SelectMany(x => x.Fields)
		.Select(x => x.FieldName)
		.Distinct()
		.ToList();

This fails because SelectMany() is not supported by Raven.

After doing some research, it turned out that I needed to use a Raven Map/Reduce index. Raven indexes work here because they run before Raven puts the data into Lucene, so the index can run the SelectMany() on the objects themselves rather than on the flattened data stored in Lucene.

So in order to do this I coded the following index:

    public class LogRecord_LogFieldNamesIndex : AbstractIndexCreationTask<LogRecord, LogSessionFieldNames>
    {
        public LogRecord_LogFieldNamesIndex()
        {
            Map = records => from record in records
                             from field in record.Fields
                             select new
                             {
                                 SessionId = record.SessionId,
                                 FieldName = field.FieldName
                             };

            Reduce = results => from result in results
                                group result by new { result.SessionId, result.FieldName } into g
                                select new
                                {
                                    SessionId = g.Key.SessionId,
                                    FieldName = g.Key.FieldName
                                };
        }
    }
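One note: the LogSessionFieldNames projection type referenced by the index isn’t shown above; presumably it is just a simple class mirroring the map/reduce output, along these lines:

    public class LogSessionFieldNames
    {
        public Guid SessionId { get; set; }
        public string FieldName { get; set; }
    }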

The query I used to access the list of field names now became:

return session.Query<LogSessionFieldNames, LogRecord_LogFieldNamesIndex>()
			  .Where(x => x.SessionId == sessionId)
			  .Select(x => x.FieldName)
			  .Customize(x => x.WaitForNonStaleResultsAsOfNow())
			  .ToList();
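One thing worth mentioning: the index has to be created in the document store before it can be queried. With the Raven client this is typically done once at application startup, something like:

    // Typically run once at startup; scans the assembly for
    // AbstractIndexCreationTask implementations and creates them on the server.
    IndexCreation.CreateIndexes(typeof(LogRecord_LogFieldNamesIndex).Assembly, documentStore);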

Hope this helps someone!