When Ruby on Rails became popular, there was an explosion of similar frameworks that borrowed many of its ideas. ASP.NET MVC was one of those frameworks. A feature that it borrowed is the ability to store data for the next request: in Rails it’s called Flash and in ASP.NET MVC it’s called TempData. I’ve created Harbour.RedisTempData, which allows TempData to be stored in Redis.

An Example of TempData

Imagine building a form to create an item, where you want to display a success message after the user is redirected to the index. Using the Post-Redirect-Get (PRG) pattern, the actions would look something like:

[HttpPost]
public ActionResult Create(ItemModel model)
{
    db.Save(model);
    TempData["success"] = "The new item was successfully added!";
    return RedirectToAction("index");
}

[HttpGet]
public ActionResult Index()
{
    return View();
}

In the Index.cshtml view, the message is displayed:

@if (TempData.ContainsKey("success"))
{
    <span class="success">@TempData["success"]</span>
}

The success message must be available between the Create action and the redirect to Index. If we used ViewData instead of TempData, the message would not survive the redirect to the Index action. When Index is refreshed, no message is displayed because the success message was already read.

TempData + Redis = Love

An application that is on a single web server can use most of ASP.NET MVC out of the box. When your application sits on many web servers behind a load-balancer, you have to think about your data flow.

If you’re already using distributed Session state with Redis, you’re all set. By default, the Session holds the TempData.

If you’re not using the Session or would like to store TempData outside of the Session, you can store it directly in Redis. Harbour.RedisTempData implements ITempDataProvider to do exactly that. Setting it up is easy:

  1. Install from NuGet:

    PM> Install-Package Harbour.RedisTempData
    
  2. Create a base controller that configures the TempDataProvider:

    public abstract class ApplicationController : Controller
    {
        private readonly IRedisClient redis = new RedisClient("localhost:6379");
    
        protected ApplicationController()
        {
            TempDataProvider = new RedisTempDataProvider(redis);
        }
    
        protected override void Dispose(bool disposing)
        {
            redis.Dispose();
    
            base.Dispose(disposing);
        }
    }
    
  3. Inherit from the base controller and use the TempData as you normally would:

    public class HomeController : ApplicationController
    {
        public ViewResult Index()
        {
            TempData["message"] = "Hello World";
            return View();
        }
    }
    

That’s it! Be sure to check out the README for more configuration recommendations and options. You can now distribute and scale your TempData with the power of Redis.

NHibernate has quite a steep learning curve, but I’m in favor of making it an easier tool. I don’t want an uphill battle when writing code. I don’t want a tool that gets in the way. Building software should be fun.

I recently had an application using NHibernate that had a strange bug. I was trying to remove an entity from a collection, but it wasn’t being removed. Can you spot the issue?

public class Blog
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    // This would be configured to lazy-load.
    public virtual IList<Post> Posts { get; protected set; }

    public Blog()
    {
        Posts = new List<Post>();
    }

    public virtual Post AddPost(string title, string body)
    {
        var post = new Post() { Title = title, Body = body, Blog = this };
        Posts.Add(post);
        return post;
    }
}

public class Post
{
    public virtual int Id { get; set; }
    public virtual string Title { get; set; }
    public virtual string Body { get; set; }
    public virtual Blog Blog { get; set; }

    public virtual bool Remove()
    {
        return Blog.Posts.Remove(this);
    }
}

void Main(string[] args)
{
    // Assume `session` is an open ISession and `postId` identifies a saved Post.
    // The mapping for these two entities isn't anything out of the ordinary.
    var post = session.Load<Post>(postId);
    post.Remove();
}

Down the Rabbit Hole

After two days of debugging, I found the issue: NHibernate was comparing a proxy with a non-proxy. Inside of post.Remove(), this refers to a Post. But Blog.Posts refers to a collection of PostProxy. Because NHibernate’s proxies use reference equality when Equals is not overridden, the objects can never be equal. So, the Post is not removed from the collection… ugh!
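The failure mode isn’t specific to .NET. Here’s a minimal Python sketch (hypothetical Post/PostProxy classes of my own, not NHibernate itself) showing how default reference equality defeats collection membership when a proxy and a non-proxy represent the same row:

```python
class Post:
    """Hypothetical entity; equality defaults to reference equality."""
    def __init__(self, id):
        self.id = id

class PostProxy(Post):
    """Stand-in for NHibernate's generated lazy-loading proxy."""

# Blog.Posts holds proxies, but `this` inside Remove() is the real entity.
posts = [PostProxy(1)]
post = Post(1)

# Membership (and remove()) relies on equality; with reference equality
# nothing matches, so the "same" row is never found or removed.
assert post not in posts

# Identifier-based equality fixes membership across proxy boundaries.
class IdPost(Post):
    def __eq__(self, other):
        return isinstance(other, Post) and self.id == other.id
    def __hash__(self):
        return hash(self.id)

assert IdPost(1) in posts
```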

It’s even more confusing if you run the same code outside of the Post:

var post = session.Load<Post>(postId);
post.Blog.Posts.Remove(post);

Here, a PostProxy is compared with a collection of PostProxy. Reference equality returns true and the post is removed. Having a proxy outside but a real object inside makes sense because of how proxies and interception work. But it adds confusion and can hide bugs. These types of bugs are waiting to creep up on you.

The Misinformation

In the normal .NET world, you’d override Equals/GetHashCode to make Posts.Remove() always work. But the documentation only recommends overriding Equals/GetHashCode if you’re mixing entities from different sessions. You supposedly don’t have to worry because NHibernate:

”[…] guarantees identity ( a == b , the default implementation of Equals()) inside a single ISession!”

That’s not true. Identity with NHibernate usually means that two entities refer to the same row in the database. Because of the misleading documentation, there’s a lot of misinformation claiming that you don’t need to worry about Equals/GetHashCode. But as the example showed, NHibernate considers a proxy and a non-proxy with the same primary key not equal.

If you were to override Equals, the usual recommendation is to use business equality (e.g. post.Title == other.Title). This is a bad idea because it can cause unnecessary loading of other entities. Imagine comparing root entities in a deep and complicated object graph. It’s a good way to kill performance.

A Real Solution

First, as a community we need to stop the misinformation. The documentation is wrong and needs updating. We need to emphasize that there are cases where NHibernate is inconsistent.

Second, if you want to prevent difficult-to-find bugs, always override Equals/GetHashCode. Even if you don’t use the pattern in the example above, override them anyway to future-proof your code. Rather than using reference equality or implementing business equality, use the identity of the entity. I use the following base class for all my entities (and you can too):

public abstract class EntityBase
{
    public virtual int Id { get; protected set; }

    protected bool IsTransient { get { return Id == 0; } }

    public override bool Equals(object obj)
    {
        return EntityEquals(obj as EntityBase);
    }

    protected bool EntityEquals(EntityBase other)
    {
        if (other == null || !GetType().IsInstanceOfType(other))
        {
            return false;
        }
        // One entity is transient and the other is persistent.
        else if (IsTransient ^ other.IsTransient)
        {
            return false;
        }
        // Both entities are not saved.
        else if (IsTransient && other.IsTransient)
        {
            return ReferenceEquals(this, other);
        }
        else
        {
            // Both entities are persistent: compare by identity.
            return Id == other.Id;
        }
    }

    // The hash code is cached because a requirement of a hash code is that
    // it does not change once calculated. For example, if this entity was
    // added to a hashed collection when transient and then saved, we need
    // the same hash code or else it could get lost because it would no 
    // longer live in the same bin.
    private int? cachedHashCode;

    public override int GetHashCode()
    {
        if (cachedHashCode.HasValue) return cachedHashCode.Value;

        cachedHashCode = IsTransient ? base.GetHashCode() : Id.GetHashCode();
        return cachedHashCode.Value;
    }

    // Maintain equality operator semantics for entities.
    public static bool operator ==(EntityBase x, EntityBase y)
    {
        // By default, == and Equals compares references. In order to 
        // maintain these semantics with entities, we need to compare by 
        // identity value. The Equals(x, y) override is used to guard 
        // against null values; it then calls EntityEquals().
        return Object.Equals(x, y);
    }

    // Maintain inequality operator semantics for entities. 
    public static bool operator !=(EntityBase x, EntityBase y)
    {
        return !(x == y);
    }
}

This uphill battle was painful. But after using this class, I don’t have to worry about these bugs creeping up. Now I can focus on writing meaningful code and have fun building software.

Tracking Active Users with Redis (and a sample in C#)

Knowing the currently active users is a common requirement. Maybe you’d like to track who’s online. Or, maybe you’d like to know the current collaborators of a document. Using Redis as the data store, this can be quite trivial to implement. I’m going to describe the general algorithm and give a sample in C#.

Windows of Activity

Activity of an object is broken into “windows” (fixed intervals of time). For example, if we divided activity for document #1 into intervals of 10 seconds, it’d look something like this:

| 1:20:00  | 1:20:10  | 1:20:20  | 1:20:30  | ...     
-------------------------------------------------
|          |          |          |          | ...

We store activity at a specific time, but we dump it into each of these windows. For example, if John (denoted by J) was active at 1:20:12 and then again at 1:20:25, it’d look something like this:

| 1:20:00  | 1:20:10  | 1:20:20  | 1:20:30  | ...
-------------------------------------------------
|          |  J       |     J    |          | ...

And if Sally (denoted by S) was active at 1:20:05 and 1:20:23 it would look like this:

| 1:20:00  | 1:20:10  | 1:20:20  | 1:20:30  | ...     
-------------------------------------------------
|     S    |  J       |   S J    |          | ...

Calculating a time’s window is simple math. For example, the window for 1:20:05 would look like:

time = Date(2013, 11, 11, 1, 20, 5)
window_width_seconds = 10
window = floor(time.second / window_width_seconds)  # integer division
window_time = Date(time.year, time.month, time.day, time.hour, time.minute, window * window_width_seconds)
# window_time == Date(2013, 11, 11, 1, 20, 0)
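The same truncation as runnable Python (the function name is mine; this assumes a window width that evenly divides 60 seconds):

```python
from datetime import datetime

def window_start(time, window_width_seconds=10):
    # Truncate the seconds down to the start of the containing window.
    window = time.second // window_width_seconds
    return time.replace(second=window * window_width_seconds, microsecond=0)

print(window_start(datetime(2013, 11, 11, 1, 20, 5)))
# 2013-11-11 01:20:00
```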

Storing in Redis

For the implementation in Redis, we could represent each window as a List. The identifier of the object and window time are included as part of the key:

RPUSH "activity/documents:1/2013-11-01-01:20:00" "Sally"
RPUSH "activity/documents:1/2013-11-01-01:20:10" "John"
RPUSH "activity/documents:1/2013-11-01-01:20:20" "Sally"
RPUSH "activity/documents:1/2013-11-01-01:20:20" "John"

And then we can easily query for active users during a window. For example, the 1:20:20 window:

LRANGE "activity/documents:1/2013-11-01-01:20:20" 0 -1
1) "Sally"
2) "John"

However, there are a couple of problems with this naive approach.

Problems

First, we may need the specific time that the user was active. We could store this with the list value, but it feels like we’re not using Redis to its full potential:

RPUSH "activity/documents:1/2013-11-01-01:20:00" "{\"name\":\"Sally\",\"time\":\"01:20:05\"}"

Second, if a user is active more than once during a window, we’d have duplicate data:

RPUSH "activity/documents:1/2013-11-01-01:20:00" "John"
RPUSH "activity/documents:1/2013-11-01-01:20:00" "John"

Finally, querying for activity isn’t as simple as looking at users for the current window. What we really need is a sliding window. For example, if the current time is 1:20:14, we’d want to query between 1:20:04 and 1:20:14 (since our window size is 10 seconds):

| 1:20:00  | 1:20:10  | 1:20:20  | 1:20:30  | ...     
-------------------------------------------------
|     S    |  J       |   S J    |          | ...
     |<-------->|

To accomplish this, we can query the current and previous windows and then join the results:

LRANGE "activity/documents:1/2013-11-01-01:20:00" 0 -1
1) "Sally"
LRANGE "activity/documents:1/2013-11-01-01:20:10" 0 -1
1) "John"
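Computing which window keys to join is simple date math. A sketch (helper names are mine; the key format matches the examples above):

```python
from datetime import datetime, timedelta

WIDTH = 10  # window width in seconds

def window_key(obj, time):
    # Build the key for the fixed window that contains `time`,
    # e.g. "activity/documents:1/2013-11-01-01:20:10".
    start = time.replace(second=(time.second // WIDTH) * WIDTH, microsecond=0)
    return "activity/%s/%s" % (obj, start.strftime("%Y-%m-%d-%H:%M:%S"))

def keys_to_query(obj, now):
    # The sliding window spans the previous and current fixed windows.
    return [window_key(obj, now - timedelta(seconds=WIDTH)), window_key(obj, now)]

print(keys_to_query("documents:1", datetime(2013, 11, 1, 1, 20, 14)))
# ['activity/documents:1/2013-11-01-01:20:00', 'activity/documents:1/2013-11-01-01:20:10']
```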

However, joining both full windows gives us an effective window that can be up to twice as large as intended (here, anyone active in the last 10 to 20 seconds is included). We can do better than this.

Sorted Sets to the Rescue

With a Sorted Set, we can do the following:

  1. Use the activity time as the score of each entry (and thus allowing us to know when the user was active).
  2. Silently ignore duplicate users in a window (since we’re using a set).
  3. Not have to worry about querying the full list (sorted sets support range queries).

For example, with a window of 30 seconds:

# John @ 2:29:45
ZADD "activity/documents:1/2013-11-11-02:29:30" "6.3519776985E+17" "John"
# Sue @ 2:30:00
ZADD "activity/documents:1/2013-11-11-02:30:00" "6.3519777E+17" "Sue"
# Mary @ 2:30:05
ZADD "activity/documents:1/2013-11-11-02:30:00" "6.3519777005E+17" "Mary"

To get the users at a specific time:

  1. Query the previous window by score: from the current time minus the window size up to positive infinity.
  2. Query the current window without a score range (its upper bound is effectively now, and we want all current users).
  3. Concatenate the results (they’re already sorted by time!).
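The three steps can be sketched against an in-memory stand-in for Redis sorted sets (plain dicts per window; helper names are mine, and Unix timestamps stand in for the tick-style scores used in the examples):

```python
from datetime import datetime, timedelta

WIDTH = 30  # window width in seconds

def window_start(t):
    return t.replace(second=(t.second // WIDTH) * WIDTH, microsecond=0)

def active_users(windows, now):
    """windows: {window_start: {member: score}}, emulating one sorted set
    per window. Scores are POSIX timestamps (the C# sample uses ticks)."""
    cutoff = (now - timedelta(seconds=WIDTH)).timestamp()
    prev = windows.get(window_start(now - timedelta(seconds=WIDTH)), {})
    curr = windows.get(window_start(now), {})
    # 1. Range-query the previous window from the cutoff upward.
    result = sorted((s, m) for m, s in prev.items() if s >= cutoff)
    # 2. Take the whole current window (its upper bound is "now").
    # 3. Concatenate; each window is already ordered by score.
    result += sorted((s, m) for m, s in curr.items())
    return [m for s, m in result]

t = lambda h, m, s: datetime(2013, 11, 11, h, m, s)
windows = {
    t(2, 29, 30): {"John": t(2, 29, 45).timestamp()},
    t(2, 30, 0): {"Sue": t(2, 30, 0).timestamp(), "Mary": t(2, 30, 5).timestamp()},
}
# At 02:30:23 the sliding window is 02:29:53..02:30:23: John (02:29:45)
# has aged out, while Sue and Mary are still active.
assert active_users(windows, t(2, 30, 23)) == ["Sue", "Mary"]
```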

For example, to query at the current time 02:30:23:

ZRANGEBYSCORE "activity/documents:1/2013-11-11-02:29:30" "6.3519776993E+17" "+inf" "WITHSCORES"
(empty list or set)
ZRANGE "activity/documents:1/2013-11-11-02:30:00" "0" "-1" "WITHSCORES"
1) "Sue"
2) "6.3519777E+17"
3) "Mary"
4) "6.3519777005E+17"

The previous window comes back empty because John’s last activity (2:29:45) falls outside the 30 second sliding window.

The WITHSCORES argument is used so that we can return the score (which is the time of the activity).

Extras

  1. One thing that I love about Redis is key expiry. Be sure to expire your windows of activity by at least double the window size. For example:

    EXPIRE "activity/documents:1/2013-11-11-02:30:00" "60"
    
  2. Don’t forget to wrap your Redis commands in a transaction so that they are atomic:

    MULTI
    ZADD "activity/documents:1/2013-11-11-02:29:30" "6.3519776985E+17" "John"
    EXPIRE "activity/documents:1/2013-11-11-02:29:30" "60"
    EXEC
    
  3. Use UTC dates for all of your dates.

Sample in C#

I’ve uploaded a sample in C# that uses BookSleeve and JSON.NET.

Run the sample or check out the tests to see the code in action. I’ve abstracted the implementation behind IActivityMonitor for easy swapping:

interface IActivityMonitor
{
    void Beacon(string key, DateTime time, int userId, string userName);
    IEnumerable<ActiveUser> GetAll(string key, DateTime time);
}

Finally, the code is MIT licensed - so feel free to use it in your project.

Using the ASP.NET compiler (via aspnet_compiler.exe or the AspNetCompiler MSBuild task) as part of your build process is a recommended way to increase application performance and precompile views. If you introduce NHibernate into the picture, you may run into hard-to-debug exceptions during the precompilation step. For example, when the ISessionFactory is created, it will quote any column or table names that interfere with reserved keywords. If your database doesn’t exist or isn’t accessible from the build server, you’re going to get a connection exception.

It would be better if we didn’t have to query the database during the precompilation step (because we don’t actually need to!).

Solution 1

The first attempt, at least for our exception above, is to disable the column and table name quoting. There are a couple of ways to achieve this. For example, through XML configuration:

<property name="hbm2ddl.keywords">none</property>

Unfortunately, doing this puts us at risk of using column or table names that aren’t valid. Disabling this during debugging is OK, but our build server should be as rigid as possible.

Solution 2

The better approach is to not configure the ISessionFactory during pre-compilation. Therefore, we need a way to detect if the application is being pre-compiled.

NOTE: This code relies on implementation details and peeks into the bowels of ASP.NET that could change in the future. I’ve tested it against .NET 2.0, 3.5 and 4.0 using both ASP.NET compilers. If you have a better approach for detecting precompilation, take a look at my SO post.

public static bool IsPerformingPrecompilation()
{
    var simpleApplicationHost = Assembly.GetAssembly(typeof(HostingEnvironment))
                                    .GetType("System.Web.Hosting.SimpleApplicationHost", 
                                             throwOnError: true, ignoreCase: true);

    return HostingEnvironment.InClientBuildManager &&
           HostingEnvironment.ApplicationID.EndsWith("_precompile", StringComparison.InvariantCultureIgnoreCase) &&
           HostingEnvironment.ApplicationHost.GetType() == simpleApplicationHost;
}

Now, we can easily disable creation of the ISessionFactory if the application is being precompiled. You could, for example, put this code inside of your IoC container’s setup:

if (IsPerformingPrecompilation())
{
    return;
}

// Configuration creation snipped...
var sessionFactory = configuration.BuildSessionFactory();
// Wire up sessionFactory with your IoC container etc.

Conclusion

NHibernate shouldn’t have to hit the database when using the ASP.NET compiler. The simplest fix is to skip creation of the ISessionFactory altogether. Although this approach might be a bit adventurous, I’ve been using it in a couple of applications without any issues.

It’s not uncommon for ASP.NET MVC developers to find things missing out of the box. Now that MVC is open source, the missing puzzle pieces might actually make it into future releases.

Problem

Sending and model binding JSON with an enum is one of the missing pieces. For example, let’s assume we have the following models:

public enum Suit
{
    Spades = 1,
    Hearts = 2,
    Clubs = 3,
    Diamonds = 4
}

public class Card
{
    public Suit Suit { get; set; }
    public int Value { get; set; }
}

And assume that we’ve sent the following JSON back to our CardsController:

// Six of hearts.
{
  "Suit": 2,
  "Value": 6
}

This doesn’t work as expected:

[Image: invalid enum model binding]

Why is the Suit 0? Well, it turns out that the default model binder only handles enums by the string representation of their name. Therefore, sending the following JSON works as expected:

{
  "Suit": "Hearts",
  "Value": 6
}

Sending this type of enum representation back to a controller is counterintuitive (especially if you’re also representing the enum numerically in JavaScript!).

Solution

The following model binder can be used to bind numeric enum values like { "Suit": 2 }. You can specify a default enum value if necessary.

public class EnumModelBinder<T> : IModelBinder
        where T : struct
{
    private readonly T defaultValue;
    private readonly bool hasDefaultValue;

    public EnumModelBinder(T defaultValue)
    {
        this.defaultValue = defaultValue;
        this.hasDefaultValue = true;
    }

    public EnumModelBinder()
    {
        this.hasDefaultValue = false;
    }

    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        var valueResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        var modelState = new ModelState() { Value = valueResult };

        object actualValue = null;

        if (valueResult == null && this.hasDefaultValue)
        {
            actualValue = this.defaultValue;
        }
        else if (valueResult == null)
        {
            modelState.Errors.Add("No default representation of enum " + typeof(T).Name + " could be found.");
        }
        else
        {
            string value = valueResult.AttemptedValue;
            T enumValue;

            if (Enum.TryParse<T>(value, out enumValue) && Enum.IsDefined(typeof(T), enumValue))
            {
                actualValue = enumValue;
            }
            else if (this.hasDefaultValue)
            {
                actualValue = this.defaultValue;
            }
            else
            {
                modelState.Errors.Add("Could not parse " + value + 
                                      " as a valid numerical value of enum " + typeof(T).Name + ".");
            }
        }

        bindingContext.ModelState.Add(bindingContext.ModelName, modelState);
        return actualValue;
    }
}

Wire this up inside your Global.asax:

ModelBinders.Binders.Add(typeof(Suit), new EnumModelBinder<Suit>());

And now we can go home happy campers:

[Image: fixed enum model binding]

Conclusion

Because model binding numeric enum values isn’t supported out of the box in ASP.NET MVC, we have to create our own binder. I still think this type of binder should be baked into the framework.

Enjoy!


Appendix

Here are the associated tests for this model binder:

[TestClass]
public class EnumModelBinderTest
{
    private enum Foo
    {
        A = 1,
        B = 2,
        C = 3
    }

    [TestMethod]
    public void Bind_to_default_value_if_value_provider_has_no_value()
    {
        // Arrange
        var b = TestableEnumModelBinder<Foo>.Create("X", null, Foo.B);

        // Act
        var result = b.BindModel(b.ControllerContext, b.BindingContext);

        // Assert
        Assert.AreEqual(Foo.B, result);
        Assert.IsTrue(b.BindingContext.ModelState.IsValid);
    }

    [TestMethod]
    public void Bind_to_default_value_if_parsing_fails()
    {
        // Arrange
        var b = TestableEnumModelBinder<Foo>.Create("X", "adsf", Foo.C);

        // Act
        var result = b.BindModel(b.ControllerContext, b.BindingContext);

        // Assert
        Assert.AreEqual(Foo.C, result);
        Assert.IsTrue(b.BindingContext.ModelState.IsValid);
    }

    [TestMethod]
    public void Add_error_if_value_provider_has_no_value_and_no_default_value_is_set()
    {
        // Arrange
        var b = TestableEnumModelBinder<Foo>.Create("X", null);

        // Act
        var result = b.BindModel(b.ControllerContext, b.BindingContext);

        // Assert
        Assert.IsNull(result);
        Assert.IsFalse(b.BindingContext.ModelState.IsValid);
    }

    [TestMethod]
    public void Add_error_if_value_could_not_be_parsed_as_enum_value_and_no_default_value_is_set()
    {
        // Arrange
        var b = TestableEnumModelBinder<Foo>.Create("X", "999");

        // Act
        var result = b.BindModel(b.ControllerContext, b.BindingContext);

        // Assert
        Assert.IsNull(result);
        Assert.IsFalse(b.BindingContext.ModelState.IsValid);
    }

    [TestMethod]
    public void Bind_correct_value()
    {
        // Arrange
        var b = TestableEnumModelBinder<Foo>.Create("X", "2");

        // Act
        var result = b.BindModel(b.ControllerContext, b.BindingContext);

        // Assert
        Assert.AreEqual(Foo.B, result);
        Assert.IsTrue(b.BindingContext.ModelState.IsValid);
    }

    private class TestableEnumModelBinder<T> : EnumModelBinder<T>
        where T : struct
    {
        public ControllerContext ControllerContext;
        public ModelBindingContext BindingContext;

        private TestableEnumModelBinder(T defaultValue)
            : base(defaultValue)
        {

        }

        private TestableEnumModelBinder()
            : base()
        {

        }

        public static TestableEnumModelBinder<T> Create(string modelName, string modelValue, object defaultValue = null)
        {
            var controllerContext = new ControllerContext();
            var valueProvider = new Mock<IValueProvider>();

            if (modelValue != null)
            {
                valueProvider.Setup(p => p.GetValue(modelName))
                    .Returns(new ValueProviderResult(modelValue, modelValue, CultureInfo.CurrentCulture));
            }
            else
            {
                valueProvider.Setup(p => p.GetValue(modelName)).Returns((ValueProviderResult)null);
            }

            var bindingContext = new ModelBindingContext()
            {
                ModelName = modelName,
                ValueProvider = valueProvider.Object
            };

            TestableEnumModelBinder<T> binder;

            if (defaultValue != null)
            {
                binder = new TestableEnumModelBinder<T>((T)defaultValue);
            }
            else
            {
                binder = new TestableEnumModelBinder<T>();
            }

            binder.ControllerContext = controllerContext;
            binder.BindingContext = bindingContext;

            return binder;
        }
    }
}

I’ve often found myself writing raw SQL queries with NHibernate because the abstraction can be limiting. This can be achieved with the CreateSQLQuery() method on ISession:

var results = this.session.CreateSQLQuery("select Id, Title, Body from [Posts]")
    .AddEntity(typeof(Post))
    .List<Post>();

For a simple case like this, everything works as expected. However, if the query includes something that breaks the NHibernate abstraction, like in this StackOverflow post, it’s not really possible to AddEntity(). When abstractions get in the way, it causes developer frustration and many wasted hours trying to fight the uphill battle. Sometimes I want NHibernate to get out of my way and just give me some data - I don’t want to have to add extra entity mappings!

Being Dynamic

Recently, I’ve been playing around with Dapper and Massive. I really like the ability to work directly with the data using dynamics. Wouldn’t it be cool if we could do the same thing using NHibernate?

Why not!

public static class NhTransformers
{
    public static readonly IResultTransformer ExpandoObject;

    static NhTransformers()
    {
        ExpandoObject = new ExpandoObjectResultSetTransformer();
    }

    private class ExpandoObjectResultSetTransformer : IResultTransformer
    {
        public IList TransformList(IList collection)
        {
            return collection;
        }

        public object TransformTuple(object[] tuple, string[] aliases)
        {
            var expando = new ExpandoObject();
            var dictionary = (IDictionary<string, object>)expando;
            for (int i = 0; i < tuple.Length; i++)
            {
                string alias = aliases[i];
                if (alias != null)
                {
                    dictionary[alias] = tuple[i];
                }
            }
            return expando;
        }
    }
}

public static class NHibernateExtensions
{
    public static IList<dynamic> DynamicList(this IQuery query)
    {
        return query.SetResultTransformer(NhTransformers.ExpandoObject)
                    .List<dynamic>();
    }
}

How awesome is this?

var results = this.session.CreateSQLQuery("select Id, Title, Body from [Posts]")
                  .DynamicList(); // Secret sauce!
// results are now dynamic!
Console.WriteLine(results[0].Id);
Console.WriteLine(results[0].Title);
// rock on!

Conclusion

NHibernate is quite the tool, but its abstractions can get in the way. Using DynamicList() gives control of the data back to the developer when raw SQL access is required.