November updates – what’s new from Mindscape?

Developer Notes

WPF Elements

WPF Diagrams

  • Added DiagramSurface.IsVirtualizing option. (details)
  • Resolved selection bugs related to grouping. (details)
  • Resolved a selection bug when holding Ctrl or Shift in ReadOnly mode. (details)

Web Workbench

  • Update Less to 1.5.0.
  • Outline and indent @include directive blocks.
  • Better handling for locating root folder.
  • Additional logic to ignore errors when compass import not found.


LightSpeed

  • Additional string parsing conversion support.
  • Use current culture for string parsing.
  • More string based parsing support for SQLite3.
  • Only care about the zero rows case.
  • Relax the OptimisticConcurrencyException check to support triggers etc. (details)
  • Switch to Invariant to match SQLite behavior.
  • Add additional IgnoreDataMember attributes. (details)

  • Improved the performance across the app!
  • Improved search results
  • Several provider updates

As usual the free editions of the nightly builds are available right now from the downloads page, and the full editions from the store.

Visual Studio 2013 support

Visual Studio 2013

I’m pleased to let you know that all Mindscape products have full support for Visual Studio 2013!

If you have an active subscription you can download the latest nightly builds to get this new support. If your subscription has ended, you can renew it to obtain the latest builds.

Some products needed explicit support to work at all (e.g. Web Workbench), while others just have nice-to-have improvements like putting WPF controls into the toolbox for you.

Happy Coding!

Building WPF Applications

A common scenario for many software developers is the need to build an application over a database and allow the end users to view, edit and delete data. The purpose of this article is to look at this common situation and how we can best approach the application architecture and database access to create a high performance user experience while at the same time building a maintainable solution. I’ll also share the main performance optimization concepts we used to build our DataGrid and chart controls that are capable of binding to millions of items.

We have built a sample application to go with this article which you can download at the end of the post. It’s a simple dashboard for an imaginary company that sells three main products: cookies, lemonade and Absinthe (trust me, they go great together!). I’ve generated just over a million sales records in the sample database to help show a “real world” performance load.

Build on a solid foundation

To structure this application we have used the popular Model View ViewModel (MVVM) pattern. If you’re unfamiliar with the MVVM pattern we have posted an overview of MVVM here. For those of you familiar with it, we typically suggest using the much loved Caliburn Micro framework built by Rob Eisenberg. Working with Caliburn Micro saves developers from having to write a lot of boilerplate code for plumbing their applications together. You can read a more in-depth tutorial series about Caliburn Micro on our blog.

For data access we have used our object relational mapper, LightSpeed with the data being accessed from a SQLite database. This is just to make it easier for you to download the attached sample and run it up. Using LightSpeed you could easily change the sample to work against SQL Server, Oracle or even cloud databases like SimpleDB.


The general structure of the user interface is:

TOP: A Time Explorer control that shows an overview graph of sales over time. It supports zooming in and selecting areas along the graph. When a selection change is made, it updates the various charts, gauges and the data grid with the data from the selected date range.

MIDDLE: A stacked area chart showing the total sales by product type. Also, clicking on the legend at the top of the app will filter the data by that product type (e.g. click on ‘Absinthe’ and only see the data related to Absinthe sales).

BOTTOM: A data grid showing customer data for each sale. We can also group by any column, page through the data and give an example of showing countries using a flag rather than a boring label.

RIGHT: A top 10 chart by country for the selected range of data. Below this are some gauges showing the year on year sales and how we’re tracking to budget (in our application these values are fairly random but you get the idea!).

Start with a solid framework

To help you build applications, there are a variety of frameworks available to reduce your workload. Caliburn Micro supports all XAML platforms and several popular UI patterns including MVVM. Although it isn’t difficult to create an MVVM application unassisted, a framework makes it significantly easier to quickly create a robust application that is easy to maintain. When you start working with Caliburn Micro, you’ll find that it clearly separates the view from the view-model and reduces the temptation to clutter up the code-behind. Caliburn Micro offers many convenient features for binding controls in the view to properties in the view-model, and for listening to events and commands raised by user interactions with controls in the view.

The view-model is mostly composed of properties, all of which are either primitive values, collections or plain old C# objects (POCOs). A good MVVM application does not have any references to UI elements or view related classes (such as brushes) within the view-model. One of the great advantages of this is that a pure model with no UI elements is much easier to test. When writing unit tests, you simply want to set properties or call methods and then assert that the state of the model is correct. If there are any UI elements within the model, certain functions may not work unless a particular control has been loaded in the view. Also, it’s much easier to test the effects of user interaction when you use a command system rather than hooking up event handlers to controls directly in the model. Another advantage of keeping the model free of UI elements is that it helps save time when the application specifications change.
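
Such a UI-free view-model can be sketched in a few lines. This is only an illustration — the class and property names here are hypothetical, not taken from the sample application:

```csharp
using System.ComponentModel;

// A plain view-model with no references to UI elements, only
// properties that raise change notifications for the bindings.
public class SalesDashboardViewModel : INotifyPropertyChanged
{
    private double _totalSales;

    public double TotalSales
    {
        get { return _totalSales; }
        set
        {
            if (_totalSales == value) return;
            _totalSales = value;
            OnPropertyChanged("TotalSales");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}
```

A unit test can then simply set `TotalSales` and assert on the model’s state, without ever loading a view.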

If you’re an application designer, you probably know that the design can change quite often. For example, in an early version, you may expose some user options using toggle buttons – some of which need to be disabled based on other options. A poorly built application may have the toggle buttons accessible within the model where it is convenient to directly set the enabled state as other options change. In a future version, rather than using toggle buttons, the design may require the options to be exposed as menu items. When someone comes to make this change, they find the model is riddled with talk of toggle buttons and probably other controls, which all need to be changed around. In a well built application, UI elements can only be found in the view and are bound to properties in the view model. So when the UI design changes, all that needs to be updated is the structure of the view. Generally it is possible to create an entirely new UI design for an application without making changes to the model. If you come across a scenario where it seems the model does need references to UI elements, try pulling this out into a custom control implementation.

Go with the flow

When the application starts up, data is loaded from a database or a local file and various properties on the model are set to express the state of the application. Due to property changed notifications, bindings will be updated which the view uses to update the display. When the user interacts with the application, events or commands are triggered which the model can listen to. Based on the events, commands or parameters that the model receives, the state of the application model can change, and properties will be updated to reflect this. Once again property changed notifications cause bindings to be updated and the display is refreshed for the user.


Leverage the power of WPF

WPF has a wealth of great features built into it which is why I love working with it so much. Bindings, routed events and other great features make our lives a lot easier – especially when coupled with a solid design pattern like MVVM. In the attached sample pay particular attention to:

Bindings – Data binding is a way to link two properties together so that when the source property changes, the target property is updated. The most common use of bindings is to bind properties from the model to properties of controls in the view. When properties in the model raise property change notifications, the properties on the controls are also changed, which updates the display. This is a huge help for MVVM applications as it simplifies the ‘bond’ between the view and the model. Data binding has a vast range of useful features, so I’ll only mention a few here. If you need to bind two properties that have different types, you can specify a converter that converts the source value into an appropriate value for the target. For example, you may have a Boolean property in the model that the view uses to change the visibility of an element. Here you can use a Boolean-to-visibility converter: false is converted to Collapsed, and true is converted to Visible. Another useful feature is the string format. If you have a text block that is binding to a double or date-time property, you may want to specify a string format to display the value in an appropriate way. You can also specify the direction of the binding. In some situations you may only need property values to bind from the model to the view; in other situations you may want properties to change both ways through the binding.
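
These features might look like the following XAML sketch. `IsBusy`, `Total` and `CustomerName` are assumed view-model properties for the sake of the example; `BooleanToVisibilityConverter` ships with WPF:

```xml
<!-- Declare the stock converter once as a resource -->
<Window.Resources>
    <BooleanToVisibilityConverter x:Key="BoolToVis" />
</Window.Resources>

<!-- Converter: a Boolean model property drives element visibility -->
<ProgressBar Visibility="{Binding IsBusy, Converter={StaticResource BoolToVis}}" />

<!-- StringFormat: display a double as currency -->
<TextBlock Text="{Binding Total, StringFormat='{}{0:C}'}" />

<!-- Direction: a two-way binding pushes user edits back into the model -->
<TextBox Text="{Binding CustomerName, Mode=TwoWay}" />
```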

Data context – A data context is the source of data binding. In an MVVM application, the data context of a view is a view-model. Any bindings in the view will look at that view-model for the source properties. As you look deeper into the visual tree of a view, the data context can be broken down into sub-models which the sub-elements can bind to. For example, the data context of a view is an entire view-model which can have many properties that each control binds to. One of these controls may be a list-box which is binding to a property that returns a collection of items. The data context of each of the UI items displayed in the list box will be the appropriate model object in the collection, rather than the entire view-model. The template of the list box items simply needs to bind to properties on those item models.
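
For example, a list box bound to an assumed `Customers` collection property narrows the data context for each item:

```xml
<!-- The view's data context is the whole view-model, so ItemsSource
     resolves against it; inside the ItemTemplate the data context is
     the individual customer object, so Name resolves against that. -->
<ListBox ItemsSource="{Binding Customers}">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Name}" />
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>
```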

Routed events – Routed events let you send packets of information around your application as the state of controls changes. Controls have lots of events that get raised as the user interacts with them. When the user has completed a UI operation, your model can listen to the appropriate events and make changes to the state of the model if necessary. When creating custom controls, you can listen to events coming from the mouse or keyboard to implement the user interaction of the control. You may also be raising events within your model that other parts of your application need to listen to. Frameworks such as Caliburn Micro have features that make setting up events easier. In particular, they help reduce the coupling between the model and the view, and manage the removal of events when they are no longer needed to avoid performance and memory issues.

Commands – Another way to send messages from the view to the model is by using commands. Unlike events, commands usually don’t have any data associated with them. They usually represent simple user actions such as a button being pressed, though you can send command parameters if you need to. The great feature of the command system is that you can provide logic within your model to specify whether a command on a particular control is currently allowed to be executed. Whether or not a user is allowed to press a certain button may depend on the state of your model. For example, you may not want the user to press the “log in” button of a dialog if they haven’t entered anything in the user name or password boxes yet. By providing this ‘can execute’ logic, the WPF framework will automatically set the enabled/disabled state of the button. Again, frameworks such as Caliburn Micro provide convenient features for hooking up commands from the view to the model.
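
A common hand-rolled implementation of this is a relay command (sometimes called a delegate command). This is a sketch; the `LogInCommand` usage at the end and its predicate are hypothetical:

```csharp
using System;
using System.Windows.Input;

// Wraps an Action and an optional 'can execute' predicate; WPF disables
// the bound button automatically whenever CanExecute returns false.
public class RelayCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter) { _execute(); }

    // Let WPF re-query the predicate as focus and input state change
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

// In the view-model (hypothetical):
// LogInCommand = new RelayCommand(LogIn,
//     () => UserName.Length > 0 && Password.Length > 0);
```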

Styles and templates – One of the features that makes WPF and other XAML frameworks stand out in the application space is the powerful customizability of visual components. It is very easy to change the look of any part of your application, from the font size of a single text block to the overall visual theme. The flexibility is phenomenal and I’ve seen a lot of impressive styles throughout my XAML experience. You can rebuild the visual tree of any control, whether it’s a standard WPF control, a third party control or your own custom control implementation. There are also ways to change only part of a control style if you don’t need to customize the whole thing. Concepts that I previously explained, such as bindings and commands, are a huge help with the flexibility of control customization. For example, a well implemented control doesn’t care whether the template uses check boxes or toggle buttons; both simply bind to the appropriate properties on the control.

Optimizing Performance

Once you have the foundation of your application in place, it’s time to crank up the performance.

While WPF makes it really easy to get your data bound up and flowing around, it can also let performance slip away if you’re not careful. It’s important to keep in mind exactly what’s going on under the covers and to make sure you’re doing things as efficiently as possible. Here’s a list of things that were important in achieving great performance in our demonstration application:

1. Reduce the UI Element Count

UI element creation in WPF is expensive, so wherever possible, reduce the complexity of your data templates. Even simple elements such as a Border can degrade performance. When building our data grid control, we found that simply removing a couple of Border elements from each cell removed a scroll bar lagging issue we had. Another trick we used in the data grid control was for cells to have a display mode and an edit mode. When the user is not editing a cell, there is no point displaying an expensive TextBox control. TextBoxes have tons of UI elements in their templates, mainly due to the built-in scroll viewer. So when a cell is not being edited, it displays the data using the much simpler TextBlock element. Tip: as you make changes or experiment with data templates in your application, make sure not to leave behind any elements that aren’t needed any more – such as Grids that only contain a single child.
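
The display/edit switch can be sketched with a template swap; `IsEditing` and `Value` are assumed properties, and this is only an illustration of the idea, not the actual DataGrid implementation:

```xml
<!-- The cheap TextBlock template is the default; the expensive TextBox
     template is only instantiated when the cell enters edit mode. -->
<ContentControl Content="{Binding}">
    <ContentControl.Style>
        <Style TargetType="ContentControl">
            <Setter Property="ContentTemplate">
                <Setter.Value>
                    <DataTemplate>
                        <TextBlock Text="{Binding Value}" />
                    </DataTemplate>
                </Setter.Value>
            </Setter>
            <Style.Triggers>
                <DataTrigger Binding="{Binding IsEditing}" Value="True">
                    <Setter Property="ContentTemplate">
                        <Setter.Value>
                            <DataTemplate>
                                <TextBox Text="{Binding Value, Mode=TwoWay}" />
                            </DataTemplate>
                        </Setter.Value>
                    </Setter>
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </ContentControl.Style>
</ContentControl>
```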

2. Reduce the call count of methods

As a project becomes more complex, it’s easy for some methods to be called many more times than they need to. If a method is quite expensive, such as iterating through large collections, then there can be a huge unnecessary drop in performance. To help solve these issues, it’s a good idea to use a profiler on your application every now and then. This makes it easy to identify methods that are being called an unexpected number of times. Follow through the call stacks and find places where the call count of expensive methods can be reduced. In particular, because bindings typically ‘just work’, you may be unaware of how often a binding is updating and causing unnecessary load.

3. Virtualization

A well-known way to improve the performance of displaying collections of items is to use virtualization. There are two types of virtualization: UI virtualization for the view, and data virtualization for the model. The idea of virtualization is to only load resources when they are needed, and re-use them where possible. If you have a long list of items displayed on screen, such as in a data grid, only a small subset of them will fit in the viewport. There is no point trying to render the items that don’t fit on the screen, because remember: creating lots of UI elements is expensive. A virtualization engine works out the list of items that need to be displayed, and only generates the UI elements for those items. As the list is scrolled, some of the elements will be destroyed as their items move off the screen, and new elements will be made to display the items that scroll into the viewport (or, better, re-used with new model data to reduce the cost of destroying and creating UI elements). Overall, the number of UI elements that exist is kept to a minimum. The same idea applies in the model. If fetching data is slow, for example when it is coming from a database over a network, then you’ll want to look for ways to only fetch the data that needs to be displayed, rather than downloading the whole database.
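
On the UI side this is largely a matter of panel settings in standard WPF. A minimal example, binding to an assumed `Sales` collection:

```xml
<!-- IsVirtualizing generates containers only for visible items;
     Recycling reuses those containers as the list scrolls instead of
     destroying and recreating them. -->
<ListBox ItemsSource="{Binding Sales}"
         VirtualizingStackPanel.IsVirtualizing="True"
         VirtualizingStackPanel.VirtualizationMode="Recycling"
         ScrollViewer.IsDeferredScrollingEnabled="True" />
```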

4. UI element recycling

Another fantastic performance trick is to recycle UI elements as mentioned in the Virtualization section above. In scenarios where a control needs to be refreshed to display new values, rather than destroying the existing display elements and creating new ones, you can recycle the elements and simply change some of their properties to update the display. This reduces the number of times you create new UI controls and in turn improves the performance. By using UI virtualization in conjunction with UI recycling, data display controls become incredibly efficient.

5. Don’t overuse bindings

One of the downsides of the MVVM pattern in WPF is that it usually requires a lot of bindings. The problem with this is that bindings are expensive, so don’t overuse them. I’ve found that when building applications there isn’t too much of a problem here, but when implementing controls that handle lots of data, such as a chart or data grid, we made an extra effort to avoid using too many bindings. Similarly, dependency properties are also slow. Simply getting or setting the value of a dependency property is much slower than a regular property. If a property does not need to be bound to, then it does not need to be a dependency property. If you find places where a dependency property is being accessed more than once in a single method, caching the value in a local variable can help improve performance.
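
For example, inside a hypothetical custom panel, reading the dependency property once per measure pass avoids a `GetValue` call on every loop iteration:

```csharp
using System.Windows;
using System.Windows.Controls;

// BarPanel and its BarWidth property are invented for this sketch.
public class BarPanel : Panel
{
    public static readonly DependencyProperty BarWidthProperty =
        DependencyProperty.Register("BarWidth", typeof(double), typeof(BarPanel),
            new PropertyMetadata(10.0));

    public double BarWidth
    {
        get { return (double)GetValue(BarWidthProperty); }
        set { SetValue(BarWidthProperty, value); }
    }

    protected override Size MeasureOverride(Size availableSize)
    {
        // One dependency property read, cached locally, rather than
        // hitting GetValue on every iteration of the loop below.
        double barWidth = BarWidth;

        foreach (UIElement child in InternalChildren)
        {
            child.Measure(new Size(barWidth, availableSize.Height));
        }
        return availableSize;
    }
}
```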

Sometimes, after improving the performance of an application as much as possible, some operations such as loading or sorting epic amounts of data will still be slow. In these situations, it’s a good idea to display loading spinners or progress bars in your application to at least let your users know that the application has not hung.

Don’t forget about your Data Access

While you may spend a lot of time optimizing WPF, you will also want to ensure your data access is as efficient as possible. There are several areas where we apply hints to provide LightSpeed with a better understanding of our intent when querying to help improve performance.

Conditional Eager Loading using Named Aggregates

One of the classic problems you encounter with using an object relational mapper is you lose sight of the number of queries being made to the database. Most object relational mappers will only load what is needed and then load subsequent data on demand – for example if we have a Sale which is related to a Product then when we load the Sale, the Product will be available but will be loaded the first time we access that property on the Sale instance. This is known as lazy loading and provides efficiency in allowing you to have free access to the object graph without having to load the entire set of data into memory up front. The problem with this approach is that in bulk scenarios such as our data set, if we wanted to make use of data about the Product then we would need to load the Product for each Sale we encounter. Expand this out to 1 million rows and this means we would make 1 million and 1 queries! To counteract this we can change the load behaviour to use eager loading. In our application we don’t want this to be the default so we can use LightSpeed’s conditional eager loading approach by using a named aggregate to control when the Product data is loaded in with the sale.
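
In code this looks roughly like the following sketch, where "WithProduct" is a hypothetical aggregate name assigned to the Sale→Product association in the designer (check the LightSpeed documentation for the exact query method on your version):

```csharp
using (var unitOfWork = context.CreateUnitOfWork())
{
    // Lazy by default: touching sale.Product here would issue an
    // extra query per row - the classic N+1 problem.
    var lazySales = unitOfWork.Sales
                              .Where(s => s.SaleDate >= from)
                              .ToList();

    // Opt in to eager loading for this query only: the named aggregate
    // pulls the Products back in the same batch as the Sales.
    var eagerSales = unitOfWork.Sales
                               .Where(s => s.SaleDate >= from)
                               .WithAggregate("WithProduct")
                               .ToList();
}
```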

Understand and Optimize your LINQ queries

LINQ has been a fantastic addition to our language syntax, allowing us to natively express our queries in line with code in a way that maintains separation from the underlying data access providers. One of the challenges with this, however, is that each LINQ provider has to implement its own understanding of (and response to) the syntax that we present it, so queries that may make perfect sense and work happily when dealing with an in-memory set of objects suddenly make no sense when translated into a server-side database query.

One of the traps you can quickly fall into is that most LINQ providers, when faced with a query they cannot translate, will either throw an exception (leading to a runtime failure) or shift the operation to be handled client side. This is particularly common with selections: rather than fetching only the specific columns asked for, the provider may need to pull back all of the data from the server and then handle the selection client side.

A basic check list to remember for handling this is:

  • Don’t use any application specific properties or functions in your criteria or selections (or, if your ORM is awesome enough, write custom functions so they have a SQL implementation)
  • Avoid traversing object relationships in your selection; instead, use explicit joins to avoid confusion for the ORM and ensure it can select the data server side
  • Calling .ToList(), .ToArray() etc. executes the query, so anything after these calls will run client side
  • Remember that LINQ queries are an expression of intent, they are not 1:1 mappings to SQL queries
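
The checklist above can be illustrated with a hypothetical sketch of the .ToList() trap (FormatLabel stands in for any application-specific method the provider cannot translate):

```csharp
// Filters in the database: the Where clause becomes a SQL WHERE.
var serverSide = unitOfWork.Sales
                           .Where(s => s.Quantity > 100)
                           .ToList();

// Pulls every row back first, then filters in memory: the early
// ToList() executes the query, and FormatLabel has no SQL translation
// so it could only ever run client side anyway.
var clientSide = unitOfWork.Sales
                           .ToList()
                           .Where(s => FormatLabel(s) == "Big")
                           .ToList();
```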

Profile it

While an object relational mapper provides great convenience by abstracting developers away from the mechanics of writing SQL, it is critically important to understand what queries are being run, whether they are efficient and whether they can be improved on. LightSpeed includes a logging channel that emits the SQL statements it is executing, which can be accessed by attaching an ILogger instance to the LightSpeedContext. This will give you a good understanding of what queries are being run, and when they are executed in relation to your application’s flow, allowing you to check whether eager loading may be needed to avoid excessive lazy loading, or whether you might have an inefficient LINQ query which is not performing as you intended.
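
Wiring up the logger is a one-liner; the unit-of-work type name here is hypothetical, and TraceLogger simply routes statements to the trace output:

```csharp
using Mindscape.LightSpeed;
using Mindscape.LightSpeed.Logging;

// Every SQL statement LightSpeed executes now appears in the trace
// output; implement ILogger yourself to route statements elsewhere.
var context = new LightSpeedContext<ModelUnitOfWork>("Default")
{
    Logger = new TraceLogger()
};
```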


I hope that this article has given a good overview of how to approach building a modern line of business application that needs to consume millions of rows of data and yet perform quickly and efficiently.

You can download a fully functional sample to explore the code yourself. It includes the free version of LightSpeed (which supports up to 8 tables) and a 60 day trial of our WPF Elements library. The sample includes a Readme with troubleshooting tips.

The Best .NET ORM: The LightSpeed Story

We’re working towards the next major release of LightSpeed – LightSpeed 6. LightSpeed was the first product that Mindscape ever created and thus has quite a history to it now. We’ve had fans from day 1 who are still with us but as LightSpeed continues to grow in popularity I thought the newer users might enjoy a trip down memory lane.

When we started Mindscape in early 2007 we only knew we would be a products company with a focus on developer tools that didn’t suck. We bootstrapped, meaning that we didn’t raise a pile of money — we had to do some contracting at the time to pay the bills.

Andrew always had an interest in ORMs. At the time I didn’t personally think the world really needed a new ORM and they equally weren’t all that popular in the .NET space (remember, this was pre LINQ to SQL and Entity Framework). I was happy to have Andrew scratch his own itch because I saw it as an opportunity to put the wheels on our product development process.

What we needed to do to deliver LightSpeed as a product:

  • Build the product
  • Figure out how to build MSI installers
  • Be able to take payments online
  • Be able to manage customer records
  • Have a licensing agreement
  • Have a structure for explaining the product on the site
  • Have a logo for the product created
  • Have a nightly build & CI environment created
  • Know how to build a help file

A lot of this was new to all of us — we had a services background where we rarely needed to do more than build the product. I figured that even if LightSpeed was a market failure we would at least learn a lot. Furthermore, many of the product development stages were re-usable: getting the online store functioning and being able to handle nightly builds, for example.

Towards LightSpeed 1

We went through a beta phase with developers. This was a bit of a waste of time to be honest – we got a little feedback, but awareness of Mindscape was relatively low and interest in object relational mappers was even lower.

We released LightSpeed 1 in August 2007 and started picking up a few customers – that was good, end to end, the system worked. We could take money online, our users could understand the product. Great.

Welcome Microsoft, seriously.

Around this time Microsoft announced LINQ to SQL and Entity Framework. I won’t lie, my heart sank. Some sales had been coming in for LightSpeed and I was picking up some faith that we could actually make some money from it.

In my initial panic about this I discussed with the team the possibility of doing various things:

  • Should we just open source LightSpeed and use it as a marketing angle?
  • Should we lower the price or take some other action to get more sales sooner?
  • Stay the course and just try to build a better product than Microsoft?

Thankfully (in retrospect!) the team decided we should stay the course. There were several challenges with this strategy. We knew we’d need to invest much more effort & time (read: money) into LightSpeed to make it a viable competitor.

LightSpeed 2

LightSpeed 2 shipped on June 4th 2008 with a fancy new Visual Studio integrated designer and something resembling a LINQ provider. We had also added some cool core features that users had been asking for.

LightSpeed Designer in Visual Studio 2008

The Designer allowed for full bi-directional updates (e.g. make changes to your design file and push them to the database automatically, or, if you preferred changing the database first, have the designer update from the database). This is still one of my favourite features of LightSpeed – and it absolutely hammers the competitors out there now for their database syncing.

The LINQ provider. Well. Let me tell you – it’s bloody hard to write a good LINQ provider. Doubly so when you’re one of the first teams in the world to actually write one and the documentation was little more than a couple of blog posts.

LightSpeed 2 sales increased, we started getting more companies on board. We had to deal with a lot of questions about why somebody should use LightSpeed over L2S or EF, which were free. In particular, LINQ to SQL was a pretty good product in my opinion. Entity Framework significantly less so – it was slow, lacked even basic features and was a bit of a dog.

Our LINQ provider wasn’t that powerful yet but it did have one great advantage – it worked over all the databases that LightSpeed worked with (MySQL, SQLite, Oracle, SQL Server, PostgreSQL etc). This was a pretty solid selling point.

LightSpeed 3

LightSpeed 3 logo

In December 2009 we released LightSpeed 3. This was huge for us – and really was the version that started generating significant sales for the company in the 2010 calendar year.

The LINQ provider got significant love, the visual designer started supporting advanced refactoring features and the core engine got a lot more capability.

We also shipped a visual migrations framework which we thought would be a big draw card for the product since nobody else was doing a good job of database migrations in 2009. It supported several of the databases that LightSpeed supported, could generate scripts and even had a command line utility if you wanted something for your server.

It never took off. It’s still in LightSpeed and is used a bit but it seems that .NET developers never fell in love with database migrations to the same degree that our friends using Rails did. Oh well, the other features still made for an impressive release. Our users loved it.

About this time it became apparent that Microsoft was killing LINQ to SQL. This was a really big deal to a lot of developers. Frankly, my opinion at the time (and still is) was that Microsoft killed the wrong baby. LINQ to SQL was fast, it worked, it was easy to understand. Entity Framework was getting bloated and still missed basic features that we had shipped in LightSpeed 1.

A lesson on education

Microsoft actually did a great service for Mindscape – they educated the .NET masses that an object relational mapper was a better way of working with data for line of business apps. That was huge – our small company never could have educated that many developers on why they should use one.

They also lacked focus by shipping two products that competed with each other and that confused the market.

Furthermore, their products were pretty woeful for serious apps.

Looking back on when I was so panicked about Microsoft entering the market, I can now see that it was a hugely useful thing for Mindscape.

LightSpeed 4

LightSpeed 4, looking more amazing!

In mid 2011 we shipped LightSpeed 4. This was also a big release for Mindscape and LightSpeed 4 sold a lot of licenses.

You will note however that by this stage we were shipping a new major version about every 18 months. That was quite slow in the eyes of many developers. The truth was that we were drinking our own Kool-Aid. Ever since LightSpeed 1.0 shipped we had been shipping nightly builds that included every new bug fix, feature and enhancement. To be honest, sometimes we didn’t realise just how much work we had poured into a product until it came time to write the release notes.

You’ll see from our change log that every major version has ended up with a longer and longer list of changes. I’m personally not a fan of arbitrary quarterly releases but perhaps one day we should change our approach — we don’t want users thinking the product isn’t being updated!

LightSpeed 4 added distributed data support (clients, servers, distributed LINQ, only sending change sets over the wire — all the stuff the cool kids wanted). We also further improved the querying engine, added a meta data API and auditing support, to name but a few key features.

LightSpeed 5

LightSpeed 5

In March 2013 we released LightSpeed 5. We had the core object relational mapping and design-time experience pretty much nailed by now, so this was a big polish release.

LightSpeed 5 added internal improvements for compiled queries, support for new data types in SQL Server 2012 and designer improvements to boot.

LightSpeed 5’s performance further widened the gap between LightSpeed and Entity Framework. Raw performance wise, LightSpeed could get you your data more than 14x faster than Entity Framework at the time! LightSpeed was so fast that it was only marginally (a few percent) slower than the anaemic micro-ORMs that had become the flavour of the month in data access.

LightSpeed, it really kicks the llama’s ass

LightSpeed sales continued to climb as performance-conscious users who had started on other stacks needed the performance that only LightSpeed could provide. We saw several commercial competitors give up on selling their ORMs, which only assisted in growing our LightSpeed audience. I think our commitment to building a quality product was proving to be a winning strategy — who would have thought?!

That brings us to today!

That’s the history of LightSpeed – from the first product at the start of Mindscape’s life, through to now. We’re working on LightSpeed 6 and seeing if we can’t continue to make data access even better.

I’m very proud of LightSpeed, it’s served the company well and, just as importantly – our customers have been served well. While the core product is very mature and very fast, there is always more we can do and we’re looking at that for LightSpeed 6.


This has been quite a long post. I haven’t written a post like this before on the Mindscape site but I thought those of you who use LightSpeed might enjoy a walk down memory lane with me :-)

Comments always appreciated below, and if you’re a .NET developer then be sure to grab the free edition of LightSpeed.

Happy coding!


Nightly news 23rd September 2013

Developer Notes

WPF Elements

  • Support for dynamically changing LineStyle.
  • Resolved a dynamic scatter rendering issue and a potential null reference.
  • Resolved potential null reference exception.
  • Added Scheduler.IsReadOnly property.
  • Added easy way to change scheduler wing button content using SchedulerFormatter.
  • Added Ribbon control to VS toolbox.
  • DataGridColumn recycling to resolve an override issue. (details)
  • Resolved an axis locking issue. (details)
  • Changed OutlookBar.SelectedItem to be a dependency property. (details)
  • Added OutlookBar.SelectedIndex property.
  • Added AxisPlacement.Overlay feature. (details)
  • Reapply sorting and filtering when the DataGrid ItemsSource property is changed.
  • Support for all chart values being NaN. (details)
  • Resolved a crash in DataGridCsvExporter when there is a column with no PropertyInfo.
  • Added DataGrid.ResetDisplayedItemsSource method. (details)
  • Resolved min/max axis binding issue. (details)
  • DataGrid footer and aggregates

WPF Diagrams

  • Improved alleyway centering in the A* pathfinder to resolve a routing bug. (details)


LightSpeed

  • Improved error details when an unsupported query is issued.

As usual the free editions of the nightly builds are available right now from the downloads page, and the full editions from the store.

