Tips, Tricks and Troubleshooting
Long-running units of work introduce issues of concurrency and stale data. A particularly common pattern in Web applications is “unit of work per (HTTP) request” – never keep a unit of work across requests. This isn’t just a matter of concurrency or stale data: storing a unit of work across requests is also unreliable, because a unit of work can’t be serialised to session state.
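The per-request pattern can be sketched as follows for an ASP.NET MVC controller. This is a minimal sketch: it assumes a generated strong-typed unit of work class named ModelUnitOfWork and a named configuration section called “default”, both of which are illustrative.

```csharp
using Mindscape.LightSpeed;

public class CustomersController : Controller
{
    // One application-wide context; units of work are cheap and short-lived.
    private static readonly LightSpeedContext<ModelUnitOfWork> Context =
        new LightSpeedContext<ModelUnitOfWork>("default");

    // PerRequestUnitOfWorkScope lazily creates a unit of work scoped to the
    // current HTTP request, so it is never stored or shared across requests.
    private readonly PerRequestUnitOfWorkScope<ModelUnitOfWork> _scope =
        new PerRequestUnitOfWorkScope<ModelUnitOfWork>(Context);

    private ModelUnitOfWork UnitOfWork
    {
        get { return _scope.Current; }
    }
}
```

The unit of work is disposed at the end of the request, so nothing leaks across request boundaries.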
See the chapters Building Web Applications with LightSpeed and Building WPF and Windows Forms Applications with LightSpeed for more discussion.
The LightSpeedContext contains essentially static configuration information. There’s usually no need to have more than one unless you’re talking to multiple databases. Furthermore, because ID allocation in sequence or key table configurations is handled at a context level, using multiple contexts can lead to increased database load and ID fragmentation. See Building Applications with LightSpeed for more discussion.
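A common arrangement is a single shared context exposed from a static holder. The class and member names below are illustrative, assuming a generated ModelUnitOfWork class:

```csharp
using Mindscape.LightSpeed;

public static class DataAccess
{
    // One LightSpeedContext for the whole application: it holds only
    // static configuration and is safe to share.
    public static readonly LightSpeedContext<ModelUnitOfWork> Context =
        new LightSpeedContext<ModelUnitOfWork>("default");

    // Units of work, by contrast, are created per task and disposed promptly.
    public static ModelUnitOfWork NewUnitOfWork()
    {
        return Context.CreateUnitOfWork();
    }
}
```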
Prefer configuration files to code-based configuration. You can set up the LightSpeed context in code by setting properties, or in configuration via the web.config or app.exe.config file. Using a configuration file makes it easier for operations staff or other users to configure the application for different environments.
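A configuration file entry looks roughly like the following sketch. The connection string name and attribute values are illustrative, and attribute names may vary between LightSpeed versions – check the configuration reference for your version.

```xml
<configuration>
  <configSections>
    <section name="lightSpeedContexts"
             type="Mindscape.LightSpeed.Configuration.LightSpeedConfigurationSection, Mindscape.LightSpeed" />
  </configSections>
  <lightSpeedContexts>
    <!-- Operations staff can retarget the application by editing this entry -->
    <add name="default"
         connectionStringName="MyDatabase"
         dataProvider="SqlServer2008"
         identityMethod="KeyTable"
         identityBlockSize="10" />
  </lightSpeedContexts>
</configuration>
```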
Use partial classes to add behaviour to generated entity classes. You can also add properties in partial classes, but if those properties introduce additional state (as opposed to being wrapper or adapter properties around the existing LightSpeed fields), be sure to mark the backing fields with the TransientAttribute so LightSpeed doesn’t try to persist them.
Never use automatic properties in a LightSpeed entity class – the C# compiler generates a backing field which doesn’t map to a database column, and you can’t get at the field to mark it transient.
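The two tips above can be illustrated together. The Customer entity and its extra state are hypothetical:

```csharp
using Mindscape.LightSpeed;

public partial class Customer
{
    // Additional state introduced in the partial class: mark the backing
    // field [Transient] so LightSpeed does not try to persist it.
    [Transient]
    private int _loginAttempts;

    public int LoginAttempts
    {
        get { return _loginAttempts; }
        set { _loginAttempts = value; }
    }

    // DON'T do this: the compiler generates a hidden backing field with no
    // matching database column, and you cannot mark it [Transient].
    // public int LoginAttempts { get; set; }

    // Wrapper properties around existing LightSpeed fields are fine and
    // need no special treatment:
    public string DisplayName
    {
        get { return Surname + ", " + Forename; }
    }
}
```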
Eager loading and named aggregates allow you to tune loading performance for different scenarios. Eager loading means that LightSpeed queries the database for an entity and its associated entities in a single database round-trip. This can massively improve performance, avoiding the so-called “N+1” problem. If you need finer control, to be able to choose whether to eager-load an association or not on a per-query basis, you can use a named aggregate: this allows you to say “I want to load this Customer with all their Orders” or “I want to load just the Customer” depending on the task at hand. You can also apply this to fields, for example eager-loading a large binary only if the “with high-resolution picture” aggregate is specified in the query.
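For example, the per-query choice can be sketched as below. This assumes the Customer.Orders association has been assigned to a named aggregate – the aggregate name “WithOrders” is illustrative:

```csharp
using System.Linq;
using Mindscape.LightSpeed;

// Load just the customers - Orders remain lazily loaded on demand:
var customers = unitOfWork.Customers.ToList();

// Load the customers and their Orders in a single round-trip, because
// the query names the aggregate that the association belongs to:
var customersWithOrders = unitOfWork.Customers
    .WithAggregate("WithOrders")
    .ToList();
```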
Don’t turn on change tracking unless you need it. Change tracking has a memory cost, which can become significant if you are making many changes to many entities. Don’t incur that cost unless you have a reason for doing so.
Tweaking UpdateBatchSize can improve save performance – or worsen it. When LightSpeed saves a unit of work, it sends the SQL statements to the database in batches. The number of statements per batch is controlled by LightSpeedConfiguration.UpdateBatchSize. The default value is 10. Increasing this figure reduces the number of round-trips to the database, which can improve performance. However, it also means that LightSpeed has to build, and the database has to parse, much larger blocks of SQL – which can worsen performance. (Also, some databases limit the number of parameters in a single SQL batch, so very large batch sizes may cause database errors.) Don’t increase UpdateBatchSize too far, and always measure the performance impact rather than just assuming that bigger is better!
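Assuming the batch size is exposed as a settable property alongside the other tuning settings (as it appears in configuration), the experiment looks like this sketch – the value 25 is purely illustrative:

```csharp
// Try a larger batch: fewer round-trips, but bigger SQL blocks to build
// and parse. Always measure before and after.
context.UpdateBatchSize = 25;   // default is 10

using (var unitOfWork = context.CreateUnitOfWork())
{
    // ... create, modify and delete entities ...
    unitOfWork.SaveChanges();   // statements are sent in batches of up to 25
}
```

Remember that parameter limits in some databases cap how far this can safely be pushed.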
Increasing IdentityBlockSize can improve performance – at a cost. When using a key table identity method, LightSpeed has to query the database to get the next “block” of IDs to allocate to entities. LightSpeedContext.IdentityBlockSize determines how many IDs it blocks out on each query. Increasing the IdentityBlockSize from its default of 10 therefore means LightSpeed has to query the key table less often, improving performance. For example, if you’re doing a bulk insert of tens of thousands of items, then increasing IdentityBlockSize from 10 to 1000 saves thousands of allocation calls and can make the application run measurably faster. But a large block size can result in a lot of unused IDs where the block has not been exhausted, so don’t increase IdentityBlockSize beyond the point of diminishing returns.
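To make the arithmetic concrete: inserting 50,000 entities with the default block size of 10 requires 5,000 key table queries, but only 50 at a block size of 1,000. A sketch, with an illustrative Reading entity:

```csharp
// Bulk insert: block out 1000 IDs per key table query instead of 10.
// Unused IDs in the final block are simply never allocated.
context.IdentityBlockSize = 1000;

using (var unitOfWork = context.CreateUnitOfWork())
{
    for (int i = 0; i < 50000; i++)
    {
        unitOfWork.Add(new Reading());  // ID allocated from the current block
    }
    unitOfWork.SaveChanges();
}
```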
As of LightSpeed 5, the identity block size can be set on a per-table basis.
When using sequence identity methods, IdentityBlockSize must equal the sequence increment amount. In a sequence identity method, the size of the ID block is determined by the database, not by LightSpeed. In this case, IdentityBlockSize must be the block size specified in the sequence definition, i.e. the INCREMENT BY amount. An incorrect IdentityBlockSize can lead to duplicate IDs being allocated, which will result in database constraint violations when you come to save.
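For example, if the sequence is defined with an increment of 10, the context must match. The sequence name below is illustrative:

```csharp
// Database side (e.g. Oracle-style DDL):
//   CREATE SEQUENCE ENTITY_ID_SEQ START WITH 1 INCREMENT BY 10;
//
// LightSpeed side: IdentityBlockSize must equal the INCREMENT BY amount,
// otherwise duplicate IDs can be allocated.
context.IdentityMethod = IdentityMethod.Sequence;
context.IdentityBlockSize = 10;
```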
Some customers have reported an ObjectDisposedException when disposing a System.Transactions TransactionScope that encapsulated two nested units of work. This appears to be a bug in the .NET Framework running on Windows XP. The workaround is to use a single unit of work, or to dispose the first unit of work before creating the second.
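The second workaround can be sketched as follows – the key point is that the units of work are sequential within the transaction, not nested:

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    using (var first = context.CreateUnitOfWork())
    {
        // ... first batch of work ...
        first.SaveChanges();
    }   // first is fully disposed before the second unit of work exists

    using (var second = context.CreateUnitOfWork())
    {
        // ... second batch of work ...
        second.SaveChanges();
    }

    scope.Complete();   // both sets of changes commit or roll back together
}
```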