This thread looks to be a little on the old side and therefore may no longer be relevant. Please see if there is a newer thread on the subject and ensure you're using the most recent build of any software if your question regards a particular product.
This thread has been locked and is no longer accepting new posts. If you have a question regarding this topic, please email us at support@mindscape.co.nz
I have a POS application. In the terminal, 90% of the data is lookup data that seldom changes. To speed up the application I want to cache all the lookup data, so I have set every entity that is a lookup table to use 2nd level caching with an update period of 30 minutes. Within the code we use lots of LINQ expressions to select the correct product items etc., but in testing it appears that these are not being held in the cache for 30 minutes.

To test this, I opened an invoice and displayed the product list with LINQ; by my understanding, this should have populated the cache. Then, without closing the application, I quickly changed one of the products in the underlying database. When I displayed the product list again, again with LINQ, the changes were already visible. The entire process took less than two minutes, so unless I was really unlucky twice and my updates fell exactly on the 30 minute boundary, I would not have expected it to show the changes.

Can you please confirm whether 2nd level caching does in fact work with LINQ, and if not, what my options are short of making global lists myself. FYI, I am using LS5, with a nightly build from a few weeks ago. Many thanks
This will depend on what your LINQ query is actually doing, but in general the answer is likely to be no - see below for why.

As some background, LINQ queries are translated into standard LightSpeed queries as part of the work the LINQ provider undertakes, so all queries are treated equally in that regard. However, if you are projecting as part of a LINQ query (and most of the time this is actually the case) then we perform a UnitOfWork.Project as the associated UnitOfWork call for executing the query, and this does not use the cache at all - even if you are just projecting back what would otherwise be an entire entity.

So something like UnitOfWork.MyEntities.Where(e => e.Name == "Foo").Single() would be loaded from the cache, but something like UnitOfWork.MyEntities.Where(e => e.Amount > 0).Select(e => new { Entity = e, Amount = e.Amount }) would not.

Here is a quick summary of when the cache is used, from http://www.mindscapehq.com/documentation/lightspeed/Performance-and-Tuning/Caching

It's that last point which is especially relevant for LINQ, because most LINQ queries result in a projection, which removes our ability to use the cache.
Actually, the vast majority of my LINQ queries for the lookups are not projecting, they are simply...
All our lookup tables are very small (dozens of rows), so I was hoping I could load each entire collection into memory and then use LINQ like the above to filter what we need, faster than constantly going back to the database. I guess I will just make some global lists that contain the data, search/filter on those, and then create some mechanism to refresh them on a regular basis. But this just seems so messy.
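The global-list idea can be sketched in plain C# without any LightSpeed types - the `Product` class and `LookupData` holder below are hypothetical stand-ins, not part of the product:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical lookup entity - a stand-in for a LightSpeed entity.
class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

static class LookupData
{
    // Loaded once (e.g. at startup) instead of being queried per call.
    public static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Coffee", Price = 3.50m },
        new Product { Id = 2, Name = "Tea",    Price = 2.80m },
    };
}

class Demo
{
    static void Main()
    {
        // LINQ-to-Objects over the in-memory list - no database round trip.
        var coffee = LookupData.Products.Single(p => p.Name == "Coffee");
        Console.WriteLine(coffee.Id); // prints 1
    }
}
```

Because the list is just LINQ-to-Objects, the existing filtering expressions keep working unchanged; only the source of the data moves from the database to memory.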
If you are just selecting out the original entity (e.g. as in your query above: from c ... select c) then this translates to a Find call, so it should use the cache. Keep in mind we will always issue a database call for anything other than a FindById (e.g. UnitOfWork.FindById<Product>(id)) or a FindAll, if you are using FindAll caching (e.g. UnitOfWork.Find<Product>(...)). When we get the results back from the database, we then check whether there is a cached entity matching each Id and hydrate it from the cache; otherwise it is loaded from the result set. If there is a filter involved, a database call is always required to determine what is in that set. So if Categories above is a lookup table, that query will trigger a database call, but the results should then be loaded from the cache rather than from the database, assuming they have all been loaded once already.
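The hydration behaviour described above can be sketched without LightSpeed itself - everything below (the `Category` class, a dictionary standing in for the 2nd-level cache, a list standing in for the database) is illustrative only:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in types - not LightSpeed's own.
class Category { public int Id; public string Name; }

class Demo
{
    // Stand-in for the 2nd-level cache, keyed by Id.
    static readonly Dictionary<int, Category> Cache = new Dictionary<int, Category>();

    // Stand-in for the real database table.
    static readonly List<Category> DbTable = new List<Category>
    {
        new Category { Id = 1, Name = "Drinks" },
        new Category { Id = 2, Name = "Snacks" },
    };

    static void Main()
    {
        Cache[1] = DbTable[0];   // suppose Id 1 was loaded (and cached) earlier

        // A filtered query must still hit the database to learn which
        // Ids actually match the filter...
        var matchingIds = DbTable.Where(c => c.Name.StartsWith("D"))
                                 .Select(c => c.Id)
                                 .ToList();

        // ...but each matching entity is then hydrated from the cache
        // when present, rather than re-materialised from the result set.
        var results = matchingIds
            .Select(id => Cache.TryGetValue(id, out var hit)
                ? hit
                : DbTable.Single(c => c.Id == id))
            .ToList();

        Console.WriteLine(results.Single().Name); // prints Drinks
    }
}
```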
Hi Jeremy, Thanks for the advice. I have gone ahead and made a very simple third-level cache; see the code below. It's not pretty, but for what I need it appears to work well, with a definite performance increase. I was considering refining it further by putting the "refresh" code into another thread on a timer, but was worried about cross-threading issues, and about timing issues if data is read during a refresh cycle. I personally think it would be useful if something similar to this could be added to a future version of LightSpeed.
I use it as follows:
Any suggestions you have to further refine this would be gratefully received. PS. I must add that the original idea came from the web. I was going to add a credit to the posts that set me on the right track, but I am unable to find them again to get the link or name. So, I apologise to the original author for my blatant plagiarism...
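The code from the original post did not survive in this archive. As a rough reconstruction of the kind of cache being described - a thread-safe, time-stamped list with a pluggable loader - something like the following would fit; all names here are my own invention, and the loader delegate stands in for the actual LightSpeed query:

```csharp
using System;
using System.Collections.Generic;

// A very simple "third level" cache for read-mostly lookup data.
// The loader delegate stands in for whatever actually hits the
// database (e.g. a LightSpeed UnitOfWork query).
class LookupCache<T>
{
    private readonly Func<List<T>> _loader;
    private readonly TimeSpan _maxAge;
    private readonly object _sync = new object();
    private List<T> _items;
    private DateTime _loadedAt;

    public LookupCache(Func<List<T>> loader, TimeSpan maxAge)
    {
        _loader = loader;
        _maxAge = maxAge;
    }

    // Returns the cached list, reloading it if it is stale or missing.
    public List<T> Items
    {
        get
        {
            lock (_sync)
            {
                if (_items == null || DateTime.UtcNow - _loadedAt > _maxAge)
                {
                    _items = _loader();
                    _loadedAt = DateTime.UtcNow;
                }
                return _items;
            }
        }
    }

    // Forces a reload on the next access.
    public void Invalidate()
    {
        lock (_sync) { _items = null; }
    }
}
```

It could then be used along the lines of `productCache.Items.Where(p => p.CategoryId == id)`, with `Invalidate()` wired to whatever refresh schedule suits.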
Hi Mark, The approach above looks fine; I'll just check a couple of thoughts with you :)

I presume ContextsData is managing access to a UnitOfWork behind the scenes? If so, just make sure it is always accessible and spins the UOW instances up and down as you would expect, to avoid any long-running UnitOfWork holding connections on you, or subsequently hitting a disposed UnitOfWork on the later refresh calls. I also presume .GetEntities returns a fully loaded list rather than an IQueryable?

With the reference data entities, did you explicitly Detach them from the UnitOfWork after they were loaded? My thought here would be to avoid any disposed exceptions from accidental lazy-load triggering later on.

Lastly, be careful that they don't accidentally get enlisted in another UnitOfWork later on, as you might start running into either disposed exceptions as per the above, or concurrency-related issues due to an entity being "joined" to many running UnitOfWork instances across a number of threads. If you are making assignments, just assign by Id to avoid this. Have you thought about this already?
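The "assign by Id" advice can be illustrated with plain classes (hypothetical names, no LightSpeed types):

```csharp
using System;

class Product { public int Id; public string Name; }

class InvoiceLine
{
    // Holding only the foreign key means the cached Product entity is
    // never attached to the invoice's own unit of work, so it cannot be
    // enlisted there or trip over a disposed unit of work later.
    public int ProductId;
}

class Demo
{
    static void Main()
    {
        var cachedProduct = new Product { Id = 42, Name = "Coffee" };

        var line = new InvoiceLine();
        // Assign the Id, not the cached entity itself.
        line.ProductId = cachedProduct.Id;

        Console.WriteLine(line.ProductId); // prints 42
    }
}
```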
Hi Jeremy, Many thanks for the feedback. I can confirm that ContextData is managing access to the unit of work behind the scenes, and that GetEntities returns a fully loaded list. I had not thought of detaching the entities, but all the tables I load here have eager loading switched on, as they only refer to other lookup tables, not to the main data tables. But I will certainly look into this, just in case.

Finally, I am not sure what you mean by enlisted in another UOW. All the tables here are purely read-only in the app, so the only possible changes will come from the database: another program on another computer is used to add/change lookup table data, and that is why we have locking for when we re-populate the lookup tables once every 15 minutes or so.

Again, thanks for your help. I hope we can iron out any wrinkles, as I feel this may be helpful to others who want full-table caching for lookup data. On a side note, the average query time improved from 180ms to 10ms using this approach. When the lookups are used as often as ours are, this gives a huge performance increase to terminals that connect to the database over a LAN. Best regards, Mark
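For the background-refresh idea raised earlier in the thread, a reader/writer lock combined with wholesale list replacement avoids readers ever seeing a half-built list. A minimal sketch, assuming an illustrative loader delegate and names of my own choosing:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch of a background-refreshed lookup list using a reader/writer
// lock, so reads during a refresh cycle see either the old or the new
// list, never a partially built one. Names are illustrative only.
class RefreshingLookup<T> : IDisposable
{
    private readonly Func<List<T>> _loader;
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Timer _timer;
    private List<T> _items;

    public RefreshingLookup(Func<List<T>> loader, TimeSpan interval)
    {
        _loader = loader;
        Refresh();                                    // initial load
        _timer = new Timer(_ => Refresh(), null, interval, interval);
    }

    public void Refresh()
    {
        var fresh = _loader();                        // load outside the lock
        _lock.EnterWriteLock();
        try { _items = fresh; }
        finally { _lock.ExitWriteLock(); }
    }

    public List<T> Snapshot()
    {
        _lock.EnterReadLock();
        try { return _items; }
        finally { _lock.ExitReadLock(); }
    }

    public void Dispose() { _timer.Dispose(); _lock.Dispose(); }
}
```

Because `_items` is replaced wholesale rather than mutated, a reader that has already obtained the list keeps a consistent snapshot even while a refresh swaps in a new one.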