Hello all

We need to keep about 250,000 of these records in memory from application startup to application shutdown. During that time there is a high probability that most of the other records will be read at some point, but we won't need them for long, and we should allow the GC to reclaim them as memory conditions require (unfortunately, many of these records are large).
1. Not quite: the SimpleUnitOfWorkScope defines the lifetime of the UOW as being equal to the lifetime of the scope. So what you describe would be true only if the SimpleUnitOfWorkScope itself had the lifetime of the entire application. For whole-app lifetime, I'd tentatively suggest implementing a custom scope object which stores the unit of work in a static member (this would be trivial to do, and we would be happy to provide guidance -- there's a rough sketch at the end of this post). But given your requirement to release references, this probably isn't the right plan -- see point 2 for an alternative.

2. No, the first-level cache (the identity map) holds strong references. A possible option would be to use the second-level cache, either via, say, the memcached provider or via a custom ICache implementation that uses weak references. But this would be beneficial only if you used short-lived units of work, allowing each identity map to be reclaimed between blocks of work. Based on your description, the best strategy might be to use a separate unit of work for each "transactional" block of work (where you are reading and writing records that you don't need to keep), and to keep your 250,000 "permanent" records in memory via the second-level cache (from where they will be pulled into the current UOW on demand). You can specify different caching policies for each entity type, so if the "permanent" records are of distinctive types (e.g. reference data) this should work quite well: you'd turn caching on for the permanent types and off for the transactional types.

3. Cascade deletion can be controlled at several levels (and, I have to admit, has evolved by accretion to the point where it is rather confusing now). Basically, a delete will be cascaded if the association is non-nullable or has the Dependent attribute, *AND* CascadeDeletes is set on the LightSpeedContext or on the entity that's being deleted. The entity-level cascade-delete setting is a way to override the context-level one. So I'm not sure whether LightSpeed will be able to handle your scenario. If you've got three associations from an entity, you can control which ones cascade and which don't by using nullability and the DependentAttribute: for example, if, when a Person is deleted, you want to delete Children but not Friends, you would make Children dependent or non-nullable, but make Friends nullable (sketched below). But if you want to cascade deletion to only specific entities *within* an association (e.g. when a Person is deleted, you want to delete male Children but not female Children), you couldn't use LightSpeed cascading, and would need to handle it in the domain layer.

Hope this makes sense -- let me know if you need more info.
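For the record, here's roughly what the custom whole-app scope from point 1 might look like. This is just a sketch: the class name and the locking are mine, and only LightSpeedContext.CreateUnitOfWork comes from LightSpeed itself (check the docs for the exact unit-of-work type it returns in your setup).

```csharp
using Mindscape.LightSpeed;

// Illustrative only -- as noted in point 1, a process-lifetime UOW also
// holds strong references to everything it has ever loaded.
public class ApplicationLifetimeUnitOfWorkScope
{
    private static IUnitOfWork _current;
    private static readonly object _sync = new object();

    private readonly LightSpeedContext _context;

    public ApplicationLifetimeUnitOfWorkScope(LightSpeedContext context)
    {
        _context = context;
    }

    // Every caller, on every thread, sees the same unit of work for the
    // life of the process; it is created lazily on first use.
    public IUnitOfWork Current
    {
        get
        {
            lock (_sync)
            {
                if (_current == null)
                {
                    _current = _context.CreateUnitOfWork();
                }
                return _current;
            }
        }
    }
}
```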
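And here's the Person example from point 3 in code. I'm writing the associations from memory, so treat the attribute placement as an assumption and check the designer-generated code or the documentation for the exact shape; Child and Friend are invented types.

```csharp
using Mindscape.LightSpeed;

public class Child : Entity<int> { }
public class Friend : Entity<int> { }

public class Person : Entity<int>
{
    // Dependent association: deleting a Person cascades to its Children
    // (assuming cascade deletes are enabled on the context or the entity).
    [Dependent]
    private readonly EntityCollection<Child> _children = new EntityCollection<Child>();

    // Non-dependent, nullable association: Friends survive the deletion.
    private readonly EntityCollection<Friend> _friends = new EntityCollection<Friend>();

    public EntityCollection<Child> Children { get { return Get(_children); } }
    public EntityCollection<Friend> Friends { get { return Get(_friends); } }
}
```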
Thank you for that write-up.
Yes, I think you could implement fine-grained control with a custom cache. Basically, I guess you would implement ICache to inspect the item on an AddItem or SetItem, and cache it only if it was a "permanent" record, otherwise no-op it (or cache a weak reference). LightSpeed would still call AddItem or SetItem for every entity of a cacheable type, but I don't think a custom cache provider is obligated to honour that request (there's a rough sketch at the end of this post). I have to admit I'm not an expert on the caching stuff, but if you have more questions I can drag in someone who is!

1. Correct. Tracking occurs at the unit of work / identity map level. An entity is tracked only as long as it is registered in a unit of work. Of course, if LightSpeed needs to load a particular entity and that entity is available in the L2 cache, it will pull it out of there and register it in the unit of work that needs it. See http://www.mindscape.co.nz/blog/index.php/2009/11/05/whats-the-deal-with-lightspeed-caching/ and in particular the section "When will I get a cache hit?".

2. No, you don't have to detach an entity from its existing UOW before attaching it to another UOW. Note that you do need to use the Attach method rather than Add: Add is for adding new entities (i.e. registering a pending insert). As there's no need to detach, the issue with the cache doesn't arise.

3a. Yes, PerThreadUnitOfWorkScope creates a UOW the first time a thread needs one, then keeps returning the same UOW for that thread; each thread gets its own UOW. (If you're familiar with ThreadStatic variables, it stores the UOW reference in one of those.)

3b. The docs are misleading (some would say wrong), and in my opinion PerThreadUnitOfWorkScope itself is rather confusing. PerThreadUnitOfWorkScope is IDisposable, so you *can* dispose it with a using statement, but you should never do so. Calling Dispose on a PerThreadUnitOfWorkScope disposes the calling thread's UOW, then prevents you from disposing any other thread's UOW -- and you will never be able to get another UOW for the thread that did the disposal! Instead, each thread needs to dispose its Scope.Current UOW separately -- but even then, you can't get a new one once you've done the disposal. For these reasons, I usually advise people to avoid PerThreadUnitOfWorkScope: it's just misdesigned for all but the most trivial of situations.

Given your multithreading requirements (multiple threads, but only one performing database activity at any given time), and given that you are on board with using short-lived transactional blocks, PerThreadUnitOfWorkScope is not a good fit anyway. You want each thread, on entering a transactional block, to spin up a new unit of work, then dispose it on exiting the block. PTUOWS would keep using the same UOW for the lifetime of the thread, which is not what you want (because it would hold references). Instead, I think you'll want to call LightSpeedContext.CreateUnitOfWork explicitly (and dispose the UOW when the transactional block is done). Obviously you can encapsulate this within a dispenser of some sort, possibly within your repository class -- see the second sketch below.

Hope this makes sense -- please let us know if you have any more questions.
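Here's a very rough sketch of the selective cache idea. I'm guessing at the ICache member signatures (do check the real interface in the docs), and IPermanentRecord is an invented marker interface for your long-lived records:

```csharp
using System.Collections.Concurrent;
using Mindscape.LightSpeed.Caching;

// Invented marker interface identifying the ~250,000 "permanent" records.
public interface IPermanentRecord { }

// Hypothetical ICache implementation -- the real interface's member names
// and signatures should be checked against the LightSpeed documentation.
public class PermanentOnlyCache : ICache
{
    private readonly ConcurrentDictionary<object, object> _items =
        new ConcurrentDictionary<object, object>();

    public void AddItem(object key, object item)
    {
        // Honour the request only for permanent records; everything else is
        // silently dropped, so LightSpeed falls back to the database for it.
        if (item is IPermanentRecord)
        {
            _items[key] = item;
        }
    }

    public void SetItem(object key, object item)
    {
        AddItem(key, item);
    }

    public object GetItem(object key)
    {
        object item;
        return _items.TryGetValue(key, out item) ? item : null;
    }

    public void RemoveItem(object key)
    {
        object removed;
        _items.TryRemove(key, out removed);
    }
}
```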
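And the short-lived transactional block pattern, wrapped in a hypothetical Repository dispenser. Only CreateUnitOfWork, SaveChanges and the disposal are LightSpeed calls; the rest of the names are mine:

```csharp
using System;
using Mindscape.LightSpeed;

// Hypothetical repository wrapper: every transactional block gets a fresh,
// short-lived unit of work, so the identity map (and its strong references)
// dies with the block.
public class Repository
{
    private readonly LightSpeedContext _context;

    public Repository(LightSpeedContext context)
    {
        _context = context;
    }

    public void InTransactionalBlock(Action<IUnitOfWork> work)
    {
        using (var unitOfWork = _context.CreateUnitOfWork())
        {
            work(unitOfWork);
            unitOfWork.SaveChanges();
        }  // disposing here releases everything the block loaded
    }
}
```

A worker thread entering a transactional block would then just call repository.InTransactionalBlock(uow => { /* read and write transient records */ }); and everything the block loaded becomes collectable once it returns.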
Sorry for the above post (it can be deleted) -- cut-and-paste from WinWord didn't work well. Here's the cleaned-up version of the same post:

Understood. That cleared up a lot for me. Thank you very much, Ivan.
On the contrary, we are probably too close to the product to know what needs to be made clearer in the documentation, especially for people new to LightSpeed -- we would very much welcome your suggestions for improvement!

You are right that we probably could merge Add and Attach from a technical point of view -- the implementations are almost identical. When reading code, though, I do find that Attach makes it clear the code is smuggling in something from an old unit of work (i.e. low-level UOW manipulation), whereas Add is simply and specifically registering a pending insert. But it's equally arguable that this minor readability benefit doesn't justify the API clutter and discoverability issues. I can't give you a definitive answer because that bit of the API dates back to before I joined the company...!
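To make the distinction concrete -- a hedged example, with Order, OrderWriter and cachedOrder invented, and Add, Attach and SaveChanges being the methods under discussion:

```csharp
using Mindscape.LightSpeed;

public class Order : Entity<int> { }

public class OrderWriter
{
    // cachedOrder is assumed to have been loaded by an earlier, now-disposed
    // unit of work (e.g. pulled out of your own in-memory store).
    public void SaveNewAndReattach(LightSpeedContext context, Order cachedOrder)
    {
        using (var unitOfWork = context.CreateUnitOfWork())
        {
            unitOfWork.Add(new Order());     // new entity: registers a pending insert

            unitOfWork.Attach(cachedOrder);  // existing entity: no insert, it just
                                             // comes under this UOW's tracking
            unitOfWork.SaveChanges();
        }
    }
}
```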