Ever since the MySQL designer bug I've found I'm using nightly builds... As such, I find myself subject to all kinds of weird bugs in LightSpeed, from extreme slowness in creating queries to entities being saved when calling SaveChanges() without me adding them to the UnitOfWork.
Is there a way to get more details about nightly builds, so that I could actually understand what is going on with the nightly I just installed? Alternatively... could a bug-fix release of LightSpeed that deals with SQLite 1.0.58 and MySQL 5.1 Server be released, so that the variance in performance/stability would actually go down? |
|
|
We're hoping to release 2.1 (which will include SQLite 1.0.58 and MySQL 5.1 fixes) in the next couple of weeks; unfortunately we have had a couple of crunches on other work recently and it is taking us longer to get 2.1 out than we had hoped.

We would, however, be very keen to hear more details of the bugs you're seeing. The nightly builds are a snapshot of our progress towards 2.1, and if users report bugs against the nightlies then that helps us improve the quality of the final release. (Of course we would hope that the stabilisation and release testing process catches all such bugs, but telling us about them eliminates the risk that it doesn't.) We are continuing to consider ways to provide changelogs or release notes with the nightlies.

(By the way, regarding the specific issues that you mention, we are aware of one case where creating a SQLite query is extremely slow, which we have investigated and believe to be a SQLite issue. See Orm6e in the LinqSamples project for an example of this case. If you have come across other cases of extremely slow querying we would like to hear about them so we can investigate them.) |
|
|
Right now I'm mostly experiencing two issues...
|
|
|
Thanks for the information. You are right that with the IdentityColumn method LightSpeed will always insert singly. You might want to consider an alternative identity method such as KeyTable, which will allow LightSpeed to generate and allocate identities in batches instead of having to query the database for the last identity each time.

Nevertheless, even with IdentityColumn, something seems awry with those performance figures. I've just run some quick and highly unscientific tests and I'm seeing about 50-60 seconds to insert 20000 records (using IdentityColumn). That is using the MySQL 5.0 client to talk to either a 5.0 or a 5.1 server. (Haven't tested with the 5.1 client.) Neither server is particularly oofy and the network is a generic wireless LAN, so it's not abnormally fast. I have tested with both standalone entities and with associations (in case we had an O(n^2) problem wiring things up once the identity had been received).

How big are your records? Can you provide us with some code (LightSpeed code plus MySQL database schema) so we can reproduce the problem?

Regarding the issue of LightSpeed saving entities that are not added to the unit of work: if I have correctly understood what you are doing, the behaviour you're seeing is by design and is intended to reduce the need to explicitly add entities to the unit of work. LightSpeed automatically adds related entities to the unit of work. This is because when you add an entity to an association collection -- e.g. person.Children.Add(newPerson) -- you normally want the entity to become part of the persistent collection. So LightSpeed implicitly adds the entity to the unit of work as well. On those occasions when you don't want an associated entity persisted, call IUnitOfWork.Remove to undo the implicit Add that LightSpeed carried out (note that this may have implications for child entities, and note also that if the Removed entity previously existed in the database, it will be deleted on save). |
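To make the implicit-add behaviour concrete, here is a minimal sketch (not taken from the thread). Person is a hypothetical entity with a Children association; the unit-of-work calls (CreateUnitOfWork, FindById, Remove, SaveChanges) are the standard LightSpeed ones.

```csharp
using Mindscape.LightSpeed;

// Person is a hypothetical entity class with a Children collection association.
LightSpeedContext context = new LightSpeedContext();
context.ConnectionString = "...";  // your MySQL connection string

using (IUnitOfWork unitOfWork = context.CreateUnitOfWork())
{
    Person parent = unitOfWork.FindById<Person>(1);
    Person child = new Person();

    // Adding the entity to an association collection implicitly adds it
    // to the unit of work as well, so SaveChanges() will insert it even
    // though unitOfWork.Add(child) was never called.
    parent.Children.Add(child);

    // To prevent the associated entity from being persisted, undo the
    // implicit add. Beware: if child already existed in the database,
    // Remove would cause it to be deleted on save.
    // unitOfWork.Remove(child);

    unitOfWork.SaveChanges();
}
```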
|
|
Well, while this has improved performance considerably... I'm still not pleased with the current state of things... These are the performance #'s I'm seeing: Cataloging file1 As you can see, with KeyTable it takes about 1ms (!) per record (what I call a "stream"). Again, I would like to stress that all of this time is spent in the .NET executable -- the MySQL server sits idle, doing nothing. As I've mentioned, I'm now using KeyTable with an IdentityBlockSize = 1024. I think this has to do with the fact that each of these streams/entities has 7 associations with other entities. Where should I e-mail it to? |
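For reference, the configuration being described would look something like the sketch below, assuming the standard LightSpeedContext properties; the connection string and provider value are placeholders, not from the thread.

```csharp
using Mindscape.LightSpeed;

LightSpeedContext context = new LightSpeedContext();
context.ConnectionString = "server=localhost;database=catalog;...";  // placeholder
context.DataProvider = DataProvider.MySql5;        // assumed provider value
context.IdentityMethod = IdentityMethod.KeyTable;  // allocate identities in blocks
context.IdentityBlockSize = 1024;                  // the block size under discussion
```

With KeyTable, LightSpeed reserves a block of IdentityBlockSize identities at a time from a key table in the database, so it no longer needs a round trip for an identity after each insert.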
|
|
Mail it to ivan at the obvious domain and I will take a look. If you could include some code that shows the kind of data you are putting into the entities and the associations you are setting up between them then that would also help. Thanks! |
|
|
I've been continuing to look into this and I think you might benefit from tweaking your UpdateBatchSize. In a totally unscientific test of inserting 10000 records with the KeyTable identity method, the effect of UpdateBatchSize was as follows:

UpdateBatchSize = 10, time taken for inserts = 2000ms

(I think the result for batch size 1280 is an outlier, since I got results more like 10-15 seconds for the same batch size after cleaning the database. Nevertheless, the pattern is still clear: a batch size of 1000 performs far worse than a batch size of 10.)

So I would suggest trying batch sizes of between 10 and 50 rather than 1024. With these batch sizes I see timings of 7500-8000 ms for an insert of 40000 entities (admittedly small objects and with only one association involved). I think this is because the very large UpdateBatchSize results in extremely large INSERT commands which become expensive to construct (lots of StringBuilder reallocs). A smaller UpdateBatchSize results in smaller INSERT commands, albeit more of them, which can be constructed more efficiently.

We will investigate the scaling characteristics here to see if we can improve matters, but in the meantime I would suggest retrying your test case with a much smaller batch size and seeing where the optimum lies for you. |
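A rough harness for finding that optimum might look like the sketch below. It is not from the thread: InsertTestRecords is a hypothetical method standing in for your own insert workload, and it assumes UpdateBatchSize is set on the LightSpeedContext, as the posts above imply.

```csharp
using System;
using System.Diagnostics;
using Mindscape.LightSpeed;

LightSpeedContext context = new LightSpeedContext();
context.ConnectionString = "server=localhost;database=catalog;...";  // placeholder

// Time the same insert workload at several batch sizes and compare.
int[] batchSizes = { 10, 20, 50, 100, 1024 };
foreach (int batchSize in batchSizes)
{
    context.UpdateBatchSize = batchSize;
    using (IUnitOfWork unitOfWork = context.CreateUnitOfWork())
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        InsertTestRecords(unitOfWork, 10000);  // hypothetical: create and add 10000 entities
        unitOfWork.SaveChanges();
        stopwatch.Stop();
        Console.WriteLine("UpdateBatchSize {0}: {1} ms", batchSize, stopwatch.ElapsedMilliseconds);
    }
}
```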
|
|
Freaky. It actually worked. With an UpdateBatchSize of 20: Cataloging file1 |
|