This thread looks to be a little on the old side and may no longer be relevant. Please check whether there is a newer thread on the subject, and make sure you are using the most recent build of any software if your question concerns a particular product.
This thread has been locked and is no longer accepting new posts. If you have a question regarding this topic, please email us at support@mindscape.co.nz
I'm having performance issues inserting large volumes of data. CPU usage starts out OK but grows until it maxes out one of my CPU cores. Attached is a screenshot of a profile I took using dotTrace.

James, what kind of volumes are we talking about? Can you post a sample? Cheers, Andrew.

Volume is on the order of 100,000 rows an hour, depending on activity.

I believe the issue is caused by the fact that I keep a single Session entity alive for as long as the program is open. It is the parent of every FlowPacket entity that is created (I assign it to the Session property on each FlowPacket). I think LightSpeed keeps a reference from the Session back to all of these FlowPackets, which adds up to a lot of entities fairly quickly, so when I add a new FlowPacket to the Repository with a reference to the Session and save, LightSpeed processes up the chain to the Session and then back down the chain to all of its previously saved children.

I might blog about what I'm doing in the next couple of days, including code. I'll post a link here.

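The accumulation described above can be sketched roughly as follows. This is a minimal illustration, not the poster's actual code: the entity names (Session, FlowPacket) come from this thread, while the RecordPacket method and the exact Repository calls are assumed for the sake of the example.

```csharp
// Long-lived parent kept for the whole lifetime of the application.
private readonly Session _session = new Session();

void RecordPacket(FlowPacket packet)
{
    // Assigning the entity reference makes this one Session the parent
    // of every FlowPacket ever saved, so the tracked object graph keeps
    // growing for as long as the program runs.
    packet.Session = _session;

    Repository.Add(packet);
    Repository.CompleteUnitOfWork(); // save walks the ever-growing graph
}
```

If saving traverses from the new packet up to the Session and then down to all previously saved children, each save costs time proportional to the total number of packets recorded so far, which would explain the steadily climbing CPU usage.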
Yeah, it was the parent Session keeping a reference that was the problem. The problem goes away when I change... packet.Session = _session; to... packet.SessionId = _session.Id; PerformUnload and ProcessEntity now only take up 1.5% of the time. Most of the time is now spent executing the SQL, although the regexes in BuildCommandLogMessage are also fairly significant.

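The fix, shown side by side (a sketch using the property names from this thread; the surrounding code is assumed):

```csharp
// Before: parenting via the entity reference pulls the whole Session
// object graph into every save.
// packet.Session = _session;

// After: setting only the foreign-key property associates the packet
// with the session row in the database without the ORM walking a
// reference chain between the in-memory entities on each save.
packet.SessionId = _session.Id;
```

The trade-off of assigning the Id rather than the entity is that the in-memory association (e.g. navigating from packet.Session) is not populated, which is fine here because the packets are write-only once saved.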
Cool. I fixed the logging slowdown today; logging is now off by default.