|
Hi, I'm using LightSpeed v5. I have a server process that concurrently handles multiple responses. For each response that arrives, a short-lived task is created; the task opens a couple of DB connections one after the other (two UnitOfWorks, both reading from the same table, named Units), writes a record to another table, and finishes (its thread is returned to the thread pool). I thought this kind of workload could benefit from a 2nd level cache, since two UnitOfWorks may access the same entity during a task's lifetime, so I turned the 2nd level cache on for that entity (Units) with the expiry period set to 1 minute; a simplified sketch of the setup follows at the end of this post. What happened was this: for a period of about 3 hours, the process worked fine. After that, exceptions started to be thrown, a swarm of them, and no DB operation succeeded from that point on. An hour or so later, the poor process crashed. Here is some data regarding the exceptions and the subsequent crash:
My question is: has anyone encountered a similar situation, and if so, what can be done to prevent it? Thanks! |
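For context, here is roughly what the setup and the per-task pattern look like. This is a simplified sketch rather than the exact production code: AuditRecord, the "Default" configuration name and the handler shape are placeholder names, and the [Cached]/CacheBroker calls are written from memory of the LightSpeed API.

    using System;
    using Mindscape.LightSpeed;
    using Mindscape.LightSpeed.Caching;

    // The Unit entity opts into the 2nd level cache with a 1 minute expiry.
    [Cached(ExpiryMinutes = 1)]
    public partial class Unit : Entity<int>
    {
    }

    // Placeholder for the "other table" each task writes to.
    public partial class AuditRecord : Entity<int>
    {
      private int _unitId;
      public int UnitId
      {
        get { return _unitId; }
        set { Set(ref _unitId, value); }
      }
    }

    public static class ResponseHandler
    {
      // Shared context; the 2nd level cache is enabled once at startup.
      private static readonly LightSpeedContext Context =
        new LightSpeedContext("Default")
        {
          Cache = new CacheBroker(new DefaultCache())
        };

      // Called from a short-lived task for each incoming response.
      public static void Handle(int unitId)
      {
        using (var uow1 = Context.CreateUnitOfWork())
        {
          var unit = uow1.FindById<Unit>(unitId);        // first read; may populate the cache
        }

        using (var uow2 = Context.CreateUnitOfWork())
        {
          var unit = uow2.FindById<Unit>(unitId);        // second read of the same entity
          uow2.Add(new AuditRecord { UnitId = unitId }); // write to another table
          uow2.SaveChanges();
        }
      }
    }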
|
|
Hi, In theory, the cache should be making sure memory pressure isn't a problem, since it should start removing items when it is under memory pressure (at least according to the Microsoft documentation). As for the OutOfMemory errors: once the process has actually run out of memory, anything that tries to allocate will fail, so the call stack at the point of the exception is largely irrelevant. More interesting is the connection count. You do have pooling, but I suspect that the UnitOfWork is not being closed correctly, which may also be eating memory. Can you describe how you're ending the unit of work? In general, if possible, it's best to use a using block (using (var uow = context.CreateUnitOfWork()) { ... }) so that the unit of work is disposed when the scope ends; a short sketch of the pattern follows below. It's somewhat common to inadvertently not dispose of a unit of work, which exhausts the available connections. There's also the level-1 cache inside each UnitOfWork, which may explain the memory load. I'd also be curious to know: if you disable the 2nd level cache, do the connection-count characteristics change? Does it dramatically change the memory size of the process over a period of time? I hope this helps in debugging the issue! John-Daniel Trask |
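A minimal sketch of the difference being described, with illustrative names (loading a Unit entity by id; the leak is in the first version):

    // Leaky: the UnitOfWork (and the connection it holds) is never disposed.
    // Under load this is what eventually exhausts the connection pool.
    public static Unit LoadUnitLeaky(LightSpeedContext context, int unitId)
    {
      var uow = context.CreateUnitOfWork();
      return uow.FindById<Unit>(unitId);   // uow is abandoned, never disposed
    }

    // Correct: the using block disposes the UnitOfWork, and with it the
    // level-1 cache and the pooled connection, even if an exception is thrown.
    public static Unit LoadUnit(LightSpeedContext context, int unitId)
    {
      using (var uow = context.CreateUnitOfWork())
      {
        return uow.FindById<Unit>(unitId);
      }
    }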
|
|
Hi, Thanks for your reply. Here are the answers to the questions you've asked:
In my view, the connection pileup is a side effect of some kind of corrupt state that begins with the first OutOfMemoryException (after which every DB operation results in an OutOfMemoryException or a NullReferenceException), and not the root cause of it. At a certain point the internal state of the process (possibly of LightSpeed) becomes corrupt; all those exceptions then ensue, followed by the accumulation of connections and finally the crash. I hope this narrows down the list of possible causes. Can I give you any additional information that would provide more clues? Thanks! |
|
|
Hi, Sorry about the delay here, not sure how I missed this!
I'm going to need a repro case to dig into this, unfortunately. As mentioned, the default LightSpeed second level cache is a very basic wrapper over the built-in .NET cache provider; we're not doing anything special with it, but it does seem like you've narrowed the problem down to the cache. The only thing I can think of is that you're allocating memory quickly enough that the cache doesn't get a chance to see the memory pressure and remove items before the process runs out of memory (the sketch below illustrates this). Again, very sorry about the delay here. John-Daniel Trask |
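To illustrate the point about the cache not reacting in time: the built-in .NET MemoryCache only enforces its memory limits on a polling interval (two minutes by default), so a fast allocation burst can exhaust memory before any eviction happens. Whether LightSpeed's DefaultCache sits on MemoryCache or another built-in cache is an assumption here; this is a general .NET sketch, not LightSpeed-specific.

    using System;
    using System.Collections.Specialized;
    using System.Runtime.Caching;

    // MemoryCache checks its limits only when the polling interval fires
    // (default is about 2 minutes). Tightening the limits and the interval
    // makes eviction kick in sooner under pressure.
    var cache = new MemoryCache("units", new NameValueCollection
    {
      { "physicalMemoryLimitPercentage", "50" },   // percentage of physical memory the cache may use
      { "cacheMemoryLimitMegabytes", "200" },      // cap on the cache's own size
      { "pollingInterval", "00:00:10" }            // check the limits every 10 seconds
    });

    // Items still honour their own expiry (1 minute here) independently of the limits.
    cache.Set("unit-42", new byte[1024], DateTimeOffset.Now.AddMinutes(1));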
|