Hello - I'm running the Lightspeed 5 nightly build from May 29th in Visual Studio 2013. I have attached the project for the model that isn't working as I think it should. The funny thing is, this used to work, but I simply can't get it to work now.

What I want to do is eager load all entities related to what I need. The database connection is made over a trans-Atlantic hop from the UK to the US: the Oracle ERP server is located here in the States and one of our factories is located in the UK. This connection gets dropped due to lost packets more times than I care to admit, and it is ridiculously slow at other times -- all reasons to get all the data you can at once instead of making multiple database calls.

Looking at the model I sent over, I want to load the Order record and get all OrdDetail records with their associated Arinvt (that's an item, don't ask). So far that works. Where it breaks down is with the PceVBomOpmat view. Again, I have no idea what that name means, but I do know that it searches through the associated Bill of Materials for the parent Arinvt item and returns all other Arinvt records associated with that parent. Except it doesn't do that now.

A little background: the Arinvt table has two separate composite keys. You can use the ItemNo, Class, and EPlant fields, or you can use the StandardId and ArinvtId fields (or just the ArinvtId field as a foreign key if it is a many-to-one backreference). In this code, we use the second key form. I do get all the PceVBomOpmat records, but for some reason I now have to use OpmatArinvtId and OpmatStandardId as keys to go back to the database and read the Arinvt records. The OpmatArinvt entity is always null.

Here's the code snippet that does this. I have commented out the one-by-one patch I had to use to get it to work and left the code that used to work in place.
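[The original snippet was not preserved in the thread export. A hypothetical sketch of what the described code might look like, assuming LightSpeed's named-aggregate eager loading and entity/association names (Orders, OrdDetails, PceVBomOpmats, OpmatArinvt, "OrderWithBom") inferred from the post above:]

```csharp
// Hypothetical reconstruction -- entity names and the aggregate name are
// assumptions based on the model described in this post.
using (var uow = context.CreateUnitOfWork())
{
    // Load the Order plus its whole related graph in one round trip,
    // via a named aggregate assumed to cover OrdDetail -> Arinvt -> PceVBomOpmat.
    var order = uow.Orders
        .WithAggregate("OrderWithBom")
        .Single(o => o.Id == orderId);

    foreach (var detail in order.OrdDetails)
    {
        foreach (var op in detail.Arinvt.PceVBomOpmats)
        {
            // This used to come back populated by the eager load:
            var bomItem = op.OpmatArinvt;

            // One-by-one workaround (the commented-out patch mentioned above),
            // using the second composite key form (StandardId + ArinvtId):
            // var bomItem = uow.Arinvts.Single(a =>
            //     a.ArinvtId == op.OpmatArinvtId &&
            //     a.StandardId == op.OpmatStandardId);
        }
    }
}
```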
Note that op.OpmatArinvt is now always null. Can you please have a close look at how the model is set up and annotated to see if you can spot what I'm doing wrong? It seems to me this really should work, but I don't know what I changed that broke it. Also, although we have little to no control over most of the database, the PceVBomOpmat view was copied from a view supplied by the vendor and modified for our use, so we can tweak that some if needed. Thanks! Dave
Hello? Are you guys on holiday or something? :) I really need some help with this. The performance hit of my workaround is killing the guys in the UK... Thanks, Dave
Hi Dave, I'm not seeing anything obviously wrong; however, I'm wondering if a regression may have been introduced between the build you were previously using and the one from May 29. Do you remember what version you were using previously? I can look at making an earlier nightly build available so we can determine whether this is indeed an issue and, if so, where it was introduced.
Jeremy - Thanks for the response and the offer to dig up an old nightly build for me to try. Even if that were a probable cause I wouldn't need you to do that, as I have saved every nightly build we have ever used in a release.

Your comments caused me to go back and look at the history of when this stopped working and what else was going on. I now don't believe it was Lightspeed code, at least not directly, as this was working just fine with the 3/25/2014 nightly build - until I started using a new package to catch database errors and retry the transaction. It's the Polly package, if you're at all interested - https://github.com/michael-wolfenden/Polly - very cool stuff. Anyway, there must be something I did or didn't do when I added this to cause the error. I may still need your assistance tracking down what's going on, but at this point I need to do more troubleshooting to pin down exactly what is failing and why. Thanks again, you guys are awesome...
Hi Jeremy - I believe I have isolated the problem, and it seems the last collection is lazy loaded instead of eager loaded. Here's the code snippet where I load everything:
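[This snippet also did not survive the export. A hypothetical sketch of the loading code wrapped in the Polly retry policy, with names (RetryPolicy, LoadOrder, context, "OrderWithBom") assumed rather than taken from the thread:]

```csharp
// Hypothetical sketch -- the UnitOfWork is created and disposed inside the
// retried block, because a dropped connection invalidates the old one.
public Order LoadOrder(int orderId)
{
    return RetryPolicy.Execute(() =>
    {
        using (var uow = context.CreateUnitOfWork())
        {
            return uow.Orders
                .WithAggregate("OrderWithBom")
                .Single(o => o.Id == orderId);
        }
        // Note: by the time the caller walks the returned object graph,
        // this UnitOfWork has been disposed -- so anything the eager load
        // missed can no longer be lazy loaded.
    });
}
```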
For reference, here is how the RetryPolicy is defined:
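[The policy definition was likewise lost from the thread. Assuming the 2014-era Polly API, it may have looked roughly like this; the exception types, retry count, and back-off timings are all guesses:]

```csharp
// Hypothetical sketch of a Polly retry policy for a flaky WAN connection.
private static readonly Policy RetryPolicy = Policy
    .Handle<OracleException>()      // dropped / reset connections (assumed)
    .Or<LightSpeedException>()
    .WaitAndRetry(
        retryCount: 3,
        sleepDurationProvider: attempt =>
            TimeSpan.FromSeconds(Math.Pow(2, attempt)));  // 2s, 4s, 8s
```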
So I put a breakpoint in the database access code and opened QuickWatch on the result variable. As I drilled down through the collection of objects, I also had the debug window open with SQL logging enabled. I got down to the PceVBomOpmat table and the Arinvt member was null, as I expected / feared. But I also noticed the SQL to load the final Arinvt records running in the debug window, and when I refreshed the QuickWatch window, the Arinvt member was fully populated. That sounds like lazy loading to me.

The reason this worked for me before is that the UnitOfWork was created once per call into the source file and was still open and active by the time I got around to following the rabbit trail down to the BoM Arinvt records. With Polly, the UnitOfWork is only active for the bit of code that does the original database access. We added the retries to work around the slow long-distance hop - apparently enough packets get lost at times that the firewall on the server end closes the connection, and you have to close and reopen the UnitOfWork to reestablish the connection and retry the query.

Anyway, this is not the desired behavior. I'm fairly certain this is not a regression either, as it broke while I was running a nightly build from more than a year ago, and the latest nightly build I downloaded didn't fix it. Any ideas what is going on or how to fix it? Thanks, Dave
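[One way to sidestep the lifetime problem described above - my own suggestion, not something confirmed later in the thread - is to do all the graph traversal inside the retried block, so the UnitOfWork is still open if anything falls back to lazy loading, and to return plain data rather than live entities. Names and the projected fields are illustrative assumptions:]

```csharp
// Hypothetical sketch: walk the object graph while the UnitOfWork that
// loaded it is still open, then materialize plain data before disposal.
var bomItems = RetryPolicy.Execute(() =>
{
    using (var uow = context.CreateUnitOfWork())
    {
        var order = uow.Orders
            .WithAggregate("OrderWithBom")
            .Single(o => o.Id == orderId);

        // Any member the eager load missed can still lazy load here,
        // because uow has not been disposed yet.
        return order.OrdDetails
            .SelectMany(d => d.Arinvt.PceVBomOpmats)
            .Select(op => new { op.OpmatArinvtId, Item = op.OpmatArinvt })
            .ToList();   // materialize before the UnitOfWork closes
    }
});
```

This trades detached live entities for a snapshot taken under a single retryable UnitOfWork, which also keeps the whole load inside one Polly retry scope.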
Hey Jeremy - Any chance you can look at this before you head home for the weekend? Thanks, Dave
Hi Dave, No, I'm afraid I won't be able to look at this for a while yet. I will get back to you once I do have a chance to investigate this further, though.
So.... We're paying $899 US for support and you can't even tell me when you'll look into something that appears to be a failure in your product? As much as I love Lightspeed and how well it works, this will weigh heavily in our deliberation on whether or not to move to EF by the time renewal comes around again. That makes me sad - to see a marvelous product get left on the shelf for the new shiny (Raygun.io comes to mind). I understand there are limited resources and too many priorities to do everything that needs to be done, but we are paying you for support per our contract, and we have deadlines to meet as well. The huge difference between the level of support when we first started using Lightspeed and the level of support today makes it hard to justify continuing our relationship. Meanwhile, we are still going back to the database for .. every .. single .. record .. instead of eager loading everything we need. Regards, Dave
Hi Dave, Yes, this is definitely a case of limited resources. The current focus is on getting integration with Visual Studio 2015 sorted, which unfortunately is a time-consuming process.