This thread looks to be a little on the old side and therefore may no longer be relevant. Please see if there is a newer thread on the subject and ensure you're using the most recent build of any software if your question regards a particular product.
This thread has been locked and is no longer accepting new posts. If you have a question regarding this topic, please email us at support@mindscape.co.nz
---
I've seen this thread... http://www.mindscapehq.com/forums/thread/1739 but it didn't fully address my situation. I need to sync records from our main databases to various secondary databases, and I can't use replication or triggers for this. My application therefore has 4 models: it pulls from 3 different databases, plus the database that stores the sync information about which objects, in which tables and databases, need to be synchronized.

Ideally, what I would want is a "TransactionCompleted" event carrying the objects that were actually saved once the transaction was committed, or a list of dirty entities in the current UOW.SaveChanges operation, or access to the IdentityMap of UnitOfWork. I could use reflection, but that would get very ugly. The alternative I've come up with is to attach to the Saving event, store each sender in a collection, and then, once SaveChanges completes without error, process the collection and send the sync notifications. If there is an error, throw out the collection and hope another save operation wasn't overlapping.

Am I guaranteed that the saves have been committed to the database when SaveChanges returns? I've seen behavior before, using IdentityColumn, where I create two new objects, Add the first and call SaveChanges, then set a property on the second with the first object's ID, only to find that the ID wasn't yet populated after the save.

Thanks for any help you can provide.

Matt
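The collect-then-notify approach described above can be sketched independently of any particular ORM. LightSpeed itself is a .NET library; the Python below is purely illustrative, and every class and method name in it is an assumption, not the real API:

```python
# Collect entities as they raise a "saving" notification, then publish sync
# records only after the save succeeds; discard the buffer on failure.
# Illustrative stand-in for the real ORM -- all names here are hypothetical.

class Entity:
    def __init__(self, name):
        self.name = name
        self.is_dirty = True

class UnitOfWork:
    def __init__(self):
        self.entities = []
        self.saving_handlers = []   # callbacks fired per entity during a save

    def add(self, entity):
        self.entities.append(entity)

    def save_changes(self):
        for entity in [e for e in self.entities if e.is_dirty]:
            for handler in self.saving_handlers:
                handler(entity)     # the "Saving" hook
            entity.is_dirty = False # pretend the row was flushed

class SyncCollector:
    """Buffers saved entities; publishes them only after a successful save."""
    def __init__(self):
        self.pending = []
        self.published = []

    def on_saving(self, sender):
        self.pending.append(sender)

    def commit(self):               # call after save_changes returns cleanly
        self.published.extend(self.pending)
        self.pending.clear()

    def rollback(self):             # call if save_changes threw
        self.pending.clear()

uow = UnitOfWork()
collector = SyncCollector()
uow.saving_handlers.append(collector.on_saving)
uow.add(Entity("invoice-1"))
uow.add(Entity("invoice-2"))
try:
    uow.save_changes()
    collector.commit()
except Exception:
    collector.rollback()

print([e.name for e in collector.published])  # ['invoice-1', 'invoice-2']
```

The overlapping-save concern raised above still applies to this sketch: one collector instance per unit of work avoids mixing buffers from concurrent saves.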
---
The unit of work is
As an alternative to attaching to each entity's Saving event, you could override SaveChanges in your strong-typed unit of work. Note that in this case you must make sure that all your code uses the strong-typed unit of work, never the weak-typed one. This would give you something like:
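A minimal sketch of that idea, in Python for illustration since the original C# sample is not preserved in this archive; class and method names here are assumptions rather than LightSpeed's actual API:

```python
# Subclass the unit of work and override save_changes so the sync
# bookkeeping lives in one place. Hypothetical names throughout.

class UnitOfWork:
    def __init__(self):
        self.entities = []

    def save_changes(self):
        for e in self.entities:
            if e.get("dirty"):
                e["dirty"] = False  # pretend the row was flushed

class MyUnitOfWork(UnitOfWork):
    """The 'strong-typed' unit of work: all application code must use this,
    or saves made through the base class will bypass the sync hook."""
    def __init__(self, notify):
        super().__init__()
        self.notify = notify

    def save_changes(self):
        # snapshot the dirty set before the base save clears the flags
        dirty = [e for e in self.entities if e.get("dirty")]
        super().save_changes()
        # the base save completed without throwing: publish the sync records
        for e in dirty:
            self.notify(e)

seen = []
uow = MyUnitOfWork(seen.append)
uow.entities.append({"id": 1, "dirty": True})
uow.entities.append({"id": 2, "dirty": False})
uow.save_changes()
print([e["id"] for e in seen])  # [1]
```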
Yes, once SaveChanges returns, the changes have been flushed to the database. This normally means they are committed, but if you have created a transaction around SaveChanges then of course it will not be committed until you commit it manually. In that case you have control over the transaction, so you can perform the sync when you commit it. We haven't seen the behaviour where the ID wasn't set after SaveChanges returned, and would be interested in a repro -- it sounds like a bug.
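The flush-versus-commit distinction above can be made concrete with a small sketch: when you own the transaction, the sync notification belongs next to the commit, not next to the save. Python for illustration only; the transaction object and function names are hypothetical:

```python
# If you open your own transaction around the save, the flush is not durable
# until commit -- so notify the secondary databases only after committing.

class FakeTransaction:
    def __init__(self):
        self.committed = False
    def commit(self):
        self.committed = True

def save_and_sync(uow_save, send_sync):
    tx = FakeTransaction()
    uow_save()      # flushed to the database, but not yet committed
    tx.commit()     # now the changes are durable
    send_sync()     # safe to tell the secondary databases
    return tx

events = []
tx = save_and_sync(lambda: events.append("saved"),
                   lambda: events.append("synced"))
print(events, tx.committed)  # ['saved', 'synced'] True
```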
---
Would there be performance implications for querying the whole unit of work, versus just the subset that has changes? It was indicated in the other post I read that when you are performing the saves, you have a list of just the objects that need to be updated.
---
There might be performance implications if you have kazillions of unmodified entities in the unit of work, but the overhead is likely to be small compared to all the other stuff that happens during a save. We don't exactly have a list of just the objects that need to be updated -- we build that list as part of SaveChanges. So we are already traversing the entire unit of work, sorting the entities into change buckets, traversing relationships, etc., not to mention the actual work of building SQL batches and sending them to the database. So a quick sweep to pick up the modified entities would probably get lost in the noise unless you have a huge number of unmodified entities and a very small number of modified ones. My advice would be to throw together a quick test harness with representative loads and measure the difference. How many entities are you looking at in a UOW?
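A quick harness along the lines suggested above might look like this: time a sweep that picks the modified entities out of a unit of work that is mostly unmodified, at a representative size. Python sketch, with made-up entity counts:

```python
# Measure the cost of sweeping a large, mostly-clean unit of work for the
# small dirty subset. Entity counts here are illustrative, not representative
# of any particular workload.

import time

def sweep(entities):
    return [e for e in entities if e["dirty"]]

# 100,000 entities, 1 in 1,000 modified
entities = [{"id": i, "dirty": (i % 1000 == 0)} for i in range(100_000)]

start = time.perf_counter()
modified = sweep(entities)
elapsed = time.perf_counter() - start
print(len(modified), f"{elapsed * 1000:.1f} ms")  # 100 modified entities
```

Compare the printed time against the wall-clock cost of the save itself to see whether the sweep is really "lost in the noise" for your loads.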
---
The most entities would likely be when someone is working with Transactions or Invoices, which could be quite a lot for some client locations, although paging will help with that. I'll have to do some testing. You actually brought up a point I hadn't initially thought about: you said you are doing sorting, based on keys I imagine, so you can save parents before children, etc. Since these updates are being synchronized to other relational databases, I'll need to do the same sorting. Is hooking into the Saving event going to be a reliable method for ordering the entities?
---
At present, the Saving event is fired in save order (e.g. fired for parents before children). We don't, however, test for this, so it should be considered an implementation detail. I don't think it's likely to change in future releases, but we get a trickle of requests from people around this event, so it's possible it could change. It is probably okay to use the Saving event for ordering as long as you put some tests around it, so that you'll catch it if we ever do change it.
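The kind of guard test suggested above could look like this: record the order in which the Saving event fires, then assert that every parent precedes its children, so an ORM upgrade that changes the ordering fails loudly. Python sketch; the entity names and helper are illustrative:

```python
# Guard test for the parent-before-child ordering assumption.

def check_parent_first(saving_order, parent_of):
    """saving_order: entity ids in the order the Saving event fired.
    parent_of: child id -> parent id (None for roots)."""
    position = {eid: i for i, eid in enumerate(saving_order)}
    for child, parent in parent_of.items():
        if parent is not None and position[parent] > position[child]:
            return False
    return True

# invoice saved before its two lines: the assumption holds
assert check_parent_first(
    ["inv1", "line1", "line2"],
    {"line1": "inv1", "line2": "inv1", "inv1": None})

# a child observed before its parent: the guard trips
assert not check_parent_first(
    ["line1", "inv1"],
    {"line1": "inv1", "inv1": None})

print("ordering guard ok")
```

In a real test you would populate `saving_order` from the actual event handler during a save of a known object graph, then run a check like this over it.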