This thread is quite old and may no longer be relevant. Please check whether there is a newer thread on the subject, and make sure you are using the most recent build of any software if your question concerns a particular product.
This thread has been locked and is no longer accepting new posts. If you have a question regarding this topic, please email us at support@mindscape.co.nz
|
For me, the crucial parts of the database unique key design pattern are the block size and the isolation of the allocation.

In LightSpeed, the size of the key block allocation is configured via the IdentityBlockSize property of the LightSpeedContext. As far as I know, the key block is controlled by the LightSpeedContext independently of any created UnitOfWork, so the allocated keys are shared across more than one UnitOfWork. The default IdentityBlockSize is 10 keys.

Consider a scenario with an application server (for the sake of simplicity, without any clustering) that has exactly one ambient LightSpeedContext for all threads and exclusive write permissions on the underlying database. Should the IdentityBlockSize be raised to 1,000 or even 10,000 when a lot of entity creation is part of the domain logic? Or maybe the other way around: is it possible to change the IdentityBlockSize before creating a new UnitOfWork? In the normal case I will create fewer than 10 entities, but for this one use case I need to create 1,000 entities, so the context should fetch the next 1,000 keys. Can the LightSpeedContext deal with on-the-fly changes to the IdentityBlockSize, or is this a straight road to pain?

The next part is isolation. I assume that in LightSpeed the key block allocation runs in an isolated transaction. Can you explain your key block allocation strategy? What happens when I begin my own transaction first? Is the key block allocation then a nested transaction inside my own (possibly very long-running) transaction? In a database with 100+ tables and many concurrent users, a write lock on the one and only row in the KeyTable is really evil: performance bottlenecks and deadlocks may occur. In my opinion, the only safe strategy for performing a key block allocation is to open a fresh new connection to the database, start a transaction at serializable isolation, get the keys, commit the transaction, and then dispose of the connection. With the speed of light. ;-) Cheers sbx |
|
Yes, your understanding of the way the key block is handled is correct. The LightSpeedContext associated with the UnitOfWork involved is responsible for allocating a new block of identity values when required. The block is then consumed as needed across all UnitOfWork instances associated with that context until it is exhausted, at which point a new block is allocated.

So if you have a single, long-running context with many UnitOfWork instances and you know you will be creating a lot of entities, it is sensible to use a large block size, since this reduces the number of times we need to go back to the database to allocate a new block. The main downside of taking a large allocation is that if you don't end up using it, any unused identifiers are lost. You can adjust the block size at any time, but it won't take effect until the next block allocation.

In terms of allocation, a new independent connection is created and a transaction is started on that connection, forcing a new transaction scope (to avoid being joined to any existing ambient scope). The associated fetch and update are then issued inside the scope of that transaction. The transaction is then committed and the connection closed, effectively as you have described. :)
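The consumption model described above (one shared block, refetched only on exhaustion, with block-size changes taking effect at the next allocation) can be sketched as follows. This is a hypothetical Python illustration, not LightSpeed's actual code; `fetch_block` stands in for the database round trip that reserves a block and returns its first key:

```python
import threading

class BlockIdentityGenerator:
    """Hands out identity values from a pre-allocated block, fetching a new
    block only when the current one is exhausted. A sketch of the behaviour
    described above; all names here are invented for the example."""

    def __init__(self, fetch_block, block_size=10):
        self.fetch_block = fetch_block
        self.block_size = block_size   # may be changed at any time...
        self._next = 0
        self._end = 0                  # empty block forces a fetch on first use
        self._lock = threading.Lock()  # shared across all units of work

    def next_id(self):
        with self._lock:
            if self._next >= self._end:
                # ...but a new size only takes effect here, at the next
                # block allocation. Ids left in a discarded block are lost.
                start = self.fetch_block(self.block_size)
                self._next, self._end = start, start + self.block_size
            allocated = self._next
            self._next += 1
            return allocated

# Demo: a fake in-memory sequence in place of the KeyTable round trip.
state = {"next": 1}
def fetch(size):
    start = state["next"]
    state["next"] += size
    return start

gen = BlockIdentityGenerator(fetch, block_size=3)
ids = [gen.next_id() for _ in range(4)]  # triggers a refetch after three ids
```

Raising `gen.block_size` mid-stream does not disturb the current block; the larger size is simply used the next time the block runs out.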
|
I appreciate your answer, and I'm pleased to hear that my assumptions about the key block handling in LightSpeed were correct. In my opinion, developers using third-party components in their solutions should be aware of the mechanics at work behind the scenes in order to use them effectively and efficiently. After all, the Law of Leaky Abstractions states: all non-trivial abstractions, to some degree, are leaky. So I believe this thread will help other developers gain a deeper insight into LightSpeed, too. Thx, sbx |
|