Hi, I am evaluating LightSpeed among other ORMs, and I am trying to see whether LightSpeed can handle some specific requirements and to get instruction on how to implement them (complete LightSpeed newbie).

I have to have an ORM that can map to different types of databases at the same time. E.g. one client uses MySQL and Oracle backends: two databases in the same app, with similar schemas.

I have to be able to retrieve data in a distributed way, i.e. get objects from the different databases and combine them into one list on return. Each table has a primary key consisting of an auto-incremented integer column and a constant integer that represents its partition ID: the ID of the database the object is stored on.

I know two things. I have to do away with foreign keys, but I want to create a model that pulls the object from the pseudo foreign key (the ID and partition ID of another table intended to be linked) that is stored in the current table (extending the model with non-persistent objects that represent the pseudo-foreign-key link above). The data is to be pulled in parallel, so if I have to pass multiple contexts to achieve that, that is fine with me, but I need to know how to do this. On saving, a persistent object is saved to the database represented by its partition ID (by keeping track of a list of contexts that represent the database for each partition ID). I have an XML config file that houses the databases and their partition IDs; more databases can be added to this, so the partition ID range is not fixed. I set the partition ID based on the database that will save the object (just before saving).

I have evaluated XPO and figured out how to do this with a lot of hacks, like setting a single-field primary key and then setting a unique index across the primary key to form the real (pseudo) primary key, and pulling data in parallel from the different data layers that are active for a customer and combining them into one data layer.

So I need to know the following:

1) How to create an auto-incremented field in a composite primary key that is database independent.
2) In the case of MSSQL, save strings as nvarchar instead of varchar (i.e. control some mapping characteristics based on the back end).
3) Only generate one class library for use with all of the databases that could be added, while making sure that specific databases save using certain data types, as in the nvarchar example above.
4) Retrieve data in a distributed way from the pseudo foreign keys described above, then combine it into a list of the pseudo foreign key's object type.
5) Save data to a specific database based on its partition ID.

If you can help with this and I can implement it in my product, then you can consider this product bought by me, because it can then fulfil any use case of mine. I know LightSpeed was not created to access data in a distributed way, but if you could use some of the hack logic I mentioned earlier (how I got distributed data out of the XPO ORM) to show me how to do it with LightSpeed, that would be great. I prefer the return of a specific object rather than a DataTable, though (I only used DataTables because their XPCollection is pernickety). The solution should cover more than the databases listed above, i.e. VistaDB, MSSQL, SQL CE 4.0, MySQL, PostgreSQL, Oracle, Firebird, etc.

Thanks in advance. I hope you can help. |
|
|
You can map to several database types in the same application by creating one LightSpeedContext per database; each unit of work then talks to the database of the context that created it. For example:
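Something along these lines (a minimal sketch; the connection strings are placeholders and the DataProvider values will depend on your actual back ends):

    using Mindscape.LightSpeed;

    // One LightSpeedContext per back-end database; contexts are typically
    // created once and kept for the lifetime of the application.
    LightSpeedContext mysqlContext = new LightSpeedContext
    {
      ConnectionString = "server=...;database=...;",  // placeholder
      DataProvider = DataProvider.MySql5
    };

    LightSpeedContext oracleContext = new LightSpeedContext
    {
      ConnectionString = "Data Source=...;",          // placeholder
      DataProvider = DataProvider.Oracle9
    };

    // A unit of work always talks to the database of the context that created it.
    using (IUnitOfWork mysqlUow = mysqlContext.CreateUnitOfWork())
    using (IUnitOfWork oracleUow = oracleContext.CreateUnitOfWork())
    {
      // query each database here, then combine the results into a single list
    }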
There are a couple of potentially difficult details:

- LightSpeed doesn't support a composite ID where one of the fields is auto-incremented. However, because the partition ID is constant for any given database, the composite primary key in the database can be represented in LightSpeed as a scalar auto-increment ID plus an ordinary PartitionId field, without changing the database.
- LightSpeed associations can't span databases, so your pseudo foreign keys can't be mapped as normal associations. Instead, you can resolve them with a query through the unit of work for the appropriate partition.
Here is an example of using a query instead of a LightSpeed association:
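A sketch of the idea (Order, Customer, CustomerId and CustomerPartitionId are invented names; CustomerId and CustomerPartitionId are assumed to be fields on the generated Order entity):

    using System;
    using Mindscape.LightSpeed;

    public partial class Order
    {
      // Not persisted: resolves the pseudo foreign key manually instead of
      // relying on a LightSpeed association (which cannot span databases).
      // The caller supplies a way to get the unit of work for a partition.
      public Customer GetCustomer(Func<int, IUnitOfWork> unitOfWorkForPartition)
      {
        IUnitOfWork unitOfWork = unitOfWorkForPartition(CustomerPartitionId);
        return unitOfWork.FindById<Customer>(CustomerId);
      }
    }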
Let us know if you need any more info! |
|
|
Thank you for such a fast response! :) Major bonus points.

1) If PartitionID is not part of the LightSpeed ID, how does LightSpeed generate a database that matches this composite primary key requirement? The composite ID is necessary for dynamic partitioning of data between databases without defining partitioning in the database (not every user is given partition permissions on a database, and if they are, the partition has to be reconfigured every time a new database is added; the model I'm creating fixes that). If just the integer were saved, the key wouldn't be unique across databases, and that would present a problem, especially with many-to-many mappings across databases. (I guess there is a way to do this differently, but it is a preference over adding an additional unique index: why use two when you can use one?)

Another question: the tables will have the same names, but I want to create the tables with an object qualifier in case the databases are used to support another app. I.e. App 1 uses erf_Customer but still maps to the same entity model Customer; App 2 uses otherone_Customer but still maps to the same entity model Customer. Is this feasible? (No query is necessary between the two apps; I just want to know if I can do this. For instance, NHibernate uses XML mappings that I could add a placeholder to and insert erf_ into at runtime.)

4) Thanks for the example. Yes, I know about not being able to use standard foreign keys, and therefore associations, from any ORM (that makes sense, as databases don't support that anyway). I did away with standard foreign keys. In a one-to-many association I simply save, in the primary table, the composite ID of its foreign key. I wanted to create a property in the model that is not persisted but holds the reference, like your query example. (That only works for single-object retrieval, not for the many side of a one-to-many or many-to-many relationship, where the items could be spread across databases.)

Can the units of work be per app (static), or do they have to be recreated?
I wanted to create a static class that holds a static dictionary of units of work (if other databases are added later, they can be added to this dictionary when they come into use). In the method that queries for data, a subset of those units of work would be created, based on the ones available to the site.
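Roughly what I have in mind (just a sketch; the class and member names are placeholders):

    using System.Collections.Generic;
    using Mindscape.LightSpeed;

    // One entry per database/partition, loaded from my XML config at startup;
    // more can be registered whenever a new database is added.
    public static class PartitionRegistry
    {
      private static readonly Dictionary<int, LightSpeedContext> _contexts =
          new Dictionary<int, LightSpeedContext>();

      public static void Register(int partitionId, LightSpeedContext context)
      {
        _contexts[partitionId] = context;
      }

      // The dictionary could hold the units of work themselves if they can be
      // long-lived; otherwise, create one on demand per partition like this.
      public static IUnitOfWork CreateUnitOfWork(int partitionId)
      {
        return _contexts[partitionId].CreateUnitOfWork();
      }
    }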
Is there any way to just let the database control the autoincrement so that LightSpeed respects the primary key (what would have to be altered to support this?), with the other part of the key (PartitionId) created by the app, of course?

Last question, and then I think I can tell whether LightSpeed is feasible: if I have multiple models that represent features, how can they be combined at runtime and called from one unit of work that can handle all of those models, so that I only have to create units of work per database and not per feature?

Sorry if I am lost, but you have been very helpful thus far. Examples please, just like the ones you provided above; they were very helpful. (My lightbulb flickers; with a little more info it can stay on :)) |
|
|
Note I'm not suggesting you remove the composite primary key in the database. All I'm saying is that it sounds like the composite primary key in the database can be represented using a scalar ID and a separate partition field in LightSpeed, without changing the database. Hope that clarifies things!

(One caveat I thought of with this is that you couldn't use it with the second-level cache, because entities from different databases could have the same autoincrement value, and so using it for the ID would mess up the L2 cache. That isn't an issue with the L1 cache, because each database session gets a different L1 cache. If you needed to use the L2 cache, then you would need the LightSpeed ID to be composite.)

(You did mention LightSpeed generating the database. If you're referring to designer-database sync, then yes, that does want composite PKs to be represented by composite IDs. But when it comes to performing selects, inserts and updates, LightSpeed neither knows nor cares about PKs, only about IDs.)

Table Prefixes. Yes, this is a good fit for an INamingStrategy. You would create an INamingStrategy that adds a prefix to the table name; you could then vary the prefix between instances.
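Going back to the scalar-ID-plus-partition-field representation, a hand-written entity would look roughly like this (the designer normally generates this code; Customer and the field names are only for illustration):

    using Mindscape.LightSpeed;

    // Id (inherited from Entity<int>) is the auto-incremented integer; the
    // partition is mapped as an ordinary field even though it is part of
    // the primary key in the database.
    public class Customer : Entity<int>
    {
      private int _partitionId;
      private string _name;

      public int PartitionId
      {
        get { return _partitionId; }
        set { Set(ref _partitionId, value, "PartitionId"); }
      }

      public string Name
      {
        get { return _name; }
        set { Set(ref _name, value, "Name"); }
      }
    }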
Combining 'Feature' Models at Runtime. This depends on whether there are associations or inheritance relationships across 'feature' boundaries. If not, then you should be okay. The unit of work doesn't care what assemblies the entities are defined in; it only cares that they derive from Entity. However, if you use LINQ, then you will have to write your queries a little differently.

The designer generates a strongly-typed unit of work class for each model, e.g. CustomerUnitOfWork, LogisticsUnitOfWork, with LINQ helper properties for that model, e.g. CustomerUnitOfWork.Customers, LogisticsUnitOfWork.Shipments. If you are combining features at runtime, then you can't create an UberUnitOfWork that includes all of these (e.g. UberUnitOfWork.Customers, UberUnitOfWork.Shipments), because that would create a compile-time dependency on all of the models. Instead you would need to use a plain IUnitOfWork and the Query extension method:
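For example (Customer is again just illustrative, and the context is assumed to be configured as before):

    using System.Linq;
    using Mindscape.LightSpeed;
    using Mindscape.LightSpeed.Linq;  // brings the Query<T> extension method into scope

    IUnitOfWork unitOfWork = context.CreateUnitOfWork();

    // Query<T> works on a plain IUnitOfWork no matter which model assembly
    // the entity type is defined in.
    var customers = unitOfWork.Query<Customer>()
                              .Where(c => c.Name.StartsWith("A"))
                              .ToList();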
This does exactly the same thing as the designer-generated LINQ properties without the need for a feature-specific unit of work class, so you can use the same unit of work object across multiple features. |
|
|
Thank you, this is most helpful. Yes, the composite key is needed as the LightSpeed ID so that the database can be prepared at runtime (databases can be added at runtime), unless you have another idea for doing this without my having to create insane scripts for all databases. But then again, level 2 caching may be utilized, so I guess that is a moot point unless I use another global caching solution. Can I get an example of combining the units of work? The example query definitely helps; now I need an example of the combining in the context. Remember, I'm completely new to LightSpeed. Then I will know whether LightSpeed is a viable solution and can create a test based on what you have helped with. Side note: I originally generated the model from an existing database as a test DB, just to speed up model creation, but the other databases for the app are supposed to be generated by the ORM. Is there a way to change my username? I didn't know it would be visible and would prefer not to broadcast it; I think you can see why. Thanks in advance. |
|
|
It sounds like you're thinking of creating the databases using something like the NHibernate CreateSchema method. LightSpeed doesn't have that; instead, we provide a migrations framework which you script independently of the runtime entity definitions. (We do this because it gives you more control over versioning and data migration.) However, in your case, the migrations framework isn't viable, because it doesn't support creating tables with composite keys. (We believe that new tables shouldn't be using composite keys, and the migrations framework doesn't try to support replicating arbitrary existing database schemas, which is what it sounds like you need to do.) So I think you will need to use something external to LightSpeed to create the new databases anyway. (You can use LightSpeed to populate them, of course -- I just don't think we have anything that will help you create the schemas.)

I'm not sure what you're asking for regarding a sample of 'combining the units of work in the context.' Do you mean creating the multiple units of work from the contexts? If so, it would go something like this:
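(Customer and the connection details are placeholders; in your case you would populate the dictionary from your XML config rather than inline:)

    using System.Collections.Generic;
    using System.Linq;
    using Mindscape.LightSpeed;
    using Mindscape.LightSpeed.Linq;

    var contexts = new Dictionary<int, LightSpeedContext>
    {
      { 1, new LightSpeedContext { ConnectionString = "...", DataProvider = DataProvider.MySql5 } },
      { 2, new LightSpeedContext { ConnectionString = "...", DataProvider = DataProvider.Oracle9 } },
    };

    // One unit of work per database; run the same query against each and
    // combine the results. Each iteration is independent, so these could
    // also be run in parallel if required. (Keep the units of work alive
    // instead if you need to save changes back to the entities.)
    var allCustomers = new List<Customer>();
    foreach (LightSpeedContext context in contexts.Values)
    {
      using (IUnitOfWork unitOfWork = context.CreateUnitOfWork())
      {
        allCustomers.AddRange(unitOfWork.Query<Customer>().ToList());
      }
    }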
I'll ask our Web bod about changing your username. |
|
|
Thanks. As far as the combining goes: it was meant as a precursor to using the IQueryable method you mentioned earlier. I know I can't get an UberUnitOfWork, but I wanted to see how I would use the example you mentioned earlier. I thought the migrations framework would work, but I do see where it does not create those fields correctly.
I did some research on the composite key angle, and someone actually wrote about some of my findings in a SQL Azure article about unique IDs across databases, which make for easy extension of the database (by adding new databases at any time). This supports growing data: for instance, seismologists who need to collect data every second (the data can become very large), or growing social networks that don't use the Bigtable model. Code does not need to change in the future to support the growing data; when a database becomes full, another one can be added on the fly before that happens. So that part is definitely needed and not arbitrary. I know LightSpeed can retrieve it correctly, but I have a budget, so an ORM that can preferably support this would be the most desirable. So close. |
|
|
What you're describing is basically the sharding approach to partitioning big data sets. Again, I'm not convinced that the primary key needs to include the partition ID in this case; I would have thought an index on the composite would suffice (and you could impose a unique constraint if you wanted to be sure). And I think we have customers doing sharding without composite IDs. But I'm certainly not an expert on sharding, and there may well be reasons why having the composite be the actual primary key is better in that case! If so, I'm afraid this isn't something you could currently do with LightSpeed migrations; nor can you currently have a composite ID where one of the fields is autoincrement. Sorry!

By the way, if what you want is sharding, then I think my previous advice may have been misleading. In a sharding scenario, typically you are dealing with only one database at a time. For example, if you are sharding by user, then the page or application needs to talk only to the shard containing that user's data. In this case you would just need to set up the LightSpeedContext dynamically to point at the right shard, and you would need only a single unit of work instance rather than the combined repository I showed earlier. Sorry if I gave you a bum steer!

Regarding the querying, you would use this along the lines of:
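(contexts, userPartitionId, customerId and Customer are placeholders again, assumed to be set up as in the earlier sketches:)

    using System.Linq;
    using Mindscape.LightSpeed;
    using Mindscape.LightSpeed.Linq;

    // Sharding: pick the one database holding this user's data, then use a
    // single unit of work for everything in the request.
    LightSpeedContext shard = contexts[userPartitionId];

    using (IUnitOfWork unitOfWork = shard.CreateUnitOfWork())
    {
      var customer = unitOfWork.Query<Customer>()
                               .Single(c => c.Id == customerId);
      // ... further queries and saves for this request use the same unit of work
    }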
Exactly how this would fit into runtime composition of features or into a repository that encapsulates multiple UOWs I'm not sure -- it depends on how your application composes features. |
|