This thread looks to be a little on the old side and therefore may no longer be relevant. Please see if there is a newer thread on the subject and ensure you're using the most recent build of any software if your question regards a particular product.
This thread has been locked and is no longer accepting new posts. If you have a question regarding this topic, please email us at support@mindscape.co.nz
|
Hi! We use a keytable that stores next key values per table. This keytable has two columns: TableName NVARCHAR(128), NextId INT. Is there any way to use this, or set the ID manually? |
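A minimal, hypothetical sketch of the keytable pattern the poster describes: one row per table holding the next key value. This is an in-memory illustration only; a real implementation would read and increment the row inside a database transaction (e.g. `SELECT ... FOR UPDATE` then `UPDATE`), and none of these names come from LightSpeed itself.

```python
class KeyTable:
    """In-memory sketch of a per-table key allocator, mirroring the
    two-column schema (TableName NVARCHAR(128), NextId INT)."""

    def __init__(self):
        # Maps TableName -> NextId.
        self._next_ids = {}

    def next_id(self, table_name: str) -> int:
        """Return the next id for table_name and advance its counter."""
        current = self._next_ids.get(table_name, 1)
        self._next_ids[table_name] = current + 1
        return current

keys = KeyTable()
keys.next_id("Customer")  # 1
keys.next_id("Customer")  # 2
keys.next_id("Order")     # 1 -- each table gets an independent counter
```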
|
|
We don't have support for keytable-per-table. I've logged a feature request for this. You can set the ID manually by overriding the GeneratedId() method to return the desired ID. |
|
|
A somewhat indirectly related question... The theoretical maximum in a system using Int64 would be 9,223,372,036,854,775,807. What happens after that? I'd imagine that for a temporal database recording all sorts of events, this limit could be reached easily. Having a KeyTable per table would delay the inevitable, but not prevent it. So, I'm just wondering, do you guys have any thoughts on a good way to tackle this? :-) |
|
|
[quote user="technicaljoe"]The theoretical maximum limit in a system using Int64 would be 9,223,372,036,854,775,807. ... I'd imagine for a temporal database recording all sorts of event, this limitation could be reached easily.[/quote] Your imagination is clearly better than mine *grin*. By my estimation if you are inserting 1 billion entities per second, it would take slightly more than 292 years. At a more realistic rate, therefore, it would take a million years or more to exhaust the Int64 number space. We would of course be glad to address this scenario provided you pay for the million years licence subscription up front... |
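The 292-year figure in the reply above checks out with a few lines of arithmetic (1 billion inserts per second is the rate assumed in that reply):

```python
# Back-of-envelope check: how long to exhaust the Int64 space at
# 1 billion inserts per second?
INT64_MAX = 2**63 - 1              # 9,223,372,036,854,775,807
inserts_per_second = 1_000_000_000

seconds = INT64_MAX / inserts_per_second
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))             # slightly more than 292 years
```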
|
|
Well, people do say I can get quite imaginative and come up with the weirdest scenarios... :-) For argument's sake, suppose you design a system like Facebook (statistics): start with the complexity of the data model (i.e. one key table for all tables), multiply by 350 million users, then multiply by all sorts of events and activities, e.g. those generated by users, the system, or spam/attacks. Run this system 24/7/365 for a while, and surely the theoretical limit could be reached within a reasonable timeframe. I guess I'm just wondering if anyone has any thoughts on what would be a robust pattern for this scenario. Or... perhaps, as you indicated, the risk is minimal if not negligible. |
|
|
I don't want to get bogged down in this, because honestly it's largely theoretical -- I doubt you'd run a site the size of Facebook or MySpace through a generic ORM, and frankly the scaling challenges of a site that size are going to be about bandwidth, caching and storage volume rather than ID allocation -- but I can't resist the opportunity to multiply large numbers on the back of an envelope...

350 million users, let's say growing to 500 million so as to make the maths easier. Assume that there are no inactive users, and that every user reads, posts or performs some other action every 10 minutes (and never sleeps): say 150 actions per day. Assume every action results in 20 inserts (user data, logs, etc.; estimating on the high side here). Finally, assume that spam, attacks and non-request-related system activity result in a 10x multiplier (i.e. allow up to 900% overhead; again estimating on the high side (I hope)).

So the number of IDs allocated per day = 500 million users x 150 actions/user/day x 20 IDs/action x 10 for non-user stuff = 15 trillion (15 x 10^12) IDs/day.

How long would it take, at this rate, to exhaust the Int64 space? 15 trillion is less than 2^44 (17.6tn). 2^63 IDs / 2^44 IDs/day = 2^19 days. That's just over 1400 years: a millennium and a half of flat-out operation.

Big numbers, to paraphrase Douglas Adams, are big. Really big. I mean, you might think 15 trillion is a lot of IDs to allocate per day, but that's just peanuts compared to Int64.MaxValue. *goes on and on until Joe flies down to Wellington and makes me stop* |
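The envelope maths above can be replayed step by step; every figure in the post checks out:

```python
# Replaying the back-of-envelope calculation from the post above.
users = 500_000_000
actions_per_user_per_day = 150
ids_per_action = 20
overhead_multiplier = 10   # spam, attacks, system activity

ids_per_day = (users * actions_per_user_per_day
               * ids_per_action * overhead_multiplier)
assert ids_per_day == 15 * 10**12   # 15 trillion IDs/day
assert ids_per_day < 2**44          # 2^44 is about 17.6 trillion

days = 2**63 // 2**44               # = 2^19 = 524,288 days
years = days / 365.25
print(days, round(years))           # 524288 days, roughly 1435 years
```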
|
|
<3 ivan |
|
|
Thanks Ivan, I can live with your explanation, for now :-) I'd like to think that I've taken as many extreme scenarios into consideration as I could, so the sensible (hopefully simpler) solution I build should have enough wiggle room for the unexpected scaling requirements. |
|
|
Thanks for your replies! |
|
|
This is why Mindscape employ Ivan. You can throw something completely bonkers at him and he'll still come up with a cogent solution/reason/suggestion.
Towlson FTW! |
|
|
...And if this still isn't enough... could we start the key table at negative 9,223,372,036,854,775,808 (Int64.MinValue) to instantly double the oh-so-claustrophobic wiggle room? Not sure if LightSpeed plays well with negative Ids... |
|
|
Why bother making it negative? Just make it an unsigned BIGINT and you get the same effect :). It would be interesting to see how many regular databases would exhaust even the 32-bit integer keytable space. I suspect most normal databases would still take a very long time to use up the entire 32-bit integer space (though clearly the Facebook scenario would have done that in one day; I can't imagine the size of the database farm needed to handle that much traffic!) |
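The one-day aside above is, if anything, an understatement: at the hypothetical 15-trillion-IDs-per-day rate from Ivan's envelope, even an unsigned 32-bit key space is gone in well under a minute.

```python
# How long does the 32-bit space last at 15 trillion IDs/day?
ids_per_day = 15 * 10**12
uint32_space = 2**32                  # 4,294,967,296 values

fraction_of_day = uint32_space / ids_per_day
seconds = fraction_of_day * 86400
print(round(seconds, 1))              # exhausted in under 25 seconds
```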
|