|
Hi, The project I'm working on has two tables that are expected to receive a lot of records. I expect these two tables to eventually exceed the range of a 32-bit integer (the product specification requires me to assume 300m records per day with at least 30 days of history, which is roughly 9 billion rows). For this reason I specified their IDs as long/Int64. These tables will also receive bulk inserts, so I would like to use the key table method. However, the key table is currently implemented as an int instead of an Int64; would modifying the key table creation script be enough to make this work? Also, is there a way for the key table method to work per table? I've looked into overriding the GeneratedId entity method, and while I think it would be possible to create my own (per-table) key table solution, I feel it would be better if LightSpeed did it. This isn't a requirement (I think a long will be large enough for both tables to share the same key table); it would just be 'nice to have'. Regards, Jerremy |
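For reference, the arithmetic behind the choice of Int64 can be sketched as follows (a minimal illustration; the 300m-per-day and 30-day figures are taken from the post above):

```python
# Back-of-the-envelope check: does the expected row count fit in an Int32?
RECORDS_PER_DAY = 300_000_000   # figure from the product specification above
DAYS_OF_HISTORY = 30

INT32_MAX = 2**31 - 1           # 2,147,483,647
INT64_MAX = 2**63 - 1

total_rows = RECORDS_PER_DAY * DAYS_OF_HISTORY  # 9,000,000,000

print(total_rows > INT32_MAX)   # True: Int32 IDs would overflow
print(total_rows < INT64_MAX)   # True: Int64 has ample headroom
```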
|
|
Yes, changing the KeyTable in the database to be a BIGINT instead of an INT should suffice. Internally LightSpeed treats the values returned from KeyTable as Int64 so will correctly handle IDs that don't fit in an Int32. You can quickly verify this by changing your table to BIGINT and temporarily changing the NextId value to 10 000 000 000. There is currently no support for per-table key tables. However, we can assure you that an Int64 will be more than large enough for both tables to share the same key table. See this now legendary thread for the gory details (quick summary: at 300m records per day it will take you over 80 million years to exhaust the Int64 space). |
|
|
Hehehe :-) |
|
|
Well, as said, it's more of a "nice to have" ;) That said, another reason for a per-table key table would be different identity sizes. Most of our tables use a standard int for ID purposes. You could convert them to bigint as well, but that would be overkill for a table that isn't expected to grow above 10,000 rows (let alone above the int limit). It's also nicer if those tables have 'reasonable' IDs (e.g. not 8231281321 ;). Anyway, it's no issue, as those tables will just use an identity column. |
|