In a Web application I am working on, all database operations are abstracted through generic repositories defined over the Entity Framework ORM.
However, to keep the generic repositories simple, every involved table must define a unique integer column (Int32 in C#, int in SQL). So far, this column has always been the PK of the table and also the IDENTITY.
Foreign keys are heavily used and reference these integer columns. They are required both for consistency and for the navigation properties generated by the ORM.
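To make the constraint concrete, here is a minimal sketch of what such a repository contract looks like; the interface and member names are hypothetical, not the actual application code:

```csharp
using System.Collections.Generic;

// Illustrative sketch only: every entity must expose an Int32 surrogate key
// backed by an IDENTITY int column, and the repository operates on that key.
public interface IEntity
{
    int Id { get; set; }          // Int32 in C#, int IDENTITY in SQL
}

public interface IRepository<T> where T : class, IEntity
{
    IEnumerable<T> GetAll();      // SELECT * FROM table (initial data load)
    void Insert(T entity);        // INSERT INTO table (cols) VALUES (...)
    void Update(T entity);        // UPDATE table SET ... WHERE Id = entity.Id
    void Delete(int id);          // DELETE FROM table WHERE Id = id
}
```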
The application layer typically does the following operations:

- Initial data load from table (*):
  SELECT * FROM table
- Update:
  UPDATE table SET Col1 = Val1 WHERE Id = IdVal
- Delete:
  DELETE FROM table WHERE Id = IdVal
- Insert:
  INSERT INTO table (cols) VALUES (...)
Less frequent operations:

- Bulk insert (a sketch of this pattern follows the footnote below):
  BULK INSERT ... INTO table
  followed (*) by a full data load, to retrieve the generated identifiers
- Bulk delete - a normal delete operation, but "bulky" from the ORM's perspective:
  DELETE FROM table WHERE OtherThanIdCol = SomeValue
- Bulk update - a normal update operation, but "bulky" from the ORM's perspective:
  UPDATE table SET SomeCol = SomeVal WHERE OtherThanIdCol = OtherValue
(*) All small tables are cached at the application level, and almost all SELECTs will not reach the database. A typical pattern is an initial load followed by lots of INSERTs, UPDATEs and DELETEs.
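For illustration, here is a minimal sketch of the bulk-insert-then-reload pattern, assuming SQL Server and SqlBulkCopy from System.Data.SqlClient; the table name and the reload call are placeholders, not the real application code:

```csharp
using System.Data;
using System.Data.SqlClient;

// Hypothetical helper showing why a full reload follows each bulk insert:
// SqlBulkCopy does not return the IDENTITY values generated by the server.
public static class BulkLoadExample
{
    public static void BulkInsert(string connectionString, DataTable rows)
    {
        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.SomeTable";   // hypothetical table
            bulkCopy.WriteToServer(rows);                       // Ids generated server-side
        }

        // The application then re-reads the whole table to pick up the generated
        // identifiers, e.g. repository.GetAll()  =>  SELECT * FROM dbo.SomeTable
    }
}
```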
Based on current application usage, there is a very small chance of any table ever reaching 100M records.
Question: From a DBA's perspective, are there significant problems I can run into with this table design limitation?