SQL Server stores data in blocks known as pages. Each page is 8,192 bytes (8 KB) and carries its own verification information -- torn-page detection bits in SQL Server 2000 and, in SQL Server 2005, an optional page checksum. If a write operation completes only partially because of a disk error or power failure, the page is damaged ("torn"), and the physical data file that holds the page is flagged as corrupt. Consequently, you want to keep database objects -- tables, indexes and full-text repositories -- in as many separate physical files as possible.
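The page-level verification that makes a torn page detectable in the first place is switched on per database. A minimal sketch, assuming a hypothetical database named `Sales` (TORN_PAGE_DETECTION is the SQL Server 2000 option; the stronger CHECKSUM option arrived with SQL Server 2005):

```sql
-- Enable torn-page detection so SQL Server can spot incomplete writes
-- ("Sales" is a placeholder database name).
ALTER DATABASE Sales
SET TORN_PAGE_DETECTION ON;

-- On SQL Server 2005, prefer the page checksum instead:
-- ALTER DATABASE Sales SET PAGE_VERIFY CHECKSUM;
```

Either option costs a little on every write, but it turns silent corruption into an error you can catch and repair.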
Besides allowing better concurrent access (it's easier for the system to read from two separate physical files than from two different points in the same one), data segregation also helps preserve data integrity in the event of a torn page. If a file in your database becomes damaged, and that file contains only an index, the damaged index can simply be dropped and rebuilt with no harm to the rest of the system. Things may slow down a bit in the meantime, but slowing down is better than grinding to a halt. If that index is stored alongside tables or other data, everything in that file is declared corrupt, and the only way to recover from such a catastrophe is to dig out a backup.
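Segregation of this kind is done with filegroups: add a filegroup backed by its own physical file, then place indexes on it. A minimal sketch with hypothetical names throughout (`Sales` database, `SalesIndexes` filegroup, an assumed `dbo.Orders` table with a `CustomerID` column):

```sql
-- Create a dedicated filegroup and physical file for indexes
-- (all names and paths here are placeholders for illustration).
ALTER DATABASE Sales ADD FILEGROUP SalesIndexes;

ALTER DATABASE Sales
ADD FILE (
    NAME = SalesIndexes1,
    FILENAME = 'D:\Data\SalesIndexes1.ndf',
    SIZE = 100MB
) TO FILEGROUP SalesIndexes;

-- Place a nonclustered index on the new filegroup; if this file is
-- ever damaged, the index can be dropped and rebuilt from the table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
ON SalesIndexes;
```

Because the filegroup holds only rebuildable indexes, losing that one .ndf file never forces a full restore of the base tables.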
ENSURING DATA INTEGRITY IN SQL SERVER
Step 1: Back up, optimize and enable safety features
Step 2: Segregate data aggressively into files and filegroups
Step 3: Consider using implicit transactions
Step 4: Be careful how you enforce internal referential integrity through triggers
Step 5: Use constraints and relationships to keep out bad data
Step 6: Don't expose interfaces that create dynamic SQL to the end user
Step 7: Use a "check-in/check-out" mechanism for contested data
ABOUT THE AUTHOR:
Serdar Yegulalp
Serdar Yegulalp is editor of the Windows Power Users Newsletter. Check it out for the latest advice and musings on the world of Windows network administrators -- and please share your thoughts as well!
Copyright 2005 TechTarget