SQL Server table indexes have a kind of voodoo to them that can be alienating. The bigger the table and the more commonly it's accessed, the greater the need for a properly tuned index.
The fill-factor option for a SQL Server table index is one of the most common tuning methods to apply to an index. A fill factor for an index is a percentage that tells SQL Server how much of each leaf-level page in the index should be filled with index data, and how much should be left as room to grow into. Such expansion happens if columns in the underlying table are changed or added, or when rows are added to the table. If the index fragments over time, or if it exhibits poor performance, that’s a sign you may need to tweak the fill factor for the indexes on your most commonly used tables. The question is how.
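As a sketch of how this looks in practice (the table and index names here are hypothetical), a fill factor is applied per index when the index is created or rebuilt:

```sql
-- Rebuild a (hypothetical) index with an explicit fill factor,
-- leaving 10% of each leaf-level page free for future growth.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- The same option can be set at creation time:
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 90);
```

Note that the fill factor only takes effect when the index is built or rebuilt; it isn't enforced as pages fill up afterward.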
The first thing you must do is avoid, as many have discussed (e.g., Paul S. Randal of SQL Server Magazine), manually setting an instance-wide fill factor -- that is, a fill factor that affects all tables in a given instance of SQL Server. The reason is simple enough: each SQL Server table exhibits its own behaviors and has its own needs. The default setting of 100% fill factor is just that -- a default -- and individual indexes can be adjusted over time as you build a better picture of database performance.
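You can inspect the instance-wide setting with sp_configure; the point here is to verify it is at its default, not to change it:

```sql
-- Show the instance-wide default fill factor. A value of 0 means
-- "fill pages completely," which is equivalent to 100. Leave this
-- at the default and set fill factors per index instead.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor (%)';
```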
The second thing to bear in mind: There may be many instances in which you don't need to adjust the fill factor much, if at all. Don't bother with indexes that only have a few pages (as per the DBCC SHOWCONTIG command), since those tend to be cached heavily anyway. Go first for the indexes that have the largest number of pages and low scan densities (again, as per the DBCC SHOWCONTIG command). An index with only 12 pages and a density of 50% is going to be far less of a problem than an index with 7,000 pages and the same density.
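DBCC SHOWCONTIG reports both figures per index, so a quick pass over a suspect table (the table name below is hypothetical) is enough to find the candidates worth tuning:

```sql
-- Report pages scanned and scan density for every index on a
-- (hypothetical) table, one row per index. Large page counts
-- combined with low scan density mark the indexes to tune first.
DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES, TABLERESULTS;
```

In newer versions of SQL Server, the sys.dm_db_index_physical_stats dynamic management function returns comparable page-count and fragmentation figures.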
Third, figure out how much tuning you need and in what direction you should go, since there's no one magical value. Set the fill factor too high and you'll see page splits across the board as nearly full pages overflow. Set it too low, and you'll waste space: your database size will increase, which in turn impacts performance all over again because of the additional I/O that needs to be performed.
Randal suggests starting with 70%, using that as the default value for a week or so, and then tuning that number up or down to see what changes result. The Page Splits/Sec performance counter for SQL Server is a big help in this regard; the higher it is, the more tuning your SQL Server table indexes probably need.
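The same counter is exposed through a dynamic management view, so you can sample it from T-SQL rather than Performance Monitor:

```sql
-- Read the Page Splits/sec counter (exposed under the Access
-- Methods object). The value is cumulative since server startup,
-- so sample it twice and take the difference to get a rate; a
-- steadily high rate suggests the fill factor on your busiest
-- indexes is set too high.
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name = 'Page Splits/sec';
```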
It’s also been suggested that the fill factor can be set depending on the read/write ratio of the table in question, with low updates (and frequent reads) using a high fill factor, and vice versa. So, fill factor values lower than 60%-70% shouldn’t be used unless you have some extremely unusual usage scenario -- usually, many orders of magnitude more writes than reads.
Another suggestion is to not modify the fill factor from 0 (100%) if you are simply adding data to the end of the table. My own experience shows that a table that’s being appended can usually benefit from having a 90% fill factor -- not too major a change, just enough to give the index room to grow incrementally if needed.
SQLTeam.com's Tara Kizer has also commented on how lowering the fill factor reduces SELECT performance, since more pages must be read to return the same rows -- so again, any change to fill factor needs to be accompanied by regular audits of database performance.
ABOUT THE AUTHOR
Serdar Yegulalp has been writing about computers and IT for more than 15 years for a variety of publications, including InformationWeek and Windows Magazine.