As the databases behind websites and applications grow bigger and busier, application architects and developers often need to scale the database quickly, before heavy traffic makes the application unusable.
One of the best ways to avoid this scenario is to think big from the beginning and avoid storing everything in a single database. Classifying data into logical groups and storing each group in a database separated by function will make scaling easier later.

A well-designed application should handle errors gracefully and, in addition to displaying an error to the user, log the error to a database. Many websites also track how visitors use the site, such as what they click on, so there are many events an application must save to the database. In big, busy applications, this can add up to millions of rows very quickly.

Over the years I have noticed that, in many cases, even commercial applications save this type of data in the same database that drives the application. As a result, logging and event-capturing data contributes heavily to database growth, adding to the cost of hosting and maintaining these databases.
Separating logging and event-capturing types of data from the data used by database applications gives you several advantages:
- If the database server becomes too busy to handle the load, you can move logging databases to another server. Moreover, you can spread logging databases across multiple servers to spread the load even further.
- This architecture makes it easier to move logging and event-capturing data into a central data warehouse and periodically purge the data from the logging database.
- Keeping this data separate keeps the application database smaller, and therefore easier to manage and administer, and easier to copy to a development, QA or staging environment.
- When a database server becomes busy, applications tend to log more errors due to timeouts and other issues on that server. If your application writes those error logs to the same server, it adds even more load and can potentially make the problem worse.
- If you are running logging databases on separate servers, you might be able to use the Standard Edition of SQL Server, since in many cases you will not need to have the same high availability as you might need with databases behind your applications.
This architectural approach to event logging also allows you to centralize logging data from multiple applications in a single place. Rather than going to multiple places to analyze data captured by your applications, you could write all logs to one central location and analyze the data there. I like the idea of keeping multiple, cheap logging servers and then pulling data daily into a data warehouse. This architecture makes it easy to add logging servers to the mix and also gives you the option to redirect logging from one server to another, just by modifying a connection string in the application that uses the logging framework. Since each event type may need to capture different pieces of information, you could initially store part of the data as an XML document. Then, as you move your data to the data warehouse, you can parse the XML nodes based on the event type and extract the data into relational tables that are easier to query and report on.
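The XML-shredding step can be sketched as follows. This is an illustration only: the event types (`click`, `timeout`) and payload shapes are hypothetical, and Python's standard `xml.etree.ElementTree` stands in for whatever XML parsing your warehouse load process uses:

```python
import xml.etree.ElementTree as ET

# Hypothetical raw events as they might sit in the logging database:
# an event type plus an XML payload whose shape varies by type.
raw_events = [
    ("click",   "<event><page>/home</page><button>signup</button></event>"),
    ("timeout", "<event><query>GetOrders</query><ms>30000</ms></event>"),
]

def shred(event_type: str, payload: str) -> dict:
    """Parse the XML payload into a flat record based on event type,
    ready to insert into a per-type relational table in the warehouse."""
    root = ET.fromstring(payload)
    if event_type == "click":
        return {"page": root.findtext("page"), "button": root.findtext("button")}
    if event_type == "timeout":
        return {"query": root.findtext("query"), "ms": int(root.findtext("ms"))}
    raise ValueError(f"unknown event type: {event_type}")

rows = [shred(t, p) for t, p in raw_events]
```

The flexible XML column keeps the logging write path simple; the per-type flattening happens later, on the warehouse side, where query performance matters.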
In terms of writing code for logging errors, you do not need to reinvent the wheel. There are several open source, publicly available libraries you can download and incorporate into your solution. One example is Log4Net, a .NET library ported from the Apache log4j logging framework. It is fairly easy to use and provides a good starting point. You can use it as is or extend it with your own code to handle your logging requirements.
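Log4Net itself is a .NET library, so to keep the examples in one language, here is the same pattern sketched with Python's standard `logging` module: a custom handler that turns each log record into a row in the logging database (log4net fills this role in .NET with its database appender). The table name `app_log` is my own:

```python
import logging
import sqlite3

class DatabaseHandler(logging.Handler):
    """Minimal database appender: each log record becomes a row
    in the logging database."""

    def __init__(self, conn: sqlite3.Connection):
        super().__init__()
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS app_log "
            "(level TEXT, logger TEXT, message TEXT)"
        )

    def emit(self, record: logging.LogRecord) -> None:
        # The with-block commits the insert on success.
        with self.conn:
            self.conn.execute(
                "INSERT INTO app_log (level, logger, message) VALUES (?, ?, ?)",
                (record.levelname, record.name, record.getMessage()),
            )

logger = logging.getLogger("orders")
logger.setLevel(logging.ERROR)
db = sqlite3.connect(":memory:")
logger.addHandler(DatabaseHandler(db))
logger.error("database timeout after %d ms", 30000)
```

Swapping the connection passed to the handler is all it takes to point logging at a dedicated server, which is exactly the flexibility the architecture above relies on.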
Based on my experience, I would also advise you to develop a purging strategy right from the start. If you capture a lot of data, you can accumulate terabytes of logging data very quickly, making it harder to manage and purge. One solution is to classify your events, decide how long you want to keep each type of logged data, and then implement a purging job that deletes expired rows based on the thresholds you have defined. In some cases, you may need to keep only aggregations of the data, such as how many times per day someone clicked a button or how many database timeouts occurred during a certain time period.

This architecture provides several advantages, among them spreading the load, centralized reporting and simplified high availability requirements. Following these steps will lead to improved SQL Server database performance.
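The per-type purging job described above can be sketched as follows. The retention thresholds, the `event_log` table layout and the event type names are all hypothetical; in production this would run as a scheduled job against the logging server:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical retention thresholds, in days, per event type.
RETENTION_DAYS = {"click": 30, "timeout": 90}

def purge_expired(conn: sqlite3.Connection, now: datetime) -> int:
    """Delete rows older than their event type's retention threshold.
    Returns the number of rows removed."""
    deleted = 0
    for event_type, days in RETENTION_DAYS.items():
        cutoff = (now - timedelta(days=days)).isoformat()
        cur = conn.execute(
            "DELETE FROM event_log WHERE event_type = ? AND logged_at < ?",
            (event_type, cutoff),
        )
        deleted += cur.rowcount
    conn.commit()
    return deleted
```

If only aggregations must survive, a preceding step can roll rows up into a daily-counts table before this delete runs.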