SQL Server business intelligence (BI) systems can demand a lot of computing horsepower, and most organizations deploy as much as they can when first getting a system up and running. Unfortunately, BI makes its users want more. More reports. More dashboards. More data.
All that means you’ll eventually need another more: computing power. In other words, you’ll need to scale your BI system to handle more. There are two broad techniques for scaling: up and out.
Scaling BI out
Scaling out means enlisting more servers to support the overall system. Typically, you do that by looking at the specific BI services and moving some of them to different machines. For example, perhaps your BI system serves different audiences within your organization; each could now benefit from having its own servers dedicated to its BI tasks.
Or perhaps your BI system comprises many different components that can be separated out onto different servers. Exactly what’s possible depends a lot on how that BI system was architected and how your users are utilizing it. Make no mistake: Scaling out can be painful. It almost always involves some re-architecting, and with some systems it may not be practical.
Scaling BI up
That’s why scaling up SQL Server is often an organization’s first choice. Scaling up simply means moving the BI system to a bigger, more powerful server or upgrading the existing server. When you scale up, some general guidelines will come in handy.
- BI systems that rely heavily on in-memory analysis will get the biggest benefit from more memory and more processor power. Generally speaking, the more random-access memory (RAM) you can throw at such a system, the better. When it comes to processors, more is generally preferable to faster. In other words, four processor sockets sporting four cores apiece is better than a smaller number of sockets or cores running at slightly higher gigahertz apiece.
Unfortunately, servers rarely have room for more processors and rarely even accept faster processors. A processor upgrade generally involves a whole new server, and that gets expensive. Start then by considering a RAM upgrade, since servers are usually a bit more flexible when it comes to adding more memory.
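The "more cores beats faster cores" guideline can be sketched with a simple parallel-speedup model. This is purely illustrative: the 90% parallel fraction and the clock speeds below are assumptions for the sake of the example, not measurements from any real BI workload.

```python
# Illustrative only: a simple Amdahl's-law-style comparison of "more,
# slower cores" versus "fewer, faster cores" for a mostly parallel
# BI workload. The parallel fraction (90%) and clock speeds are
# assumed values, not benchmarks.

def relative_throughput(cores: int, ghz: float,
                        parallel_fraction: float = 0.9) -> float:
    """Estimated throughput relative to a single 1 GHz core."""
    serial = 1 - parallel_fraction
    speedup = 1 / (serial + parallel_fraction / cores)
    return speedup * ghz

many_slow = relative_throughput(cores=16, ghz=2.4)  # 4 sockets x 4 cores
few_fast = relative_throughput(cores=4, ghz=3.0)    # fewer cores, higher clock

print(f"16 cores @ 2.4 GHz: {many_slow:.1f}x")
print(f" 4 cores @ 3.0 GHz: {few_fast:.1f}x")
```

Under these assumptions the 16 slower cores come out well ahead, which is why socket and core count usually matter more than a few hundred extra megahertz for in-memory analysis.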
- BI systems that rely heavily on a data warehouse tend to be most affected by disk performance. Getting faster disk drives, that is, disks with a higher rotational speed, will make a difference, but often not a huge one. Instead, try to get more disks. Disk speed usually comes down to how fast magnetic bits can be read off the spinning platters, so more platters -- more physical drives in an array -- usually means more bits coming off in parallel.
That said, there’s a limit to how much even a decent array can pump out. So you might also want to look into solid-state disk (SSD) caching systems. The most effective ones I’ve seen take the form of a piece of software running on your database server, paired with SSDs -- connected via SAS or a PCI Express expansion card -- in the 150 GB to 300 GB range. I’ve seen that simple (and inexpensive -- think around $8,000 per database server) upgrade deliver as much as three times faster performance under the same workload.
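The "more disks beats faster disks" point is easy to see with back-of-the-envelope arithmetic. The per-drive throughput figures and the striping-overhead factor below are rough assumptions for illustration, not vendor specifications.

```python
# Back-of-the-envelope sketch of why adding spindles beats buying a
# faster drive for sequential warehouse scans. Per-drive MB/s figures
# and the striping-overhead factor are assumed, not measured.

def array_throughput_mb_s(drives: int, per_drive_mb_s: float,
                          striping_efficiency: float = 0.9) -> float:
    """Rough aggregate sequential read throughput for a striped array."""
    return drives * per_drive_mb_s * striping_efficiency

# One fast 15K RPM drive vs. an eight-drive array of slower 10K drives
fast_single = array_throughput_mb_s(drives=1, per_drive_mb_s=200)
slow_array = array_throughput_mb_s(drives=8, per_drive_mb_s=150)

print(f"Single fast drive:  {fast_single:.0f} MB/s")
print(f"Eight slower drives: {slow_array:.0f} MB/s")
```

Even with generous numbers for the single drive, the eight-spindle array wins by a wide margin -- which is the whole argument for adding platters rather than chasing rotational speed.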
In BI, bigger is usually better
No matter which kind of BI system you have -- data warehouse, in-memory analytics or a hybrid system that uses both -- more computing resources will usually deliver improved performance and the ability to handle a greater workload: more processors, more disks, more memory.
If getting more means buying a whole new server, buy big. Make sure you’ve got capacity for a whole lot of memory and get as many processor cores as you can afford installed in your new server. An investment here will pay off in the form of a longer-lasting server with the ability to scale up to meet ever-higher demand.
ABOUT THE AUTHOR
Don Jones is a co-founder of Concentrated Technology LLC, the author of more than 30 books on IT and a speaker at technical conferences worldwide. He can be reached through his website at www.ConcentratedTech.com.