Many small- and medium-sized businesses (SMBs) go with business intelligence (BI) systems that are at least partly prebuilt and based on SQL Server. That means that the biggest single factor in the system’s performance -- the database schema -- is often already decided and can’t readily be changed.
The benefit of that approach is that you’re not paying for a custom data warehouse design, which can be incredibly expensive and time-consuming to create. The downside is that you’re getting no input into the most critical aspect of BI performance. What you can do, however, is make sure that the hardware your SQL Server BI system runs on is up to the challenge.
The worst thing you can do is plan to run your SQL Server BI system on a server that’s already doing something else. Just forget about it. A truly massive, hyper-powerful, fully loaded server (without storage) might run you $30,000. If that’s too much, start compromising, using recommendations from your BI system vendor to determine which corners you can cut (hint: slightly slower processors are preferable to fewer processor sockets and cores). Whatever you do, dedicate the entire machine to the BI system.
You see, most BI systems already include multiple components: a database engine, an analysis engine, a Web server and more. Loading your own applications onto the server asks too much of it, and BI performance will suffer.
Some BI systems -- and these are the ones to look for -- let you spread their various subcomponents across multiple servers. If BI performance is a real concern, choose a BI system that permits that kind of distribution. You’ll get more servers on the job, and that equals better performance.
RAM, RAM, RAM, RAM, RAM
Up to a third of the cost of a new server might be random-access memory, or RAM -- or just memory to most of us. That’s the third where you don’t want to cut corners. The database engine that powers a SQL Server BI system needs that memory, and it needs it in a bad way. A 25% reduction in installed memory can easily cut system performance in half, depending on the database engine, BI platform and other components involved.
If you can’t afford to fill your new BI server with memory, at least select a server with plenty of room to expand. Whatever memory you do buy with the server should be as dense as possible: If you’re equipping the machine with 32 GB of RAM, do so in as few memory modules as possible, leaving empty slots for later expansion.
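The arithmetic behind that advice can be sketched quickly. The slot count and module sizes below are illustrative assumptions, not figures from any particular server:

```python
# Hypothetical sketch: why denser memory modules preserve expansion headroom.
# TOTAL_SLOTS and the module sizes are assumed examples, not vendor specs.

TOTAL_SLOTS = 8  # a common DIMM slot count on a two-socket SMB server

def free_slots(module_gb, target_gb=32, total_slots=TOTAL_SLOTS):
    """Slots left empty after reaching the target capacity."""
    modules_needed = target_gb // module_gb
    return total_slots - modules_needed

print(free_slots(16))  # 32 GB as 2 x 16 GB modules -> 6 slots free for upgrades
print(free_slots(4))   # 32 GB as 8 x 4 GB modules -> 0 slots free; expanding
                       # later means throwing modules away
```

Same capacity either way on day one; only the dense configuration leaves room to grow.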
Most BI systems targeted at the SMB market these days rely heavily on in-memory analysis engines instead of, or as a major companion to, a more traditional data warehouse database. In-memory analysis means exactly what it says: Analysis is conducted in real time, on the fly, in the server’s memory. That means your server will need tons and tons of -- you guessed it -- memory.
Disks -- fast and plenty
SQL Server BI systems that rely on a data warehouse (and nearly all of them do to some extent, often to a major one) need their disk performance to be superb. That means more disks, not bigger ones. It’s the inverse of memory: If you want 1 TB of disk space, you don’t want it on a single 1 TB drive -- a dozen 100 GB drives would be preferable. That’s because each disk has physical constraints on its performance, so by spreading your data across more disks, you multiply the work they can do in parallel.
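A rough back-of-the-envelope calculation shows the effect. The per-drive IOPS figure below is an assumed ballpark for a spinning disk, not a measured spec:

```python
# Sketch of why many small disks beat one big disk for random I/O.
# IOPS_PER_DRIVE is an assumed ballpark figure, not vendor data.

IOPS_PER_DRIVE = 150  # rough order of magnitude for one spinning disk

def array_iops(drive_count, iops_per_drive=IOPS_PER_DRIVE):
    """Aggregate random IOPS when data is striped evenly across drives."""
    return drive_count * iops_per_drive

print(array_iops(1))   # one 1 TB drive: ~150 IOPS
print(array_iops(12))  # twelve ~100 GB drives: same capacity, ~12x the IOPS
```

Capacity is identical in both cases; only the number of spindles servicing requests changes.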
Choose disks based on rotational speed and average seek time -- faster speeds and smaller times are better. Don’t worry about transfer speeds, the number disk vendors like to tout; with high rotational speed and fast seek times, you generally get great transfer speeds as part of the package.
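Why those two numbers matter can be shown with a little arithmetic: a random I/O costs one seek plus, on average, half a platter rotation. The drive figures below are illustrative assumptions, not specs for any particular model:

```python
# Illustrative arithmetic: estimating per-disk random IOPS from rotational
# speed and average seek time. The RPM and seek figures are assumed examples.

def avg_rotational_latency_ms(rpm):
    """On average the head waits half a revolution for the sector."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

def random_iops(rpm, avg_seek_ms):
    """One random I/O costs a seek plus the average rotational wait."""
    service_time_ms = avg_seek_ms + avg_rotational_latency_ms(rpm)
    return 1000 / service_time_ms

# A fast drive (15K RPM, 3.5 ms seek) vs. a slow one (7.2K RPM, 8.5 ms seek):
print(round(random_iops(15_000, 3.5)))  # ~182 IOPS
print(round(random_iops(7_200, 8.5)))   # ~79 IOPS
```

Note that raw transfer rate never enters the formula -- for the random I/O patterns a warehouse generates, seek and rotation dominate.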
ABOUT THE AUTHOR
Don Jones is a senior partner and principal technologist for strategic consulting firm Concentrated Technology LLC. Contact him through www.ConcentratedTech.com.