Running SQL Server in the cloud can give users of the database software more flexibility and scalability than on-premises deployments offer -- and maybe save them some money, too. But things often go wrong with cloud systems, making a continued focus on high availability and disaster recovery a must.
That's the message delivered by David Bermingham, senior technical evangelist at SIOS Technology Corp., in a webinar produced by the SQL Server cluster management software company. Bermingham detailed a set of SQL Server high availability best practices for cloud users, while also comparing and contrasting the high availability features offered by Microsoft Azure and the AWS and Google clouds.
A solid high availability strategy can help create an efficient workflow, reliable uptime and effective disaster recovery procedures for cloud-based SQL Server systems, Bermingham said. The alternative is less pleasant: database downtime, lost data and other problems that can cause big headaches for SQL Server database administrators (DBAs).
Here are some of the SQL Server high availability best practices that Bermingham recommended.
Read the fine print before using a cloud platform
Azure's service-level agreement (SLA) promises just 22 minutes of database downtime per month. However, that only applies when two or more instances of SQL Server are grouped together in an availability set consisting of multiple virtual machines (VMs), according to Bermingham. If you set up only one instance on its own, you could end up with nothing but "a dial tone," he said. "It's ping-able, it's online, but it doesn't guarantee that the storage is available."
In its SLA for SQL Server systems, AWS promises a tantalizing 99.99% uptime -- no more than 4.5 minutes of downtime per month, Bermingham said. But similar to Azure's SLA, the uptime guarantee only applies to two or more database instances that are paired with one another, he added.
Google Cloud Platform likewise promises only 4.5 minutes of database downtime per month, Bermingham said. However, he noted that the definition of downtime in Google's SLA is a "loss of external connectivity or persistent disk access for all running instances" in a deployment with instances placed across two or more zones in the same compute region.
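As a quick sanity check, an SLA uptime percentage can be converted into a monthly downtime budget; the 22-minute and 4.5-minute figures above correspond to roughly 99.95% and 99.99% uptime, respectively (the quoted numbers round up slightly, depending on the assumed month length). A minimal sketch:

```python
# Convert an SLA uptime percentage into a monthly downtime budget.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_budget_minutes(uptime_pct: float) -> float:
    """Maximum minutes of downtime per month allowed by an SLA."""
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

# A 99.95% SLA allows about 21.6 minutes of downtime per month.
print(round(downtime_budget_minutes(99.95), 1))
# A 99.99% SLA allows about 4.3 minutes per month.
print(round(downtime_budget_minutes(99.99), 1))
```

The gap between the two tiers is the practical argument for pairing instances: the stricter guarantee only applies once the deployment meets the multi-instance conditions described above.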
Be prepared for system outages
While the cloud can be a useful resource, it's important to recognize that cloud services can -- and sometimes do -- fail. Because the cloud isn't infallible, the SQL Server high availability best practices outlined by Bermingham include having a contingency plan in the event of an outage.
"You've got to make sure you have redundant internet access," Bermingham said. "Plan on how your applications are connecting to SQL [Server] and how your clients are connecting to your applications."
An unexpected loss of internet service is a single point of failure that users will need to find a way to cope with or work around, he added.
Another thing to take into consideration is the use of multiple availability zones for a deployment of SQL Server in the cloud. Because it's entirely possible for an availability zone to go completely dark, Bermingham recommended deploying databases across zones to help compensate for a full outage in one.
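To illustrate the kind of client-side contingency planning Bermingham describes, here is a minimal sketch of connection logic that falls back to a standby endpoint in another availability zone when the primary is unreachable. The endpoint names and the `connect` callable are hypothetical stand-ins for a real database driver, not part of the webinar.

```python
from typing import Any, Callable, Sequence

def connect_with_failover(endpoints: Sequence[str],
                          connect: Callable[[str], Any],
                          attempts_per_endpoint: int = 2) -> Any:
    """Try each endpoint in order (primary first, then standbys in
    other availability zones) and return the first live connection."""
    last_error: Exception | None = None
    for endpoint in endpoints:
        for _ in range(attempts_per_endpoint):
            try:
                return connect(endpoint)
            except ConnectionError as err:
                last_error = err  # remember the failure, move on
    raise ConnectionError(f"all endpoints unreachable: {list(endpoints)}") from last_error
```

In practice, SQL Server clients get similar behavior from listener-aware connection settings in Microsoft's drivers (for example, the MultiSubnetFailover keyword), but the principle is the same: the application, not just the database, must know where the standbys live.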
Bermingham also stressed the importance of putting a disaster recovery plan in place, as data in the cloud can be lost in a wide variety of ways -- from the catastrophic to the mundane. Natural disasters like fire and flooding may destroy or damage the physical servers that hold important data, but "most of the outages we've seen have been due to some sort of human error," he said.
Take advantage of high availability tools
When implementing SQL Server high availability best practices for cloud-based systems, DBAs must take steps to ensure that important data remains available for use no matter what.
"If you're running SQL Server in the cloud, then you're still managing [it], and you need to make sure it stays online," Bermingham said in the webinar, which was posted on the MSSQLTips.com website. He pointed to high availability features in SQL Server that can help DBAs do so in both on-premises and cloud systems -- for example, Always On Availability Groups and Always On Failover Cluster Instances.
Introduced in SQL Server 2012, Always On Availability Groups tie a set of primary databases to as many as eight corresponding sets of secondary databases that are configured to fail over together if an outage occurs. Because SQL Server is running on each instance, the technology enables a very quick failover, Bermingham said.
He added that page repairs can be done automatically without the use of third-party products, and that DBAs are able to do backups and run reports on the secondary systems. Bermingham noted, however, that Always On Availability Groups only protect user databases, not the system databases used to manage and maintain SQL Server.
Always On Failover Cluster Instances use Microsoft's Windows Server Failover Clustering (WSFC) feature to provide high availability protection for an entire SQL Server instance. A failover cluster instance is deployed across multiple WSFC nodes for redundancy in case of failures. As a result, DBAs aren't required to manage availability or maintain passwords and usernames across multiple instances, Bermingham said.
Each cloud platform provider also offers options to manage storage availability in SQL Server systems, he said. For example, Microsoft offers Azure Managed Disks, a managed storage service released in 2017. Among other features, Azure Managed Disks reduces potential storage outages for Azure VMs in an availability set by spreading data across different storage units.