Have you wanted to build an availability group but didn’t know where to start? 
If so, we have discounted SQL Server Availability Group training for you!

Nashville, TN learn how to implement and monitor Always On Availability Group Solutions!

Join us for pre-conference training brought to us by SQL Saturday Nashville! John Sterrett will be presenting a half-day precon in Murfreesboro, TN on Thursday, January 11, 2018, from 1:00 pm to 4:30 pm!

SQL Server Availability Group Training in Nashville, TN

In this half-day session, you will learn how to build your first availability group while also learning how availability groups work with other components like active directory, storage, and DNS. You will walk away with a checklist to help your future deployments while also learning how to implement, monitor, troubleshoot and use availability groups.

In this session we will cover:

  • Understanding the difference between Availability Groups and Failover Cluster Instances
  • Configuring Windows Server Failover Clustering (WSFC)
  • Understanding quorum in WSFC
  • Pre-staging Active Directory objects
  • Learning how Availability Groups use DNS
  • Building Availability Groups
  • Implementing planned-downtime failovers
  • Troubleshooting common Availability Group problems
  • Proactively monitoring Availability Groups
  • Backing up your Availability Group databases
  • Managing connectivity
  • Handling SQL Agent jobs
  • Making SQL Server Reporting Services highly available using Availability Groups

Space is limited, so act fast and register here now!

Come for the SQL Server Availability Group Pre-con and stay for the full day of free SQL training on Saturday!

SQLSaturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence, and Analytics. Join us on Jan 13, 2018, at Middle Tennessee State University (MTSU), 1301 East Main Street, Murfreesboro, Tennessee 37132.

Nashville Availability Group Training

Nashville SQL Peeps get Your Learn On!

In some DBA circles, backups are just as popular as politicians! However, recoverability is the most important task for database administrators. SQL Server 2017 added many great features like graph support, SQL Server on Linux, and more, but today I want to focus on two small underdog features that might be game changers for how you do backups.

SQL Server Backups are often as popular as politicians.

Smart Differential Backups

Databases are getting bigger, not smaller, and more storage capacity is needed for their backups. Backup compression might even hurt your storage capacity, depending on your storage. Today, I am seeing more backup policies that combine full and differential backups with transaction log backups. Differential backups are used to avoid taking a full backup every day, and people typically schedule them on fixed time increments. It is very common to see automated jobs that take a weekly full and daily differentials to reduce the storage capacity needed for backups.
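For context, here is a rough sketch of what that common time-based schedule looks like in T-SQL. The database name and file paths are made up for illustration:

[code language="sql"]
-- Hypothetical weekly full backup (for example, Sunday night)...
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH CHECKSUM, STATS = 10;

-- ...and a daily differential the rest of the week, which only contains
-- extents changed since that last full backup.
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_DIFF.bak'
WITH DIFFERENTIAL, CHECKSUM, STATS = 10;
[/code]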

How often does your data change? Is the rate of change very consistent or does it change depending on the week?  Let’s assume this week it’s Tuesday and over 80% of your data pages have changed. You are not benefiting from taking daily differentials for the rest of the week. The opposite goes for data that doesn’t change that often.  Maybe you can save a lot of space by doing less frequent full backups.

Leveraging smart differential backups could greatly reduce your storage footprint and potentially reduce the time it takes to recover.

In SQL Server 2017 you can see exactly how many pages have changed since your last full backup. This can be leveraged to determine whether you should take a full or a differential backup, and backup solutions and backup vendors will be able to get smarter because of it.

[code language="sql"]
SELECT CAST(ROUND((modified_extent_page_count * 100.0) / allocated_extent_page_count, 2) AS decimal(6,2)) AS 'DiffChangePct',
       modified_extent_page_count,
       allocated_extent_page_count
FROM sys.dm_db_file_space_usage;
GO
[/code]
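As a rough sketch of how you might act on that number, the following hypothetical logic takes a differential while the changed-page percentage stays under a made-up 70% threshold and switches to a full backup once it crosses it. The threshold, database name, and paths are my own assumptions, not a recommendation:

[code language="sql"]
-- Run in the context of the database you are backing up (SQL Server 2017+).
DECLARE @DiffChangePct decimal(6,2);

SELECT @DiffChangePct = CAST(ROUND(SUM(modified_extent_page_count) * 100.0
                                   / SUM(allocated_extent_page_count), 2) AS decimal(6,2))
FROM sys.dm_db_file_space_usage;

IF @DiffChangePct < 70.00  -- illustrative threshold; tune it for your environment
    BACKUP DATABASE [SalesDB]
    TO DISK = N'D:\Backups\SalesDB_DIFF.bak'
    WITH DIFFERENTIAL, CHECKSUM;
ELSE
    BACKUP DATABASE [SalesDB]
    TO DISK = N'D:\Backups\SalesDB_FULL.bak'
    WITH CHECKSUM;
[/code]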

Smart Transactional Log Backups

 The time your users are offline while you are recovering to the point of failure is critical. It could be the difference between keeping and losing customers.  Point-in-time recovery is mandatory for a critical database.  Transactional log backups have to be restored in order.

Recovery Point Objectives (RPO) drive how often you take transactional log backups.  If you have a policy that says you can only lose ten minutes of data, you need transactional log backups every ten minutes. Is this really true if there were no changes? What if your RPO is driven by the amount of data loss and not the time of the loss?  Either way, you can now control when transactional log backups occur based on the amount of data that has changed since the last transactional log backup.

[code language="sql"]
SELECT s.name AS 'DatabaseName',
       dls.log_since_last_log_backup_mb,
       dls.log_truncation_holdup_reason,
       dls.active_vlf_count,
       dls.active_log_size_mb
FROM sys.databases AS s
CROSS APPLY sys.dm_db_log_stats(s.database_id) AS dls;
[/code]
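As a sketch of what a “smart” log backup job could look like, the following hypothetical logic only takes a log backup once more than a made-up 100 MB of log has been generated since the last one. The threshold, database name, and path are assumptions for illustration:

[code language="sql"]
-- SQL Server 2017+: only take a log backup once enough log has been generated.
DECLARE @LogSinceLastBackupMB float;

SELECT @LogSinceLastBackupMB = dls.log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'SalesDB')) AS dls;

IF @LogSinceLastBackupMB > 100  -- illustrative 100 MB threshold
    BACKUP LOG [SalesDB]
    TO DISK = N'D:\Backups\SalesDB_LOG.trn'
    WITH CHECKSUM;
[/code]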

This post was written by John Sterrett, CEO & Principal Consultant for Procure SQL.  Sign up for our monthly newsletter to receive free tips.  See below for some great related articles.


Last week we covered five reasons why log shipping should be used.  I got a great question that I thought should be answered here in a short blog post.  The question is “how do you configure transactional log shipping to include compression when taking the transactional log backups?”

The Missing Question

First, a better question is “should we be compressing our SQL Server Backups?” In a second we will address log shipping, but first I wanted to just focus on compressing backups. The answer is “it depends” (typical DBA response).  There is CPU overhead for utilizing compression. We will not focus too much on this here as you can enable resource governor to limit CPU usage for your backups.
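If you do go down the Resource Governor route, a minimal sketch might look like the following. The pool, group, and application names are made up, and the classifier assumes your backup jobs connect with a recognizable application name, so treat this as a starting point rather than a finished implementation:

[code language="sql"]
-- Hypothetical sketch (run in master): cap CPU for sessions that identify
-- themselves as backup jobs. Pool, group, and application names are made up.
USE master;
GO
CREATE RESOURCE POOL BackupPool WITH (MAX_CPU_PERCENT = 25);
CREATE WORKLOAD GROUP BackupGroup USING BackupPool;
GO
CREATE FUNCTION dbo.fnBackupClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- Route anything connecting with the assumed application name "BackupJob"
    -- into the capped workload group; everything else stays in default.
    IF APP_NAME() = N'BackupJob'
        RETURN N'BackupGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnBackupClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
[/code]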

Today, we will focus on capacity and on the misleading storage-savings results that can come from enabling backup compression. In a traditional DBA role, you might only have visibility into your server, its drives, and the size of your backup files. From that view, taking a compressed backup means you will likely see less storage used for the backup file compared to other backups of the same database. That is typically the goal of backup compression: reducing the storage space consumed by backups.

Another question you should be asking yourself is, “What storage system is being used for your SQL Server backups?” For example, a storage area network (SAN) might have its own compression, and native SQL Server backup compression can reduce the effectiveness of the SAN’s compression, which could cause more raw storage to be used. Therefore, there really isn’t a silver bullet that says to always use SQL Server backup compression in every environment. You need to understand whether there is any compression built into the storage used for your backups, and how backup compression impacts that storage system, before deciding to utilize backup compression.
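One way to check what native compression is actually buying you, before worrying about the storage layer underneath, is to compare the raw and compressed sizes recorded in msdb backup history. A quick sketch:

[code language="sql"]
-- Compare raw vs. compressed sizes for recent backups recorded in msdb.
SELECT TOP (20)
       bs.database_name,
       bs.backup_start_date,
       bs.backup_size / 1048576.0            AS backup_size_mb,
       bs.compressed_backup_size / 1048576.0 AS compressed_size_mb,
       bs.backup_size * 1.0 / NULLIF(bs.compressed_backup_size, 0) AS compression_ratio
FROM msdb.dbo.backupset AS bs
ORDER BY bs.backup_start_date DESC;
[/code]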

Compress Log Shipping Backup

Okay, now that we have that disclaimer about a common real-world mistake out of the way, here is how you would enable backup compression for transactional log shipping.

You can access Log Shipping Settings from Database Properties

Log Shipping Backup Settings on Primary

Here are Log Shipping backup compression options.

Enabling Compression with T-SQL Backups

Enabling backup compression with T-SQL is as simple as just adding the compression option.  The following is a common example of backing up a database with compression.

[code language="sql"]
BACKUP DATABASE [LSDemo]
TO DISK = N'c:\somewhere\LSDemo.bak'
WITH COMPRESSION
[/code]

Enable Compression by Default

If all of your backups for an instance of SQL Server go to storage where utilizing native SQL Server compression provides improvement, then you can consider enabling compression by default. Therefore, all backups taken on this instance of SQL Server would then use backup compression by default unless another option was given during the T-SQL execution of the backup command.

[code language="sql"]
EXEC sys.sp_configure N'backup compression default', N'1'
GO
RECONFIGURE WITH OVERRIDE
GO
[/code]
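A quick way to confirm the new default took effect is to check sys.configurations:

[code language="sql"]
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'backup compression default';
[/code]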

SQL Server comes with several options to keep your data highly available. Today we are going to cover five reasons why transactional log shipping should be one of the best tools in your SQL Server DBA tool belt.

Validates Transactional Log Backups

Your last line of defense in any SQL Server disaster plan is recovering backups. To be able to recover to the point of failure you have to leverage transactional log backups while using the full or bulk-logged recovery model. Every transactional log backup has to be restored one by one, in sequence, after your last full and differential backups to recover to the point of failure. Wouldn’t it be nice if you could automate restoring every log backup so you knew you had good backups? Transactional log shipping is the easiest way to complete this task. It also gives you a copy you can easily bring online without having to restore all of your backups.
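For reference, this is the restore order that log shipping automates for you. A minimal sketch, with made-up backup file names, that keeps the copy ready to accept more log backups:

[code language="sql"]
-- Restore the full backup and then each log backup, in order, leaving the
-- database able to accept further log restores. File names are made up.
RESTORE DATABASE [LSDemo]
FROM DISK = N'c:\somewhere\LSDemo.bak'
WITH NORECOVERY;

RESTORE LOG [LSDemo] FROM DISK = N'c:\somewhere\LSDemo_log1.trn' WITH NORECOVERY;
RESTORE LOG [LSDemo] FROM DISK = N'c:\somewhere\LSDemo_log2.trn' WITH NORECOVERY;

-- Only when you actually need the copy online:
-- RESTORE DATABASE [LSDemo] WITH RECOVERY;
[/code]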

Secondary Copy of Transactional Log Backups

We already mentioned that the last line of defense in any disaster recovery plan is restoring backups. Wouldn’t it be nice if you had copies of your transactional log backups on multiple servers? Maybe even in multiple data centers? Transactional log shipping helps automate this as well: the second step in transactional log shipping is copying your transactional log backups. Then you just have to make sure your full and differential backups are copied too.

Recover Data Quickly for Very Large Tables

Have you ever forgotten to add or highlight the WHERE clause while running a delete or update statement? If you haven’t, do not be surprised if it happens in the future, to you or to someone else in your company. This can be an extremely painful recovery process when it happens against tables with terabytes of data. With the other SQL Server high availability features, like mirroring, availability groups, and failover cluster instances, the change is replicated almost instantly, so you cannot use those copies to recover. You now have bad data that is highly available.

With transactional log shipping, you can delay the restores, so if you discover data was changed by mistake you can disable the SQL Agent jobs and pull the missing data in parallel with implementing your plan to recover the rest of the data from the last restore. This can bring most of your data online quickly while you are restoring a very large backup. The best part is that you don’t have to recover your log shipping copy. You can put it in standby mode and read data while it’s still in a state where you can continue to restore transactional log backups (more on this below).

Poor Person’s Reporting Copy

Wouldn’t it be nice to have a readable copy of your database that you could use for reporting, even if you are on Web Edition? Transactional log shipping gives you this ability, even though you might not know it exists. When you are restoring the transactional log backups with transactional log shipping, you have two options. The transactional log backups can be restored in standby mode, which allows you to read data in between transactional log backup restores. You can also use the default, no recovery, which doesn’t allow any read access between restores. This is just like the mirror copy in database mirroring (without the snapshot copy available on Enterprise Edition).

There is one big catch with using standby as your recovery option. You have to choose whether to delay restores while an active session exists or to terminate the connections to force the transactional log backup restore.
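For reference, here is a minimal sketch of the two restore modes described above, using made-up file names. STANDBY keeps the copy readable between restores by writing the undo of uncommitted work to a separate file:

[code language="sql"]
-- NORECOVERY (the default): no read access between restores.
RESTORE LOG [LSDemo]
FROM DISK = N'c:\somewhere\LSDemo_log1.trn'
WITH NORECOVERY;

-- STANDBY: readable between restores; uncommitted work is rolled back into an
-- undo file so later log backups can still be applied.
RESTORE LOG [LSDemo]
FROM DISK = N'c:\somewhere\LSDemo_log2.trn'
WITH STANDBY = N'c:\somewhere\LSDemo_undo.tuf';
[/code]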

Improve Downtime for Migrations and Upgrades

When you migrate a database or upgrade using the side-by-side option, you have to have another copy of the database. Ideally, the source and destination would be identical at the point of implementing the migration. With very large databases this could require a very long outage, so backup and restore, or detach and attach, are not good options unless you have a very large maintenance window. Seeding the data so that only a small amount of changed data needs to be synced at cutover is ideal. Log shipping is an excellent way to do this, especially if you have multiple copies that need to be synced for implementing availability groups. Here is how we made a 60TB database highly available for an Availability Group implementation with zero downtime: Example

My New Log Shipping Feature Request 

I know, you are already thinking that log shipping has been around forever and doesn’t need any new features, but I have one. In SQL Server 2016, direct seeding (automatic seeding) was added to streamline how the initial sync occurs for Availability Groups. I would like to see this same feature extended to log shipping so we have an easy, automated option to perform the initial sync for transactional log shipping.
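For comparison, this is roughly what that Availability Group seeding syntax looks like today (the availability group and replica names are hypothetical):

[code language="sql"]
-- On the primary replica (hypothetical availability group and replica names):
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SEEDING_MODE = AUTOMATIC);

-- On the secondary replica, allow the AG to create the seeded databases:
ALTER AVAILABILITY GROUP [MyAG] GRANT CREATE ANY DATABASE;
[/code]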

Curious about the basics of SQL Server High Availability?  Please join our founder and MVP, John Sterrett, where he will be presenting SQL Server High Availability 101 on Tuesday, July 18th, 2017 at the North Austin SQL Server User Group.  

[Update – Sept 5th]

You can now watch the full presentation online. You can also see some reference links below. If you have any questions, feel free to contact us!

Links:

High Availability Reference Material

Abstract:

Here is a quick synopsis of what will be discussed:

Have you ever wondered how companies keep their databases online through unplanned disasters? Have you wondered which high availability features might be best for your shop? In this session, we will take a quick look at log shipping, database mirroring, transactional replication, failover cluster instances, and availability groups. John will identify pros and cons for each feature and share some tips from the field.

You are also invited to meet afterward for networking and cocktails!  It is a great opportunity to learn, connect with fellow SQL Server professionals, and build solid relationships for your future.  We cannot wait to see you there!

The most important task a DBA can perform is Recovery.

The work of a SQL Server DBA is ever-changing; data is fluid, and accordingly, so is the manner in which data is treated.  Thankfully there are a vast number of ways to keep up with the changes a DBA faces in his/her career.  There are various blogs, hashtags, local PASS chapter meetings, SQL Saturdays, and a host of people online willing and able to help.

I love a challenge, so this month’s blog invitation, T-SQL Tuesday #85 – Backup and Recovery hosted by Kenneth Fisher (b|t), is right up my alley, as this is the first thing I am learning as a new DBA!

In a recent Twitter poll, John Sterrett asked what a DBA’s favorite job is.

Favorite job as a DBA

It is clear from the answers that backups and restores are not near the top of the list of favorites. Why is that? Well, because it is the simplest job that can be performed, and probably the least “sexy” of all the things a DBA does. It does not require any special tools or shiny new toys; backup and restore is the most basic of the basics when learning to be a DBA.

Backups are essential to a successful Restore.  Imagine you were asked to recover data that was never backed up…ever… as in never, ever, NEVER.  That feeling you have crawling up your spine right now, that is fear, anxiety, and panic.  If you don’t care for that feeling, you need to learn more about BACKUPS and RESTORES.

If it is so fundamental, why is recovery the most important job of a DBA?  Very simply, backups are the foundation of a disaster recovery plan; however, they are useless if you cannot recover with minimal data loss.

The three key things I have learned while studying backups and restores are:

  1. If you have no restore plan, your database and any and all backups are WORTHLESS.
  2. If you have no automation process in place, you should start planning a new career.
  3. If #1 and #2 are ignored, know where to find a good lawyer.

If you do not understand why backups are so important, think about dropping your phone in water… and then it being eaten by an alligator.  Do you have your contacts, photos, passwords, banking information, etc., backed up to the cloud?  No?  My friend, now you understand why backups are so very important!  Don’t be that person who stands crying in a swamp  because an alligator is digesting your data!

Help! A gator ate my data!

DBAs should know by heart the various kinds of backups, how they are used, exactly what they do, and when they should be performed.  The good DBA knows that installing an automated process to perform backups is the key to a long and successful career.  Also, testing, testing, testing is KEY.  Backups and Restores should be the first things taught to a junior DBA, accidental DBA, or a DBA in training.
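A minimal sketch of what “testing your backups” can look like in practice, with made-up file and logical names; the real test is restoring a copy and checking it, not just verifying the file:

[code language="sql"]
-- Cheap sanity check: is the backup file readable and do the checksums match?
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH CHECKSUM;

-- The real test: restore a copy under a different name and check it.
-- Logical file names (SalesDB, SalesDB_log) are assumptions.
RESTORE DATABASE [SalesDB_RestoreTest]
FROM DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_RestoreTest.mdf',
     MOVE N'SalesDB_log' TO N'D:\Data\SalesDB_RestoreTest_log.ldf',
     RECOVERY;
[/code]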

Hindsight is 20/20, so the saying goes.  Perhaps that is the reason so many DBAs skip learning backups and restores.  We don’t always know there is a need for something until there is a dire need for something.  Perhaps this is one reason all my beginning DBA books cover all the “fun stuff” first and throw in the backups and restores somewhere near the end of the book.  Case in point, I provide you with two examples of critical recovery failures.

Well I have Found the Quickest Way to Get Sacked

Childcare App Wipes Users’ Data

Backups and Recovery are so very important, that is why I am learning this first as a new DBA.  I am studying a great book by John Sterrett (b/t)  and Tim Radney (b/t) titled SQL Server 2014 Backup and Recovery.  I strongly suggest everyone get a copy and read this book.

Please come back; this is the first in a series of blog posts regarding backups and restores. See you next time, when we begin to discuss types of backups and restores in depth!

Thank you for reading!