
Tuning select statements is less complicated than tuning update statements because your code changes do not modify the data in your tables.  Tuning update statements, on the other hand, can be tricky because you are changing data while you are testing.  Below are several ways to streamline tuning update statements so you can quickly get back to your starting point and be sure you are comparing apples to apples.

You are out doing your scheduled sweeps of the environment and you notice that one of your biggest I/O consumers is an update procedure.  Your initial thought may be to panic: how can you test against data that is changing?  Luckily, there are several methods for tuning queries that modify data, all of which allow the data to revert between tests.

NOTE: We always recommend tuning all queries in non-production until we have our final tested solution to implement in production.

Tuning Update Statements with Rollback Transaction

One of the more basic methods for executing queries without permanently changing data is to wrap the statements you are running in a BEGIN TRANSACTION and a ROLLBACK TRANSACTION.  This allows everything to execute and gives you the performance statistics you are looking for, while leaving the data intact for repeat testing.

Using an explicit transaction (BEGIN TRANSACTION with ROLLBACK or COMMIT TRANSACTION) to revert a change made to data
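
Here is a minimal sketch of that pattern; the table, column, and filter are hypothetical placeholders:

-- Capture I/O and timing statistics so each test run can be compared
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

BEGIN TRANSACTION;

UPDATE dbo.Sales            -- hypothetical table and filter
SET DiscountPct = 0.10
WHERE SaleDate < '2017-01-01';

-- Review the statistics and execution plan, then revert the data for the next test
ROLLBACK TRANSACTION;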

Tuning Update Statements using Database Backups

You have a couple of options with your standard SQL Server backups to help revert changes made in the testing environment, and the same concepts apply to your basic DR strategy.  If the test is short, you can take a full backup before making a change and restore that backup to restart the test.

Using a full backup to revert a data change while tuning
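
A minimal sketch of this approach, with a hypothetical database name and backup path:

-- Capture the starting point before the test
BACKUP DATABASE [TuningTest]
TO DISK = 'C:\DBA\Backups\TuningTest_BeforeTest.bak'
WITH INIT;

-- ... run the update statement you are tuning ...

-- Revert to the starting point to repeat the test
USE [master];
RESTORE DATABASE [TuningTest]
FROM DISK = 'C:\DBA\Backups\TuningTest_BeforeTest.bak'
WITH REPLACE;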

If you are making a series of changes over a period of time and don’t want to completely restart your test, you can utilize point in time recovery.  Once your database is in the full recovery model, you can start with a full backup and take log backups at set intervals, or before different pieces of your change.  This allows you to restore to a specific point in your test.

Using a full backup with transaction log backups for point in time recovery while tuning
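
A minimal sketch of that workflow; the names, paths, and STOPAT time are hypothetical:

-- Requires the FULL recovery model
BACKUP DATABASE [TuningTest] TO DISK = 'C:\DBA\Backups\TuningTest_Full.bak' WITH INIT;

-- ... run the first piece of your change, then take a log backup ...
BACKUP LOG [TuningTest] TO DISK = 'C:\DBA\Backups\TuningTest_Log1.trn' WITH INIT;

-- ... run the next piece, then another log backup ...
BACKUP LOG [TuningTest] TO DISK = 'C:\DBA\Backups\TuningTest_Log2.trn' WITH INIT;

-- Restore back to a specific point within the test
USE [master];
RESTORE DATABASE [TuningTest] FROM DISK = 'C:\DBA\Backups\TuningTest_Full.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG [TuningTest] FROM DISK = 'C:\DBA\Backups\TuningTest_Log1.trn' WITH NORECOVERY;
RESTORE LOG [TuningTest] FROM DISK = 'C:\DBA\Backups\TuningTest_Log2.trn' WITH STOPAT = '2018-02-23 08:45:00', RECOVERY;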

Tuning Update Statements with Database Snapshot

Database Snapshots are a great way to quickly revert large databases to an initial state.  A snapshot keeps a record of database pages as they change, along with a copy of each page’s original version, so only the changed extents need to be reverted instead of restoring the whole database.  A snapshot is not a full copy of the database: the original versions of changed pages are stored in a sparse file, and everything else points back to the source database.  Because of this, the source database must be available for a snapshot to be usable.  That makes snapshots a poor DR solution, but great for analysis, auditing, and tuning situations.

Using Database Snapshots (SQL Server 2016) to revert a data change while tuning
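
A minimal sketch of the snapshot approach; the database, logical file name, and paths are hypothetical:

-- Create the snapshot (the logical file name must match the source database's data file)
CREATE DATABASE TuningTest_Snap
ON ( NAME = TuningTest_Data,
     FILENAME = 'C:\DBA\Snapshots\TuningTest_Snap.ss' )
AS SNAPSHOT OF TuningTest;

-- ... run the update statement you are tuning ...

-- Revert the source database to the snapshot's state
USE [master];
RESTORE DATABASE TuningTest
FROM DATABASE_SNAPSHOT = 'TuningTest_Snap';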

Summary

There are many ways to quickly get back to your original state to allow for quick, repetitive testing of queries that change data.  A huge benefit of the methods I listed today is that they also let you document where transactions and backups occur, so you can open a single .sql file and have all the needed steps in one document.  Do you use other ways to revert changes?  Maybe temporal tables?  I’d be happy to cover that in a post in the near future.  Tell me all about your methods in the comments below!


Great question!

     Before I answer this right off, ask yourself these questions: “Do I want to lose access to all other databases on the instance?  What would happen if I lost the model my company demands every new database be created from?  Would anyone notice if I had to restore after a disaster and no one had the correct passwords to the databases?”  That shiver that just ran up your spine is your answer.  Absolutely YES, the system databases (Master, MSDB, and Model) should have backups!

Master

     It is recommended that the Master be backed up as often as necessary to protect the data: a weekly backup, with additional backups after substantial updates, is highly recommended.  If for some reason the Master becomes unusable, restoring from a backup is the best way to get up and running.  Go here for information on Restoring the Master.  If you do not have a valid backup of the Master, rebuilding the Master is the only option.  You can click here to find more information about what it takes to Rebuild System Databases.

Model

       Best practices recommend creating FULL backups of the Model database, and doing so only when necessary for your business needs.  It is a small database that rarely sees changes; however, it is important to make sure it is backed up, especially immediately after customizing its database options.

MSDB

     Microsoft recommends backing up the MSDB database whenever it is updated.
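
A minimal sketch of those three backups (the paths are hypothetical, and the statements can be scheduled as a SQL Agent job):

BACKUP DATABASE [master] TO DISK = 'C:\DBA\Backups\master.bak' WITH INIT;
BACKUP DATABASE [model] TO DISK = 'C:\DBA\Backups\model.bak' WITH INIT;
BACKUP DATABASE [msdb] TO DISK = 'C:\DBA\Backups\msdb.bak' WITH INIT;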

TempDB

      What about TempDB?  Isn’t it a system database as well?  Despite the importance of backups and recovery, the only database that cannot be backed up or restored is TempDB!  Why not?  TempDB is recreated each time the server is restarted, so any temporary objects like tables and indexes are cleared automatically.  As seen here, neither backup nor recovery is even an option!

“But, backups take so long to run!”

    Feeling like this might be too much trouble?  As with any other backup, these too can be automated by using a SQL Agent job!   There is absolutely no reason NOT to back up your system databases.  If you feel otherwise, might I suggest you keep an updated resume close at hand.

Corruption.  We know it is everywhere.  It is surely a hot-button issue in the news.  If you haven’t given much thought to database integrity, now is the time to sit up and pay attention.  Corruption can occur at any time.  Most of the time database corruption is caused by a hardware issue.  No matter the reason, being proactive on database integrity will ensure your spot as a hero DBA in the face of corruption.

“A single lie destroys a whole reputation of integrity” – Baltasar Gracian

Integrity is one of those words people throw around quite often these days.  The definition of ‘integrity’ is the quality of being honest and having strong moral principles.  How can data have strong moral principles?  Does ‘data Integrity’ mean something different?  Yes, data integrity refers to the accuracy and consistency of stored data. 

Have Backups, Will Travel

When was the last time an integrity check was run on your database?  If you are sitting there scratching your brain trying to find the answer to that question, you may have serious issues with your database and not know it.

“But, I have solid backup plans in place, so this means I am okay.  Right?”

While having a solid backup and recovery plan in place is an absolute must, you may just have solid backups of corrupt data.  Regular integrity checks will test the allocation and structural integrity of the objects in the database.  This can test a single database, multiple databases (does not determine the consistency of one database to another), and even database indexes.  Integrity checks are very important to the health of your database and can be automated.  It is suggested to run the integrity check as often as your full backups are run. 

As discussed in an earlier blog, Validating SQL Server Backups, your data validation needs to take place BEFORE the backups are taken.  A best practice is to run a DBCC CHECKDB on your data to check for potential corruption.  Running CHECKDB regularly against your production databases will detect corruption quickly, thus providing a better chance to recover valid data from a backup or to repair the corruption.  CHECKDB will check the logical and physical integrity of the database by running these three primary checks*:

  • CHECKALLOC – checks the consistency of the database;
  • CHECKTABLE – checks the pages and structures of the table or indexed view; and
  • CHECKCATALOG – checks catalog consistency. 

Where to Look

Wondering if you have missing integrity checks or if they have ever been performed on your database?  The following T-SQL script will show when/if integrity checks were performed on your databases.  (Bonus) Running this script regularly will help track down missing integrity checks.

If you are looking for the last date the DBCC checks ran, the T-SQL script to use is as follows:


IF OBJECT_ID('tempdb..#DBCCs') IS NOT NULL
    DROP TABLE #DBCCs;

CREATE TABLE #DBCCs
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    ParentObject VARCHAR(255),
    Object VARCHAR(255),
    Field VARCHAR(255),
    Value VARCHAR(255),
    DbName NVARCHAR(128) NULL
);

/* Check for the last good DBCC CHECKDB date */
BEGIN
    EXEC sp_MSforeachdb N'USE [?];
        INSERT #DBCCs
            (ParentObject, Object, Field, Value)
        EXEC (''DBCC DBInfo() With TableResults, NO_INFOMSGS'');
        UPDATE #DBCCs SET DbName = N''?'' WHERE DbName IS NULL;';

    WITH DB2
    AS ( SELECT DISTINCT
                Field,
                Value,
                DbName
         FROM #DBCCs
         WHERE Field = 'dbi_dbccLastKnownGood'
       )
    SELECT @@SERVERNAME AS Instance,
           DB2.DbName AS DatabaseName,
           CONVERT(DATETIME, DB2.Value, 121) AS DateOfIntegrityCheck
    FROM DB2
    WHERE DB2.DbName NOT IN ( 'tempdb' );
END

The result will look similar to this.  However, let’s hope your results show a date closer to today’s date than my own!  If you see that your databases do not have integrity checks in place, check your backup and recovery plans and double check your agent jobs to see if perhaps the checks were scheduled but were turned off.

Exact Date Last Integrity Check

Recommendations

It is recommended that DBCC CHECKDB be run against all production databases on a regular schedule.  The best practice is to have this automated and scheduled as a SQL Agent job to run as a regular part of maintenance; more specifically, run the integrity check directly before purging any full backups.  Doing so will ensure that corruption is detected quickly, which gives you a much better chance to recover from your backup or to repair the corruption.
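
A minimal sketch of that scheduled check, against a hypothetical database name:

-- Suppress informational messages and return all error details
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;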

Remember, SQL Server is very forgiving and will back up a corrupt database!  Corrupt databases are recoverable, but they might have data pages that are totally worthless!

    Full disclosure time: the Bulk Logged Recovery Model is quite confusing to me.  And, as it seems, to many others.  I wrote a bit about it in SQL Server Recovery Models and decided that it was so complex, I really wanted to learn more and to explore what works and what doesn’t.  Let’s take a deep dive into bulk logged recovery!

Deep Dive into Bulk Logged Recovery

Why would you choose Bulk Logged Recovery?

    Switching from full recovery to bulk logged recovery does have its perks when you have a very large amount of data to insert.  Most notably, in a data warehouse setting, switching to bulk logged recovery to perform bulk inserts makes perfect sense, as you are dealing with very large amounts of data being updated at one time.  Switching to bulk logged recovery can also improve performance when rebuilding indexes, since those operations also touch large amounts of data at once.

Are there better ways to insert a large amount of data at once?

     Bulk Logged Recovery uses minimal logging for bulk-logged operations, which reduces log space usage.  I must add a caveat here: it makes the operation faster and reduces usage in the log file, but it results in a very large log backup.

“Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large.” Reference Myths & Truths

     However, there is a risk of data loss for the bulk operations, because these operations prevent capturing changes on a transactional basis.  Point in time (PIT) recovery is not possible for a log backup that contains bulk logged operations, because the minimally logged operations cannot be replayed to an arbitrary point.  This can be an issue.  So, if you have a bulk operation that needs to be handled, but you want to ensure point in time restore of each transaction in that operation, what is an alternative solution?  It is important to note that you can indeed recover a transaction log containing bulk logged operations, just not to a particular point in time.  Instead, you can take a transaction log backup as soon as the bulk operation is finished and regain PIT recovery.

     You can still perform bulk operations in the full recovery model; it just means that they will be fully logged and that you will not see the performance gains from minimal logging.  It is the price you have to pay: you sacrifice performance for PIT restore abilities of the transactions within the bulk operation.  Ultimately, your decision will have to be based on weighing what your company demands for I/O and RPO/RTO (Recovery Point Objective/Recovery Time Objective).  Do you know what your company’s RPO/RTO plans entail?  Now is a good time to find out!

     Feeling uneasy?  Wondering if there are other ways to process bulk operations?

     There are different methods one can utilize for optimizing bulk operations.  These methods include using minimal logging, batching, disabling triggers and constraints, and many others that can be found here.
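
As one hedged example, here is a minimal sketch of the batching method; the table, filter, and batch size are hypothetical:

DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    -- Process the data in small chunks to keep each transaction (and the log) manageable
    DELETE TOP (@BatchSize)
    FROM dbo.StagingData
    WHERE LoadDate < '2018-01-01';

    IF @@ROWCOUNT < @BatchSize BREAK; -- last batch processed
END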

How best to ensure limited data loss using Bulk Logged Recovery

    So, you have decided to use bulk logged recovery and you want to make sure that you are set up for success; there are a few things to keep in mind.  It is recommended that you perform bulk inserts using bulk logged recovery when there is the least amount of activity on your database.  Also take into consideration how difficult or easy it will be to recreate data if there is a failure during the bulk insert.  There is no PIT restore of the bulk operation using bulk logged recovery; if the bulk operation is interrupted at any point, the entire operation must be performed again in its entirety.

Still want to proceed?

Wait!   

First, before you switch from full recovery, take an extra log backup.  If all goes badly, at least you will be able to get your database back to the point before you switched recovery models.  This is highly recommended!  If not, this is what we call an RGE (resume generating event).

     Let’s walk through the process of getting ready and switching recovery models.  Our first step in this exercise is to create a table.  We then go on to taking a log backup, inserting data manually, taking a log backup, and then on to switching to bulk logged recovery.


--Step 1--

Use BLRDB
GO

IF OBJECT_ID('dbo.BulkDataTest') IS NOT NULL DROP TABLE dbo.BulkDataTest;

CREATE TABLE dbo.BulkDataTest
(Price money NULL,
ProductID int PRIMARY KEY NOT NULL,
ProductName varchar (25) NOT NULL,
ProductDescription text NULL)
GO

-- NOTE: log backups require an existing full backup; this walkthrough assumes a full
-- backup was already taken to C:\DBA\Backups\BLRDB_Full_02232018.bak (restored later)
BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'

-- Step 2--
Insert into dbo.BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('456', '456123', 'HeroBike', 'Red Bike with Hero Cape Handles');

-- Step 3 --

BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'

-- Step 4 --
-- Switch to Bulk Logged Recovery Model --
USE [master]
GO
ALTER DATABASE [BLRDB] SET RECOVERY BULK_LOGGED WITH NO_WAIT
GO
-- Verify recovery model: confirm the database is now in BULK_LOGGED --
SELECT name, recovery_model_desc FROM sys.databases;

Our next steps will be to insert our bulk data, insert manual data, take log backups, switch back to Full Recovery, and take an additional log backup.

-- Step 5 --
-- Commit both transactions below at the same time--
USE [BLRDB]
GO

BULK INSERT BulkDataTest
FROM 'C:\DBA\TestDocs\demo_bulk_insert_26.csv'
With (FIELDTERMINATOR = ',' ,
ROWTERMINATOR = '\n' ,
ROWS_PER_BATCH = 100000,
TABLOCK
);
GO

INSERT INTO BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('1099', '1111111', 'HoverCraft', 'BippityBoppityBoop');

Select *
From dbo.BulkDataTest

-- Step 6 --
--take log backup--

BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'

-- Step 7 --
--Insert more rows manually--

INSERT INTO dbo.BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('56', '111117', 'TheCheap', 'One of the cheapest bikes ever made'),
('58' , '111118', 'NewerModel', 'This one is for beginners'),
('591' , '111119', 'ABetterOne', 'Okay this one is good') ;

-- Step 8 --
-- Switch back to Full Recovery Mode--
USE [master]
GO
ALTER DATABASE [BLRDB] SET RECOVERY FULL WITH NO_WAIT
GO

Use BLRDB
GO
-- Step 9 --
--Insert more rows manually--
INSERT INTO dbo.BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('36', '111120', 'BoyBike', 'This is a bike for tall 8yo'),
('136', '111121', 'ManBike', 'This is a bike for tall men'),
('236', '111122', 'ShortBike', 'This is a bike for under 5foot');

-- Step 10 --
--Take Log Backup--
BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0930.bak'

Ready for a challenge?

Now let’s simulate a dropped database and walk through the restore!  The following steps will let you answer the questions below.

Question 1: Restore the backups through step 6.  What is missing?  Do you have the single row inserts?  Is the data from the bulk insert there?


--Drop Database--
USE Master
GO

DROP DATABASE [BLRDB]

--Restore Full Backup --

RESTORE DATABASE BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Full_02232018.bak'
WITH NORECOVERY;

-- Log backups are applied with RESTORE LOG, not RESTORE DATABASE
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
WITH NORECOVERY;

RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'
WITH NORECOVERY;

RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'
WITH NORECOVERY;

RESTORE DATABASE BLRDB
WITH RECOVERY;

use [BLRDB]
go

SELECT *
FROM dbo.BulkDataTest

--For Question 1, Restore Step 6 --

USE Master
GO

RESTORE DATABASE BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Full_02232018.bak'
WITH NORECOVERY;
GO

RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
WITH NORECOVERY;
GO

-- Log backups must be applied in sequence; the 0815 and 0900 backups complete the chain to step 6
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'
WITH NORECOVERY;
GO

RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'
WITH NORECOVERY;
GO

RESTORE DATABASE BLRDB WITH RECOVERY;
GO

use [BLRDB]
go

SELECT *
FROM dbo.BulkDataTest

Question 2: Restore through the backup at step 10.  What is missing?  Do you have everything?


-- For Question 2, Restore Step 10 --
-- Drop DB--
USE Master
GO

DROP DATABASE [BLRDB]
GO

USE Master
GO

RESTORE DATABASE BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Full_02232018.bak'
WITH NORECOVERY;
GO

-- Log backups are applied with RESTORE LOG, not RESTORE DATABASE
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
WITH NORECOVERY;
GO

RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'
WITH NORECOVERY;
GO

RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'
WITH NORECOVERY;
GO

RESTORE LOG BLRDB
FROM DISK = 'C:\DBA\Backups\BLRDB_Log_02232018_0930.bak'
WITH RECOVERY;

USE BLRDB
GO

SELECT *
FROM dbo.BulkDataTest

Swimming Back Up

     We have taken the plunge and now it is time to swim back to the surface and discuss what we have found.  Working through this exercise and answering these questions should show you how bulk logged recovery model works, why it is important to switch back to full recovery, and most importantly, why full and log backups are a must!

     What are your answers?  Your answers should show that if you are only able to restore to step 6 (the point at which you took a log backup but had not yet switched back to full recovery and taken another log backup), all data operations performed after your bulk operation have to be recreated!  The second answer should show you that it is imperative to take a log backup before switching to bulk logged recovery, take a log backup after your bulk insert, and take another log backup after reverting back to full recovery.  If you follow those steps, your answer shows that all of your data operations were in fact logged and can be restored up to the last log backup without data loss.

      You have your SQL Server Backup Plan and your Database Recovery Model set.  How do you know if your Backups are good?  TEST!  Validating SQL Server Backups will ensure that you are in a good place when it is time to bring your database back from the dead!  

Don’t assume that your Backups are solid and let them sit on a shelf.  Corrupt backups are recoverable, but worthless.

 There are several methods for validating your Backups.

  • RESTORE – The most effective way to validate that your backups are good is to run a test Restore.  If your Restore is successful, you have a solid backup.  Make sure to run a test restore of your Full, Differential, Point in Time, and Transaction Log backups!  “Bonus points” if you automate refreshing non-production.
  • Backup with CHECKSUM – It may not be realistic to run regular test restores on every single database; this is where CHECKSUM is your friend.  CHECKSUM is part of a backup operation which instructs SQL Server to test each page being backed up against its corresponding checksum, making sure that no corruption has occurred during the read/write process.  If a bad checksum is found, the backup will fail.  If the backup completes successfully, there are no broken page checksums.
    • BEWARE though, this does not ensure that the database is corruption free; CHECKSUM only verifies the pages as they are read and the backup stream as it is written, so it is not a substitute for regular integrity checks.  (Later in this post we discuss checking data for corruption.)  If it seems like too much trouble to write a CHECKSUM script every time you want to perform a backup, keep in mind that these can be automated as SQL Agent Jobs!  A sample T-SQL script for using CHECKSUM is as follows:


Backup Database TestDB
To Disk = 'G:\DBA\Backups\TestDBFull_MMDDYYYY.bak'
With CheckSum;

  • VERIFY – It is not wise to rely solely on CHECKSUM; a good addition is to use RESTORE VERIFYONLY.  This will verify the backup header and that the backup file is readable.  Note that much like CHECKSUM, this will check for errors during the read/write process of the backup; however, it will not verify that the data itself is valid or not corrupt.  Despite the name “RESTORE VERIFYONLY”, it does not actually restore the data.  VERIFY too can be automated to run each time your backups utilizing CHECKSUM run.
  • CHECKSUM on Restore – Databases where BACKUP WITH CHECKSUM has been performed can then be additionally verified as part of the restore process.  This will check the data pages contained in the backup file and compare them against the CHECKSUM generated during the backup.  Additionally, if available, the page checksums can be verified as well.  If they match, you have a winner…
    More Details on CHECKSUM and BACKUP CHECKSUM


Restore VerifyOnly
From Disk = 'G:\DBA\Backups\TestDBFull_MMDDYYYY.bak';
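
For the CHECKSUM on Restore option above, here is a minimal sketch (it assumes the backup was taken WITH CHECKSUM; the path and database name are placeholders):

-- Verifies the backup checksum and page checksums while restoring;
-- REPLACE overwrites the existing TestDB
Restore Database TestDB
From Disk = 'G:\DBA\Backups\TestDBFull_MMDDYYYY.bak'
With CheckSum, Replace;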

Data Validation Prior to Taking Backups

    Keep in mind that if your data is corrupt prior to a backup, SQL Server can BACKUP that CORRUPTED DATA.  The validation methods mentioned above guard you against corruption occurring during backups, not against corrupted data within the backup.  For data validation prior to backups being run, it is suggested that DBCC CHECKDB be performed on each database on a regular basis.

  • DBCC CHECKDB – SQL Server is very forgiving and will usually back up and restore corrupted data.  A best practice is to run a DBCC CHECKDB on your data to check for potential corruption.  Running CHECKDB regularly against your production databases will detect corruption quickly, thus providing a better chance to recover valid data from a backup or to repair the corruption.  CHECKDB will check the logical and physical integrity of the database by running these three primary checks*:
      • CHECKALLOC – checks the consistency of the database;
      • CHECKTABLE – checks the pages and structures of the table or indexed view; and
      • CHECKCATALOG – checks catalog consistency.

 Automate Validation Steps  

    Corruption can happen at any time, most of the time it is related to a hardware issue.  Automating the steps necessary to validate your data and backups will help ensure you have the best practices in place to efficiently recover from catastrophic data loss.  Being able to backup and restore is not as important as being able to recover with valid data.   Despite the above keys for validation, the only true way to verify that your backups are valid is to actually restore the database.  It bears repeating: corrupt backups are recoverable, but worthless.  

*A full list of DBCC CHECKDB checks can be found here.

In my last post (A Beginner’s Guide to SQL Server Backups) we discussed the basics of SQL Server Backups.  As Backups are the foundation for Recovery, the next logical discussion should be Recovery Models.  For reference, I have included this Sample Backup Plan from the earlier post:

Sample Backup Plan

There are Three SQL Server Recovery Models

Simple Recovery Model

This is the most basic of the recovery models.  It gives you the ability to quickly recover your entire database in the event of a failure (or if you need to restore your database to another server); however, it only recovers data to the end of the last backup.

Thankfully, Differential Backups can be utilized with this recovery model.  NOTE: Any changes made after the last Full or Differential Backup will be lost.  Transaction Log Backups cannot be used in the Simple Recovery Model, so there is no point-in-time recovery (recovering to a specific transaction or point in time).  Looking at the sample backup plan above: if an issue occurs Saturday afternoon, you will lose any data changes since your last Differential Backup, which ran Friday night.  If you do not opt for Differential Backups and only perform Full Backups once a week, your changes and data will be lost for the full week.  (GASP!)
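
A minimal sketch of such a Full-plus-Differential schedule in the Simple Recovery Model; the database name and paths are hypothetical:

ALTER DATABASE TestDB SET RECOVERY SIMPLE;

-- Weekly full backup (e.g., Sunday night)
BACKUP DATABASE TestDB TO DISK = 'C:\DBA\Backups\TestDB_Full.bak' WITH INIT;

-- Nightly differential: only the extents changed since the last full backup
BACKUP DATABASE TestDB TO DISK = 'C:\DBA\Backups\TestDB_Diff.bak' WITH DIFFERENTIAL, INIT;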

Why would you ever consider using such a basic recovery model?  Actually, there are a few really good (and perfectly safe) reasons why you might choose to use the Simple Recovery Model:

  • Your data is not critical and can easily be recreated
  • The database is only used for test or development
  • Data is static and does not change
  • Losing any or all transactions since the last backup is not a problem

Full Recovery Model

This is the most inclusive of the Recovery models; you can recover (depending on your valid backups) data up to the last transaction that was run before the failure. Log backups are a must for this recovery model, including Tail Log Backups (which will be discussed at length in a later post).  Data can be restored to a specific point in time (once again, depending on your backup plan)!  

  • Let’s explore that for a moment using the sample backup plan above.  Say you take Full Backups on Sunday night, Differential Backups every weeknight, and Transaction Log Backups every half hour.  If you have a failure and must restore data, by using all the backups up to the specific time you need, you can recover data to any moment in time!  Using this recovery model, you have the potential of minimal to no data loss (once again, depending on your backup plan and whether your backups are restorable).  You want to be the DBA who is able to restore quickly and recover data to the very last moment before the failure or crash occurred.  YES, we all want to be THAT person!

Bulk Logged Recovery Model

Once again, this recovery model requires log backups in order to prevent the log file from continually growing.  This is a special purpose recovery model that is not widely used, and Microsoft states that it should only be used “intermittently to improve the performance of certain large-scale bulk operations, such as bulk imports of large amounts of data.”  It works as an adjunct to the Full Recovery model, allowing high performance bulk-copy operations.  Keep in mind that with the Bulk Logged model you can recover to the end of any backup; however, if a log backup contains bulk operations, you cannot restore to a point in time within that backup, and any bulk transactions since the last log backup will have to be redone.

OOOOOOOOOO, I dropped the ENTIRE database!

While studying Backups and Recovery, I have been working in SQL Server 2016 AdventureWorks Database (Download Here).  To see if I REALLY understood the process of Backup and Recovery, I made a Copy Only Backup of my original database, made some changes to my database, took Differential Backups, made another Full Backup, made more changes, and made more backups….and then dropped EVERYTHING.  POOF….gone.  While I did do this intentionally, my heart was racing.  Sparkly spots darted in and out of my vision, and my heart pounded in my ears.  Would I be able to get it back?  Would I be able to restore it?  WHAT HAVE I DONE??????

I Dropped the DB! Photo Credit @Lance_LT

How do I know that my backups were good?

I firmly believe you learn more from one of your failures than you do from one hundred of your successes.  TEST TEST TEST TEST those backups!  I am afforded some extra wiggle room while I am working on my virtual machine and not actually working on a client’s production database.

So, I did it, I dropped that DB like a hot rock.  I attempted to restore my database in Simple Recovery Model.  My Differentials would not restore.  Somewhere along the way in the backup or restore process, I messed up.  (I will give you a hint at my mistake: I did not restore the Full Backup with NORECOVERY before I tried to restore the Differential Backup with RECOVERY.  We will go into recovery processes in a later post.)

Knowing that I had made a Copy Only Backup before I started monkeying around with the database, I set out to restore faith in myself.  And boom, I restored the Copy Only Backup!  The day was saved!  Well, mostly.  I lost some data changes, but still had the original database, so that was good-ish.

Lessons Learned

At a later date I repeated the steps above and was able to fully recover a database that was previously in Simple Recovery Model, complete to the last Differential Backup (noting my mistake and learning from it).  I additionally created another database to test the Full Recovery Model with Full, Differential, and Transaction Log Backups, also utilizing a Tail of the Log backup to recover the very last transactions that ran after the last backup and before my data files AND database were deleted.  (This is pretty in-depth and is covered in Recovery Using Tail Log Backup.)

Please come back next time when we will explore ways to validate those backups.

Thank you for reading!

As was discussed in a previous blog, (The Most Important Role of a SQL Server DBA) Backups and Recovery are the cornerstone of all successful SQL Server DBA careers.  Exactly how much attention is paid to backups until something drastic happens?  NOT AS MUCH AS SHOULD BE!

What is a SQL Server Backup?

A backup is a copy of your data that is written to a backup device the system creates and maintains.  A backup device can be a disk, a file, a tape, or even the cloud.  Easy enough!

Backups are your Keys to Success

There are four different methods of backup in SQL Server:

  • Full Backup – A Full Backup is a copy of the entire database as it exists at the time the backup was completed.  It is all-encompassing; every detail of your database (data, stored procedures, tables, functions, views, indexes, and so forth) will be backed up.  This is why the Full Backup is the foundation of every restore sequence.  These Full Backups can be quite large and put a strain on your storage capacity.  Despite the name, a full backup will only back up the data files; the transaction log must be backed up separately.  To ensure you can recover to the point of failure quickly, you will want to also utilize Differential and Transaction Log Backups (we will cover these next).  Without a Full Backup, your Differential and Transaction Log Backups are useless, as you cannot restore a database without a Full Backup!
  • Differential Backup – Differential Backups capture only the data changed since the last Full Backup.  Simple terms: this will back up only the changes, not the entire database.  In the event of an outage, Differential Backups can greatly reduce the time to recover.  Using both Differential and Full Backups will dramatically reduce required storage space, as Full Backups can be very large and Differentials are remarkably smaller.
    • How does a Differential Backup know what data has changed?  When data on a page is changed, a flag is set indicating which extent (a collection of 8 database pages) the Differential Backup will need to back up.  If your Differential Backups run daily, each day’s data changes are recorded, and the flags are not cleared until a Full Backup has been executed.  Say on Monday you have 3 flags set, and by Thursday you have 4 more; your Differential Backup on Thursday night will contain the data changes represented by all 7 flags, because the flags from Monday are not cleared by subsequent Differential Backups.  The flags are only cleared by a Full Backup.
  • Transaction Log Backup – A Transaction Log is a physical file that stores a record of all the transactions that have occurred on a specific database; much like a recorder, it maintains a record of all modifications.  The Transaction Log Backup captures that information and then releases the space in the log file so it can be used again.  In doing this, the Transaction Log Backup truncates the information in the Transaction Log, but the data is not deleted; it is stored in the backup!  Without Transaction Log Backups, the Transaction Log will grow nonstop, chewing through valuable storage space.
    • Not only does backing up and truncating the Transaction Log manage the file size, it also decreases the potential for catastrophic data loss.  Having all the Transaction Log Backups since your last Full Backup will allow you to perform point in time restores.  Transaction Log Backups are ideally set up to execute every few minutes to every hour, depending on your company’s threshold for data loss.
  • Copy Only Backup – A Copy Only Backup does not affect the differential base.  It is a full backup in that it is a record of the database as it exists when the backup is taken; however, it does not reset the flags.  This makes it ideal for taking a full backup in the middle of the week without disrupting any other backups.  A Copy Only Backup can also be used in a Dev space for troubleshooting, or when preparing to move to a new environment.  (A short sketch of all four backup types follows this list.)
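
A minimal sketch of all four backup types; the database name and paths are hypothetical:

-- Full: the foundation of every restore sequence
BACKUP DATABASE TestDB TO DISK = 'C:\DBA\Backups\TestDB_Full.bak' WITH INIT;

-- Differential: only the extents changed since the last full backup
BACKUP DATABASE TestDB TO DISK = 'C:\DBA\Backups\TestDB_Diff.bak' WITH DIFFERENTIAL;

-- Transaction Log: captures the log records, then truncates the log
BACKUP LOG TestDB TO DISK = 'C:\DBA\Backups\TestDB_Log.trn';

-- Copy Only: a one-off full backup that does not reset the differential flags
BACKUP DATABASE TestDB TO DISK = 'C:\DBA\Backups\TestDB_Copy.bak' WITH COPY_ONLY;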

Here is an example of a solid weekly backup plan which uses Full, Differential, and Transaction Log Backups:

Sample Backup Plan

Okay, great…backed up, now what?

Okay, so now you know what SQL Server backups are, a description of each backup type, an idea of how and what they back up, and a good plan of action for creating a solid backup plan.  So, why are backups so important?  Do you know how easy it is to accidentally update or delete data?  It is just one T-SQL statement away with the wrong filter, or no filter at all.  Having good, up-to-date backups and being able to restore them is the difference between looking like a hero and being forced to find a new job.  Backups are your foundation for Recovery.

Please come back, this is the 2nd in a series of blog posts regarding SQL Server Backups and Recovery.  See you next time when we begin to discuss SQL Server Recovery Models.  Thank you for reading!

 

ProcureSQL is proud to be a first-time sponsor for Houston Techfest 2017. 

We would love to see you at this year’s Houston Techfest!  Admission is FREE!  You can register here.  The complete agenda is posted on their Facebook page here.

What is Houston Techfest? 

     It is a large community-run event focusing on SQL Server, other data solutions, security, mobile development, and agile development.  It will be held on Saturday, October 14, 2017, from 8:00 am to 6:30 pm at the NRG Center, North Stadium Drive, Houston, Texas 77054.

Did you miss seeing John at Microsoft Ignite in Orlando?  

     If you did not see John’s session at Microsoft Ignite, you are in luck!  Not only will we be sponsoring, but John Sterrett will be presenting three sessions entitled “Make Your SQL Server Queries Go Faster!” , “Learn SQL Backups and Master SQL Restores” and “Successfully Migrating Existing Databases to Azure SQL Database.”  Come join us for another great day of free education!

John Sterrett at #MSIgnite 2017

     Drop by our booth and chat with Angela and John; they will have some cool swag and a chance to win a great prize!

The most important task a DBA can perform is Recovery.

The work of a SQL Server DBA is ever-changing; data is fluid, and accordingly, so is the manner in which data is treated.  Thankfully there are a vast number of ways to keep up with the changes a DBA faces in his/her career.  There are various blogs, hashtags, local PASS chapter meetings, SQL Saturdays, and a host of people online willing and able to help.

I love a challenge, so this month’s blog invitation, T-SQL Tuesday #85 – Backup and Recovery hosted by Kenneth Fisher (b|t), is right up my alley, as this is the first thing I am learning as a new DBA!


In a recent Twitter poll, John Sterrett asked which job is a DBA’s favorite.

Favorite job as a DBA

It is clear from the answers that Backups and Restores are not among the top favorites.  Why is that?  Well, because it is the simplest job that can be performed, and probably the least “sexy” of all the things a DBA does.  It does not require any special tools or shiny new toys; Backup and Restore is the most basic of the basics when learning to be a DBA.

Backups are essential to a successful Restore.  Imagine you were asked to recover data that was never backed up…ever… as in never, ever, NEVER.  That feeling you have crawling up your spine right now, that is fear, anxiety, and panic.  If you don’t care for that feeling, you need to learn more about BACKUPS and RESTORES.

If it is so fundamental, why is recovery the most important job of a DBA?  Very simply, backups are the foundation of a disaster recovery plan; however, they are useless if you cannot recover with minimal data loss.

The three key things I have learned while studying backups and restores are:

  1. If you have no tested restore process, your database and any and all backups are WORTHLESS.
  2. If you have no automation process in place, you should start planning a new career.
  3. If #1 and #2 are ignored, know where to find a good lawyer.

If you do not understand why backups are so important, think about dropping your phone in water… and then it being eaten by an alligator.  Do you have your contacts, photos, passwords, banking information, etc., backed up to the cloud?  No?  My friend, now you understand why backups are so very important!  Don’t be that person who stands crying in a swamp  because an alligator is digesting your data!

Help! A gator ate my data!

DBAs should know by heart the various kinds of backups, how they are used, exactly what they do, and when they should be performed.  The good DBA knows that installing an automated process to perform backups is the key to a long and successful career.  Also, testing, testing, testing is KEY.  Backups and Restores should be the first things taught to a junior DBA, accidental DBA, or a DBA in training.

Hindsight is 20/20, so the saying goes.  Perhaps that is the reason so many DBAs skip learning backups and restores.  We don’t always know there is a need for something until there is a dire need for it.  Perhaps this is one reason all my beginner DBA books cover all the “fun stuff” first and throw in the backups and restores somewhere near the end of the book.  Case in point, I provide you with two examples of critical recovery failures.

Well I have Found the Quickest Way to Get Sacked

Childcare App Wipes Users’ Data

Backups and Recovery are so very important, that is why I am learning this first as a new DBA.  I am studying a great book by John Sterrett (b/t)  and Tim Radney (b/t) titled SQL Server 2014 Backup and Recovery.  I strongly suggest everyone get a copy and read this book.

Please come back, this is the 1st in a series of blog posts regarding Backups and Restores.  See you next time when we begin to discuss types of Backups and Restores in-depth!

Thank you for reading!