Once again it is PASS Summit week in Seattle. This is the biggest event in the world for SQL Server and Microsoft Data Professionals to gather to connect, share, and learn.

I will never forget being anxious and scared the first time I gave a presentation at PASS in front of hundreds of people. Therefore, one of my favorite traditions during PASS Summit is to find a first-time speaker and try to put them at ease by having their friends and peers wear something special during their first presentation.

This year, I couldn't think of a better person than Jeremy Frye. I have known Jeremy for years and have been blessed to work with him at RDX. While everyone in the community knows him as the speaker who wears a Pittsburgh Pirates hat at SQL Saturdays, he is an inspiration to me. He is proof that good guys can be successful in this community. He is one of the most humble, kind, and helpful people I know. I have been blessed to see him share his knowledge for years and am excited for everyone to do so this week as well. Therefore, to support Jeremy during his first session we will be giving out Pittsburgh Pirates hats. If you can make Jeremy's session on "Speed Up Your SSAS Data Refresh with Dynamic Partition Processing" at 11:00 am in room 604 on Friday (I like to call it FryeDay), come by the RDX booth and ask for a Pirates hat. When you see Jeremy around this week, tell him he's got this and that you cannot wait for his session!

 


With Microsoft's Ignite conference this week, a lot of new features are being advertised all over the internet. If you are following along, you have most likely heard about Big Data Clusters, Spark, performance tuning, and security features. I am really excited about a new feature that is flying under the radar, and I hope you will be as excited about it too.

 


[Updated on 10/25/2018]

With Microsoft Ignite being this week, tons of great new features are being announced. While many will focus on Spark integration, performance tuning, security, and containers, sometimes it's the simple things that give us the most value. I will talk about a feature that is hidden on page 17 of Microsoft's SQL Server 2019 whitepaper. This feature is not getting a lot of love right now, but everyone will love it. I will be honest, I didn't even know about it until reading the whitepaper. What is really super cool is that this new feature will also find its way into SQL Server 2017 and SQL Server 2016. More on this in a bit.

In the past, I have struggled with the classic string truncation error. Everyone has at one point in time or another. It's tricky to troubleshoot, especially when you are processing a big batch of data.

String or binary data would be truncated

So even in SQL Server 2019, SQL Server 2017 CU12, and SQL Server 2016 (in an upcoming SQL Server 2016 SP2 CU), by default you will see the very classic "String or binary data would be truncated. The statement has been terminated." message. Developers, DBAs, IT professionals, and almost everyone who has ever touched SQL Server has seen this frustrating error at least once. It is frustrating for two separate reasons. First, you have no idea which column had the truncation. Second, you have no idea which row caused the error.

Let’s take a look at a very simple example.

USE [tempdb]
GO

-- Drop the demo table if it already exists
IF EXISTS (SELECT 1 FROM sys.tables WHERE name LIKE 'WorldSeries')
    DROP TABLE dbo.WorldSeries
GO

CREATE TABLE dbo.WorldSeries (TeamName varchar(200), Championships varchar(3))
GO

-- Single-row insert: NULL fits fine in varchar(3)
INSERT INTO dbo.WorldSeries (TeamName, Championships) VALUES ('Texas Rangers', NULL)

-- Multi-row insert: one of these values is too long for varchar(3)
INSERT INTO dbo.WorldSeries (TeamName, Championships) VALUES ('Houston Astros', 'one'),
('Atlanta Braves', 'two'), ('Pittsburgh Pirates', 'five')
GO

You will see that our first statement, inserting the Texas Rangers, completes successfully even though the Rangers have zero (I like NULL) championships. The next batch does three inserts in a single statement and hits an error, so the whole statement rolls back. In this example, we could figure out where our error is, but let's pretend we are doing a massive data load into several columns with billions of records. Actually, I don't even want to think about troubleshooting that, and with this new hot feature in SQL Server 2019, you won't have to either.

SQL Server 2019 Feature Everyone Will Love

Okay, I think you know where this is going… Let's not prolong this any longer. We will rerun the last statement, this time adding trace flag 460.

DBCC TRACEON(460)
INSERT INTO dbo.WorldSeries (TeamName, Championships) VALUES ('Houston Astros', 'one'),
('Atlanta Braves', 'two'), ('Pittsburgh Pirates', 'five')
GO

Now it is crystal clear. The new error message tells us the Pittsburgh Pirates have won too many championships to fit into this table. Okay, I guess we can both agree the string "five" is too big to fit into a varchar(3) column.
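For reference, the improved error (message 2628) calls out the table, the column, and the offending value, so the output looks roughly like the following (the exact wording and line numbers may vary by build):

Msg 2628, Level 16, State 1, Line 2
String or binary data would be truncated in table 'tempdb.dbo.WorldSeries', column 'Championships'. Truncated value: 'fiv'.
The statement has been terminated.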

John’s Thoughts and Recommendation

All you have to do in SQL Server 2019 CTP 2.0 (the newest public release at the time of writing) is enable trace flag 460. I hope the need for trace flag 460 goes away when SQL Server 2019 is generally available. That said, enabling the trace flag is easy, and this feature is going to be a lifesaver, especially for people who consume data from other sources and struggle to troubleshoot data that doesn't fit into their data model due to the length of a value.

I am sure I am not the only person who provided feedback on this new feature to Microsoft. That said, I am really excited to see Pedro's announcement that trace flag 460 shouldn't be required once SQL Server 2019 is generally available (GA). The new error message (message 2628) will replace the old error message (message 8152). I am also excited that you will be able to use this new feature in SQL Server 2016 and 2017. It makes perfect sense for backward compatibility that you will still need to enable trace flag 460 if you are using SQL Server 2017 CU12 or SQL Server 2016. Way to go, Pedro and the SQL Server team. This is amazing news!


In my SQL Server Recovery Models blog, I touched a bit on my experience with recovery using Tail Log Backups. In this post we will take an in-depth look at Tail Log Backups: what they are, why they should be in your tool belt, and lastly the steps to successfully take a Tail Log Backup.

What is a Tail Log Backup?

Simply put, a Tail Log Backup contains log records that were not yet backed up at the time of failure. So if Transaction Log Backups occur every 15 minutes and you suffered a loss at the 11-minute mark, the Tail Log Backup includes all data changes during the span between the last successful Transaction Log Backup and minute 11. Recovery using Tail Log Backups can be performed under the Full or Bulk-Logged Recovery Model, but not under the Simple Recovery Model.

Why are these important?

Is it possible to recover with no data loss? YES*! This is where our new friend comes into action! Taking a Tail Log Backup is done to prevent data loss and helps you recover right up to the point of disaster. (This is also referred to as a Point In Time Restore.)

Keep in mind: in order to recover your database to its most recent point, you must have a valid Full Backup and valid Transaction Log Backup sequence!

After a disaster, if you can take a Tail Log Backup, have all the preceding Transaction Log Backups, have a valid Full Backup, and you are in the Full Recovery Model, it is possible to recover with NO DATA LOSS! For this to be possible in the Bulk-Logged Recovery Model, no minimally logged operations can have occurred since the last Transaction Log Backup.

In what case would you ever need a Tail Log Backup?

Any time you have a damaged database and need to restore, it is best to check whether you need a Tail Log Backup. The first question to ask is "Do I have Transaction Log Backups?" If the answer is yes, your recovery will be much faster! Another question to ask is "Is the server still available?"

Server Still Available

If the database is damaged but the server is still available, it is pretty easy to take a Tail Log Backup. When the data files are damaged or missing, you will get an error if you try to take a normal log backup, but if you use NO_TRUNCATE you will be able to take the log backup.

 
BACKUP LOG [TestDB] TO DISK = 'G:\DBA\Backups\TestDB_Log_Tail.bck'
WITH INIT,
NO_TRUNCATE;

*Note: In order to successfully take a Tail Log Backup, you must use NO_TRUNCATE. That is what allows the log backup even if the database files are damaged or missing. Using INIT will overwrite any existing backup set, so you will still end up with only one backup if the command is run twice.

Server Not Available

Let's say the server has crashed and cannot be brought back online. If you are lucky enough to still have access to all the Full Backup and Transaction Log Backup files, you can restore them on another server and recover.

If the database is damaged and the server is not available, taking a Tail Log Backup becomes a little more difficult. Rest assured, there is still an option to try.

  1. Create a dummy database with the same name as the one that is damaged.
  2. Set the dummy database offline.
  3. Delete the files from the dummy database.
  4. Drop in the log file from the real database.
  5. Take the Tail Log Backup.
--Create a dummy database with the same name
CREATE DATABASE [TestDB];
GO
--Set the database offline
ALTER DATABASE [TestDB] SET OFFLINE;
GO
--Delete the dummy files, then drop in the log file from the original database
--Take the Tail Log Backup
BACKUP LOG [TestDB] TO DISK = 'G:\DBA\Backups\TestDB_Log_Tail.bck'
WITH INIT,
NO_TRUNCATE;
GO

Now you are ready to take a Tail Log Backup as detailed above.  This will allow you to recover to the point of failure!  In my next post, we will do a deep dive into Recovery Using Tail Log Backups.
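To give you an idea of what that recovery looks like, here is a minimal sketch of a point-in-time restore sequence. The full and regular log backup file names below are made up for illustration; only the tail backup path matches the example above:

RESTORE DATABASE [TestDB] FROM DISK = 'G:\DBA\Backups\TestDB_Full.bak' WITH NORECOVERY;
RESTORE LOG [TestDB] FROM DISK = 'G:\DBA\Backups\TestDB_Log_1.trn' WITH NORECOVERY;
--The Tail Log Backup is restored last, then WITH RECOVERY brings the database online
RESTORE LOG [TestDB] FROM DISK = 'G:\DBA\Backups\TestDB_Log_Tail.bck' WITH RECOVERY;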

Thank you for reading!

I had a blast speaking at SQL Saturday Denver. To help everyone moving to SQL Server 2017, I have included my slides and demo code below. If you end up having any questions about the code or need help, feel free to contact me.

SQL Saturday Denver Bonus Content

You can also review our SQL Anti-Patterns.

The following is an Automatic Tuning in the Real-World example from one of our customers during their SQL Server 2017 upgrade.

Great question!

     Before I answer this right off, ask yourself these questions: "Do I want to lose access to all other databases on the instance? What would happen if I lost the model database that dictates exactly how every new database at my company must be created? Would anyone notice if I had to restore after a disaster and no one had the correct passwords to the database?" That shiver that just ran up your spine is your answer. Absolutely YES, the system databases (Master, MSDB, and Model) should have backups!

Master

     It is recommended that the Master database be backed up as often as necessary to protect the data: a weekly backup, with additional backups after substantial updates, is highly recommended. If for some reason the Master becomes unusable, restoring from backup is the best way to get up and running. Go here for information on Restoring the Master. If you do not have a valid backup of the Master, rebuilding it is the only option. You can click here to find more information about what it takes to Rebuild System Databases.

Model

     Best practices recommend creating FULL backups of the Model database, and doing so only when necessary for your business needs. It is a small database that rarely sees changes; however, it is important to make sure it is backed up, especially immediately after customizing its database options.

MSDB

     Microsoft recommends performing backups of the MSDB database whenever it is updated.
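If you want something quick to drop into a scheduled job, a minimal sketch of the backup commands could look like this (the backup path is just an example; point it at your own backup location and add your preferred naming and retention):

--Simple full backups of the system databases
BACKUP DATABASE [master] TO DISK = 'G:\DBA\Backups\master.bak' WITH INIT;
BACKUP DATABASE [model] TO DISK = 'G:\DBA\Backups\model.bak' WITH INIT;
BACKUP DATABASE [msdb] TO DISK = 'G:\DBA\Backups\msdb.bak' WITH INIT;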

TempDB

      What about TempDB? Isn't it a system database as well? Despite the importance of backups and recovery, the only database that cannot be backed up or restored is TempDB! Why can it not be backed up or restored? TempDB is recreated each time the server is restarted, so any temporary objects like tables, indexes, etc., are cleared automatically. As seen here, neither backup nor recovery is even an option!
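If you are curious, you can prove it to yourself. Attempting a backup of TempDB fails right away with an error along the lines of "Backup and restore is not allowed on database tempdb" (the exact message wording may vary by version):

--This will fail; TempDB cannot be backed up
BACKUP DATABASE [tempdb] TO DISK = 'G:\DBA\Backups\tempdb.bak';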

“But, backups take so long to run!”

    Feeling like this might be too much trouble?  As with any other backup, these too can be automated by using a SQL Agent job!   There is absolutely no reason NOT to back up your system databases.  If you feel otherwise, might I suggest you keep an updated resume close at hand.

In SQL Server 2016 we saw Query Store. Query Store was a game changer that helps database administrators identify troublesome queries and make them run faster. Microsoft's marketing team even jumped on to help coin the phrase, "SQL Server: It Just Runs Faster." With SQL Server 2017, this started to get even better with Automatic Tuning. Don't worry, database administrators: Automatic Tuning will enhance your career, not replace it.

SQL Server 2017 Automatic Tuning looks for queries where execution plans change and performance regresses. This feature depends on Query Store being enabled. Note that even if you don't turn on Automatic Tuning, you still get the benefit of having access to the data. That is right: Automatic Tuning will tell you what it would do if it were enabled. Think of this as free performance tuning training. Go look at the DMVs and try to understand why the optimizer would want to lock in an execution plan. We will actually go through a real-world example below.
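For reference, opting a database into the plan correction piece of Automatic Tuning is a one-liner once Query Store is on. Here is a minimal sketch you would run in the database you want to enable:

--Query Store must be enabled for Automatic Tuning to work
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
--Automatically force the last known good plan when a regression is detected
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);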

Automatic Tuning with SQL Server 2017

First, let’s take a quick look at the output of the data. You can find the query and results we will focus on below.

SELECT reason, score,
       script = JSON_VALUE(details, '$.implementationDetails.script'),
       planForceDetails.*,
       estimated_gain = (regressedPlanExecutionCount + recommendedPlanExecutionCount)
                        * (regressedPlanCpuTimeAverage - recommendedPlanCpuTimeAverage) / 1000000,
       error_prone = IIF(regressedPlanErrorCount > recommendedPlanErrorCount, 'YES', 'NO')
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON (details, '$.planForceDetails')
    WITH ( [query_id] int '$.queryId',
           [current plan_id] int '$.regressedPlanId',
           [recommended plan_id] int '$.recommendedPlanId',
           regressedPlanErrorCount int,
           recommendedPlanErrorCount int,
           regressedPlanExecutionCount int,
           regressedPlanCpuTimeAverage float,
           recommendedPlanExecutionCount int,
           recommendedPlanCpuTimeAverage float
    ) AS planForceDetails;

I will break the results down into two photos to make them fit well in this blog post.

Free Tuning Recommendations with SQL Server 2017 (1/2)

Free Tuning Recommendations with SQL Server 2017 (2/2)

Now we know that in Query Store, query_id 2271 has CPU time changing from 7,235ms to 26ms. That's a big difference. Let's take that query and look at its history by using the tracked query report inside SSMS.

Find my Changed Query. Did the plans change?

Here we can see the major difference between the two execution plans. One is averaging over 14 seconds in duration while the other is under a second.

Query Store showing the performance difference between the two plans

Now we can select both plans on the screen above and look at the execution plans side by side inside of SSMS. When doing so, we see the common example of the optimizer determining if it is better to scan an index vs a seek with a key lookup.

Using SSMS to compare the auto tuning recommended query.

To complete the example, I want to point out that Automatic Tuning would lock in the index seek plan (plan 2392). In SQL Server 2016 you can do this manually inside Query Store. With SQL Server 2017 it can be done automatically for you with Automatic Tuning. If you have ever woken up to slow performance because an execution plan changed and performance went down the drain, this might be a lifesaver.
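For those still on SQL Server 2016, the manual equivalent is forcing the good plan yourself through Query Store. Here is a minimal sketch using the query and plan IDs from this example (your IDs will differ):

--Force the known good plan (2392) for the regressed query (2271)
EXEC sp_query_store_force_plan @query_id = 2271, @plan_id = 2392;
--Later, if you want the optimizer to choose plans freely again
EXEC sp_query_store_unforce_plan @query_id = 2271, @plan_id = 2392;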

If you would like to learn about performance tuning changes in SQL Server 2016 and 2017 sign up for our newsletter or catch me talking about these features at SQL Saturday Denver and SQL Saturday Pittsburgh this month.  If you need any help with tuning or upgrading contact us. We would love to chat with you!

 


I hope everyone enjoyed my session on Making Your Queries Go Faster as much as I enjoyed planning and delivering it. If you would like to replay the demos on your own time, you can grab the slides and code. When you go through the demos, if you have any questions, give me a shout.

GroupBy Demos and Source Code

If I learned anything from Brent Ozar, it's to always provide free extra stuff when you present at conferences. Therefore, below are some bonus videos.

First, here is a Real-World look into Automatic Tuning

Finding Top Offenders with Extended Events

Benchmarking a Query

Scalar Functions vs Table-Valued Functions

The Procure SQL team is out and about in September, sharing their knowledge for free at SQL Saturdays as well as online webinars!  We are very proud of our team and their efforts to share knowledge and experiences.  Be sure to say hello and ask questions when you see them!

GroupBy September 7, 2018

We are proud to announce that Martin Perez, close friend of Procure SQL, will be joining John Sterrett in presenting for GroupBy.Org! GroupBy is FREE online SQL Server training by the community, for the community! This month's training is scheduled for September 7, 2018, beginning at 8:00 AM Eastern, with the last session starting at 2:15 PM!

Registration is free at GroupBy.Org!

Martin Perez (11:45 AM Eastern)

Martin will be presenting "T-SQL Tools: Simplicity for Synchronizing Changes" at 11:45 AM Eastern.
Any DBA or developer who has ever faced the problem of moving data changes across databases will surely benefit from this session. One of the most common problems a DBA or database developer may face is finding and syncing data changes efficiently between multiple systems. He will explore different methods focusing on effective use of T-SQL, and how these methods can be mixed and matched in your ETL workflows:

  • Brute Force
  • Log Shipping/Read Only Standby
  • Rowversion datatype
  • Change Tracking
  • Change Data Capture
  • Temporal Tables

Martin hopes to help the DBA or developer free up time and energy by understanding all the available options. Come find out why brute force is not the only path forward!

John Sterrett (1:00 PM Eastern)

Directly following Martin, John Sterrett will be presenting "Make Your Own SQL Server Queries Go Faster" at 1:00 PM Eastern.
Have you ever developed slowly running queries and wondered how to make your own SQL Server queries go faster? That is how John stumbled into his career in performance tuning. In this session, you will learn some T-SQL performance tuning tips that can be used in the office immediately. You will learn how to identify top offender queries and how to benchmark queries. Finally, he will cover some coding patterns that are developer friendly (but are not SQL Server database engine friendly) and show you how to rewrite them so they can run faster.

SQL Saturday #774 – Denver, CO September 15, 2018

If you are in the Denver, Colorado area, be sure to register for SQL Saturday #774!  The free training event will be held on Saturday, September 15, 2018 at the University of Denver in Sturm Hall.  Registration is free!

Two of our team members will be presenting!  Both John Sterrett and Angela Tidwell will be presenting at 9:00 AM.  

John Sterrett

John will be presenting "Automate the Pain Away with Adaptive Query Processing and Automatic Tuning."
In this intermediate level session, John will explore how tuning SQL Server workloads is changing. In the past, you would rely on dynamic management views, the execution plan cache, and server-side traces or extended events. In SQL Server 2012, Columnstore indexes were introduced; in SQL Server 2014, the cardinality estimator got a huge overhaul. With SQL Server 2016 we were given Query Store, which gives us more insight into our performance pains and execution plan changes. SQL Server 2017 gave us Automatic Tuning and Adaptive Query Processing to help automate performance improvements. In this session, he will harness these new ways to improve performance so you can automate the pain away and spend more time identifying and tuning pain points that cannot be automated.

Angela Tidwell

Angela will be presenting "Become the Most Valuable Player: Soft Skills for the Hard Market."
This session is for beginners and seasoned pros alike. Whether you are just starting a career, advancing into senior leadership, or trying to woo a potential client, soft skills can be your ticket to success. Oftentimes we focus on showing how smart we are instead of showing what a good team member we can be. In this session, Angela will discuss the role and types of soft skills needed to showcase your X-factor. Need to hone your skills? She will explore innovative and fun ways to do just that! Get ready: Angela likes to wake you up and make you think, and she likes to have fun in this session!

SQL Saturday #802 – San Diego, CA September 22, 2018

Planning to be in San Diego on September 22, 2018? If so, you can catch Jay Falck speaking at SQL Saturday #802!
SQL Saturday San Diego will be held at UCSD Extension – University City Center and Registration is free!

Jay Falck

Jay will be presenting "Introduction to Linux for SQL Server."
SQL Server is now available on Linux, but you only know Windows: now what? In this beginner level course, Jay will introduce the Linux versions of the Windows commands used on a daily basis for administering SQL Server. He will explore updating Linux, updating SQL Server, moving files between Windows and Linux, and backing up and restoring databases from one system to another. He will also dive into the default file locations for SQL Server, what can be moved, and how to accomplish that.

SQL Saturday #770 – Pittsburgh, PA September 29, 2018

Pittsburgh Technical College will be hosting SQL Saturday #770 on September 29, 2018.  As always, registration for SQL Saturday is free!

John Sterrett

John will be presenting "Automate the Pain Away with Adaptive Query Processing and Automatic Tuning" at 10:00 AM Eastern.
The way you tune your SQL Server workloads is changing. In the past, you would rely on dynamic management views, the execution plan cache, and server-side traces or extended events. In this intermediate level session, John will explore the new ways to improve performance so you can automate the pain away and spend more time identifying and tuning pain points that cannot be automated.

Time is money!

     Your boss keeps talking about RPO (Recovery Point Objective) and RTO (Recovery Time Objective). Do you just nod your head like you know what he or she is talking about? Maybe that scenario just happened and you are searching the internet for what these terms mean. If so, welcome! No one likes to think about disasters, but they happen all too often. Planning for the worst and hoping for the best will keep your data safe and your job even safer. Let's take some time to explore what RPO and RTO mean, why they are important, and what you need to do next to be a hero DBA!

RTO (Recovery Time Objective)

     Recovery Time Objective is the amount of time in which your company expects you to have the database fully restored after a disaster.  That is, how much downtime is acceptable for disaster recovery or planned outages.  Each company is different, and most reference RTO in terms of nines. 

     For a company that measures availability 365 days a year, 24 hours a day, the nines translate as follows:

5 9's – 99.999% (translates to about 5 minutes of acceptable downtime per year)
4 9's – 99.99% (translates to about 52.5 minutes per year and is much easier to achieve)
3 9's – 99.9% (translates to about 8.75 hours per year)
2 9's – 99% (translates to about 3.65 days per year)

     To decide what RTO is best for your company, you need to take your data needs into consideration. Not all companies run on a 365/24 schedule. Some companies only measure downtime between 8am and 6pm Monday through Friday, or only on the weekends. This will drastically change the translation of the 9's. Another thing to think about is whether the measured downtime includes time for maintenance or patching, times when the database must be offline. If maintenance time is excluded from the measurement, meeting the higher 9's is much easier.

     If your company insists on an RTO of five nines and does not take maintenance or patching into consideration, then you must speak with the people in charge to discuss the RPO. It is possible to adhere to the strict 5 minutes of downtime, but the point to which you are able to recover will definitely be restricted.

RPO (Recovery Point Objective)

     Recovery Point Objective is the amount of data or work that is acceptable to lose in the event of a disaster. Ideally, companies will want ZERO data or work loss. While that IS achievable, it all depends on valid backups and the extent of damage the database suffered at the point of disaster.

     An RPO of 15 minutes means that the data and work must be recoverable to a point within 15 minutes of the disaster, meaning that it is expected that only 15 minutes of work or data may be lost. Stop right here and think about your backup plans and recovery models. Restoring a database that is in the Simple Recovery Model should not take as long as restoring one in the Full Recovery Model. It is important to remember (from previous blog posts) that the recovery model dictates how much data you can recover. It is also important to remember that the ability to recover ANY data at all is fully dependent on having valid backups.

Run Book

     Another term you might hear is "Run Book". A Run Book is a physical or digital collection of information that is needed to restart the database in case of disaster. There are many items that should be included in the run book. Some of the essential items one should consider having in the run book are:

  • Server level info, configuration, purpose, etc.
  • List of all databases and applications using them
  • List of agent jobs and proper response to a failure
  • Disaster Recovery process with all contacts, RPO/RTO, etc. required to bring it back (based on level of issue)
  • Security
  • Backup schedules

     When considering a run book, think about what someone would need if they were new to the company and the only person available to restart the database.  What information would that person need?  Making sure your run book is up to date on a regular basis is certainly a great idea!

Preparing for disaster

     Keep in mind that if you prepare for the worst, you will be less likely to be caught off guard with a manager breathing down your neck asking "WHEN WILL WE BE BACK UP AND RUNNING?!?!" Do you have any idea how long it will take to restore your database? If your answer is "no," I would suggest doing a test restore to see how long it takes. Further, I would suggest making it a habit to perform drills so that you and your team know what to do in the event of a disaster, and exactly how long it takes to get your company back up and running. By having a solid backup schedule, validating those backups, and keeping your company's expectations in mind, you will be ready to handle any data disaster that may be thrown your way.