Hello, everyone! This is John Sterrett from Procure SQL. Today, we will discuss how you can validate SQL Server backups with a single line of PowerShell.

Due to the recent global IT outage, I thought this would be an excellent time to focus on the last line of defense: your database backups. I have good news if you are not validating your SQL Server backups today.

dbatools Is Your Friend

Did you know you can validate your backups with a single line of PowerShell? This is just one of several amazing things you can do with dbatools in one line.

John, what do you mean by validating SQL Server backups?

  • I mean, find your last backup
  • See if the backup still exists
  • Restore the previous backup(s)
  • Run an integrity check
  • Document the time each step took along the way

Validating SQL Server Backups – How Do We Validate Our Backups?

The dbatools module includes a command named Test-DbaLastBackup.

You can run it against all the databases on an instance with the command below.

$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere

You could also have it run for a single database with a command similar to the one below.

$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere -Database ProcureSQL
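Either way, the command returns one result object per database tested. A quick way to eyeball the results is shown below; the property names are my reading of the command's output, so verify them against your dbatools version.

# Review the validation results, including how long each step took
$result | Select-Object Database, RestoreResult, DbccResult, RestoreElapsed, DbccElapsed |
    Out-GridView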

What happens with Test-DbaLastBackup?

Great question! If we learned anything from the recent global IT downtime, it’s to validate and test everything!

I love to see what’s happening under the hood, so I set up an extended event trace to capture all the SQL statements running. I can see the commands used to find the backups, the restore, the integrity check, and the dropping of the database created during the restore.

All the excellent details I found are below.

Extended Event

The following is the script for the extended event session. I run it to capture the events created by my dbatools command in PowerShell. Once I start the extended event trace, I run the PowerShell command to do a single check on a database, as shown above. I then stop the capture and review.

CREATE EVENT SESSION [completed] ON SERVER 
ADD EVENT sqlserver.sp_statement_completed(
    ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.query_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],N'dbatools PowerShell module%'))),
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.query_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],N'dbatools PowerShell module%'))),
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.query_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],N'dbatools PowerShell module%')))
ADD TARGET package0.event_file(SET filename=N'completed',max_file_size=(50),max_rollover_files=(8))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
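If you prefer to stay in PowerShell for the whole experiment, dbatools can also start and stop the trace. A minimal sketch, assuming the session above was created and is named completed:

# Start the trace, run the single-database test, then stop the trace for review
Start-DbaXESession -SqlInstance serverNameGoesHere -Session completed
$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere -Database ProcureSQL
Stop-DbaXESession -SqlInstance serverNameGoesHere -Session completed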

Validate SQL Server Backups – Resulting Events / Statements

We captured many SQL statements during this trace. Below, I will focus on the key ones that prove what happens when you run this command against your database(s).

The query used to build up the backup set to find the last backup was too big to screenshot, so I included it below.

/* Get backup history */
SELECT
    a.BackupSetRank,
    a.Server,
    '' AS AvailabilityGroupName,
    a.[Database],
    a.DatabaseId,
    a.Username,
    a.Start,
    a.[End],
    a.Duration,
    a.[Path],
    a.Type,
    a.TotalSize,
    a.CompressedBackupSize,
    a.MediaSetId,
    a.BackupSetID,
    a.Software,
    a.position,
    a.first_lsn,
    a.database_backup_lsn,
    a.checkpoint_lsn,
    a.last_lsn,
    a.first_lsn AS 'FirstLSN',
    a.database_backup_lsn AS 'DatabaseBackupLsn',
    a.checkpoint_lsn AS 'CheckpointLsn',
    a.last_lsn AS 'LastLsn',
    a.software_major_version,
    a.DeviceType,
    a.is_copy_only,
    a.last_recovery_fork_guid,
    a.recovery_model,
    a.EncryptorThumbprint,
    a.EncryptorType,
    a.KeyAlgorithm
FROM (
    SELECT
        RANK() OVER (ORDER BY backupset.last_lsn DESC, backupset.backup_finish_date DESC) AS 'BackupSetRank',
        backupset.database_name AS [Database],
        (SELECT database_id FROM sys.databases WHERE name = backupset.database_name) AS DatabaseId,
        backupset.user_name AS Username,
        backupset.backup_start_date AS Start,
        backupset.server_name AS [Server],
        backupset.backup_finish_date AS [End],
        DATEDIFF(SECOND, backupset.backup_start_date, backupset.backup_finish_date) AS Duration,
        mediafamily.physical_device_name AS Path,
        backupset.backup_size AS TotalSize,
        backupset.compressed_backup_size AS CompressedBackupSize,
        encryptor_thumbprint AS EncryptorThumbprint,
        encryptor_type AS EncryptorType,
        key_algorithm AS KeyAlgorithm,
        CASE backupset.type
            WHEN 'L' THEN 'Log'
            WHEN 'D' THEN 'Full'
            WHEN 'F' THEN 'File'
            WHEN 'I' THEN 'Differential'
            WHEN 'G' THEN 'Differential File'
            WHEN 'P' THEN 'Partial Full'
            WHEN 'Q' THEN 'Partial Differential'
            ELSE NULL
        END AS Type,
        backupset.media_set_id AS MediaSetId,
        mediafamily.media_family_id AS mediafamilyid,
        backupset.backup_set_id AS BackupSetID,
        CASE mediafamily.device_type
            WHEN 2 THEN 'Disk'
            WHEN 102 THEN 'Permanent Disk Device'
            WHEN 5 THEN 'Tape'
            WHEN 105 THEN 'Permanent Tape Device'
            WHEN 6 THEN 'Pipe'
            WHEN 106 THEN 'Permanent Pipe Device'
            WHEN 7 THEN 'Virtual Device'
            WHEN 9 THEN 'URL'
            ELSE 'Unknown'
        END AS DeviceType,
        backupset.position,
        backupset.first_lsn,
        backupset.database_backup_lsn,
        backupset.checkpoint_lsn,
        backupset.last_lsn,
        backupset.software_major_version,
        mediaset.software_name AS Software,
        backupset.is_copy_only,
        backupset.last_recovery_fork_guid,
        backupset.recovery_model
    FROM msdb..backupmediafamily AS mediafamily
    JOIN msdb..backupmediaset AS mediaset ON mediafamily.media_set_id = mediaset.media_set_id
    JOIN msdb..backupset AS backupset ON backupset.media_set_id = mediaset.media_set_id
    JOIN (
        SELECT TOP 1 database_name, database_guid, last_recovery_fork_guid
        FROM msdb..backupset
        WHERE database_name = 'CorruptionChallenge8'
        ORDER BY backup_finish_date DESC
    ) AS last_guids ON last_guids.database_name = backupset.database_name
        AND last_guids.database_guid = backupset.database_guid
        AND last_guids.last_recovery_fork_guid = backupset.last_recovery_fork_guid
    WHERE (type = 'D' OR type = 'P')
        AND is_copy_only = '0'
        AND backupset.backup_finish_date >= CONVERT(datetime,'1970-01-01T00:00:00',126)
        AND mediafamily.mirror = '0'
) AS a
WHERE a.BackupSetRank = 1
ORDER BY a.Type;

The following is a screenshot of the results of validating a database with a good backup and no corruption.

What should I expect with a corrupted database?

Great question! I wondered the same thing, so I grabbed a corrupt database from Steve Stedman’s Corruption Challenge and ran the experiment. I will admit my findings were not what I was expecting, either. This is why you shouldn’t take candy from strangers or run scripts without testing them in non-production and validating their results.

After restoring the corrupted database that had been successfully backed up, I performed a manual integrity check to validate that it would fail, as shown below.

Hopefully, you have a process or tool to monitor your SQL Server error log and alert you of errors like these. If you duplicate this example, your process or tool should pick up these severity 16 corruption errors. I would validate that as well.
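As a quick spot check, here is a hedged dbatools sketch; Get-DbaErrorLog is a real cmdlet, but the Text property filter and the search pattern are my assumptions:

# Scan the SQL Server error log for corruption-related messages
Get-DbaErrorLog -SqlInstance serverNameGoesHere |
    Where-Object Text -Match 'corrupt|incorrect checksum'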

Validate SQL Server Backups – PowerShell Results

Okay, was I the only one who expected to see Failed as the status for the integrity check (DbccResult)?

Instead, it’s blank, as I show below. So, when you dump these results back out, make sure you check for anything other than Success, as in the sketch below.
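Here is a minimal sketch of that check; the property names (RestoreResult, DbccResult) come from the command's output, but verify them against your dbatools version:

# Flag any database whose restore or DBCC result is not an explicit Success
$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere
$result | Where-Object { $_.RestoreResult -ne 'Success' -or $_.DbccResult -ne 'Success' } |
    Select-Object Database, RestoreResult, DbccResult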

I submitted a bug to dbatools and will post back here with any updates.

Other Questions…

I had some other questions, too, which are answered on the official documentation page for the Test-DbaLastBackup command. I will list them below, but you can review the documentation to find the answers; a sketch after the list shows a few of the relevant parameters.

  • What if I want to test the last full backup?
  • What if I want to test the last full and last differential?
  • What if I wanted to offload the validation process to another server?
  • What if I don’t want to drop the database but use this to restore for non-production testing?
  • What if I wanted to do physical only for the integrity check?
  • What if I want to do performance testing of my restore process by changing the Max Transfer Size and Buffer Counts?

What questions did we miss that you would like answered? Let us know in the comments.

Check out this quick video to see how you can start to load and transform your data with notebooks. Data Engineering with Microsoft Fabric becomes easier once you understand how to leverage notebooks to your advantage.

If you are in Austin, Texas on March 8 & 9, 2024, don’t miss SQL Saturday Austin, where you can learn more about Microsoft Fabric, Power BI, SQL Server, and more.

In five minutes, Justin gives you a Microsoft Fabric overview to help you ingest, process, store, monitor, and govern your data with Microsoft’s new analytics tool. He will discuss user personas like Data Engineering and Data Science within Microsoft Fabric and how the personas will use tools like Data Activator, Power BI, Data Factory, Real-Time Analytics, Notebooks, Spark, Lakehouse, and more.


Got Microsoft Fabric or Analytics Questions?

We would love to discuss your questions from watching this Microsoft Fabric overview video or anything related to the Microsoft Data Platform. Fill out the form below, and we will be in touch soon.

This week, John asked, “Can you query data inside a lakehouse using SQL Server Management Studio?”

Yes! Did you know you could query your data inside a Microsoft Fabric Lakehouse with SSMS (SQL Server Management Studio)? John didn’t, so Justin Cunninham shows you how to leverage SSMS with your data in four minutes in this data architect tip.

If you find yourself needing Data Architect as a Service, contact us. If you enjoyed this video, sign up for our newsletter.

Replay in the Cloud like a RockStar

Want to save money, validate performance, and make sure you don’t have errors while migrating to Azure SQL Database, Azure SQL Managed Instance, or SQL Server on Amazon RDS? In this video, you will learn how to use the Database Experimentation Assistant to replay and compare your on-premises or cloud SQL Server workloads on demand. First, you will learn how to capture a workload on-premises or in the cloud. Next, you will master replaying your workload on demand as needed. Finally, you will analyze and compare a baseline workload against a workload that includes your changes.

Replay in the Cloud Video

Workload Replay is the Secret Weapon for a Successful Migration to the Cloud

Slidedeck

Recommended Reads

If you liked this video, check out the following other resources.

I am a consultant in Austin who can help make your data fast, secure, and highly available. When I am engaged in a performance tuning project, priority #1 isn’t to make your data go faster. Priority #1 is to make sure we get the same result sets while making your data go faster.

Free SQL Data Compare with T-SQL?

There are several tools out there that can be used to compare data. Today, I want to share how you can quickly do this on your own with T-SQL!

Let’s simplify the process. Our goal is to check two temp tables and validate if any of the data is different. This would include inserts, updates, and deletes. For this example, I will just do a dump of Sales.SalesOrderDetail in AdventureWorks into two temp tables as shown below.

SELECT * INTO #Tmp1
FROM Sales.SalesOrderDetail;

SELECT * INTO #Tmp2
FROM Sales.SalesOrderDetail;

Now we shouldn’t see any differences since we used the same table to create both temp tables. We are going to use two different SQL operators to compare these two temp tables while applying some data changes. We will focus on the UNION ALL and EXCEPT operators.

The Power of EXCEPT

EXCEPT is an underrated and underused SQL operator. In a nutshell, it gives you the rows from the first query that are different from the next query. So, if the data in any column of #tmp1 is different from #tmp2, or if a row exists in #tmp1 but not in #tmp2, it will get returned.

SELECT * FROM #Tmp1
EXCEPT
SELECT * FROM #Tmp2

Let’s go ahead and modify a column in #Tmp1 so you can see how this works. We are going to set OrderQty to five when SalesOrderId is 45313 and SalesOrderDetailId is 6210. This will change just one column in one row. We will then select these columns from both temp tables to see the change.

This is how most people would start using T-SQL to identify changes in data.

UPDATE #Tmp1
SET OrderQty = 5
WHERE SalesOrderID = 45313 AND SalesOrderDetailID = 6210;

SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp1
WHERE SalesOrderID = 45313 AND SalesOrderDetailID = 6210;

SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp2
WHERE SalesOrderID = 45313 AND SalesOrderDetailID = 6210;
Data Compare is easy when we know what changed and not much changed. Just select it.

Finding Data Changes The Easy Way

Selecting from the two tables is easy if we know what change occurred and there aren’t many changes, but this gets complicated quickly. Therefore, if we just want to quickly know whether we have differences, let’s take a look at my go-to method using EXCEPT. To make this example easier to read, instead of using “SELECT *” I will just focus on the columns that are changing. In a real example, I would want to know if any columns changed.

SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2
Data Compare using EXCEPT quickly lets us see that we had a data change

If an insert or a column change occurs in #tmp1 we will see it in our EXCEPT SQL statement. This isn’t true if the change is only in #tmp2.

For example, an insert in #tmp2 or delete in #tmp1 would not be shown. To see this we would have to switch the temp tables in the EXCEPT clause as shown below.

INSERT INTO #tmp2 (SalesOrderId, ProductID, SpecialOfferID, OrderQty, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate)
VALUES (45313, 1, 3, 1, 1.25, 0, 1.25*1, NEWID(), GETDATE());

DELETE FROM #Tmp1
WHERE SalesOrderID = 45313 AND SalesOrderDetailID = 6211;

SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2;

/* We will now see our insert and delete */
SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1;
Our first EXCEPT clause only shows the update that occurred in #tmp1. The delete in #tmp1 and the insert in #tmp2 cannot be seen because the data doesn’t exist in #tmp1.
Our second EXCEPT shows the insert in #tmp2, the delete in #tmp1, and the update in #tmp1 because the column is different in #tmp2.

Our first EXCEPT shows us data in #tmp1 that is not in #tmp2 because the OrderQty column changed in #tmp1. The second EXCEPT shows us data in #tmp2 that isn’t in #tmp1: the row inserted into #tmp2, and also the row deleted from #tmp1, which still exists in #tmp2.

UNION ALL for the Win!

To wrap this up, we can now include a UNION ALL operation between the two EXCEPT operations. This gets us any data changes to the columns selected from the temp tables. One gotcha: EXCEPT and UNION ALL have equal precedence and evaluate left to right, so wrap each EXCEPT in parentheses to make sure both difference sets are computed before the UNION ALL.

(SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1
 EXCEPT
 SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2)
UNION ALL
(SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2
 EXCEPT
 SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1);
UNION ALL and EXCEPT for the free Data Compare Win! Quickly shows rows that are different between the two tables.

Typically, I need to verify that the data before and after is the same. This is a quick and easy way to get that answer. Now, I know you might want to take this to the next level. You might be wondering how to get just the unique key for the table and the columns that changed. I will leave that as an exercise for you.

If you enjoyed this post, subscribe to get more free SQL tips.


Query Store for Workload Replays

This month’s T-SQL Tuesday is hosted by Tracy Boggiano. Tracy invites us all to write about adopting the Query Store. Today, I wanted to share my favorite, and fairly unique, way I use the Query Store: for workload replays.

You can read more about the invite in detail by clicking on the T-SQL Tuesday logo in this post.

Today, I wanted to talk about my least favorite part of replaying workloads: having an extended event or server-side trace running during a workload replay only so we can compare the results at a query level when the replay is finished. Now, this might seem like a trivial thing, but when you have workloads over 10k batch requests/sec, this can consume terabytes of data quickly. The worst part is waiting to read all the data and then slicing and dicing it for analysis.

Starting with SQL Server 2016, there is a better and faster way to go! You can replace your extended event or server-side trace with Query Store captured data. Today, I will show you how to use the Query Store for the same purpose.

Different Settings

Keep in mind our goal here is very different from the typical use case for using the Query Store. We want to capture metrics for all the queries executed during a workload replay. Nothing more and nothing less.

If we have the runtime results for multiple replays we can then easily compare the workload performance between the workload replays.

Most of our changes from the regular Query Store best practices are shown below, followed by a PowerShell sketch:

  • Max Size (MB) – Make sure there is enough space to capture your whole workload. The size needed will vary by how much workload is being replayed.
  • Query Store Capture Mode set to All – Normally not ideal, but remember, we want to capture metrics for the whole workload being replayed.
  • Size Based Cleanup Mode set to Off – Yup, we don’t want to lose any captured workload data before we persist it in our ideal form. More on this later.
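If you would rather script these settings than click through SSMS, here is a hedged dbatools sketch; the cmdlet exists, but double-check the parameter names and values against your dbatools version:

# Query Store settings for replay capture: write mode, capture everything, never trim
Set-DbaDbQueryStoreOption -SqlInstance serverNameGoesHere -Database YourDatabase `
    -State ReadWrite -CaptureMode All -MaxSize 10000 -CleanupMode Off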

The Capture Process

Now, this is where you would use the Database Experimentation Assistant (DEA), Distributed Replay, or some other process to replay a consistent workload in an isolated non-production environment. We will cover that subject in a future post. For now, we will just have two replays called “Baseline” and “Change”. This simulates a baseline replay with no schema changes and a second replay with a change introduced in the schema.

To capture our workload, we just enable the Query Store with the settings mentioned above and clear it out right before our workload replay starts, to help ensure we are capturing only our workload.

USE [master]
GO
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON
GO
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 10000, QUERY_CAPTURE_MODE = ALL, SIZE_BASED_CLEANUP_MODE = OFF)
GO
ALTER DATABASE [YourDatabase] SET QUERY_STORE CLEAR
GO

Stop Capturing Query Store Data

Once your replay is finished, we want to stop the Query Store from writing new data. We want the least amount of non-workload data inside the Query Store when we are using it for the sole purpose of comparing workloads.

USE [master]
GO
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_ONLY)
GO

Prepare Query Store Data for Long-Term Analysis

Now, for smaller workloads, one might be happy with utilizing DBCC CLONEDATABASE to get a schema-only copy of their workload with the Query Store data persisted. This is perfectly fine. With bigger workloads being captured, I have noticed there are ways to improve the performance of the Query Store when doing analysis of its data. For example, clustered columnstore indexes can be very helpful for performance and capacity. Therefore, I like to have a schema for each replay and import the data. The following is a quick example of setting up the two schemas, Baseline and Compare.

CREATE DATABASE [DBA]
GO
use [DBA]
GO
CREATE SCHEMA Baseline;
GO
CREATE SCHEMA Compare;
GO

Next, we will import the captured data from our baseline replay that’s in our read-only Query Store database. I also like to have a baked-in aggregate of metrics for reads, writes, duration, and CPU at the query level.

use [YourDatabase]
GO
/* Load Data */
SELECT * INTO DBA.Baseline.query_store_runtime_stats FROM sys.query_store_runtime_stats;
SELECT * INTO DBA.Baseline.query_store_runtime_stats_interval FROM sys.query_store_runtime_stats_interval;
SELECT * INTO DBA.Baseline.query_store_plan FROM sys.query_store_plan;
SELECT * INTO DBA.Baseline.query_store_query FROM sys.query_store_query;
SELECT * INTO DBA.Baseline.query_store_query_text FROM sys.query_store_query_text;
/* Addition for SQL 2017 */
SELECT * INTO DBA.Baseline.query_store_wait_stats FROM sys.query_store_wait_stats;

USE [DBA]
GO
SELECT SUM(Count_executions) AS TotalExecutions,
    SUM(Count_executions*avg_duration) AS TotalDuration,
    SUM(Count_executions*avg_logical_io_reads) AS TotalReads,
    SUM(Count_executions*avg_logical_io_writes) AS TotalWrites,
    SUM(count_executions*avg_cpu_time) AS TotalCPU,
    query_hash
INTO Baseline.QueryResults
FROM Baseline.query_store_runtime_stats rs
JOIN Baseline.query_store_plan p ON rs.plan_id = p.plan_id
JOIN Baseline.query_store_query q ON p.query_id = q.query_id
GROUP BY q.query_hash

Next, we would reset the database to our starting position, apply the Query Store settings mentioned above, and replay our workload again. This time, we would dump our data into the “Compare” schema.

use [YourDatabase]
GO
/* Load Data */
SELECT * INTO DBA.Compare.query_store_runtime_stats FROM sys.query_store_runtime_stats;
SELECT * INTO DBA.Compare.query_store_runtime_stats_interval FROM sys.query_store_runtime_stats_interval;
SELECT * INTO DBA.Compare.query_store_plan FROM sys.query_store_plan;
SELECT * INTO DBA.Compare.query_store_query FROM sys.query_store_query;
SELECT * INTO DBA.Compare.query_store_query_text FROM sys.query_store_query_text;
SELECT * INTO DBA.Compare.query_store_wait_stats FROM sys.query_store_wait_stats;

USE [DBA]
GO
SELECT SUM(Count_executions) AS TotalExecutions,
    SUM(Count_executions*avg_duration) AS TotalDuration,
    SUM(Count_executions*avg_logical_io_reads) AS TotalReads,
    SUM(Count_executions*avg_logical_io_writes) AS TotalWrites,
    SUM(count_executions*avg_cpu_time) AS TotalCPU,
    query_hash
INTO Compare.QueryResults
FROM Compare.query_store_runtime_stats rs
JOIN Compare.query_store_plan p ON rs.plan_id = p.plan_id
JOIN Compare.query_store_query q ON p.query_id = q.query_id
GROUP BY q.query_hash

Comparing Workload Results

Now that we have our two workloads imported, we can compare how the workload changed per query. I will break this down into two quick steps. First, get deltas per query. Second, get totals for how many query_ids in the Query Store share each query’s hash. More on this a little later in the post.

/* Query Store Results */
USE [DBA]
GO
SELECT DISTINCT c.TotalExecutions - b.TotalExecutions AS ExecutionDelta,
    c.TotalExecutions AS CompareExecutions,
    b.TotalExecutions AS BaselineExecutions,
    c.TotalDuration - b.TotalDuration AS DurationDelta,
    c.TotalCPU - b.TotalCPU AS CPUDelta,
    c.TotalReads - b.TotalReads AS ReadDelta,
    c.TotalWrites - b.TotalWrites AS WriteDelta,
    c.TotalReads AS CompareReads,
    b.TotalReads AS BaselineReads,
    c.TotalCPU AS CompareCPU,
    b.TotalCPU AS BaselineCPU,
    c.TotalDuration AS CompareDuration,
    b.TotalDuration AS BaselineDuration,
    c.query_hash
    --q.query_sql_text
INTO #CTE
FROM Baseline.QueryResults b
JOIN Compare.QueryResults c ON b.query_hash = c.query_hash

SELECT COUNT(query_sql_text) AS QueryCount,
    MAX(query_sql_text) AS query_sql_text,
    MIN(query_id) AS MinQueryID,
    qsq.query_hash
INTO #Compare
FROM Compare.query_store_query qsq
JOIN Compare.query_store_query_text q ON qsq.query_text_id = q.query_text_id
WHERE qsq.is_internal_query = 0
GROUP BY query_hash

SELECT COUNT(query_sql_text) AS QueryCount,
    MAX(query_sql_text) AS query_sql_text,
    MIN(query_id) AS MinQueryID,
    qsq.query_hash
INTO #Baseline
FROM Baseline.query_store_query qsq
JOIN Baseline.query_store_query_text q ON qsq.query_text_id = q.query_text_id
WHERE qsq.is_internal_query = 0
GROUP BY query_hash

SELECT cte.*
    , a.QueryCount AS Compare_QueryCount
    , b.QueryCount AS Baseline_QueryCount
    , a.MinQueryID AS Compare_MinQueryID
    , b.MinQueryID AS Baseline_MinQueryID
    , a.query_sql_text
FROM #CTE cte
JOIN #Compare a ON cte.query_hash = a.query_hash
JOIN #Baseline b ON cte.query_hash = b.query_hash
WHERE 1=1
AND ExecutionDelta = 0
ORDER BY ReadDelta ASC

Query Store for workload replays gives you performance metrics down to the query level. Workload replays compared at the query execution level are priceless!

Lessons Learned Along the Way!

Initially, when working with the Query Store, I thought query_id was going to be my best friend. I quickly learned that my old friend query_hash is more helpful, for multiple reasons. One, I can easily compare queries between different replays; replays do not give you the same query_id even when the exact same workload is replayed. Two, I can compare across different databases as well. Finally, query_hash is very helpful with ad-hoc workloads, as I can aggregate all the different query_ids that share the same query hash.

Need Performance Tuning Help?

Let us help you! Schedule a free 30-minute chat to see if we can point you in the right direction. If you liked this blog post, sign up for our free newsletter! Also, take advantage of our free training videos.


I recently spoke at a conference and was asked about the easiest way to import databases to Azure SQL Database. Therefore, I wanted to share how I do this with dbatools.io. You can use the same code to just export if you need a local copy of an Azure SQL Database as well.

Import-Module dbatools -Force
<# Variables #>
$BackupPath = "C:\Demo\AzureSQL\Bacpac" # folder location for backups
$SourceInstance = "sql2019\sql2016"
$DBName = "AdventureWorksLT2012"
$AzureDestInstance = "procuresqlsc.database.windows.net"
$DBNameDest = $DBName

<# bacpac options for import and export #>
$option = New-DbaDacOption -Type Bacpac -Action Export
$option.CommandTimeout = 0
$option2 = New-DbaDacOption -Type Bacpac -Action Publish
$option2.CommandTimeout = 0

<# The following assumes the Azure SQL Database exists and is empty.
   Azure will create the database by default if it doesn't exist. #>
$bacpac = Export-DbaDacPackage -Type Bacpac -DacOption $option -Path $BackupPath `
    -SqlInstance $SourceInstance -Database $DBName
Publish-DbaDacPackage -Type Bacpac -SqlInstance $AzureDestInstance `
    -Database $DBNameDest -Path $bacpac.Path -DacOption $option2 -SqlCredential username

What Is My Performance Tier?

Great question! As of 3/3/2020, if the database in Azure SQL Database does not exist, it will be created. When it’s created, the database uses the default performance tier, which is General Purpose (Gen5) with 2 vCores.

The default performance tier for a new Azure SQL Database, General Purpose (Gen5) with 2 vCores, costs $371.87 per month.
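To confirm which tier you actually landed on, here is a hedged sketch with the Az module, reusing the resource group and server variables from the snippet below; the property names may vary by Az.Sql version:

# Check the edition and service objective of the imported database
Get-AzSqlDatabase -ResourceGroupName $RGName -ServerName $ServerName -DatabaseName $DBName |
    Select-Object DatabaseName, Edition, CurrentServiceObjectiveName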

How to create cheaper databases

Great question! You can import databases to Azure SQL Database more cheaply using PowerShell. It is as simple as using the Azure PowerShell module. In the example below, I use my existing Azure SQL Database server and end up creating a new database in the “S0” tier.

<# Install the Azure PowerShell module (requires local admin)
Install-Module -Name Az -AllowClobber -Scope AllUsers
#>
Import-Module Az -Force

$RGName = "<Your Existing Resource Group>"
$ServerName = "<Your Azure SQL Database Server>"
$Tenant = "<Add Tenant ID>"
$Subscription = "<Add your Subscription ID>"

Connect-AzAccount -Tenant $Tenant -SubscriptionId $Subscription
$server = Get-AzSqlServer -ResourceGroupName $RGName -ServerName $ServerName
$db = New-AzSqlDatabase -ResourceGroupName $RGName `
    -ServerName $server.ServerName -DatabaseName $DBName `
    -RequestedServiceObjectiveName "S0"

Your Homework Assignment

You now have all the nuts and bolts needed to generate a script that migrates all the databases on an instance, importing them to Azure SQL Database in one loop. A rough sketch follows below.
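If you want a head start, here is a minimal, untested sketch of that loop, reusing the variables and DacPac options from the first snippet; Get-DbaDatabase and its -ExcludeSystem switch are dbatools pieces worth verifying against your version:

# Export and publish a bacpac for every user database on the source instance
foreach ($db in (Get-DbaDatabase -SqlInstance $SourceInstance -ExcludeSystem)) {
    $bacpac = Export-DbaDacPackage -Type Bacpac -DacOption $option -Path $BackupPath `
        -SqlInstance $SourceInstance -Database $db.Name
    Publish-DbaDacPackage -Type Bacpac -SqlInstance $AzureDestInstance `
        -Database $db.Name -Path $bacpac.Path -DacOption $option2 -SqlCredential username
}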

Need Help Moving to Azure?

Let us help you! Schedule a free 30-minute chat to see if we can point you in the right direction. Take advantage of our free resources too.

If you liked this blog post, sign up for our free newsletter!


Today, I wanted to cover how you can grant the least privilege required to stop, start, or restart an Azure VM. This is also a great example of how you can create custom Azure security roles. That’s right, we are killing two birds with one stone today.

Why Should You Create a Custom Role?

Where possible, I like to grant security at the resource group level. Therefore, let’s assume we have multiple VMs built for the developer group to do some testing. You want to grant people access to start, restart, or stop any VM in that group. We can then assign our custom role at the resource group scope. As VMs come in and out of the resource group, they inherit the role assignment.

Now, you might be wondering: why don’t I just give them the “Contributor” role or the “Virtual Machine Contributor” role and be on our way? Well, if you were to do this on a resource group, you would have just granted access to create VMs and a whole lot more.

Least-privileged roles are your best friend. Today, you will see they are not that hard to create either.

How do we create custom roles?

Great question! First, you need to identify what tasks the role needs to complete. In our case, you have to be able to see a VM in order to take any action against it. Then we want to start, stop (deallocate), and restart the VM. Digging through IAM, I found the following security actions:

 "Microsoft.Compute/*/read",
 "Microsoft.Compute/virtualMachines/start/action",
 "Microsoft.Compute/virtualMachines/restart/action",
 "Microsoft.Compute/virtualMachines/deallocate/action"

Now we can create a custom JSON text file that we will import using the Azure CLI. Below is a sample JSON file to build our custom security role. You will need to add your subscription ID(s). You can also change the name and description you will see in the Azure portal.

 "Name": "VM Operator", "IsCustom": true, "Description": "Can start, restart and stop (deallocate) virtual machines.",
 "Actions": [ "Microsoft.Compute/*/read", 
"Microsoft.Compute/virtualMachines/start/action", "Microsoft.Compute/virtualMachines/restart/action", 
"Microsoft.Compute/virtualMachines/deallocate/action" ], 
"NotActions": [ ], "AssignableScopes": 
[ "/subscriptions/<Subscription ID Goes Here>" ] }

How to Import Custom Security

Now that our custom security role is ready in a JSON file, we can use the Azure CLI to log in to the tenant and import it. First, we log in to Azure with the CLI, as shown below.

az login --username <myEmailAddress> -t <customerTenantId-or-Domain>

Now we will load our saved JSON file. After a few minutes, we should see our new security role in the Azure portal.

az role definition create --role-definition IAMRole-VMOperator.json

Now you can grant access to your custom role just like you would with any other role in Azure.
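For example, here is a hedged sketch of assigning the new role at the resource group scope with the Az PowerShell module; the user and resource group names are placeholders:

# Assign the custom role to a user at the resource group scope
New-AzRoleAssignment -SignInName "developer@yourdomain.com" `
    -RoleDefinitionName "VM Operator" `
    -ResourceGroupName "<Your Existing Resource Group>"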


Hello everyone, this is John, your Austin SQL Server consultant. Today, I am going to answer a question that comes up often, so I wanted to blog about it for everyone. The question of the day is: where can I download previous SQL Server updates?

The History of Updates

Back in the day, when we were young but not kids anymore, there were service packs and cumulative updates. We could download them separately, and all of the updates were easy to find. Today, if you click on a KB article to download an update, you get pointed to the latest update, as shown below.

Current SQL Server KB articles point to only the latest update.

How Far Behind on Updates Is My SQL Server?

This is another great question. My favorite place to find the full history of SQL Server updates is the SQL Server Build List Blog. You can cross-reference it with your version by running the following query.

SELECT @@VERSION;

You can use SELECT @@VERSION to get your current version number.

I fully get why Microsoft is trying to point everyone to the latest update. Normally, it makes perfect sense, but let’s take a look at today, Jan 9th, 2020. I am planning to update SQL Server 2017 to CU17. It’s been out for two months. Today, CU18 was released, and if I wasn’t careful, I would have downloaded a different update than expected.

The SQL Server Build List is a great resource for finding a list of all SQL Server updates.

Getting a previous SQL Server Update

So, on to the solution. It’s actually easy, but also easy to overlook. Let’s go back to the new standard update page for SQL Server updates.

That’s right: the Microsoft Update Catalog is your best friend for finding all your updates for Microsoft products, including SQL Server. You can search for the product you want. For example, in this case, I am looking for SQL Server 2017 and can see all of its previous updates.

All SQL Server updates can be found in the Microsoft Update Catalog.

If you enjoyed this tip and found it helpful, make sure to join our newsletter so you can enjoy more free tips, tricks, and video demos.
