T-SQL Tuesday #96: Folks Who Have Made A Difference
Today’s blog post is about T-SQL Tuesday. If you haven’t seen T-SQL Tuesday before, it’s a great monthly call for all SQL Server bloggers to write about a single topic that changes from month to month. In this month’s installment, we’re focusing on folks who have made a difference.
Looking back, I wrote about this subject almost seven years ago. While I covered a lot of great people, there was a group I left out. I wish I had included them, because they were the first to believe in me, mentor me, and help me become the IT pro I am today.
The Greater Wheeling Chapter of AITP taught me how to be the best IT professional I could be. While I attended to eat great food and learn tech, I learned a whole lot more. I learned that it’s the people who matter and that technology will always change. I started to learn how to lead here as well. I became a chapter president; heck, I even became the VP of the region. I learned a lot of skills here that helped make me the IT person and business owner I am today.
Earlier this year, Dolph Santorine, the current President of the GWC of AITP, had a speaker cancel. In a bind, he asked if I could do a webinar to help. I refused to do the webinar; instead, I told him I would jump on a flight and be there to do it in person. During this trip, over a Bridgeport brownie, I learned there might be another way I could pay things forward to the GWC of AITP.
The GWC of AITP could use another great event to raise awareness and drive up membership. I will be using my knowledge and #sqlfamily to help bring a third SQL Saturday to Wheeling, WV.
That’s right, we’re going to go from SQL Saturday #36 to #717 with a good old Throwback SQL Saturday. Save the date: April 28th. Like the old days, this will be a low-budget, grassroots event that brings some of the best SQL Server training to West Virginia. We are going to prove again that if a SQL Saturday can happen in Wheeling, WV, it can happen anywhere!
In some DBA circles, backups are just as popular as politicians! However, recoverability is the most important task for database administrators. SQL Server 2017 added many great features like graph support, SQL Server on Linux, and more. Today, though, I want to focus on two small underdog features that might be game changers for how you do backups.
Smart Differential Backups
Databases are getting bigger, not smaller, and more storage capacity is needed for their backups. Backup compression might even hurt your storage capacity, depending on your storage. Today, I am seeing more backup policies include full and differential backups along with transactional log backups, with differential backups used to offset daily full backups. Typically, people use time increments as the basis for when backups should occur. It’s very common to see automated jobs that do weekly full and daily differential backups to reduce the storage capacity needed for backups.
How often does your data change? Is the rate of change consistent, or does it vary from week to week? Let’s assume it’s Tuesday and over 80% of your data pages have already changed this week. You are not benefiting from taking daily differentials for the rest of the week. The opposite goes for data that doesn’t change often: maybe you can save a lot of space by doing less frequent full backups.
Leveraging smart differential backups could greatly reduce your storage footprint and potentially reduce the time it takes to recover.
In SQL Server 2017, you can see exactly how many pages have changed since your last full backup. This can be leveraged to determine whether you should take a full or a differential backup, and backup solutions and backup vendors will be better for it.
[code language="sql"]
SELECT CAST(ROUND((modified_extent_page_count * 100.0)
            / allocated_extent_page_count, 2) AS decimal(6,2)) AS DiffChangePct,
       modified_extent_page_count,
       allocated_extent_page_count
FROM sys.dm_db_file_space_usage;
GO
[/code]
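As a minimal sketch of what a smart differential backup decision could look like (the 50% threshold, database name, and paths are illustrative assumptions you would tune for your environment):

[code language="sql"]
-- Sketch: take a full backup when more than half of the allocated pages
-- have changed since the last full backup; otherwise take a differential.
-- Run in the context of the database being backed up. The threshold,
-- database name, and paths are assumptions for illustration.
DECLARE @DiffChangePct decimal(6,2);

SELECT @DiffChangePct = SUM(modified_extent_page_count) * 100.0
                        / SUM(allocated_extent_page_count)
FROM sys.dm_db_file_space_usage;

IF @DiffChangePct > 50.0
BEGIN
    BACKUP DATABASE [MyDb] TO DISK = N'C:\Backups\MyDb_full.bak';
END
ELSE
BEGIN
    BACKUP DATABASE [MyDb] TO DISK = N'C:\Backups\MyDb_diff.bak'
        WITH DIFFERENTIAL;
END
[/code]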
Smart Transactional Log Backups
The time your users are offline while you are recovering to the point of failure is critical. It could be the difference between keeping and losing customers. Point-in-time recovery is mandatory for a critical database. Transactional log backups have to be restored in order.
Recovery Point Objectives (RPO) drive how often you take transactional log backups. If you have a policy that says you can only lose ten minutes of data, you need transactional log backups every ten minutes. Is this really true if there were no changes? What if your RPO is driven by the amount of data loss and not the time of the loss? Either way, you can now control when transactional log backups occur based on the amount of data that has changed since the last transactional log backup.
[code language="sql"]
SELECT s.name AS DatabaseName,
       dls.log_since_last_log_backup_mb,
       dls.log_truncation_holdup_reason,
       dls.active_vlf_count,
       dls.active_log_size_mb
FROM sys.databases s
CROSS APPLY sys.dm_db_log_stats(s.database_id) dls;
[/code]
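As a hedged sketch of a smart transactional log backup job (the 100 MB threshold, database name, and path are illustrative assumptions), you could only take the backup when enough log has actually been generated:

[code language="sql"]
-- Sketch: only take a log backup when at least 100 MB of log has been
-- generated since the last log backup. The threshold, database name,
-- and path are assumptions for illustration.
DECLARE @LogSinceLastBackupMB float;

SELECT @LogSinceLastBackupMB = log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'MyDb'));

IF @LogSinceLastBackupMB >= 100.0
BEGIN
    BACKUP LOG [MyDb] TO DISK = N'C:\Backups\MyDb_log.trn';
END
[/code]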
ProcureSQL is proud to be a first-time sponsor for Houston Techfest 2017.
We would love to see you at this year’s Houston Techfest! Admission is FREE! You can register here. The complete agenda is posted on their Facebook page here.
What is Houston Techfest?
It is a large community-run event focusing on SQL Server, other data solutions, security, mobile development, and agile development. It will be held on Saturday, October 14, 2017, from 8:00 am to 6:30 pm at the NRG Center, North Stadium Drive, Houston, Texas 77054.
Did you miss seeing John at Microsoft Ignite in Orlando?
If you did not see John’s session at Microsoft Ignite, you are in luck! Not only will we be sponsoring, but John Sterrett will be presenting three sessions: “Make Your SQL Server Queries Go Faster!”, “Learn SQL Backups and Master SQL Restores”, and “Successfully Migrating Existing Databases to Azure SQL Database.” Come join us for another great day of free education!
John Sterrett at #MSIgnite 2017
Drop by our booth and chat with Angela and John; they will have some cool swag and a chance to win a great prize!
We are proud to announce that John Sterrett will be speaking at Microsoft Ignite. This will be John’s first time speaking at the Microsoft Ignite conference. John will be talking about migrating existing databases over to Azure. If you are going and are passionate about your data or SQL Server, or just want some help solving your business problems with SQL, he would love to meet you there.
In my opinion, automation will enhance DBAs. It will allow DBAs to provide more value to the business. Good DBAs will be fine. If you are a DBA with ten years of experience who has just repeated year one ten times, that might be a different story for a different blog post. Automation enhances a good DBA’s career just like the sync button enhances a good turntablist’s career; the sync button doesn’t replace good turntablists.
Follow along to see if you agree or disagree. I look forward to seeing your feedback in the comments section on this topic.
Some of my good friends know I have a passion for DJing on the ones and twos. A subset of those friends know I took a job in high school just so I could buy two Technics 1200s and records. In my free time, I started taking an online DJ Masterclass, and it quickly jumped out at me that the sync button for turntablists and club DJs (see, just like DBAs, there are several different types of DJs) isn’t much different from automation for good DBAs. The sync button automates the tedious task of making sure two different songs are played at the same speed so they can be mixed together. In the past few years, I started to hear buzz that automation will replace DBAs, so I wanted to write a blog post providing more details behind a recent tweet of mine.
Saying the DBA's job is going away due to automation is just like saying the sync button will replace all DJs. #NotGoingToHappen
What Is This Sync Button You Speak Of? How Does It Relate to DBAing?
Going back to my high school days for a second: one of the core fundamentals of being a turntablist is using beat matching to keep the dance floor going as you transition from one song to another. It took me months to learn this fundamental skill. I had to learn how to measure BPM (beats per minute) to see if two songs could even mix together. I had to learn beats (think of pages) and bars (think of extents) and how to mix songs on the same beat on the right bars. I had to learn how to adjust the speed of each of the two records on my own to make the BPMs match. When mixing, I had to make sure the two records stayed on beat during the mix. Syncing records isn’t the hard part of DJing; it’s just a tedious part.
Today, most DJ software packages have a sync button so this process can be automated. Now we can hit the sync button, and the BPMs will match almost all of the time. What would you do if you never learned the basics and relied on the sync button, and the BPM for your song is wrong, or worse, your venue doesn’t have a sync button? I think this is similar to having corruption occur and not knowing the basics behind DBCC CHECKDB, backups, and restores. You won’t be keeping people on the dance floor for long, or have a successful DBA career.
If you don’t know the basics, you will rely on the software and have no other options if and when the software doesn’t automatically work for you.
I Love Automation; I Have Loved It for Years!
I love automation because it allows me to automate tedious tasks and focus on the ones I love and that provide value. For now, I will talk about automation in general, and a bit later I will focus on some new features in SQL Server that help automate things.
I have been using automation as a tool in my toolbox for almost my whole DBA career. I have given talks on getting started with PowerShell for DBAs; my first PASS member session was on utilizing Policy-Based Management to automate basic health checks. You can even use third-party tools or custom scripts like this, this, this, this, or this to help with some automation. Just make sure they work for you and your environment. I have seen a nice automated SQL Server installer built a very long time ago that used JavaScript before JavaScript was cool. I bet it actually started with JScript (insert painful JScript vs. JavaScript memories from my web development days).
The language used doesn’t matter; what matters is what the process does.
At the same time, I have seen a software service provider’s application that, in my opinion, didn’t monitor transactional log backups correctly. Yikes. I mention this not to scare you, but to remind you that you need to know the basics to know whether an automated process works for your needs. If you don’t know what the automated process is supposed to do, how do you know it is doing it correctly? Using the wrong automation tool, or using the tools incorrectly, could be very hurtful to your career. That looks a lot like the sync button for DJing to me. Automation tools and scripts also evolve and change, so this isn’t a ‘set it and forget it’ thing either, in my humble opinion.
Automation has been a tool in the toolbag of good data professionals for quite a while; it is not a new thing, and they know how to leverage it as a great tool instead of seeing it as a new tool that replaces their jobs. BIML is a great automation tool for Business Intelligence (BI) developers. Does BIML automate good BI developers out of their jobs? Nope. Does it help them automate so they can focus more time on high-value tasks and get more things done? Yep, it sure does.
If automation is new to you as a DBA, or maybe you are just starting out as a DBA, I suggest checking out some well-proven open source tools in a non-production environment. When you test out automation, note that several solutions include options to script out the tasks without executing them. This can be very helpful for learning (just like I used the script feature in SSMS to learn a lot about the T-SQL behind the GUI). If you are looking for a place to start, I highly recommend dbatools.io. It’s amazing to look at all the things one could do with this single tool.
Can Self-Tuning Databases Completely Automate DBA Tuning?
Performance tuning is one of my favorite parts of being a DBA; therefore, there is no way I would skip talking about some of the great enhancements that allow SQL Server to tune your environment, both in Azure SQL Database and in the box product. It’s purely amazing to see some of the new cutting-edge tuning features that, in my opinion, are game changers that will help Microsoft further separate itself from all the other database management systems on the market.
Don’t hold your breath while you try to read about all the great performance improvements provided just since SQL Server 2014. There are cardinality estimator changes, automatic tuning for adding and removing indexes, adaptive plan changes, Query Store, resumable index rebuilds, heck, even good old Database Tuning Advisor gets better, and, wait for it… Automatic Tuning. Don’t take my word for it that these features will be amazing. Go check out Lance Tidwell’s PASS Summit talk this November about some of these new features. Actually, take a second to read his abstract. Notice the following: “We will also look at some of the pitfalls and potential issues that can arise from these new features.” I find it interesting that it looks like Automatic Tuning and the other features might not safely automate all your tuning needs.
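To make one of these concrete, here is what turning on SQL Server 2017’s automatic plan correction looks like, as a minimal sketch you would run against the database you want tuned:

[code language="sql"]
-- SQL Server 2017 automatic plan correction: when a plan regression is
-- detected, the last known good plan is forced automatically.
-- Requires Query Store to be enabled on the database.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
[/code]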
I don’t foresee Self-Tuning Databases being able to self-tune everything.
A friend of mine, Joey D’Antoni, recently wrote a great blog post on self-tuning databases. Consider this statement from his post: “I’m sure this would take years and many versions to come into effect.” Let’s fast forward and assume this is fully in effect, including multiple features outside of execution plan changes. I will throw in an example: artificial intelligence that could find several common, known T-SQL anti-patterns and rework the code for all of them so the optimizer-friendly version executes.
Let’s take a look at why I do not foresee automated tuning fully replacing Performance Tuning DBAs.
The way people implement business processes with technology will prevent automated tuning from fixing everything in your environment. It could be hardware, database design, bad coding, anti-patterns in code, misuse of ORMs, ORMs struggling with your data model, and more. My point is, I don’t see performance tuning becoming so completely automated that performance tuning skills will be obsolete. I am not alone; another good friend of mine, Grant Fritchey, writes, “Finally, despite the fact of adaptive plans and automated regressions, you’re still writing code that NO amount of automation can fix.”
Even though it is a fantastic time to leverage cloud services, not all companies are going to put all of their critical data there. Many are still using SQL Server 2005. Some still work with mainframes. Using the same logic, shouldn’t those DBAs have been automated away years ago?
We have helped clients with migrations to multiple cloud providers and even helped clients spin up their own cloud environments that they offer as a service, while also providing them Remote DBA Services.
I know what you’re thinking. Why do I need DBA help if I am in the cloud?
If someone is using a managed service for their databases in a platform as a service (PaaS) model like Amazon RDS or Azure SQL Database, why would they need a DBA to manage the databases? Isn’t the managed service managing them for you? Well, it’s simple: your needs might not be completely aligned with the service level agreements that are offered. Take a second and read the Amazon RDS SLA and then the Azure SQL Database SLA. Keep in mind, I am not telling you not to leverage these services; I just want you to know what is covered and what gaps DBAs can still fill. If you also take a look at the Amazon RDS Best Practices, you will see several things that can keep DBAs busy.
Did you notice any mention of your data in the SLAs?
Is anything mentioned in writing about what will happen when stack dumps or data corruption occur? Did you see anything written on how you get notified if corruption happens, what actions will be taken on your behalf that might cause data loss, or whether you should make business decisions and take action yourself?
Think this could not happen to you? I ask because while performing our health check for a new client on a non-Microsoft cloud PaaS provider, we detected stack dumps… and this was the first time our client had been notified. Yikes!
The DBA Profession Is Not Going Away
To wrap this up, I don’t see the DBA profession going away. The DBA role has pivoted and evolved constantly over the years, and I don’t see that changing. It’s one of the many reasons why I am passionate about helping clients solve their business problems with SQL Server. While some think machines are close to automating away your job as a DBA, I would hate for new or experienced DBAs to see written content like that and assume they need to change their careers. Not only do I respectfully disagree, I do not see this happening, and I am not alone.
When you want to focus on growing your skills, I see several benefits to investing time both inside and outside of the DBA role.
I think automation will enhance good DBAs and reduce the time spent on repeatable, trivial tasks, allowing DBAs to move faster and provide more value. I don’t see good DBAs being replaced by a handful of PowerShell scripts. To me, this is the DBA version of the sync button that allows turntablists to spend less time preparing a mix and more time providing a better experience on the dance floor.
Last week we covered five reasons why log shipping should be used. I got a great question that I thought should be answered here in a short blog post: “How do you configure transactional log shipping to include compression when taking the transactional log backups?”
The Missing Question
First, a better question is, “Should we be compressing our SQL Server backups?” We will address log shipping in a second, but first I want to focus on compressing backups in general. The answer is “it depends” (a typical DBA response). There is CPU overhead for utilizing compression; we will not focus too much on that here, as you can enable Resource Governor to limit CPU usage for your backups.
Today, we will focus on capacity and the potentially misleading storage savings that come from enabling backup compression. In a traditional DBA role, you might only have visibility into your server, its drives, and the size of the backups. Taking a compressed backup therefore likely shows you less storage being used for the backup file compared to other backups of the same database, and reducing the storage space consumed by backups is typically the goal of backup compression.
Another question you should be asking yourself is, “What storage system is being used for your SQL Server backups?” For example, a storage area network (SAN) might have its own compression, and native SQL Server backup compression might reduce the effectiveness of the SAN’s compression, which could cause more raw storage to be used. Therefore, there really isn’t a silver bullet that says to always use SQL Server backup compression in every environment. You need to understand whether there is any compression built into the storage used for your backups, and understand how backup compression impacts that storage system, before deciding to utilize backup compression.
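One way to see what native compression is actually buying you is to compare the native and compressed sizes recorded in your backup history. A hedged example (your msdb retention and backup mix will shape the results):

[code language="sql"]
-- Compare native vs. compressed sizes of recent backups from msdb history.
SELECT TOP (10)
       database_name,
       backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
[/code]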
Compress Log Shipping Backup
Okay, now that we have that disclaimer about a common real-world mistake out of the way, here is how you enable backup compression for transactional log shipping.
You can access Log Shipping Settings from Database Properties
Log Shipping Backup Settings on Primary
Here are Log Shipping backup compression options.
Enabling Compression with T-SQL Backups
Enabling backup compression with T-SQL is as simple as just adding the compression option. The following is a common example of backing up a database with compression.
[code language="sql"]
BACKUP DATABASE [LSDemo]
TO DISK = N'c:\somewhere\LSDemo.bak'
WITH COMPRESSION;
[/code]
Enable Compression by Default
If all of your backups for an instance of SQL Server go to storage where utilizing native SQL Server compression provides improvement, then you can consider enabling compression by default. All backups taken on that instance would then use backup compression unless another option is given in the T-SQL backup command.
[code language="sql"]
EXEC sys.sp_configure N'backup compression default', N'1';
GO
RECONFIGURE WITH OVERRIDE;
GO
[/code]
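If you want to confirm the instance-level default afterward, you can check sys.configurations as a quick sanity check (a value_in_use of 1 means compression is on by default):

[code language="sql"]
-- Verify the instance-wide backup compression default.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'backup compression default';
[/code]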
SQL Server comes with several options to keep your data highly available. Today we are going to cover five reasons why transactional log shipping should be one of the best tools in your SQL Server DBA tool belt.
Validates Transactional Log Backups
Your last line of defense in any SQL Server disaster plan is recovering backups. To be able to recover to the point of failure, you have to leverage transactional log backups while using the full or bulk-logged recovery model. Every transactional log backup has to be restored one by one, in sequence, after your last full and differential backups to recover to the point of failure. Wouldn’t it be nice if you could automate restoring every log backup so you knew you had good backups? Transactional log shipping is the easiest way to complete this task. It also gives you a copy you can easily bring online without having to restore all of your backups.
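On the secondary, you can confirm those restores are actually happening by checking the log shipping monitor tables. A hedged example:

[code language="sql"]
-- Run on the secondary/monitor server: confirm log backups are being
-- copied and restored by log shipping.
SELECT secondary_database,
       last_copied_file,
       last_copied_date,
       last_restored_file,
       last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
[/code]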
Secondary Copy of Transactional Log Backups
We already mentioned that the last line of defense in any disaster recovery plan is restoring backups. Wouldn’t it be nice if you had copies of your transactional log backups on multiple servers? Maybe even in multiple data centers? Well, transactional log shipping helps automate this as well. The second step in transactional log shipping is copying your transactional log backups. Now you just have to make sure your full and differential backups are copied.
Recover Data Quickly for Very Large Tables
Have you ever forgotten to add or highlight the WHERE condition while running a delete or update statement? If you haven’t, do not be surprised if it happens in the future, whether by you or someone else in your company. This can be an extremely painful recovery process if it happens against tables with terabytes of data. With all the other SQL Server high availability features, like mirroring, availability groups, and failover cluster instances, the data change is replicated almost instantly, so you can’t use those solutions to recover. You now have bad data that is highly available.
With transactional log shipping, you can delay the restores, so if you discover data changed by mistake, you can disable the SQL Agent jobs and pull the missing data in parallel with implementing your plan to recover the rest of the data from the last restore. This can bring most of your data online quickly while you are restoring a very large backup. The best part is that you don’t have to recover your log shipping copy. You can put it in standby mode and read data while it’s still in a state where you can continue to restore transactional log backups (more on this below).
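As a hedged sketch of setting that delay (run on the secondary; the database name and four-hour delay are illustrative assumptions), you can use the log shipping stored procedures:

[code language="sql"]
-- Delay restores on the secondary by 240 minutes so mistakes can be
-- caught before they are replayed. The database name and delay are
-- assumptions for illustration.
EXEC msdb.dbo.sp_change_log_shipping_secondary_database
     @secondary_database = N'LSDemo',
     @restore_delay = 240;
[/code]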
Poor Person’s Reporting Copy
Wouldn’t it be nice to have a readable copy of your database that you could use for reporting, even if you only have Web Edition? Transactional log shipping gives you this ability, even though you might not know it exists. When you restore the transactional log backups with transactional log shipping, you have two options. The transactional log backups can be restored in standby mode, which allows you to read data in between transactional log backup restores. You can also use the default, which is no recovery; this doesn’t allow any access to read the data. That is just like your mirror copy in database mirroring (without a snapshot copy on Enterprise Edition).
There is one big catch with using standby as your recovery option: you have to specify whether you want to delay restores if an active session exists, or choose to terminate the connections to force the transactional log backup restore.
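Here is a minimal sketch of what a standby restore looks like under the hood (log shipping runs this for you; the paths and names are illustrative assumptions):

[code language="sql"]
-- Restore a log backup in standby mode so the database stays readable
-- between restores. The undo file holds uncommitted transactions so
-- further log restores can continue. Paths and names are illustrative.
RESTORE LOG [LSDemo]
FROM DISK = N'c:\somewhere\LSDemo_log.trn'
WITH STANDBY = N'c:\somewhere\LSDemo_undo.dat';
[/code]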
Reduce Downtime for Migrations and Upgrades
When you migrate a database, or upgrade using the side-by-side option, you have to have another copy of the database. Ideally, the source and destination would be identical at the point of implementing the migration. With very large databases, backup and restore, or detach and attach, could require a very large downtime, so they are not good options unless you have a very large maintenance window. Seeding the data so that only a small amount of changed data needs to be synced is ideal. Log shipping is an excellent way to do this, especially if you have multiple copies that need to be synced for implementing availability groups. Here is an example of how we made a 60TB database highly available for an availability group implementation with zero downtime.
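At cutover time, the final step is restoring the tail of the log with recovery. A minimal sketch (names and paths are illustrative assumptions):

[code language="sql"]
-- Final cutover: restore the tail-of-the-log backup and bring the
-- secondary copy online. Names and paths are illustrative.
RESTORE LOG [LSDemo]
FROM DISK = N'c:\somewhere\LSDemo_tail.trn'
WITH RECOVERY;
[/code]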
My New Log Shipping Feature Request
I know, you are already thinking that log shipping has been around forever and doesn’t need any new features, but I have one. In SQL Server 2016, direct seeding was added to streamline how the initial sync occurs for availability groups. I would like to see this same feature extended to log shipping so we have an easy, automated option to perform the initial sync for transactional log shipping.
Curious about the basics of SQL Server high availability? Please join our founder and MVP, John Sterrett, as he presents SQL Server High Availability 101 on Tuesday, July 18th, 2017 at the North Austin SQL Server User Group.
[Update – Sept 5th]
You can now watch the full presentation online. You can also see some reference links below. If you have any questions, feel free to contact us!
Here is a quick synopsis of what will be discussed:
Have you ever wondered how companies keep their databases online through unplanned disasters? Have you wondered which high availability features might be best for your shop? In this session, we will take a quick look at log shipping, database mirroring, transactional replication, failover cluster instances, and availability groups. John will identify pros and cons for each feature and share some tips from the field.
You are also invited to meet afterward for networking and cocktails! It is a great opportunity to learn, connect with fellow SQL Server professionals, and build solid relationships for your future. We cannot wait to see you there!
Procure SQL is a proud sponsor of PASS SQL Saturday Houston on June 17, 2017! This is a free event for you, your employees, your teammates, or anyone interested in learning about SQL Server. Make sure you sign up and come on down.
Please stop by the booth and say hello to me. I would love to meet you! Mention that you read this blog and I will have a gift for you!
I will have a prize wheel to spin for cool prizes, one of which is a hand spinner! They are all the rage. Have you ever played with one? They are great for calming me during heavy database work!
We have the spinners…
Our founder, John Sterrett, will be speaking at 4pm. Be sure to drop in and say hello before or after the session!
His session is titled Table Partitioning: the Secret Weapon for Your Big Data Problems.
"Is big data starting to slow you down? Is data growth putting your maintenance tasks in danger of not meeting your service level agreements?
Do you wish you could archive old data with minimal impact to your tables during the archive process?
Do you wish you could eliminate the majority of the data in your tables when you query them? If so, it's time you consider implementing
table partitioning to help with your general performance and to reduce your maintenance window for completing your maintenance tasks.
In this session, you will learn how table partitioning works and when it should be implemented. You will be able to understand how to
initially configure partition functions and partition schemes to have a rock solid partitioned table. You learn how to implement new features
added in SQL Server 2014 to improve table partitioning. Finally, you will learn how to implement an automated sliding window to maintain
your partitioned tables for ensuring optimal performance."