Corruption. We know it is everywhere. It is surely a hot-button issue in the news. If you haven’t given much thought to database integrity, now is the time to sit up and pay attention. Corruption can occur at any time. Most of the time database corruption is caused by a hardware issue. No matter the reason, being proactive on database integrity will ensure your spot as a hero DBA in the face of corruption.
“A single lie destroys a whole reputation of integrity” – Baltasar Gracian
Integrity is one of those words people throw around quite often these days. The definition of ‘integrity’ is the quality of being honest and having strong moral principles. How can data have strong moral principles? Does ‘data integrity’ mean something different? Yes: data integrity refers to the accuracy and consistency of stored data.
Have Backups, Will Travel
When was the last time an integrity check was run on your database? If you are sitting there scratching your head trying to find the answer to that question, you may have serious issues with your database and not know it.
“But, I have solid backup plans in place, so this means I am okay. Right?”
While having a solid backup and recovery plan in place is an absolute must, you may simply have solid backups of corrupt data. Regular integrity checks test the allocation and structural integrity of the objects in the database. A check can cover a single database, multiple databases (though it does not verify consistency between one database and another), and even individual indexes. Integrity checks are vital to the health of your database and can be automated. Run them at least as often as your full backups.
As discussed in an earlier blog, Validating SQL Server Backups, your data validation needs to take place BEFORE the backups are taken. A best practice is to run DBCC CHECKDB against your data to check for potential corruption. Running CHECKDB regularly against your production databases will detect corruption quickly, giving you a better chance of recovering valid data from a backup or repairing the corruption. CHECKDB checks the logical and physical integrity of the database by running these three primary checks:
CHECKALLOC – checks the consistency of disk space allocation structures in the database;
CHECKTABLE – checks the pages and structures of the table or indexed view; and
CHECKCATALOG – checks catalog consistency.
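Putting the above together, a minimal integrity check might look like the following sketch (the database name is a placeholder; substitute your own):

```sql
-- Run a full integrity check on one database.
-- NO_INFOMSGS suppresses informational chatter so only errors surface;
-- ALL_ERRORMSGS reports every error found, not just the first batch per object.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Scheduled as a SQL Agent job step, this runs automatically alongside your full backups.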
Where to Look
Wondering if you have missing integrity checks or if they have ever been performed on your database? The following T-SQL script will show when/if integrity checks were performed on your databases. (Bonus) Running this script regularly will help track down missing integrity checks.
If you are looking for the last date the DBCC checks ran, the T-SQL script to use is as follows:
IF OBJECT_ID('tempdb..#DBCCs') IS NOT NULL
DROP TABLE #DBCCs;
CREATE TABLE #DBCCs
(
ID INT IDENTITY(1, 1)
PRIMARY KEY ,
ParentObject VARCHAR(255) ,
Object VARCHAR(255) ,
Field VARCHAR(255) ,
Value VARCHAR(255) ,
DbName NVARCHAR(128) NULL
)
/*Check for the last good DBCC CHECKDB date */
BEGIN
EXEC sp_MSforeachdb N'USE [?];
INSERT #DBCCs
(ParentObject,
Object,
Field,
Value)
EXEC (''DBCC DBInfo() With TableResults, NO_INFOMSGS'');
UPDATE #DBCCs SET DbName = N''?'' WHERE DbName IS NULL;';
WITH DB2
AS ( SELECT DISTINCT
Field ,
Value ,
DbName
FROM #DBCCs
WHERE Field = 'dbi_dbccLastKnownGood'
)
SELECT @@servername AS Instance ,
DB2.DbName AS DatabaseName ,
CONVERT(DATETIME, DB2.Value, 121) AS DateOfIntegrityCheck
FROM DB2
WHERE DB2.DbName NOT IN ( 'tempdb' )
END
The result will look similar to this. However, let’s hope your results show a date closer to today’s date than my own! If you see that your databases do not have integrity checks in place, check your backup and recovery plans and double check your agent jobs to see if perhaps the checks were scheduled but were turned off.
Recommendations
It is recommended that DBCC CHECKDB be run against all production databases on a regular schedule. The best practice is to have this automated and scheduled as a SQL Agent job to run as a regular part of maintenance. More specifically, run the integrity check directly before purging any full backups. Doing so will ensure that corruption is detected quickly, which will give you a much better chance to recover from your backup or to repair the corruption.
Remember, SQL Server is very forgiving and will back up a corrupt database! Corrupt databases are recoverable, but they might have data pages that are totally worthless!
Angela Tidwell, 2018-08-06: “Does Your Database Have any Integrity?”
Full disclosure time: Bulk Logged Recovery Model is quite confusing to me. And as it seems, to many others. I wrote a bit about it in SQL Server Recovery Models and decided that it was so complex, I really wanted to learn more and to explore what works and what doesn’t. Let’s take a deep dive into bulk logged recovery!
Why would you choose Bulk Logged Recovery?
Switching from full recovery to bulk logged recovery does have its perks when you have a very large amount of data to insert. Most notably in a data warehouse setting, switching to bulk logged recovery to perform bulk inserts makes perfect sense, as you are dealing with very large amounts of data being updated at one time. Likewise, when rebuilding an index, switching to bulk logged recovery can improve performance while performing operations on large amounts of data at once.
Are there better ways to insert a large amount of data at once?
Bulk logged recovery uses minimal logging for bulk-logged operations, which reduces log space usage. I must add a caveat here: minimal logging makes the operation faster and reduces usage in the log file, but it results in a very large log backup.
“Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large.” Reference Myths & Truths
However, there is a risk of data loss for the bulk operations because these operations prevent capturing changes on a transactional basis. A point in time (PIT) recovery while using bulk logged recovery is not possible because the minimally logged operations cannot be restored to a point mid-operation. This can be an issue. So, if you have a bulk operation to handle but want to ensure point in time restore of each transaction in that operation, what is an alternative solution? It is important to note that you can indeed recover a transaction log containing bulk logged operations, just not to a particular point in time within them. Instead, take a transaction log backup as soon as the bulk operation finishes to regain PIT recovery.
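A sketch of that approach (the database name and file path here are examples, not from the walkthrough later in this post):

```sql
-- While in BULK_LOGGED, run the bulk operation, then immediately back up
-- the log. Everything after this backup is point-in-time recoverable again.
BACKUP LOG [YourDatabase]
TO DISK = 'C:\Backups\YourDatabase_AfterBulkOp.trn';
```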
You can still perform bulk operations in the full recovery model; it just means they will be fully logged and you will not see the performance gains from minimal logging. It is the price you have to pay: you sacrifice performance for PIT restore ability over the transactions within the bulk operation. Ultimately your decision will have to be based on weighing what your company demands for I/O and RPO/RTO (Recovery Point Objective/Recovery Time Objective). Do you know what your company’s RPO/RTO plans entail? Now is a good time to find out!
Feeling uneasy? Wondering if there are other ways to process bulk operations?
There are different methods one can utilize for optimizing bulk operations. These methods include using minimal logging, batching, disabling triggers and constraints, and many others that can be found here.
How best to ensure limited data loss using Bulk Logged Recovery.
So you have decided to use bulk logged recovery and want to make sure you are set up for success; there are a few things to keep in mind. It is recommended that you perform bulk inserts using bulk logged recovery when there is the least amount of activity on your database. Also take into consideration how difficult or easy it would be to recreate the data if there is a failure during the bulk insert. There is no PIT restore within the bulk operation under bulk logged recovery; if the operation is interrupted at any point, it must be performed again in its entirety.
Still want to proceed?
First, before you switch from full recovery, take an extra log backup. If all goes badly, at least you will be able to get your database back to the point before you switched recovery models. This is highly recommended! If not, this is what we call an RGE (resume generating event).
Let’s walk through the process of getting ready and switching recovery models. Our first step in this exercise is to create a table. We then go on to taking a log backup, inserting data manually, taking a log backup, and then on to switching to bulk logged recovery.
This is a great visual of what we will be doing.
--Step 1--
Use BLRDB
GO
IF OBJECT_ID('dbo.BulkDataTest', 'U') IS NOT NULL
DROP TABLE dbo.BulkDataTest;
CREATE TABLE dbo.BulkDataTest
(Price money NULL,
ProductID int PRIMARY KEY NOT NULL,
ProductName varchar (25) NOT NULL,
ProductDescription text NULL)
GO
BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
-- Step 2--
Insert into dbo.BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('456', '456123', 'HeroBike', 'Red Bike with Hero Cape Handles');
-- Step 3 --
BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'
-- Step 4 --
-- Switch to Bulk Logged Recovery Model --
USE [master]
GO
ALTER DATABASE [BLRDB] SET RECOVERY BULK_LOGGED WITH NO_WAIT
GO
-- Verify recovery model: this confirms you are in the desired recovery model --
SELECT name, recovery_model_desc FROM sys.databases;
Our next steps will be to insert our bulk data, insert manual data, take log backups, switch back to Full Recovery, and take an additional log backup.
-- Step 5 --
-- Commit both transactions below at the same time--
USE [BLRDB]
GO
BULK INSERT BulkDataTest
FROM 'C:\DBA\TestDocs\demo_bulk_insert_26.csv'
With (FIELDTERMINATOR = ',' ,
ROWTERMINATOR = '\n' ,
ROWS_PER_BATCH = 100000,
TABLOCK
);
GO
-- Step 6 --
--Take Log Backup--
BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'
-- Step 7 --
--Insert more rows manually--
INSERT INTO dbo.BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('56', '111117', 'TheCheap', 'One of the cheapest bikes ever made'),
('58' , '111118', 'NewerModel', 'This one is for beginners'),
('591' , '111119', 'ABetterOne', 'Okay this one is good') ;
-- Step 8 --
-- Switch back to Full Recovery Mode--
USE [master]
GO
ALTER DATABASE [BLRDB] SET RECOVERY FULL WITH NO_WAIT
GO
Use BLRDB
GO
-- Step 9 --
--Insert more rows manually--
INSERT INTO dbo.BulkDataTest
(Price, ProductID, ProductName, ProductDescription)
VALUES ('36', '111120', 'BoyBike', 'This is a bike for tall 8yo'),
('136', '111121', 'ManBike', 'This is a bike for tall men'),
('236', '111122', 'ShortBike', 'This is a bike for under 5foot');
-- Step 10 --
--Take Log Backup--
BACKUP LOG [BLRDB]
TO DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0930.bak'
Ready for a challenge?
Now let’s simulate a dropped database and walk through the restore! The following steps will let you answer the questions below.
Question 1: Restore backup to step 6. What is missing? Do you have the single row inserts? Is the data from the bulk insert there?
--Drop Database--
USE Master
GO
DROP DATABASE [BLRDB]
--Restore Full Backup --
RESTORE DATABASE BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Full_02232018.bak'
WITH NORECOVERY;
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
WITH NORECOVERY;
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'
WITH NORECOVERY;
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'
WITH NORECOVERY;
RESTORE DATABASE BLRDB
WITH RECOVERY;
use [BLRDB]
go
SELECT *
FROM dbo.BulkDataTest
--For Question 1, Restore to Step 6 --
USE Master
GO
RESTORE DATABASE BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Full_02232018.bak'
WITH NORECOVERY;
GO
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
WITH NORECOVERY;
GO
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0930.bak'
WITH NORECOVERY;
GO
GO
RESTORE DATABASE BLRDB WITH RECOVERY;
GO
use [BLRDB]
go
SELECT *
FROM dbo.BulkDataTest
Question 2: Restore to backup at step 10. What is missing? Do you have everything?
-- For Question 2, Restore Step 10 --
-- Drop DB--
USE Master
GO
DROP DATABASE [BLRDB]
GO
USE Master
GO
RESTORE DATABASE BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Full_02232018.bak'
WITH NORECOVERY;
GO
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0800.bak'
WITH NORECOVERY;
GO
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0815.bak'
WITH NORECOVERY;
GO
RESTORE LOG BLRDB
FROM DISK ='C:\DBA\Backups\BLRDB_Log_02232018_0900.bak'
WITH NORECOVERY;
GO
RESTORE LOG BLRDB
FROM DISK = 'C:\DBA\Backups\BLRDB_Log_02232018_0930.bak'
WITH RECOVERY;
USE BLRDB
GO
SELECT *
FROM dbo.BulkDataTest
Swimming back up.
We have taken the plunge and now it is time to swim back to the surface and discuss what we have found. Working through this exercise and answering these questions should show you how bulk logged recovery model works, why it is important to switch back to full recovery, and most importantly, why full and log backups are a must!
What are your answers? Your answers should show that if you are only able to restore to step 6 (the point at which you took a log backup but had not yet switched back to full recovery and taken another log backup), all data operations performed since your bulk operation have to be recreated! The second answer should show you that it is imperative to take a log backup before switching to bulk logged recovery, take a log backup after your bulk insert, and take another log backup after reverting back to full recovery. If you follow those steps, your answer shows that all of your data operations were in fact logged and can be restored up to the last log backup without data loss.
Recently I have heard a lot of people discussing SQL Server System Databases. The topic of system databases seems to be a deceptively basic one. But how many people truly take the time to understand what system databases are and what purpose they serve? Come along with me and let’s explore system databases.
What are System Databases and what do they do?
System Databases are needed for your SQL Server to operate. These include Master, Model, MSDB, Resource, and TempDB. For Azure SQL Database, only Master and TempDB apply.
Master – The Master Database records all the system-level information for an instance of SQL Server. This information includes logon accounts, linked servers, and system configuration settings. The Master also records the existence of all other databases and the location of those files, and records the initialization information for SQL Server. This means that SQL Server CANNOT START if the Master database is unavailable. Think of this like the master key to your SQL Server door.
Model – The Model Database is used as the template for all databases created on the instance. Modifications can be made to the Model DB that will be applied to all databases created after the Model DB has been altered. These changes include database size, collation, and recovery model. A full list of options that can/cannot be modified on a Model DB for SQL Server 2016 is available here. The list for SQL Server 2014 Model DB options is located here. And for SQL Server 2012 the options are here.
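As an illustration, a hypothetical tweak to Model so that every new database inherits the Full recovery model and a larger initial data file might look like this sketch (the 256MB size is an arbitrary example):

```sql
-- Changes to model apply only to databases created afterward.
ALTER DATABASE model SET RECOVERY FULL;
-- 'modeldev' is the default logical name of model's data file.
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB);
```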
MSDB – The MSDB database is used by SQL Server Agent for scheduling alerts and jobs. It is also used by Service Broker, Database Mail, SSIS, data collector, and policy based management. SQL Server maintains a complete history of all online backups and restores within the tables in MSDB. This history includes the name of the person or program that performed the backup, the time of the backup, and the drives of files where the backup is stored. SQL Server Management Studio then uses this information to propose a plan for restoring a database and applying any transaction log backups.
Resource – The Resource database is a read-only database that contains all the system objects that are included with SQL Server. The Resource database does not contain user data or metadata. Since it is a read-only database you will not see it listed on your instance as the other databases in the photo above.
TempDB – The TempDB Database is a database that is available to all users connected to the instance of SQL Server. It is used to hold objects that are created by users such as temporary tables and indexes, temporary stored procedures, table variables, and cursors. It also stores objects that are created internally such as work tables, work files, and sort results for operations such as creating or rebuilding indexes. Think of TempDB like the “junk drawer” in your home. Each item is needed at specific times, then is thrown into the drawer to sit a while. More items are thrown in the drawer. Everyone throws items in the drawer. Eventually the drawer becomes too full and it begins to spill out. No one ever wants to clean out the junk drawer, and eventually you need a bigger drawer.
Restrictions – Despite all of the operations that can be performed on TempDB, the following operations CANNOT be performed:
Adding filegroups.
Backing up or restoring the database.
Changing collation. The default collation is the server collation.
Creating a database snapshot.
Dropping the database.
Dropping the guest user from the database.
Enabling change data capture.
Participating in database mirroring.
Removing the primary filegroup, primary data file, or log file.
Renaming the database or primary filegroup.
Running DBCC CHECKALLOC.
Running DBCC CHECKCATALOG.
Setting the database to OFFLINE.
Setting the database or primary filegroup to READ_ONLY.
You must backup your TempDB! True or False?
In a previous blog, I discussed that the Most Important Role of a SQL Server DBA is the ability to understand and perform backups and recovery. I went on to discuss backups in A Beginner’s Guide to SQL Server Backups as well as Recovery Models. Despite the importance of backups and recovery, the only database that cannot be backed up or restored is TempDB! Why can it not be backed up or restored? TempDB is recreated each time the server is restarted, so any temporary objects like tables, indexes, etc., are cleared automatically. As seen here, neither backup nor restore is even an option!
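Because TempDB is rebuilt at every service start, its creation date doubles as a quick way to see when the instance last restarted. A small illustrative query:

```sql
-- tempdb's create_date is reset each time SQL Server starts,
-- so it effectively reports the last instance restart time.
SELECT create_date AS last_instance_restart
FROM sys.databases
WHERE name = 'tempdb';
```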
Should System Databases be backed up?
Before I answer this right off, ask yourself these questions: “Do I want to lose access to all other databases on the instance? What would happen if I lose the model my company demands for the specific way every new database must be created? Would anyone notice if I had to restore after a disaster and no one had correct passwords to the database?” That shiver that just ran up your spine is your answer. Absolutely YES, the system databases (Master, MSDB, and Model) should have backups!
It is recommended that the Master be backed up as often as necessary to protect the data: a weekly backup with additional backups after substantial updates is highly recommended. If for some reason the Master becomes unusable, restoring from the backup is the best way to get up and running. Go here for information on Restoring the Master. If you do not have a valid backup of the Master, rebuilding the Master is the only option. You can click here to find more information about what it takes to Rebuild System Databases.
Best practices recommend creating full backups of the Model Database, and doing so only when necessary for your business needs. It is a small database that rarely sees changes; however, it is important to make sure it is backed up, especially immediately after customizing its database options. Microsoft also recommends performing backups of the MSDB database whenever it is updated.
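A minimal sketch of backing up the three system databases (the disk paths are placeholders; point them at your backup location):

```sql
-- Full backups of the backup-able system databases.
BACKUP DATABASE master TO DISK = 'C:\Backups\master.bak';
BACKUP DATABASE model  TO DISK = 'C:\Backups\model.bak';
BACKUP DATABASE msdb   TO DISK = 'C:\Backups\msdb.bak';
```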
Feeling like this might be too much trouble? As with any other backup, these too can be automated by using a SQL Agent job! There is absolutely no reason NOT to back up your system databases as often as you backup your regular databases. If you feel otherwise, might I suggest you keep an updated resume close at hand.
Which recovery model should be used?
This brings us down to recovery models. By default the Master and MSDB are set to the Simple recovery model, and the Model is user configurable; however, best practice recommends setting MSDB to the Full recovery model, especially if the backup and restore history tables are used.
*Note that if you change the recovery model to Full, transaction log backups will need to be performed as well. You don’t want your Master or MSDB logs to become full and risk losing all your data!
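Putting those two recommendations together, a sketch (the file path is an example):

```sql
-- Switch MSDB to the Full recovery model...
ALTER DATABASE msdb SET RECOVERY FULL;
-- ...and from then on take regular log backups so the log file does not fill.
BACKUP LOG msdb TO DISK = 'C:\Backups\msdb_log.trn';
```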
Pop quiz: How often should you back up your TempDB?
(a) once a month
(b) weekly
(c) daily
(d) yearly on February 30th
Answer: Trick question! TempDB cannot be backed up.
We are back this year! Not only are we sponsoring SQL Saturday Houston, we are also speaking! All three of us are presenting! Go here to see the full schedule.
What is SQL Saturday?
SQL Saturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence, and Analytics. SQL Saturday Houston will be held on June 23, 2018 at San Jacinto College – South Campus, 13735 Beamer Road, Houston, Texas 77089. Check-in and breakfast starts at 7:30am. The first sessions begin at 8:30 am. There are sessions for beginners, intermediate, and advanced levels. Topics covered at this SQL Saturday are:
Powershell
Application & Database Development
BI Platform Architecture, Development & Administration
Cloud Application Development & Deployment
Enterprise Database Administration & Deployment
Professional Development
Strategy & Architecture
Remember, this is a FREE event, but only a few spots remain! Don’t wait, click here to register!
Where will we be?
We will each be at the Procure SQL booth with smiling faces, fun giveaways, and answers to your SQL Server questions! Please stop by and say hello. If not at the booth, you can find us attending a session or giving one of our own!
Angela will start out the day at 9:45 am in room 117. She will be presenting her professional development session “Becoming the MVP: Soft Skills for the Hard Market.” In this interactive, round-table discussion, Angela explores how soft skills are important at all levels of a person’s career. The importance of soft skills in the job market, specific skills, and how to hone them will be top priority. She has been known to give away prizes, so make sure to say hello!
Jay comes in next at 11:00 am in room 149. Jay’s presentation is “Linux for SQL Server” and is a high-level overview of the differences and similarities between Linux and Windows for those who haven’t been exposed or may need a refresher. Don’t be mistaken, even though this session is a high-level overview, it is fantastic for beginners! Jay will introduce the Linux version of Windows commands used on a daily basis for administering SQL Server. Next, he will explore updating Linux, updating SQL Server, moving files between Windows and Linux, and backing up and restoring databases from one system to another. He will round out the session by taking a look at default file locations for SQL Server and what can be moved and how to accomplish that.
John is waking up the afternoon crowd at 1:30 in room 113. He is presenting “Automate the Pain Away with Query Store and Automatic Tuning” which is an intermediate level presentation which explains how execution plans get invalidated and why data skew could be the root cause of seeing different execution plans for the same query. He will further explore options for forcing a query to use a particular execution plan. Finally, he will discuss how this complex problem can be identified and resolved simply using new features in SQL Server 2016 and SQL Server 2017 called Query Store and Automatic Tuning. You won’t want to miss out on that!
What happens after the sessions are done?
Stick around after the last sessions because at 5:00 we all gather together for final remarks and sponsor raffles! We will be giving away a new Super Nintendo Entertainment System Classic Edition! To enter, just drop your raffle ticket in the bucket at our booth.
But wait, there’s more!
The fun doesn’t stop here. We leave from the event to an after party which is being held at Main Event, 1125 Magnolia Ave., Webster, Texas 77598. Party starts at 6! The after parties are a great way to unwind, network, and chat up the speakers and new SQL friends you made during the sessions!
John Sterrett shares his thoughts on successfully transitioning to an entry-level IT job.
Recently, I worked with AITP to share some tips that helped me successfully transition from a Computer Engineering student at the University of Oklahoma to an entry-level information technology professional. Once I completed the interview, I noticed that my advice was more of a secret sauce for being a successful information technology professional than just a how-to guide to getting that first entry-level information technology job. Therefore, I wanted to share it with everyone.
John Sterrett, 2018-06-05: “Four Tips for Entry-Level and Experienced IT Professionals”
In October 2011, after working with SQL Server since 1998, I discovered something called SQL Saturday and PASS. For those unaware, these are local events, set up and run by volunteers. Speakers come in from all parts of the country to share, for free, their knowledge of SQL Server and other topics of interest to data professionals. These events also draw sponsors of various types. Without this symbiosis, the events would not work.
My First SQL Saturday
At this first event, held on the campus of the University of Texas at Austin, I got to meet and learn from some of the big names in the community, referred to as SQL Family. Thomas LaRock (t/b) kicked off the morning with a keynote. Afterwards, I found Tom is a HUGE bacon aficionado. This was my first realization these folks were approachable and not “full of themselves.” Very different from other professions I’ve been around. After the keynote, it was off to learn from the likes of Jim Murphy (t/b), Wes Brown (t), Joe Celko, and many others.
I was hooked. I started traveling the country to attend these events as an attendee, usually at my own expense. Thanks to SQL Saturday, I’ve been to Portland ME, Atlanta, Louisville, Omaha, San Diego, Houston, Salt Lake City, and countless others.
One Road Leads to Many Paths
In October 2013, while at a CodeCamp in Austin, I met John Sterrett (t/b). He had decided it was time for Austin to host a SQL Saturday, the first since October 2011. It was a big hit and was repeated in 2015. Unfortunately, we haven’t been able to find a suitable venue since, but we hope to bring SQL Saturday back in 2019. I tell this part of the story because John is responsible for the next part.
It used to be a standing joke that we saw each other more away from Austin than in Austin. One frequent topic of conversation was, “Hey Jay, you come to these events, why don’t you speak at them?” My answer was always, “I don’t know anything these people don’t already know.” He kept telling me the same thing I had heard other speakers say: At least one person doesn’t know what you know, or you have a different take on something others speak about.
Diving in to Become a Speaker
It took several years of prodding, but I did my first SQL User Group presentation to the Austin community in June 2017. It had been years since I was up in front of a non-work group, but it went well, except for the demos of course. Now I was hooked again; I went to work on my next presentation idea, which I delivered late in 2017. It went well enough that I decided (OK, I was coerced) to submit to a couple of SQL Saturday events, Wheeling and Albuquerque. I was selected to speak at Wheeling but unfortunately had to withdraw from Albuquerque.
Fast forward. OK, not really fast, but I’m getting there. I tried very hard to follow the advice I heard all speakers give: don’t change your presentation at the last minute, have an extra laptop so you can deal with hardware failures, make sure you have plenty of cables, etc. I made it to Wheeling with an extra laptop and redundant cables but still found myself updating the presentation on Saturday morning to include a new item I had seen earlier in the week.
Whew, I did it!
Even with last minute changes, everything went well with my first SQL Saturday presentation. Feedback was helpful even though it was limited. Based on the input, I’ve updated the presentation and been accepted to speak at SQL Saturday events in Los Angeles and Houston. I will also be co-presenting with John Sterrett at SQL Saturday Dallas. I admit to a bit of nervousness as I expect both to be bigger, but I think I’m up to the challenge.
I’m looking forward to preparing new presentations and being a speaker and volunteer at many of the locations I’ve just been attending to this point.
I had the distinct honor of helping John Sterrett (t/b) and Dolph Santorine (t) organize and run SQL Saturday #717 in Wheeling, West Virginia on April 28, 2018. It was a chance meeting at the event that drove home to me why we invest so much of our personal time and effort into organizing, running, and presenting at SQL Saturdays. The short answer as to why we do it: we do all of this to promote free education for all who choose to seek it. That’s right…. FREE. Education is the best gift that can be bestowed upon and received by anyone. I met Kyle; this was his first SQL Saturday, and his tweet sums it all up:
“All you’ve done since you got back from #SQLSaturday is talk about all the classes you were at. Don’t you wanna talk about something else?” #whoops
Each SQL Saturday is organized by individuals or teams of volunteers. We do not get paid for our time. The speakers do not get paid for their time. Speakers are sometimes compensated by their employers when they present at SQL Saturdays, but most of the time these speakers pay the travel and lodging expenses out of their own pockets! The events are funded by sponsors. Thus we are able to give away free education from active industry professionals!
Sponsorship is key.
Like I mentioned, the events are funded solely by sponsorship. The cool thing about sponsoring a SQL Saturday is that not only are you expanding your marketing campaign for a minimal amount of money, you too are giving back to a noble cause. Marian Wright Edelman wrote, “Education is for improving the lives of others and for leaving your community and the world better than you found it.” Being a sponsor allows you the awesome privilege of helping make a difference in someone’s life.
Why we do what we do.
During a chance meeting at SQL Saturday Wheeling, I met a USMC Veteran who is trying to find his way in the tech field. He was referred to the event by a college professor. Much to his surprise, not only did he walk away from the event armed with tons of information and a new-found excitement for learning SQL, he walked away with a lot of new SQL friends and mentors. He was introduced to fellow Marine and Veteran Jim Donahoe (t/b), who then put him in contact with others in the industry who can directly help make an impact on his career! That is at the core of who we are as a community. We lift as we rise.
In the SQL community, we pride ourselves on giving back and helping others. On Twitter we even have an open hashtag (#SQLHelp) where you can ask questions and get responses from experts in the SQL community. If you participate, you will find friends and mentors who will provide information, keep your spirits up, and help you find your way through the weeds. I personally have been blessed through the community, and my mission is to help bless others.
Thoughts on being an organizer.
Helping to organize and run a SQL Saturday is no small task. There are many moving parts, and many small details that can be overlooked or forgotten. I was very thankful to have friends who are also organizers and who helped me along the way. I certainly learned a lot about event planning! More so, I learned a lot about the value of kindness. They provided guidance and tips, and I will do the same for others who want the help. Standing there Saturday morning before the doors opened, I thought to myself, “Here we are. All these months and we are finally here.” And honestly, I was afraid. What if no one showed up? What if the equipment failed? What if I had forgotten something? What if lunch didn’t show?
And you know what happened? Attendees arrived, speakers arrived, everything worked, lunch arrived early, we had plenty of coffee, and everyone had a great time! It was not a huge event. In fact, it was a very small event. Something I have noticed from attending both large and small events is that the quality of education is the same. It is the same because the presenters are all there for the same reason I am: to share their unique vision and experience in hopes that someone in the audience will gain a new perspective and grow their career.
Final thoughts on volunteering.
I personally feel a sense of pride and accomplishment when I can help someone else with no expectation of reward. But there is a reward; there is always a reward! My reward was smiles from speakers and attendees, but mostly, knowing that we touched the lives of all who attended.
For more information on how to attend or volunteer for a SQL Saturday in your location, please visit the SQL Saturday website!
Don’t assume that your backups are solid and let them sit on a shelf. Corrupt backups are recoverable, but worthless. Did we mention you can automate SQL Server backup validation?
There are several methods for validating your backups.
RESTORE – The most effective way to validate that your backups are good is to run a test restore. If your restore is successful, you have a solid backup. Make sure to run test restores of your full, differential, point-in-time, and transaction log backups! Bonus points if you automate refreshing non-production environments this way.
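A test restore can target a copy of the database so production is never touched. A minimal sketch follows; the database name, backup path, and logical file names (`TestDB`, `TestDB_log`) are hypothetical and would need to match your own backup (run `RESTORE FILELISTONLY` against the backup file to find the real logical names):

```sql
-- Hypothetical example: restore the backup under a new name,
-- relocating data and log files so the live TestDB is untouched.
RESTORE DATABASE TestDB_Verify
FROM DISK = 'G:\DBABackups\TestDBFull_MMDDYYYY.bak'
WITH MOVE 'TestDB'     TO 'G:\TestRestores\TestDB_Verify.mdf',
     MOVE 'TestDB_log' TO 'G:\TestRestores\TestDB_Verify_log.ldf',
     RECOVERY;

-- Clean up once the restore (and any follow-up checks) succeed.
DROP DATABASE TestDB_Verify;
```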
Backup with CHECKSUM – It may not be realistic to run regular test restores on every single database; this is where CHECKSUM is your friend. CHECKSUM is a backup option that instructs SQL Server to test each page being backed up against its corresponding checksum, making sure that no corruption has occurred during the read/write process. If a bad checksum is found, the backup will fail. If the backup completes successfully, there are no broken page checksums.
BEWARE though: this does not ensure that the database is corruption free. CHECKSUM only verifies the page checksums present at backup time; it is not a substitute for a full consistency check. (Later in this post we discuss checking data for corruption.) If it seems like too much trouble to add CHECKSUM every time you perform a backup, keep in mind that these backups can be automated as SQL Agent jobs! A sample T-SQL script using CHECKSUM is as follows:
Backup Database TestDB
To Disk='G:\DBABackups\TestDBFull_MMDDYYYY.bak'
With CheckSum;
VERIFY – It is not wise to rely solely on CHECKSUM; a good addition is RESTORE VERIFYONLY. This verifies that the backup header is valid and that the backup file is readable. Note that, much like CHECKSUM, this checks for errors during the read/write process of the backup; it does not verify that the data itself is valid or free of corruption. Despite the name, RESTORE VERIFYONLY does not actually restore the data. VERIFY, too, can be automated to run each time your CHECKSUM-enabled backups run.
CHECKSUM on Restore – Databases backed up using BACKUP WITH CHECKSUM can be additionally verified as part of the restore process. This checks the data pages contained in the backup file against the checksum generated during the backup. Additionally, if available, the individual page checksums are verified as well. If they match, you have a winner. More details on CHECKSUM and BACKUP CHECKSUM are available in the Microsoft documentation.
Restore VerifyOnly
From Disk='G:\DBABackups\TestDBFull_MMDDYYYY.bak'
With CheckSum;
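The same checksum verification can be applied during an actual restore, not just VERIFYONLY. A minimal sketch, with a hypothetical database name and backup path; the restore fails if any page checksum in the backup is bad:

```sql
-- Hypothetical example: re-verify page checksums while restoring
-- a backup that was taken WITH CHECKSUM.
RESTORE DATABASE TestDB
FROM DISK = 'G:\DBABackups\TestDBFull_MMDDYYYY.bak'
WITH CHECKSUM;
```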
Data Validation Prior to Taking Backups
Keep in mind that if your data is corrupt prior to a backup, SQL Server can BACK UP that CORRUPTED DATA. The validation methods mentioned above guard against corruption occurring during the backup itself, not against corrupted data within the backup. For data validation prior to backups being run, it is suggested that DBCC CHECKDB be performed on each database on a regular basis.
DBCC CHECKDB – SQL Server is very forgiving and will usually back up and restore corrupted data. A best practice is to run DBCC CHECKDB on your data to check for potential corruption. Running CHECKDB regularly against your production databases will detect corruption quickly, giving you a better chance of recovering valid data from a backup or repairing the corruption. CHECKDB checks the logical and physical integrity of the database by running these three primary checks*:
CHECKALLOC – checks the consistency of disk space allocation structures in the database;
CHECKTABLE – checks the pages and structures of the table or indexed view; and
CHECKCATALOG – checks catalog consistency.
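Put together, a basic CHECKDB invocation covering all three checks looks like the sketch below; the database name is hypothetical, and NO_INFOMSGS suppresses informational output so that only errors surface:

```sql
-- Full integrity check (includes CHECKALLOC, CHECKTABLE, CHECKCATALOG).
-- NO_INFOMSGS hides informational messages; ALL_ERRORMSGS reports
-- every error per object rather than truncating the list.
DBCC CHECKDB ('TestDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```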
Automate Validation Steps
Corruption can happen at any time; most of the time it is related to a hardware issue. Automating the steps necessary to validate your data and backups will help ensure you have the best practices in place to efficiently recover from catastrophic data loss. Being able to back up and restore is not as important as being able to recover with valid data. Despite the above keys for validation, the only true way to verify that your backups are valid is to actually restore the database. It bears repeating: corrupt backups are recoverable, but worthless.
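As a sketch of what such automation might look like, the following loops DBCC CHECKDB over every online user database; wrapped in a SQL Agent job step, it runs on whatever schedule you choose. The database selection logic (skipping system databases by `database_id`) is an illustration, not a prescription, and well-known community maintenance solutions offer more robust versions of the same idea:

```sql
-- Sketch: run DBCC CHECKDB against every online user database.
DECLARE @name sysname;

DECLARE db_cursor CURSOR FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = 'ONLINE'
      AND database_id > 4;      -- skip master, tempdb, model, msdb

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME guards against odd characters in database names.
    EXEC ('DBCC CHECKDB (' + @name + ') WITH NO_INFOMSGS;');
    FETCH NEXT FROM db_cursor INTO @name;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;
```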
Some of my friends know I am a huge fan of the song “Havana” by Camila Cabello. They also know I like to remix songs and if I was to remix the song I would just change the word “Havana” to “Wheeling, WV” because half of my heart is in the Ohio Valley.
Why?
Wheeling is where I grew up as an adult, and to this day it is one of the special happy places I like to visit. On April 28th, Wheeling will host its third SQL Saturday. Anyone can attend for FREE! I look forward to sharing my favorite city with the SQL Community and my SQL Family.
Free SQL Server Training on April 29th in Wheeling, WV
Procure SQL will be teaming up with the Wheeling Chapter of AITP (the only IT group in the Ohio Valley) to bring some expert SQL Server training from MVPs, MCTs, and community experts to the Ohio Valley. I hope Data Platform professionals in nearby cities like Columbus, Pittsburgh, Harrisburg, Cleveland, and Washington DC join us in the fun as well.
Things to Do?
Check out this quick five-minute video to find out some of the great things you can do in Wheeling, WV and why I fell in love with Wheeling!
Food
Colemans Fish Market – It is #1 on TripAdvisor for a reason. The best fried-fish sandwiches.
Ye Old Alpha – The Bridgeport brownie is legendarily good.
DeCarlos Pizza – Wheeling’s special version of pizza. If you get it, make sure to eat it quickly. Locals typically eat it on the hoods of their cars.
Undos – My personal favorite Italian restaurant.
Places to See:
Good Mansion Wine – If you like wine, the selection here is fantastic. They will also hold an open wine tasting event April 27th at 6 pm; if you are looking for something fun to do the night before the event, I would recommend this.
Suspension Bridge – If you like history, you have to check out one of the oldest suspension bridges in the USA. You can still walk and drive across it.
Wheeling Artisan Center – A great small tour of the history of Wheeling, WV.
Center Market – A historic part of town with a lot of shops and places to eat. It’s an easy walk from the SQL Saturday venue.
Oglebay Resort – Depending on the weather, the driving range or ski lift will be open. Seriously, a great five-star resort with epic holiday events, including Christmas lights, Oglebayfest, and the 4th of July.
Wheeling Island Casino – If you like to play cards and win money, it’s a great location. I used to go a lot on lunch breaks.
The best way to learn is directly from industry professionals! We are thrilled to be helping the SQLSaturday Wheeling organizers put together a great SQLSaturday, April 28, 2018. The schedule is packed full of knowledgeable SQL professionals waiting to share their unique experiences and knowledge with you!
What is SQLSaturday?
SQLSaturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence and Analytics. Admittance to this event is free, and most costs are covered by donations and sponsorships. This event will be held on Apr 28 2018 at West Virginia Northern Community College, 1704 Market Street, Wheeling, West Virginia, 26003, United States. You can register here: SQLSatWheeling
* To register for a SQLSaturday, you must first become a PASS member. A PASS account is free and makes it convenient to manage your SQLSaturday user profile, update preferences, and keep track of your SQLSaturday registrations.
What topics will be discussed?
SQLSaturdays offer a wide variety of sessions, from beginner to advanced. There truly is something for every data professional at every level of their career.
The topics covered are:
Analytics and Visualization
Application and Database Development
BI Information (Delivery, Architecture, Development, and Administration)
Cloud Development and Implementation
Enterprise Database Administration and Deployment
Professional Development
The full schedule for SQLSaturday Wheeling can be found here.
What else does a SQLSaturday offer?
Education is a very important element in growing your career. Another important, and often overlooked, element of growing a successful career is networking! SQLSaturdays provide an excellent opportunity to meet new people, reconnect with old friends, and have a great time networking. There will be a networking event Saturday after the sessions, so you will have a chance to unwind, grab a bite to eat, and chat with your newfound SQL friends!