Why Adopting a Modern Data Warehouse Makes Business Sense

Most traditional data warehouses are ready for retirement. When they were built a decade or more ago, they were a useful mechanism to consolidate data, analyze it, and make business decisions. But technology moves on, as do businesses, and many data warehouses are now showing their age. Most rely on on-premises technology that is limited in scalability, and they may contain outdated or incorrect rules and data sets that lead organizations to misguided business assumptions and decisions. Traditional data warehouses can also impede productivity because they are slow compared to current solutions: processes that take hours in an old data warehouse take a matter of minutes in a modern one.

These are among the reasons why many organizations seek to migrate to a modern data warehouse solution – one that is faster, more scalable, and easier to maintain and operate than traditional data warehouses. Moving to a modern data warehouse might seem daunting, but the migration can be dramatically eased with the help of a data managed service provider (MSP) that can provide the recommendations and services required for a successful migration.

This blog post will examine the advantages of moving to a modern data warehouse.

Addressing the speed problem

With older on-premises data warehouses, processing speed can be a major issue. For example, if overnight extraction, transformation, and loading (ETL) jobs go wrong, correcting the failure can turn into a days-long project.

Processing is much faster with a modern data warehouse. Fundamentally, modern systems are built for large-scale compute and parallel processing rather than slower, sequential processing. Parallel processing lets independent tasks run at the same time instead of waiting in a single queue, so other work can continue while the original processing job runs. This has a significant positive impact on scalability and worker productivity.

As part of the transition to a modern data warehouse, users can move partially or entirely to the cloud. One of the most compelling rationales, as with other cloud-based systems, is that cloud-based data warehouses are an operational cost rather than a capital expense: the capital expenses involved in procuring hardware, licenses, and so on are shifted to a third-party cloud services provider.

The other benefits of moving to the cloud are well understood, but effective in-house data expertise is needed in any environment (on-premises, cloud, or hybrid) to take full advantage of moving to a modern data warehouse. Among the benefits:

  • While it is true that functions like backups, updates, and scaling are cloud features that can be activated with a click, customers must still provide in-house data expertise to determine important things like recovery time objective (RTO) and recovery point objective (RPO). This collaboration between the cloud provider and the customer is key to successfully recovering data, which is the rationale for doing backups in the first place.
  • Parallel processing makes the data warehouse much more available. Older data warehouses use sequential processing, which is much slower for data ingestion and integration. Regardless of the environment, data warehouses need to exploit parallel processing to avoid the speed problems inherent in older sequential systems.
  • Real-time analytics become more available with a cloud-based data warehouse solution. Just as with other technologies like artificial intelligence, using the latest technology is much easier when a provider has it available for customers.

The important thing to remember is that moving to a modern data warehouse can be done at any speed—it doesn’t have to be a big migration project. Many organizations wait for their on-premises hardware contracts and software licenses to expire before embarking on a partial or total move to the cloud.

Addressing the reporting problem

As part of the speed problem, reports often take a long time to generate from old data warehouses. When moving to a modern data warehouse, reporting shifts to a semantic, in-memory model, which makes it much faster. End users can move from a traditional reporting model to interactive dashboards that are near real-time, putting more timely and relevant information at their fingertips to use in business decisions.

Things also get complicated when users run reports against sources that are not a data warehouse. Most commonly, they run a report against an operational data store, report against a straight copy of data from production enterprise applications, or download data to Excel and manipulate it to make it useful. All of these approaches are fragmented and do not give the complete and accurate reporting that comes from a centralized data warehouse.

The move to a modern data warehouse eliminates these issues by consolidating reporting around a single facility, reducing the effort and hours consumed by reporting while adopting a new reporting structure that puts more interactive information in decision-makers' hands.

Addressing the data problem

As we said before, most OLAP data warehouses are a decade or more old. In some cases, they have calculations and logic that aren’t correct or were made on bad assumptions at the time of design. This means organizations are reporting on potentially bad data and, worse yet, making poor business decisions based on that data.

Upgrading to a modern data warehouse involves looking at all the data, calculations, and logic to make sure they are correct and aligned with current business objectives. This improves data quality and ends the “poor decisions on bad data” problem.

This is not to say that data warehouse practitioners are dropping the ball. It may be that the data was originally correct, but over time logic and calculations become outmoded due primarily to changes in the organization. Moving to a modern data warehouse involves adopting current design patterns and models and weeding out the data, calculations, and logic that are not correct.

This upgrade can be done with the assistance of a data MSP that can provide the counsel and services involved with reviewing and revising data and rules for a new data warehouse, provide the pros and cons of deployment models, and recommend the features and add-ons required to generate maximum value from a modernization initiative.

The big choice

CIOs and other decision-makers have a choice when it’s time to renew their on-premises data warehouse hardware contracts: “Do I bite the bullet and do the big capital spend to maintain older technology, or do I start looking at a modern option?”

A modern data warehouse provides a variety of benefits:

  • Much better performance through parallel processing.
  • Much faster and more interactive reporting.
  • Lower maintenance cost.
  • Quicker time to reprocess and recover from error.
  • An “automatic” review of business rules and logic to ensure correct data is being used to make business decisions.
  • Better information at end-users’ fingertips.

Moving to a modern data warehouse can be a gradual move to the cloud, or it can be a full migration. The key is to get the assistance of a partner like ProcureSQL that specializes in these types of projects to ensure that things go as smoothly and cost-effectively as possible.

Contact us to start a discussion to see if ProcureSQL can guide you along your data warehouse journey.

This year, several data breaches were caused by multi-factor authentication NOT being enabled.

Enable Multi-Factor Authentication, Please!

If you ever follow any of our tips, blogs, or videos, please follow this one and enable multi-factor authentication on all your applications and websites that you access.

If you are procuring a new application or service, now is also a great time to verify that it supports enforcing multi-factor authentication.

Hello, everyone! This is John Sterrett from Procure SQL. Today, we will discuss how you can validate SQL Server backups with a single line of PowerShell.

Due to the recent global IT outage, I thought this would be an excellent time to focus on the last line of defense—your database backups. I have good news if you are not validating your SQL Server backups today.

DbaTools Is Your Friend

Did you know you can validate your backups with a single PowerShell line? This is just one of several amazing things you can do with dbatools in a single line of PowerShell.

John, what do you mean by validating SQL Server backups?

  • Find your last backup
  • See if the backup file still exists
  • Restore the previous backup(s)
  • Run an integrity check
  • Document the time each step took along the way

Validating SQL Server Backups – How Do We Validate Our Backups?

The dbatools module includes a command named Test-DbaLastBackup.

You can run it against all the databases on an instance with the following command, using your instance name.

$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere

You could also have it run for a single database with a command similar to the one below.

$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere -Database ProcureSQL

What happens with Test-DbaLastBackup?

Great question! If we learned anything from the recent global IT downtime, it’s to validate and test everything!

I love to see what's happening under the hood, so I set up an Extended Events trace to capture all the SQL statements running. I can see the commands used to find the backups, the restore, the integrity check, and the dropping of the database created during the restore.

All the details are shared below.

Extended Event

The following is the script for the Extended Events session. I run this to capture the events created by my dbatools command in PowerShell. Once I start the Extended Events trace, I run the PowerShell command to do a single check on a database, as shown above. I then stop the capture and review the results. A PowerShell sketch of that start/run/stop workflow follows the session script.

CREATE EVENT SESSION [completed] ON SERVER 
ADD EVENT sqlserver.sp_statement_completed(
  ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.query_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],N'dbatools PowerShell module%'))),
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.query_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],N'dbatools PowerShell module%'))),
ADD EVENT sqlserver.sql_statement_completed(
    ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.query_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],N'dbatools PowerShell module%')))
ADD TARGET package0.event_file(SET filename=N'completed',max_file_size=(50),max_rollover_files=(8))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
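
If you prefer to drive the whole capture from PowerShell, dbatools also ships Extended Events cmdlets. Below is a minimal sketch of the start/run/stop workflow, assuming the [completed] session above was created on the same instance; the cmdlet names come from dbatools, but confirm the parameters and output columns against your version before relying on them.

Import-Module dbatools

# Start the Extended Events session created by the script above
Start-DbaXESession -SqlInstance serverNameGoesHere -Session completed

# Run the single-database check while the session is collecting events
$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere -Database ProcureSQL

# Stop the capture once the test finishes
Stop-DbaXESession -SqlInstance serverNameGoesHere -Session completed

# Read the captured events straight from the session's event_file target
# (the property names shown depend on the events and actions collected)
Get-DbaXESession -SqlInstance serverNameGoesHere -Session completed | Read-DbaXEFile |
    Select-Object name, timestamp, sql_text |
    Format-Table -AutoSize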

Validate SQL Server Backups – Resulting Events / Statements

Here, you can see we captured many SQL statements during this trace. Below, I will focus on the key ones that prove what happens when you run this command against your database(s).

The query used to build up the backup set to find the last backup was too big to screenshot, so I included it below.

/* Get backup history */
SELECT
                        a.BackupSetRank,
                        a.Server,
                        '' as AvailabilityGroupName,
                        a.[Database],
                        a.DatabaseId,
                        a.Username,
                        a.Start,
                        a.[End],
                        a.Duration,
                        a.[Path],
                        a.Type,
                        a.TotalSize,
                        a.CompressedBackupSize,
                        a.MediaSetId,
                        a.BackupSetID,
                        a.Software,
                        a.position,
                        a.first_lsn,
                        a.database_backup_lsn,
                        a.checkpoint_lsn,
                        a.last_lsn,
                        a.first_lsn as 'FirstLSN',
                        a.database_backup_lsn as 'DatabaseBackupLsn',
                        a.checkpoint_lsn as 'CheckpointLsn',
                        a.last_lsn as 'LastLsn',
                        a.software_major_version,
                        a.DeviceType,
                        a.is_copy_only,
                        a.last_recovery_fork_guid,
                        a.recovery_model,
                        a.EncryptorThumbprint,
                        a.EncryptorType,
                        a.KeyAlgorithm
                    FROM (
                        SELECT
                        RANK() OVER (ORDER BY backupset.last_lsn desc, backupset.backup_finish_date DESC) AS 'BackupSetRank',
                        backupset.database_name AS [Database],
                        (SELECT database_id FROM sys.databases WHERE name = backupset.database_name) AS DatabaseId,
                        backupset.user_name AS Username,
                        backupset.backup_start_date AS Start,
                        backupset.server_name as [Server],
                        backupset.backup_finish_date AS [End],
                        DATEDIFF(SECOND, backupset.backup_start_date, backupset.backup_finish_date) AS Duration,
                        mediafamily.physical_device_name AS Path,
                        
                backupset.backup_size AS TotalSize,
                backupset.compressed_backup_size as CompressedBackupSize,
                encryptor_thumbprint as EncryptorThumbprint,
                encryptor_type as EncryptorType,
                key_algorithm AS KeyAlgorithm,
                        CASE backupset.type
                        WHEN 'L' THEN 'Log'
                        WHEN 'D' THEN 'Full'
                        WHEN 'F' THEN 'File'
                        WHEN 'I' THEN 'Differential'
                        WHEN 'G' THEN 'Differential File'
                        WHEN 'P' THEN 'Partial Full'
                        WHEN 'Q' THEN 'Partial Differential'
                        ELSE NULL
                        END AS Type,
                        backupset.media_set_id AS MediaSetId,
                        mediafamily.media_family_id as mediafamilyid,
                        backupset.backup_set_id as BackupSetID,
                        CASE mediafamily.device_type
                        WHEN 2 THEN 'Disk'
                        WHEN 102 THEN 'Permanent Disk Device'
                        WHEN 5 THEN 'Tape'
                        WHEN 105 THEN 'Permanent Tape Device'
                        WHEN 6 THEN 'Pipe'
                        WHEN 106 THEN 'Permanent Pipe Device'
                        WHEN 7 THEN 'Virtual Device'
                        WHEN 9 THEN 'URL'
                        ELSE 'Unknown'
                        END AS DeviceType,
                        backupset.position,
                        backupset.first_lsn,
                        backupset.database_backup_lsn,
                        backupset.checkpoint_lsn,
                        backupset.last_lsn,
                        backupset.software_major_version,
                        mediaset.software_name AS Software,
                        backupset.is_copy_only,
                        backupset.last_recovery_fork_guid,
                        backupset.recovery_model
                        FROM msdb..backupmediafamily AS mediafamily
                        JOIN msdb..backupmediaset AS mediaset ON mediafamily.media_set_id = mediaset.media_set_id
                        JOIN msdb..backupset AS backupset ON backupset.media_set_id = mediaset.media_set_id
                        JOIN (
                            SELECT TOP 1 database_name, database_guid, last_recovery_fork_guid
                            FROM msdb..backupset
                            WHERE database_name = 'CorruptionChallenge8'
                            ORDER BY backup_finish_date DESC
                            ) AS last_guids ON last_guids.database_name = backupset.database_name AND last_guids.database_guid = backupset.database_guid AND last_guids.last_recovery_fork_guid = backupset.last_recovery_fork_guid
                    WHERE (type = 'D' OR type = 'P')
                     AND is_copy_only='0' 
                    
                    AND backupset.backup_finish_date >= CONVERT(datetime,'1970-01-01T00:00:00',126)
                    
                     AND mediafamily.mirror='0' 
                    ) AS a
                    WHERE a.BackupSetRank = 1
                    ORDER BY a.Type;

The following is a screenshot of the results of validating a database with a good backup and no corruption.

What should I expect with a corrupted database?

Great question! I wondered the same thing, so I grabbed a corrupt database from Steve Stedman's Corruption Challenge and ran the experiment. I will admit my findings were not what I was expecting, either. This is why you shouldn't take candy from strangers or run scripts without testing them in non-production and validating their results.

After restoring the corrupted database that had been successfully backed up, I performed a manual integrity check to validate that it would fail, as shown below.
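
For reference, here is a minimal sketch of that manual check, assuming the corrupt database was restored under its original name, CorruptionChallenge8, and that dbatools is available; Invoke-DbaQuery is simply used to run DBCC CHECKDB so the corruption errors surface.

Import-Module dbatools

# Minimal sketch: run DBCC CHECKDB against the restored copy so the corruption errors surface.
# Assumes the corrupt database was restored as CorruptionChallenge8 on serverNameGoesHere.
Invoke-DbaQuery -SqlInstance serverNameGoesHere -Database CorruptionChallenge8 `
    -Query 'DBCC CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS;' -MessagesToOutput

# Depending on your dbatools version, the corruption may surface as warnings/errors rather than rows.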

Hopefully, you have a process or tool to monitor your SQL Server error log and alert you to errors like these. If you duplicate this example, your process or tool should pick up these Severity 16 corruption errors. I would validate that as well.

Validate SQL Server Backups – PowerShell Results

Okay, was I the only one who expected to see Failed as the status for the integrity check (DBCCResult)?

Instead, it's blank, as I show below. So, when you dump these results back out, make sure to check for anything other than Success.
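
Here is a minimal sketch of that check. The property names (RestoreResult and DbccResult) reflect what the Test-DbaLastBackup output looks like in my testing; confirm them against your dbatools version before wiring this into alerting.

# Minimal sketch: flag anything whose restore or DBCC check did not report 'Success'.
# Property names (RestoreResult / DbccResult) are an assumption based on current Test-DbaLastBackup output.
$result = Test-DbaLastBackup -SqlInstance serverNameGoesHere

$problems = $result | Where-Object {
    $_.RestoreResult -ne 'Success' -or $_.DbccResult -ne 'Success'
}

if ($problems) {
    # A blank DbccResult (like the corruption case above) lands here because it is not an explicit 'Success'
    $problems | Select-Object SourceServer, Database, RestoreResult, DbccResult | Format-Table -AutoSize
}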

I submitted a bug to dbatools and will post back here with any updates.

Other Questions….

I had some other questions, too, which are answered on the official documentation page for the Test-DbaLastBackup command. I will list them below, along with a hedged sketch of the relevant parameters after the list, but you can review the documentation to find the answers.

  • What if I want to test the last full backup?
  • What if I want to test the last full and last differential?
  • What if I want to offload the validation process to another server?
  • What if I don't want to drop the database but instead use this to restore for non-production testing?
  • What if I want to run a physical-only integrity check?
  • What if I want to do performance testing of my restore process by changing the max transfer size and buffer count?
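
Without spoiling the documentation, here is a hedged sketch of the kinds of parameters involved. I believe these switches exist on Test-DbaLastBackup, but treat the names below as assumptions and confirm them in the docs before relying on them.

# Hedged sketch only: confirm each parameter name against the Test-DbaLastBackup documentation.
$params = @{
    SqlInstance     = 'productionServer'   # instance that owns the backup history
    Destination     = 'testServer'         # offload the restore and DBCC work to another server
    Database        = 'ProcureSQL'
    IgnoreLogBackup = $true                # focus the test on the last full/differential backups
    NoDrop          = $true                # keep the restored copy for non-production testing
    MaxTransferSize = 4MB                  # restore performance knobs for testing your restore process
    BufferCount     = 24
}
$result = Test-DbaLastBackup @params

For the physical-only integrity check and the remaining options, the documentation page spells out the exact switch names.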

What questions did we miss that you would like answered? Let us know in the comments.

Procure SQL was at the Kansas City Developers Conference to help people procure the right Data Architecture partner.
John and Kon at Kansas City Developer Conference

Why Did You Sponsor Kansas City Developers Conference?

This is a great question that leads to a good story. Back in 2012, Jeff Strass and Michael Eaton gave a half-day workshop on Going Independent. It was that workshop that helped push John to start Procure SQL. It's an honor to sponsor this event, and it will always have a special place in the owner's heart.

Kansas City developers, need help with your data?

If you need any help with SQL, NoSQL, modern data warehousing, or reporting, we would love to chat with you.

Database Management Myths for Developers

Today John & Kon gave our Database Management Myths for Developers talk. You can find the slides below.

“Every business workflow in every enterprise will be engineered with GenAI at its core” -ServiceNow’s Bill McDermott

Microsoft Build 2024 focused on transformative advancements in AI, cloud computing, and developer tools. This year’s event showcased Microsoft’s commitment to pushing the boundaries of what’s possible.

AI and Copilots were the overwhelming theme. Even though AI has been mainstream for a while now, jumping on board would still make you an early adopter and could give you some advantages within your market. With that being said, let's delve into the key announcements and their implications for the data and application development space.

Copilots

Teams Copilot was introduced as a powerful enhancement for Microsoft Teams, designed to revolutionize the way teams collaborate. Leveraging advanced AI capabilities, Teams Copilot assists users by summarizing conversations, generating meeting agendas, and even drafting responses during discussions. This intelligent assistant integrates seamlessly within Teams, helping to streamline communication and enhance productivity by reducing the time spent on administrative tasks. With Teams Copilot, organizations can ensure that their teams are more focused on strategic initiatives, ultimately driving better outcomes and staying ahead of competitors.

You can now create and deploy custom AI agents with ease within Copilot Studio. Copilot Studio offers a robust set of tools for building intelligent agents that can automate complex tasks, streamline workflows, and enhance productivity. With the new agent capabilities, developers can design agents to interact seamlessly with various applications, providing users with context-aware assistance and real-time insights. These AI agents leverage advanced machine learning models and natural language processing to understand and respond to user inputs effectively. This allows businesses to create tailored solutions that can handle customer inquiries, manage routine tasks, and provide valuable data-driven insights, all while maintaining high levels of accuracy and efficiency.

Real-Time Intelligence in Microsoft Fabric

One of the most groundbreaking announcements was the introduction of Real-Time Intelligence within Microsoft Fabric. This end-to-end SaaS solution enables businesses to process high-volume, time-sensitive data at the point of ingestion, facilitating faster and more informed decision-making. Real-Time Intelligence is designed to support both low-code and code-rich experiences, making it a versatile tool for analysts and developers alike. For our data analytics team, this means we can build more responsive analytics solutions that provide immediate insights, enhancing our ability to drive strategic decisions based on your real-time data.

Enhancements in GitHub Copilot

GitHub Copilot, already a game-changer for developers, received significant upgrades with the introduction of new extensions. These extensions, developed by Microsoft and third-party partners, integrate seamlessly with services like Azure, Docker, and Sentry. For our custom app development projects, this means we can leverage natural language capabilities within Copilot to manage Azure resources, troubleshoot issues, and streamline our development workflows. This integration not only boosts productivity but also enhances the efficiency of our development processes.

Advances in Azure AI

Azure AI continues to evolve with the availability of GPT-4o, a multimodal AI model capable of processing text, images, and audio. Additionally, Microsoft introduced Phi-3-vision, a new model in the Phi-3 family, which is optimized for personal devices and offers powerful capabilities for text and image input. These models are accessible through Azure AI Studio, providing us with advanced tools to experiment and build innovative AI solutions. For our Data Analytics projects, these models can offer new ways to interact with and analyze data, enabling us to identify patterns and gain insights that can help us stay ahead of competitors. By leveraging these advanced AI tools, you can uncover hidden trends, make more informed decisions, and ultimately drive a greater strategic advantage against your competitors.

Smart Components

These components represent a significant leap forward in streamlining UI development within the .NET ecosystem. Smart Components are designed to automatically adapt to varying contexts and states, reducing the need for boilerplate code and extensive conditional logic. By leveraging advanced AI and machine learning, Smart Components can intelligently adjust their behavior and appearance based on real-time data and user interactions. This innovation simplifies the development process, enabling developers to create more dynamic and responsive applications with less effort. Smart Components can be particularly beneficial for building complex interfaces where different parts of the application need to interact seamlessly. They also enhance maintainability and scalability, as developers can rely on these components to handle many of the intricacies involved in state management and UI rendering.

Honorable Mentions

Additionally, .NET 9 Preview 4 was released, offering a glimpse into the future of the platform with numerous performance improvements, enhanced security features, and expanded support for cloud-native applications.

The announcement of C# 13 brought a host of new features aimed at making the language more expressive and user-friendly. Notable enhancements include improvements in pattern matching, interpolated string handlers, and extended lambda expressions, all designed to simplify coding and increase developer efficiency.

All of these advancements collectively underscore Microsoft's dedication to evolving the data and .NET ecosystem, making it an even more robust and efficient environment for developers to build cutting-edge data solutions.

WebNN (Web Neural Network API) was highlighted as a cutting-edge technology designed to bring advanced machine learning capabilities directly to web applications. WebNN allows developers to run neural network models efficiently in the browser, enabling real-time AI-powered experiences without relying heavily on server-side processing.

Conclusion

These features promise to revolutionize the way we interact with data, build applications, and drive business success. Staying up-to-date on all of these developments is crucial for any company aiming to maintain a competitive edge in today’s fast-paced digital landscape.

At Procure SQL, we are dedicated to helping businesses harness these cutting-edge technologies. Whether you need to integrate AI capabilities, enhance your data analytics, or develop custom applications using .NET, our expertise can guide you through the process. Let us assist you in leveraging these new and upcoming features to stay ahead of the game and achieve your strategic objectives. Contact us to learn more about how we can support your data journey.


Here are some news, videos, tips, and links the Procure SQL team would like to share!

Scan vs. Seek

The most straightforward example explains the difference between a scan and a seek in execution plans.

Data Engineering with Notebooks

Watch Justin’s seven-minute video on loading and transforming data in Microsoft Fabric.

Is Tableau Dead?

Yes and No. The future looks mixed.

Most Recent Issues

The work done by DBAs and Data professionals is all over the map.

Apple Electric Car

Apple pulled the plug on its electric vehicle project.

Should you always listen to data?

The answer is a resounding NO!

The Wheel of Misfortune

Skyscanner used this game to increase engineers’ confidence in incident management with Open Telemetry. Learn how to ingest your application’s telemetry data into Azure Monitor.

Performance Testing

Ensure a consistent and reliable user experience with Azure Load Testing.

Maximize Your Savings with SQL Server

Are you using these options to develop or test for free or with substantial cost savings?

Free Azure and SQL Server Training in Austin, Texas!

On Saturday, March 9, 2024, SQL Saturday will be coming to Austin, Texas. SQL Saturday is a free training day around SQL Server, Azure, and the Microsoft Data Platform. If you want lunch, it’s $20. We will also have two all-day deep dive training classes on performance tuning and Microsoft Analytics on Friday, March 8, 2024, for $125.

Need a Remote DBA or Data Architect?

Have you got questions? Need some help? Are you curious to know the cost of procuring a Remote Data Architect?

Check out this quick video to see how you can start to load and transform your data with notebooks. Data Engineering with Microsoft Fabric becomes easier once you understand how to leverage notebooks to your advantage.

If you are in Austin, Texas on March 8 & 9, 2024, don't miss SQL Saturday Austin, where you can learn more about Microsoft Fabric, Power BI, SQL Server, and more.

Procure SQL - Data Architect as a Service - Weekly Newsletter


Here are some news, videos, tips, and links the Procure SQL team would like to share!

Near Zero Downtime Migrations

Azure SQL Database can be a subscriber for transactional replication, which is what makes near zero downtime migrations possible.

Habits of Effective Data Leaders

How many of these seven effective habits do you see at your job?

AI Coming for Your Job?

Maybe. Regardless, let your skills, quality of work, and service to others define who you are.

Air Canada Chatbot Lawsuit

Air Canada lost a lawsuit after claiming the airline should not be liable for its chatbot's misleading information.

Testing and Bug Fixes

We hope this is different from how you do testing or bug fixes.

Offload Workload to Availability Group Replicas

Learn about temporary statistics on your secondary replicas. See how to enable Query Store for secondary replicas with SQL Server 2022.

Serverless for Hyperscale in Azure SQL Database

These are things to know before you jump into Serverless for Hyperscale. Serverless auto-pausing and resuming in Hyperscale are not currently available. The provisioned compute tier may be less expensive if CPU or memory usage is high enough and sustained long enough.

Free Azure and SQL Server Training in Austin, Texas!

On Saturday, March 9, 2024, SQL Saturday will be coming to Austin, Texas. This is a free training day around SQL Server, Azure, and the Microsoft Data Platform. If you would like lunch to be provided, it’s $20. We will also have two all-day deep dive training classes on performance tuning and Microsoft Analytics on Friday, March 8, 2024, for $125.

Need a Remote DBA or Data Architect?

Have you got questions? Need some help? Are you curious to know the cost of procuring a Remote Data Architect?

Procure SQL - Data Architect as a Service - Weekly Newsletter


Here are some news, videos, tips, and links the Procure SQL team would like to share!

Procure SQL made it out to their first trade show of 2024! Justin, Kon and John were at SQL Saturday Atlanta BI on February 10th.

Justin Cunningham gave a talk on Data Catalog: Visualizing Your Data Sprawl. John Sterrett gave a talk on Things to Know Before Going Independent.

The team is excited to be back on April 20th for SQL Saturday Atlanta. You can also catch them at SQL Saturday Austin on March 9th.

Procure SQL sponsored SQL Saturday Atlanta BI on February 10, 2024. Justin Cunningham talked about Data Sprawl and Managing Your Metadata.
Kon Melamud, John Sterrett, and Justin Cunningham had a great time meeting everyone.

Someone’s Dream Job

Dream of being a researcher for Microsoft's data systems? Good, they're hiring.

NASA’s Computer Glitch

Ever wonder what it's like to troubleshoot 1970s tech that's 15 billion miles away…

Data Sprawl

Interesting editorial about managing your metadata. This challenge gets harder when only 3% of companies' data meets data quality standards.

Power BI Desktop Projects

Martin Schoombee shares how DevOps and report sharing get easier with Power BI Desktop projects. Power BI Desktop projects are going to open up many possibilities.

Microsoft Analytics Overview in Five Minutes

Justin created this video about the personas and tools behind Microsoft's shiny new analytics tools.

Developers, Developers, Developers…

It's time to make data the first choice in the technology stack, not an afterthought. Developers' words, not ours. We do agree with them, though. 🙂

Working With Others

Aaron Bertrand has a very simple but great tip. Leave it better than you found it.

Free Azure and SQL Server Training in Austin, Texas!

On Saturday, March 9, 2024, SQL Saturday will be coming to Austin, Texas. This is a free training day around SQL Server, Azure, and the Microsoft Data Platform. If you would like lunch to be provided, it’s $20. We will also have two all-day deep dive training classes on performance tuning and Microsoft Analytics on Friday, March 8, 2024, for $125.

Need a Remote DBA or Data Architect?

Have you got questions? Need some help? Are you curious to know the cost of procuring a Remote Data Architect?