Microsoft Fabric Mirroring changes the game with data ingestion, giving you near real-time data with a no-code framework

Microsoft's Fabric Mirroring will change how you perform data ingestion. If you are using products to automate batch data loads, did you know that Fabric Mirroring might remove the need for these tools and provide you with near real-time access to the data as it changes in the source systems?

If you have not yet heard of the medallion architecture, it uses bronze, silver, and gold layers to describe how data moves from intake into your data hub to consumption by your reporting applications of choice. This multi-layered approach existed before I started my analytics career in the early 2000s. Think of bronze as your unprocessed data, silver as somewhat cleaned and organized data processed from your bronze layer, and gold as your aggregated and optimized data ready for prime-time business insights.
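
To make the layers concrete, here is a minimal T-SQL sketch of the idea; the schema and table names are illustrative, not a prescribed implementation:

/* Bronze: land the raw feed exactly as it arrives */
CREATE SCHEMA bronze;
GO
CREATE TABLE bronze.SalesOrders
(
    RawPayload  nvarchar(max),                      -- untouched source record
    LoadedAtUtc datetime2 DEFAULT SYSUTCDATETIME()  -- when we ingested it
);
GO

/* Silver: cleaned and typed version of the bronze data */
CREATE SCHEMA silver;
GO
CREATE TABLE silver.SalesOrders
(
    OrderID    int,
    CustomerID int,
    OrderDate  date,
    OrderTotal decimal(18, 2)
);
GO

/* Gold: aggregated, analytics-ready output for reporting tools */
CREATE SCHEMA gold;
GO
CREATE VIEW gold.DailySales
AS
SELECT OrderDate, SUM(OrderTotal) AS TotalSales
FROM silver.SalesOrders
GROUP BY OrderDate;
GO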

It's essential to understand the evolution of data management. From the '90s to the early 2000s, getting data from each application (referred to as a spoke) into your data repository (the data hub) was complex. In the Microsoft world, multiple SSIS packages or other processes were used to pull data into staging tables full of varchar(max) columns; this was typically a batch process that ran on a schedule, leading to potential issues. There were so many SSIS packages that we needed an automation language to build them all rather than building each one individually.

Many businesses’ analytics projects struggle to quickly integrate the correct data into their hub so that data transformations and validations can be effective. If you get this wrong, there is no point in collecting $200 and passing Go. Your data analytics project might end up going straight to jail.

How can we load data quickly and successfully?

I am introducing you to a no-code, near-real-time option for loading your data into your data lake (data hub) within Fabric. This new feature is known as Fabric Mirroring.

While I love the functionality of Fabric Mirroring, I am not a fan of the name. Many people with SQL Server experience assume it works like Database Mirroring simply because the names are similar.

In my opinion, Fabric Mirroring is similar to implementing Change Data Capture (CDC) on your SQL Server databases and feeding the changes into a real-time streaming tool like Apache Kafka to copy data from your spoke (the SQL Server application database) into your hub (the data lake).
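
For comparison, here is roughly what you have to set up and maintain yourself with the CDC approach. This is a minimal sketch; the database and table names are illustrative:

/* Enable CDC on the database, then on each table you want to track */
USE YourSpokeDatabase;   -- illustrative name
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;   -- no gating role in this sketch
GO
/* From here you still need a streaming or ETL process (for example, Kafka
   connectors) to move the captured changes into your hub. */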

The benefit here is twofold. First, you don’t have to manage the Change Data Capture or Kafka implementations. Second, and most importantly, this is more than just an SQL Server solution. In the future, you can use Fabric Mirroring to ingest data from all your sources (spokes) into your data hub in near real-time, with minimal to no code required.

For example, here is how you can use Fabric Mirroring to import Dynamics 365 or Power Apps data into Fabric. You can do the same for Azure Cosmos DB and Snowflake, and SQL Server mirroring is now in preview.

Currently, the following sources are available, and all of them support near real-time replication:

  • Azure Cosmos DB (preview): database mirroring
  • Azure Databricks (preview): metadata mirroring
  • Azure Database for PostgreSQL flexible server (preview): database mirroring
  • Azure SQL Database: database mirroring
  • Azure SQL Managed Instance (preview): database mirroring
  • Snowflake: database mirroring
  • SQL Server (preview): database mirroring
  • Open mirrored databases: open mirroring
  • Fabric SQL database (preview): database mirroring

Now I know I can use Fabric Mirroring to help me get near real-time data into my hub with no code required. Why else should Fabric Mirroring be a game-changer for my analytics projects?

Fabric Mirroring enables us to accomplish a lot more in less time.

Suppose you have an SLA for getting data into a data warehouse within 24 hours, and processing through all the layers takes 20 hours, with 12 of those hours spent just landing data into bronze. If you now have near real-time ingestion, say 90 seconds, to get changes into bronze, that gives you back nearly 12 extra hours to improve data quality, data validation, and other downstream processes.

Centralized Data Management

Having a single hub that the applications (spokes) automatically send data to eliminates the need to install additional software for each client and tool you use. You now transition from pulling data from the spokes with batch processing to pushing data from the spokes in near real-time. It also simplifies data governance and enhances security, because combining this with Microsoft Purview lets you see which spokes the data comes from.

For example, suppose you must comply with GDPR and Sarah in the UK has requested that her data be removed. From the hub, you can quickly trace which spokes hold her data and determine what needs to be purged.

Simplified Data Ingestion

Instead of mixing and matching different data formats, your Delta tables will be created consistently across your Azure Cosmos DB databases, Azure SQL databases, Dynamics 365, and other future Fabric Mirroring sources. You no longer need to worry about which sources arrive as Excel, CSV, flat files, JSON, and so on. They are all in the same format, ready for you to do your transformations and data validation and apply any business rules required for your silver layer.

Improved Query Performance

Those who know me know that I love discussing query performance tuning. I am passionate about making databases go just as fast as your favorite F1 race car. I also know that you have at least one group of people running reporting queries against your line-of-business application database or an availability group replica. This leads to increased locking that slows down the original purpose of your application database. Those locks go away once these reports run against your data hub instead.
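
If you want to see whether those reporting sessions are blocking your application right now, a quick check against the dynamic management views looks like this (it requires the VIEW SERVER STATE permission):

/* Sessions that are currently blocked and what they are running */
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;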

The mirrored data is also stored in an analytics-ready format, such as delta tables, which enhances query performance across various tools within Microsoft Fabric, including Power BI.

What if you cannot use Fabric Mirroring?

The sources for Microsoft Fabric Mirroring to date are limited. If I had on-premises data sources or other sources that are not yet ready for Fabric Mirroring, I would still encourage this architectural approach: use change data capture, where available, to stream your data into your data hub of choice.

About ProcureSQL

ProcureSQL is the industry leader in providing data architecture as a service, enabling companies to harness their data and grow their business. ProcureSQL is 100% onshore in the United States and supports the four quadrants of data, including application modernization, database management, data analytics, and data visualization. ProcureSQL serves as a guide, mentor, leader, and implementer, providing innovative solutions to drive better business outcomes for all businesses. Click here to learn more about our service offerings.

In 2023, Microsoft announced its new platform, Microsoft Fabric, an innovative product that combined Microsoft Power BI, Azure Synapse Analytics, Azure Data Factory, Fabric SQL Databases, and Real-Time Intelligence into one platform.

Over 21,000 companies use Microsoft Fabric, and the success stories paint a promising future.

If your team hasn’t switched to Fabric yet, now is a great time to do so, as the transition has huge potential upside.

John Sterrett, ProcureSQL CEO, is attending the 2025 Microsoft Fabric Conference Partner Only Event.

The first significant benefit of the transition is a simplified work environment for all involved. Everything is integrated into one platform, eliminating headaches associated with handoffs and siloed workflow.

Data warehousing, data engineering, AI-powered analytics, data science, and business intelligence are now housed in one platform. A simpler platform means faster results for your company’s needs.

Moreover, as different teams collaborate, Fabric provides compliance features such as data tracking, version control, and role-based access, enabling multiple teams to work together simultaneously without compromising the integrity of your company’s data.

There is now an incredible amount of potential at their fingertips.

With Microsoft Fabric, forecasting future performance and AI-driven analysis now give your teams a competitive edge.

This enables your business to transition from a purely reactive model to a proactive one, where you can stay one step ahead of what’s to come.

In terms of cost, you’ll be pleased to know that, despite all these new additions, the pricing model for Fabric remains scalable and flexible, depending on how you utilize it.

Microsoft provides a Fabric Capacity Estimator, allowing you to take full advantage of the new platform by understanding the up-front cost.

If you have already been using products from Azure and Microsoft, switching to Fabric is a no-brainer.

Transitioning to Microsoft Fabric

One of Fabric's most valuable and convenient aspects is that its data and analytics can be shared across all teams, including the non-technical parts of your business, the decision-makers and subject matter experts. The data is easily interpreted, so minimal training is needed even if your users are not tech-savvy.

On top of that, with the involvement of AI, the data allows you to see patterns ahead of time so that everyone is on board if any anomalies or spikes in activity come up.

Transitioning to a new platform can be incredibly challenging, but ProcureSQL is here to help.

We’re ready to help your organization make this transition smoothly and with immediate impact.

Whether you’re still using Microsoft Power BI or a mix of analytics tools, our team can guide you through a phased implementation that minimizes disruption and maximizes value from the start.

Don’t wait to catch up; let ProcureSQL help you lead the way. Contact us today to get started on your Microsoft Fabric journey.

With the release candidate of SQL Server 2025, which came out last week, I want to discuss a valuable feature you will not see in the Microsoft press release: SQL Server 2025 Developer Standard Edition.

Microsoft is finally addressing a long-standing headache for database professionals by including a Standard Developer edition in SQL Server 2025, fixing the mismatch between development and production environments. The new Standard Developer edition allows teams to build, test, and validate their database solutions using a free, fully licensed copy of the Standard edition for the first time!

SQL Server 2025 Developer Standard Edition eliminates costly licensing for non-production use while ensuring feature parity with production.

Previously, organizations used the Developer edition, functionally equivalent to the Enterprise edition, for development and testing. If you also used the Enterprise edition in production, this wasn't a problem. Problems occur when you attempt to save money by using Developer edition (Enterprise edition) features in development or testing while using the Standard edition in production. This mismatch often led to surprises during deployment, such as features that worked in development but failed in production due to missing or restricted capabilities in the Standard edition. Or worse, code that works and returns the same results but performs abnormally because Enterprise edition features changed the execution plans.

For example, Intelligent Query Processing batch mode for row store is a feature only available in the Enterprise and Developer editions. This feature cannot be used in Standard edition environments, leading to cases where performance is good in development and testing, even with the same data and transactional load footprint as production, but worse in production when utilizing the Standard edition.

In the past, we would have had to use the Developer edition, which opened the window to relying on Enterprise features in dev and test. With SQL Server 2025, you can select the Standard Developer edition or Enterprise Developer edition during installation, ensuring your development environment mirrors production as closely as possible. This is especially valuable for teams whose production workloads run on the Standard edition.
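
Once it is installed, a quick way to confirm which edition a given environment is actually running:

/* Confirm the edition of the instance you are connected to */
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('EngineEdition')  AS EngineEdition;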

Standard Developer edition gives you the ability to develop and test against only the Standard edition features.

With SQL Server 2025, you can pick the Enterprise or Standard edition for your free Developer edition.

With SQL Server performance, edition matters. Below is a chart showing that the majority of the performance-based features are Enterprise edition-only features. This article will focus on two of them: online index rebuilds and batch mode for row store queries.

A breakdown of SQL Server 2025 performance features by edition lets you see which features are enterprise-only. If you couldn’t tell, it’s most of them. 🙂

Error Example: Online Index Rebuilds

To illustrate the practical impact, consider the scenario where a developer attempts to use the ALTER INDEX ... REBUILD WITH (ONLINE = ON) command. This command works flawlessly in a Developer (Enterprise) environment, allowing users to rebuild indexes without downtime. However, if the production environment is Standard, the same command will fail with an error, since online index rebuilds are not supported in the Standard edition.
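
Here is a minimal repro sketch; the index name comes from the WideWorldImporters sample database, so substitute one from your own environment:

USE WideWorldImporters;
GO
/* Attempt an online rebuild of an existing index */
ALTER INDEX PK_Sales_Orders ON Sales.Orders
REBUILD WITH (ONLINE = ON);
GO
/* On Standard Developer edition this statement fails because online index
   rebuilds are an Enterprise edition feature. On Enterprise Developer
   edition, the same statement succeeds. */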

An online index rebuild fails on the Standard Developer edition but succeeds with the Enterprise Developer edition.

While this is not too difficult to catch in testing, you may be surprised at how often it is missed.

Let’s look at one more example that doesn’t cause an error but changes the performance and execution plans between the standard and enterprise editions. Because the developer edition before SQL Server 2025 used enterprise features, you would benefit from batch mode for your row store queries without knowing it.

SQL 2025 Standard Developer Edition: Different Plan and Performance

We will examine an example using the SQL Server Standard Developer Edition and the SQL Server Enterprise Developer Edition.

USE WideWorldImporters;
GO

/* Compare the actual execution plan for this query on the Standard Developer
   edition versus the Enterprise Developer edition */
SELECT 
    ol.OrderID,
    ol.StockItemID,
    ol.Description,
    ol.OrderLineID,
    o.Comments,
    o.CustomerID
FROM 
    Sales.OrderLines ol
INNER JOIN 
    Sales.Orders o ON ol.OrderID = o.OrderID
WHERE 
    ol.StockItemID = 168;
GO

With the SQL Server Enterprise Developer edition, we get an Adaptive Join, which handles filters that return either low or high numbers of rows.

The enterprise developer edition includes the adaptive join, an enterprise edition feature that allows you to use batch mode for row mode queries.

With the Standard Developer edition in SQL Server 2025, we observe the same execution plan in development, testing, and production when production runs on the Standard edition. In this case, we don't have batch mode, and you will see we use a hash join, which is not ideal when our filter returns a small number of records.

Standard Developer edition doesn't use the adaptive join because it's an Enterprise edition-only feature.
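
If you are still on an Enterprise-based Developer edition and want to approximate the Standard edition plan for a quick comparison, you can disable batch mode for a single query with the documented DISALLOW_BATCH_MODE hint. This is only an approximation, not a substitute for testing on the edition you run in production:

/* Force a row-mode-only plan even where batch mode is available */
SELECT 
    ol.OrderID,
    ol.StockItemID,
    ol.Description,
    ol.OrderLineID,
    o.Comments,
    o.CustomerID
FROM Sales.OrderLines ol
INNER JOIN Sales.Orders o ON ol.OrderID = o.OrderID
WHERE ol.StockItemID = 168
OPTION (USE HINT('DISALLOW_BATCH_MODE'));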

The takeaway is that edition-specific features can change both functionality and how you get your data. This example would be harder to catch in your development pipeline, most likely leaving a bad taste in your mouth when development and testing are fast but performance suffers once you release changes to production.

SQL Server 2025’s Standard Developer edition is a vital tool for any organization that values consistency and reliability across its database environments. Using the much more affordable standard edition of SQL Server empowers developers to test confidently, knowing that what works in development will also work in production. No more unpleasant feature surprises at go-live.

If you like our blog posts, subscribe to our newsletter. We will share all kinds of great stuff for FREE! If you’d like to chat about this feature or anything else database-related, contact us!


I attended the Microsoft Fabric conference for the first time last week. I wanted to provide a guide that CIOs and CEOs could leverage to understand how they could utilize the new announcements at the 2025 Fabric Conference to obtain a competitive advantage. To be transparent, I was skeptical because Microsoft consistently changes or rebrands its analytics platform every three to five years. We have gone from Parallel Data Warehouse (PDW) to Analytics Platform System (APS), Azure services, Azure SQL Data Warehouse, and Azure Synapse Analytics, bringing us to Microsoft Fabric.

John Sterrett from ProcureSQL attends the 2025 Microsoft Fabric Conference.

To my surprise, after this conference, I have gone from seeing Fabric as Microsoft's current take on analytics to seeing how it will stand out as the analytics platform of choice for people who want a simple, quick, and easy way to do analytics with the tools they already love using.

Artificial Intelligence (AI) will only be as practical as the quality of your data. Garbage in still equals garbage out, or as I like to call it, building a trusted dumpster fire. Preparing your data for AI will be the key to success with your AI Projects. Microsoft clearly understands this by focusing on preparing your data for AI with fabric mirroring, fabric databases, and SQL Server 2025. My takeaway is that you won’t have to get ready if you stay ready.

Copilot for all Fabric SKUs

As a commitment to this, Microsoft is giving more people access to its AI tools. In the coming weeks, users on F2 Fabric compute and above can utilize Copilot. Additionally, you can use Fabric Copilot capacity, a new feature that simplifies setup, user management, and access to Copilot across different tiers.

Why Fabric Mirroring Is A Game Changer

Those following us aren’t new to the concept and advantages of fabric mirroring. One of the biggest mistakes we see that multiplies the odds of your analytics projects failing is incorrectly landing your data into your analytics platform of choice. Either the data is missing, has been transformed incorrectly, or is no longer being received.

Microsoft provides a feature called “mirroring” to help solve the problem of getting your data into your landing zone. With Azure SQL Databases and fabric databases, it’s as easy as a few clicks. Coming soon, you will have similar experiences with PostgreSQL in Azure, Oracle, SQL Server in VMs, and on-premises. What about other apps/data stores? Open mirroring is coming soon, and you can leverage it to get your other data into the Fabric landing zone.

Multi-Cloud Shortcuts

Microsoft has partnered with Snowflake to provide iceberg-formatted data across Fabric, eliminating data movement and duplication. You can use a shortcut to point directly to an Iceberg table written using Snowflake in Azure. Snowflake has also added the ability to write Iceberg tables directly into OneLake.

Apache Iceberg tables can be used with Fabric due to a feature called metadata virtualization. Behind the scenes, this feature utilizes Apache XTable.

The key takeaway is that users can now work on the same data using both Snowflake and Fabric, without requiring data movement or duplication. Letting your data professionals utilize the tools they use best is a huge win.

Fabric Databases

Microsoft Fabric Databases are the new kid on the block, and they are already seeing traction as the first fully SaaS-ified database offering. Fabric databases are built for ease of use as part of a unified data platform. You can create databases in just a few clicks and have zero maintenance to worry about, as Microsoft fully manages the databases. Fabric database data is automatically mirrored into OneLake for analytics.

The key takeaway is that you can utilize Microsoft Fabric for application development and eliminate the need for a database infrastructure-as-a-service MSP/partner. You can eliminate this cost, as you should always get exponential value from your data MSP (what we built our practice focusing on), not just a warm body for monitoring or keeping the lights on.

SQL Server 2025

Microsoft announced some updates to SQL Server 2025 at the keynote and in other breakout sessions. While it is still in private preview, it was easy to see how anyone who can write T-SQL could leverage models and vectors without needing extensive knowledge of vectors or algorithms. GraphQL will enable developers to access API endpoints and consume data, similar to most other APIs. JSON will be treated as a first-class citizen, with its own data type and indexes, to help developers access their JSON data quickly and easily.
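
Here is a rough sketch of what the JSON support could look like, based on the public preview; the table is hypothetical, and the exact syntax may change before release:

/* Hypothetical table using the native json data type from the preview */
CREATE TABLE dbo.CustomerEvents
(
    EventID int IDENTITY PRIMARY KEY,
    Payload json NOT NULL
);
GO

/* Query into the JSON payload with the existing JSON functions */
SELECT EventID,
       JSON_VALUE(Payload, '$.eventType') AS EventType
FROM dbo.CustomerEvents
WHERE JSON_VALUE(Payload, '$.country') = 'US';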

With SQL Server 2025, you can mirror your data to Microsoft Fabric with zero ETL, zero code, and near real-time mirroring into OneLake at no additional cost, without requiring change data capture. This will help reduce your total cost of ownership. There are no additional compute costs for Availability Groups; you continue to utilize your Fabric compute.

The key takeaway is that Microsoft continues investing in making SQL Server more accessible from the ground to the cloud. SQL Server will continue to make it easier for you to utilize your data inside and outside the relational platform.

Other notable features

Autoscale Billing for Spark: Optimizes Spark job costs by offloading your data's extraction, load, and transformation to a serverless billing model.

Fabric command-line interface: The Fabric CLI is now in preview. Built on the Fabric APIs, it is designed for automation. There will be less clicky-clicky and more scripts that you can version control.

API and Terraform integration: Automate key aspects of your Fabric platform by utilizing Terraform. If you have used it with Azure, get ready to use it with Fabric as well.

CI/CD enhancements: With Fabric's Git integration, multiple developers can frequently make incremental workspace updates. You can also utilize variable libraries and deployment pipelines to help get your changes vetted and tested quickly through your various environments.

User data functions: Fabric user data functions allow you to host and run application logic on Fabric. Data engineers can write custom business logic and embed it into the Fabric ecosystem.

Statistics That Caught My Attention

  • Microsoft Fabric supports over 19,000 organizations, including 74% of Fortune 500 companies.
  • Power BI has over 275k users, including 95% of Fortune 500 companies
  • 45k consultants trained, 23k partner certifications in its first year
  • One billion new apps will be built in the next five years.
  • 87% of leaders believe AI will give their organization a competitive edge
  • 30,000+ fabric certifications completed in twelve months

I will be back next year and will provide you with another write-up, similar to the one I produced this week, in case you are unable to attend.

About ProcureSQL

ProcureSQL is the industry leader in providing data architecture as a service, enabling companies to harness their data and grow their business. ProcureSQL is 100% onshore in the United States and supports the four quadrants of data, including application modernization, database management, data analytics, and data visualization. ProcureSQL works as a guide, mentor, leader, and implementer to provide innovative solutions to drive better business outcomes for all businesses. Click here to learn more about our service offerings.

Do you have questions about leveraging AI, Microsoft Fabric, or the Microsoft Data Platform? You can chat with me for free one-on-one, or contact the team. We would love to share our knowledge and experience with you.

In this six-minute video, Allen Kinsel shares multiple ways to quickly determine whether SQL Server is the root cause or a symptom of your performance problem.

Demo Code


/* See all requests; OUTER APPLY keeps requests even when no SQL text is available */
SELECT *
FROM sys.dm_exec_requests AS r
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS st;

/* Filter out the noise; CROSS APPLY drops requests without SQL text, such as background tasks */
SELECT *
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st;

See how ProcureSQL helps companies streamline business decisions with data lakes and pipelines.

AI analytics now allow us to break down and analyze all parts of a business. Today, we will quickly talk about using a data lake and data pipelines to help streamline data analytics.

Every internal and external interaction can be scrutinized and perfected to create a well-oiled, efficient machine.

Consumer data, inventory management, market trends, and shipping logistics are just a few complicated processes AI can optimize and streamline.

It can drill down to the smallest detail and show the relationship between hundreds of thousands of different variables interacting with each other.

Current estimates indicate that AI can analyze millions of rows of data nearly instantly, whereas an IT team would take hours or even days to review just thousands of rows.

But even this comes with its problems.

When you are dealing with millions of rows of data, you need to be able to properly send it from the source, store it for later analysis, and make sure the data you are sending is error-free.

This can seem like a daunting task at first, but that is where modern technology comes into play.

Not only can this data be transferred and stored without any hassle, but it can also be verified to ensure there are no errors that could cause issues in analytics.

Data Pipelines

Let’s say you are an e-commerce business and need to send your data for analysis.

The data collected could be from customer purchases, website clicks, email engagement, inventory systems, customer feedback, social media engagement, and more.

In the past, data was sent in batches, which could take hours or even days to process. Furthermore, as the data volume exploded, batch processing could not keep up with the volume.

Data pipelines enable you to send all this data in a constant, raw, and unorganized manner without worrying about sorting, processing, or the volume being sent.

No longer do you have to wait until the workday begins to manually send the data, cutting out the possibility of human error in the process.

You no longer have to limit how much data you send or what format it might be in. Data pipelines offer scalability without the risk of sending too much data at one time.

AI analytics are now smart enough to be able to take this unorganized live data, make sense of it, and give you instant business solutions.

Some problems cannot wait to be solved, and being able to act immediately or having the AI act on your behalf makes a huge difference.

Last but not least, you save yourself an incredible amount of money in the process.

Because most modern data pipelines are cloud-based managed services, you do not need to build infrastructure or hire engineers to maintain them.

Plus, you are only paying for data sent through the pipeline. There is no need to worry about the power and maintenance costs of running a physical server 24/7.

Data Lakes

Once the data passes through the data pipeline, it is stored in a data lake.

These data lakes serve as massive data storage centers that can accommodate any data imaginable.

Before, storage was made for traditional data that could be converted into numbers, such as sales figures. However, with data lakes, companies can send over data such as videos, social media posts, etc.

On top of that, storage was expensive and took processing time to get it back to the user.

Data lakes eliminate all those problems, as AI can analyze and sort any file. Cloud systems enable nearly infinite storage capacity through services like Amazon AWS, Microsoft Azure, or Google Cloud.

As discussed before, data pipelines make transferring data to the data lake instantaneous and streamlined.

The massive storage capacity gives you a significant edge in analysis. The more datasets you add to the lake, the better you can train your AI and machine learning models, resulting in more accurate analysis and solutions.

That is millions of lines of unorganized, wildly varying data read and analyzed in real-time, making it thousands of times more efficient than previous methods.

Ensuring Accurate Data and Giving Your Business AI Solutions

Throughout this entire process in the pipeline and in the lake, AI is working its magic behind the scenes.

As soon as it enters the pipeline, data is scrutinized to check for any errors before it reaches the data lake. Duplicates or missing values can occur, and AI can instantly check for any mistakes.

Furthermore, it runs compliance checks to make sure that the data sent does not break any privacy laws (GDPR, HIPAA, etc.).

This process occurs not only in the pipeline but also in the data lake, and after the data is retrieved, to ensure there are no errors or compliance issues in the data.

The best part is that this is done in the blink of an eye and is fully automated, leaving no room for error.

Once everything is given the green light, your data can be analyzed for immediate business solutions.

Such solutions include addressing warehouse inventory management, analyzing the relationship between marketing emails and revenue, or optimizing shipping logistics.

The possibilities are endless. And with the modern innovations of data pipelines and lakes, it has never been better, now offering scalability, automation, and efficiency like never before.

Click here now and get in contact with ProcureSQL today to let AI jumpstart your business solutions.

For every company in America, there are thousands of different variables that can positively or negatively affect its finances. Today, we will focus on leveraging Decomposition Trees, Anomaly Detection, and Key Influencers charts to turn your data into strategic insights.

Labor costs, logistics, customer retention, partnerships and more all affect the day-to-day grind to keep a business afloat. And all of them come with their own unique problems to solve.

There are thousands of variables out there, and it's important to understand how they interact with one another in order to optimize efficiency.

This is where AI and machine learning come in — particularly in the case of data visualization.

On a simple level, data visualization illustrates stats such as sales per month, KPI goals to hit, and revenue coming from different sections of a company.

And while this may show you what happened, it doesn’t paint the whole picture.

When you add AI and machine learning to these visualizations, you get a better idea of why exactly your figures are the way they are.

And that’s what I want to show you today.

There are dozens of tools out there, including Tableau, Qlik, and Looker, that allow executives to make critical decisions.

Today, I'm going to focus particularly on Microsoft Power BI and illustrate how you can apply AI and machine learning to its existing visuals.

The AI Models

Time Series Models

Let’s say you’re looking at a sales chart from the year broken down by month. A normal line or bar chart will show you how much revenue came in monthly and will stop at the last month’s sales.

This is where AI changes everything.

Time series models allow you to predict future sales based on historical data and factors.

It uses a slew of different variables that would influence a metric like sales revenue and can give you an idea of where your future revenue will land in the coming months.

Data like seasonality, market trends and sales growth all affect revenue. And understanding these variables ahead of time can impact how to make crucial decisions.

These predictions come with a degree of uncertainty, and the time series model takes that into account when projecting the future.

For instance, in a line graph, the area around your future line will be shaded to show where your revenue could end up.

Knowing this can help you avoid overspending or underutilizing the resources that you have at your disposal.

Regression Analysis

Next, let's say that you want to answer the question of how much money to spend on marketing, and how that spending affects sales.

Usually this can be easily seen on something like a scatterplot or bubble chart.

But with so many data points, it can be hard to see exactly where the optimal level would be.

Regression models give you further analysis here: they analyze your data and allow you to plot a proper regression trend line. This gives you an idea of exactly how much one variable, like marketing spending, affects another variable, like sales revenue.

Furthermore, just like time series models, you can use regression models to predict future outcomes and have an idea of where you will end up depending on what actions you take.

The difference is that time series models depend on ordered data and time. Regression models instead focus on independent variables. This then allows you to see the result of variables that you haven’t tried yet, such as increasing marketing spending even more than you have in the past.

Clustering Analysis

Businesses often deal with a wide range of clients that can differ based on their behavior, demographics or purchase history.

Taking these different variables into consideration, you can use clustering to spot trends in graphs that may not be apparent at first glance.

On a scatterplot, for instance, clustering analysis would be able to circle and identify similar customers and show you how they trend, like the visual below:

The different circled groups would represent whatever variable links the grouped individuals together such as age or buying frequency. In this way, you could target heavy buyers with more marketing or cater your product to a certain age demographic if you see that one is buying more than another.

Unique AI-Powered Visuals

Our AI tools are best used when they complement key metrics that need to be considered. With Power BI, you can use these visuals to better help paint a picture of why things are happening.

On top of adding this AI-driven data to normal charts, Power BI offers some unique AI-powered visuals that can help influence optimal decision making.

Key Influencers Chart

Let’s say, for instance, you are trying to understand what causes the most customer complaints.

Using a key influencers chart gives you insight into that question along with exactly how much each variable is responsible.

As you can see in the chart below, the number one influence is the type of customer, followed by a slew of different variables like theme type, company size and more:

Decomposition Trees, Anomaly Detection, Key Influencers Charts in Power BI can turn your data into strategic insights.

What's even better is that you can drill down into each topic. Drilling into the number one influencer, "Role in Org is consumer", you can see that for this company, administrator is right behind consumer, and publisher is well below the other two:

Key Influencers Charts, Decomposition Trees, and Anomaly Detection in Power BI can turn your data into strategic insights.

This kind of insight presented in a quick, easy-to-understand manner is huge when it comes to decision making and can give you a big advantage.

Of course, this is just one of the questions you can answer with a key influencers visual. Backorder likelihood, sales revenue, credit risk and more can be answered with the power of AI.

Decomposition Trees

Imagine that you are the head of a medical equipment company, and you are trying to tackle bringing down your percentage of backorders.

Decomposition trees exist so you can find the root of a problem by drilling down on a huge amount of data and categories.

Take the image below as an example:

You can see from the image above that items with intermittent demand are most likely to be on backorder from warehouse #0477.

On top of that, if you click a different route, the tree will recalculate your path and create a new set of data for you to look at.

As you can see in the image below, the cardiovascular products are much more delayed from Distribution Center A than B:

Knowing where exactly a problem lies allows you to make quick and efficient decisions. For instance, if you were partnered with one of these warehouses and found that it accounted for 80% of your backorders in one of your products, it would make sense to find a new partner that is more efficient in handling that product.

Anomaly Detection

When looking at sales from the year, there can sometimes be strange unexplained spikes in revenue. When this occurs, it is best to know exactly why this happened so proper action can be taken.

This is where AI comes in with its anomaly detection abilities.

Take a look at the image below:

Anomaly Detection in Power BI helps you get actionable insights with your data!

That spike in September should be a red flag and should trigger someone to look into why such a rise in revenue happened on that day.

Using Power BI's anomaly finder tool, you can add variables for it to take into account and have it provide a possible explanation for the strange day in sales.

In this case, the number one cause for this anomaly had to do with the ‘Region – West’ category. This could mean that either a client from the West region made a large purchase or that many individual customers did on that day.

Using this anomaly feature gives you a good starting point on finding out exactly why one day was very much unlike the others.

Tap Into AI’s Power Today

As we can see, the different Machine Learning and AI visualizations allow you to gain an incredible amount of insight. And with this knowledge at your side, your ability to answer critical questions improves massively.

Its ability to identify problems and unseen trends is unmatched. AI is turning industries upside down with its insight, and if you haven't tapped into its potential yet, you're missing out.

Contact us to start a discussion to see if ProcureSQL can guide you along your Machine Learning and AI journey.

Microsoft Entra Authentication Is a Superior Alternative to SQL Server Authentication

Securing data access is paramount for organizations of any size. Nobody wants to be the next data security leak that goes viral. Adopting robust authentication methods that enhance security, streamline the user experience, and simplify management is crucial for decision-makers. Today, I write about how you can utilize Microsoft Entra ID to improve your database security footprint.

ProcureSQL recommends that Microsoft Entra ID replace traditional SQL authentication for Azure SQL Databases and Azure Managed Instances. Microsoft Entra ID offers benefits that address the shortcomings of SQL authentication.

What is SQL Server Authentication?

From the beginning of SQL Server, there were two options: Windows Authentication and SQL Server Authentication, commonly known as SQL Authentication. SQL Authentication allows users to connect with a username and password, and it was helpful in environments where users were not part of a Windows domain or when applications needed to connect without using Windows credentials.
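
For context, creating and granting access to a SQL-authenticated account looks like this; the login name and password below are placeholders:

/* Instance-level login whose password is stored and checked by SQL Server */
CREATE LOGIN app_reporting WITH PASSWORD = N'ChangeThisStrongPassword!1';
GO
/* Database-level user mapped to that login */
CREATE USER app_reporting FOR LOGIN app_reporting;
GO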

The Pitfalls of SQL Server Authentication

Here is why SQL authentication is inadequate:

Security Vulnerabilities

SQL authentication relies on username and password combination stored within the instance. This approach presents several security risks:

Password Attacks

SQL-authenticated accounts are susceptible to brute-force and dictionary attacks. If you have weak passwords or rotate them infrequently, the bad guys can eventually break through.

Credential Storage

Passwords are often stored in connection strings or configuration files, increasing the risk of exposure.

Limited Password Policies

Most people don't even implement SQL Server's native password policy enforcement for SQL-authenticated accounts, and even when they do, it is less robust than that of modern identity management systems.

Management Overhead

Decentralized Account Management

Every Azure Managed Instance or Azure SQL database requires separate account management. Managing all these accounts per instance or database increases the administrative burdens and the risk of inconsistencies.

Password Rotation Challenges

Implementing regular password changes across multiple databases and all their applications is complex and error-prone.

Wouldn’t it be nice if password rotation was in a single place?

The Microsoft Entra ID Authentication Advantage

Microsoft Entra authentication addresses these issues and significantly improves several key areas:

Enhanced Security

Centralized Identity Management

Microsoft Entra ID is a central repository for user identities, eliminating the need for separate database-level accounts per instance or database. This centralization reduces the attack surface and simplifies security management.

Robust Password Policies

Entra ID enforces strong password policies, including complexity requirements and regular password rotations. It also maintains a global banned password list, automatically blocking known weak passwords.

Multi-Factor Authentication (MFA) Support

The last thing we want to see is another data breach due to MFA not being enabled. Microsoft Entra authentication seamlessly integrates with Microsoft Entra MFA, adding an extra layer of security. Users can be required to provide additional verification, such as a phone call, text message, or mobile app notification.

Advanced Threat Protection

Microsoft Entra ID includes sophisticated threat detection capabilities that identify and mitigate suspicious login attempts and potential security breaches.

Improved Access Management

Role-Based Access Control (RBAC)

Entra ID allows for granular permission management through Azure RBAC, enabling administrators to assign specific database roles and permissions to users and groups.

Group Memberships

Administrators can create groups, automating access management as users join, move within, or leave the organization. Wouldn't it be ideal to deactivate a user's Entra ID account in one place and have their access removed everywhere when they leave?
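
In Azure SQL Database or Managed Instance, granting access to a Microsoft Entra ID group looks something like this; the group name is a placeholder:

/* Create a database user mapped to a Microsoft Entra ID group */
CREATE USER [SQL-Readers] FROM EXTERNAL PROVIDER;
GO
/* Grant permissions through a database role; membership in the Entra ID
   group now controls who gets this access */
ALTER ROLE db_datareader ADD MEMBER [SQL-Readers];
GO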

Conditional Access Policies

Entra ID supports conditional access, allowing organizations to define conditions under which access is granted or denied. Examples can include users, device compliance, or network location.

Seamless Integration with Azure Services

Microsoft Entra authentication works harmoniously with other Azure services. Use managed identities for your service resources to simplify access management across the Azure ecosystem. Microsoft Entra managed identities eliminate the need for the application to store a password, similar to Group Managed Service Accounts (gMSA) in on-premises Active Directory.

Streamlined User Experience

Single Sign-On (SSO)

Users can access Azure SQL databases using their organizational Microsoft Entra credentials, eliminating the need to remember multiple credentials.

Self-Service Password Reset

Entra ID offers self-service password reset capabilities, reducing the burden on IT helpdesks and the time from response to resolution while improving user productivity.

Reduced Password Fatigue

Centralizing authentication simplifies password management for all users, resulting in better password hygiene and a reduced risk of reusing the same or similar passwords.

Compliance and Auditing

Comprehensive Audit Logs

By logging authentication events, Microsoft Entra ID offers improved visibility into user access patterns and potential security incidents.

Regulatory Compliance

Entra password authentication helps organizations meet regulatory requirements, such as GDPR, HIPAA, and PCI DSS, by providing strong authentication and detailed audit trails.

Integration with Azure Policy

Organizations can enforce compliance at scale by defining and implementing Azure Policies that govern authentication methods and access controls.

Implementation Considerations

While the benefits of Microsoft Entra Authentication are clear, decision-makers should consider the following when planning a migration:

Hybrid Environments

For organizations with on-premises Active Directory, Microsoft Entra Connect can synchronize identities, enabling a smooth transition.

Application Compatibility

Ensure all applications connecting to Azure SQL databases support Microsoft Entra Authentication methods.

Training and Change Management

Plan for user education and support to ensure a smooth transition from SQL Authentication to Entra password authentication.

Gradual Migration

Consider a phased approach, migrating critical databases first and gradually expanding to the entire environment.

Final Thoughts

As information technology leaders, moving from SQL Authentication to Microsoft Entra Authentication for Azure SQL databases and Managed Instances is strategic. This transition addresses the security vulnerabilities and management challenges of SQL Authentication and paves the way for a more secure, compliant, and user-friendly database access experience. Adopting Microsoft Entra Authentication for Azure SQL databases is not just a best practice—it’s necessary for forward-thinking IT leaders committed to safeguarding their organization’s digital future in Azure.

About ProcureSQL

ProcureSQL is the industry leader in providing data architecture as a service to enable companies to harness their data to grow their business. ProcureSQL is 100% onshore in the United States and supports the four quadrants of data, including application modernization, database management, data analytics, and data visualization. ProcureSQL works as a guide, mentor, leader, and implementor to provide innovative solutions to drive better business outcomes for all businesses. Click here to learn more about our service offerings.

Every year, hundreds of billions of packages are shipped around the world. And in 2023 alone, that number broke the record at 356 billion shipped worldwide.

That’s a 100% increase since 2016. The industry will only get bigger and more complex, as estimates show that by 2028, nearly 500 billion packages will be shipped globally.

As supply chains become more bloated with delivery demand, they become more complex, multiplying efficiency problems.

This is where Machine Learning and Artificial Intelligence (AI) step in.

Companies worldwide, including many Fortune 500 names, have turned to Machine Learning and Artificial Intelligence to fine-tune their logistics and, most importantly, increase revenue and reduce costs.

If you are not using Machine Learning, you are falling behind.

When shipping out millions of packages per year, saving 5% in cost at every step is a massive boost to profits.

The numbers don’t lie either. Companies have already shown that AI is helping them meet their revenue goals. According to McKinsey, 61% of manufacturing executives report decreased costs, and 53% report increased revenues as a direct result of introducing AI into their supply chain.

Three Common Machine Learning Models

Several kinds of Machine Learning models have been designed to tackle different problems. Today, I will focus on three types of modeling systems: Time Series Models, Regression Models, and Classification Models.

Each offers something unique, and all can lead to efficiency improvements in the supply chain in different ways…

Time Series Forecasting Models

Time series forecasting is as simple as it gets when you want to improve efficiency with Machine Learning. The model takes historical data such as sales, seasonality, and trends and uses it to make predictions for the future.

For instance, let's say you're in the clothing apparel business and need to stock your warehouses full of winter gear for the upcoming season. Time series forecasting would allow you to look at previous years' sales and show you exactly when to stock winter gear for the upcoming cold months. This means you won't stock the wrong gear too early and take up precious space.

Likewise, it can keep you prepared for any major spikes in a single item based on sales trends. If you notice that in November, sales of winter apparel increase by 40%, you can ensure you aren’t caught off guard and avoid the risk of being understocked.

In doing so, you also keep customers happy, knowing they can always get the item they need. Plus, you’re meeting the demand of what they will buy. Very few are buying sandals in January. But if you find out that data shows that it’s still 10% of your sales, you might as well keep selling them and not lose out on that profit.

Regression Models

A regression model is an AI tool that finds the relationships between two variables and how one affects the other.

In our supply chains, regression models are well-equipped to aid in predicting delivery times of goods. The reason is that there are so many variables that go into transportation such as distance, weather conditions, and traffic.

Machine Learning and AI can look at the historical data of all of these variables and give you a leg up on the proper route to send your material from point A to point B. If you can avoid traffic, reroute around bad weather, and fully optimize routes taken, then you save on gas and time — reducing overall costs in the process.

Much like time series forecasting, regression models also allow you to predict how big of a staff you need. If you're in charge of a warehouse, for instance, variables such as the number of orders, seasonality, and even the day of the week affect exactly how many staff you need to pick items, box up, and ship out orders.

Furthermore, these factors affect how large of a fleet you need to be able to deliver your items. Having two trucks does no good when your demand requires eight. Likewise, having 100 workers at a warehouse doesn’t make sense when you only need 20 working that day.

Classification Models

Finally, let’s talk about classification models — AI tools designed to factor in many different variables and allow you to assess a wide variety of things from risk assessment to how to market to certain customers.

For instance, when it comes to risk assessment, you could use a classification model to let you know which items are particularly prone to getting damaged. Factors that you could put in are the packaging used to handle the item, travel conditions (if it is going to be on a bumpy road or in extreme temperatures), and distance (how many different handlers are going to touch it).

If you know all of these factors ahead of time, and allow AI to assess it as a high or low-risk item, then you can take precautions so it doesn’t arrive damaged. You can beef up the packaging, adhere to strict orders, or arrange for special transportation to ensure that nothing goes wrong.

On top of that, when delivering items, you can use classification models to determine whether a package is at a high or low risk of arriving late. Traffic, weather, distance, holiday seasonality, and package weight are all factors you can use to estimate when something will be delivered. This keeps customers and clients happier, as they'll have a better understanding of when exactly they will receive their items.

Finally, you can even group your customers into different categories to give you a better idea of who to target with internal marketing. For instance, those with high spending habits or those that always pick next-day delivery would be more worth your while to target with marketing efforts than those that spend less or select standard delivery.

Machine Learning Can Improve Your Business Today

As we can see, the different Machine Learning and AI models out there offer a huge variety of insight across the board when it comes to supply chains.

The best part is that the examples I mentioned are only the tip of the iceberg when it comes to providing solutions to supply chains and logistics.

Machine Learning and AI have changed the game from inventory management to quality control to delivery efficiency and more. Its ability to fine-tune processes and optimize efficiency gives companies a leg up on their competitors. And every year, AI and Machine learning models get better and better.

With companies from Walmart to Nike to FedEx and more adopting Machine Learning and AI into their supply chains, it only makes sense that other companies mimic their success and do the same.

Contact us to start a discussion about whether ProcureSQL can guide you along your Machine Learning and AI journey.