Setting page visibility and the active page are often overlooked last steps when publishing a Power BI report. It’s easy to forget the active page since it’s just set to whatever page was open when you last saved the report. But we don’t have to settle for manually checking these things before we deploy to a new workspace (e.g., from dev to prod). If our report is in PBIR format, we can run Fabric notebooks to do this for us. This is where Semantic Link Labs helps us.

You can download my notebook here. I’ll walk through the steps in this post.

Code Walkthrough

First, we must install semantic-link-labs. If you already have an environment with this library installed, you can use that and skip this step.

%pip install semantic-link-labs

Next, we need to import some modules.

# Imports
import sempy_labs as labs
from sempy_labs import report
import ipywidgets as widgets
from IPython.display import display

Then we can get to work. First, I’m capturing the following information using widgets: workspace ID, report ID, and page name.

w_workspace = widgets.Text(description='Workspace ID', style={'description_width': 'initial'})
w_report = widgets.Text(description='Report ID', style={'description_width': 'initial'})
w_activepage = widgets.Text(description='Active Page Name', style={'description_width': 'initial'})
display(w_workspace)
display(w_report)
display(w_activepage)

Running the code above creates three text widgets. Enter the required information into each one.

Fabric notebook widgets that capture workspace ID, report ID, and active page name

You could use variables in a cell to collect the required information. I’m using widgets to make it clear what information needs to be entered.
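If you go the variables route instead, a minimal sketch looks like the following; the IDs shown are hypothetical placeholders:

# Alternative to widgets: assign the values directly in a cell
workspace_id = "00000000-0000-0000-0000-000000000000"  # hypothetical workspace ID
report_id = "11111111-1111-1111-1111-111111111111"  # hypothetical report ID
active_page_name = "Main"  # hypothetical page display name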

Once you have filled in the textboxes, you can run the last 2 cells. The fourth code cell is where I’m actually making the changes to the report.

var_reportname = labs.resolve_report_name(w_report.value, workspace=w_workspace.value)

var_rptw = labs.report.ReportWrapper(report=w_report.value, workspace=w_workspace.value, readonly=False)

var_rptw.set_active_page(w_activepage.value)
var_rptw.hide_tooltip_drillthrough_pages()
var_rptw.save_changes()

var_rptw.list_pages()

First, I use the report ID entered into the widget to get the report name.

Then I create my report wrapper (var_rptw). This object will be used with all the subsequent functions.

Next I set the active page to the page name entered into the w_activepage widget using the set_active_page() function. Then I call hide_tooltip_drillthrough_pages().

Each page has associated metadata that indicates whether it is a tooltip page and whether it is a drillthrough target page. I believe the tooltip page is determined by the page information setting labeled “Allow use as tooltip”.

Power BI page formatting options showing the Page information section containing the page name and the "allow use as tooltip" setting.
The Allow use as tooltip setting is in the Page information section

For drillthrough pages, I believe the presence of a field in the Drill through field well on the page is what causes it to be flagged as a drill through page.

The Visualizations pane in Power BI showing a field populated in the drillthrough field pane.
The drill through fields are in the Visualizations pane when the page is selected.

Calling the set_active_page() and hide_tooltip_drillthrough_pages() functions changes the metadata for the report object, but we have to save those changes back to the report in the target workspace for them to take effect. This is why we call var_rptw.save_changes().

Once we save the changes, we get a response back that lists the changes made to the report.

🟢 The 'Main' page has been set as the active page in the 'DataSavvyReport2' report within the 'SempyLabsTest' workspace.
🟢 The 'Tooltip' page has been set to 'hidden' in the 'DataSavvyReport2' report within the 'SempyLabsTest' workspace.
🟢 The 'Drillthrough' page has been set to 'hidden' in the 'DataSavvyReport2' report within the 'SempyLabsTest' workspace.
🟢 The report definition has been updated successfully.
The output from calling save_changes() lists which page was set to active, which pages have been hidden, and a confirmation that the report definition was saved.

Calling list_pages() produces a pandas DataFrame with metadata for each page. We can refer to the Hidden, Active, Type, and Drillthrough Target Page columns to confirm the desired changes.
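If you want to check those columns programmatically, a minimal sketch looks like this (I'm assuming the column names above; they may vary slightly by semantic-link-labs version):

# Inspect the page metadata returned by list_pages()
df_pages = var_rptw.list_pages()
df_pages[['Page Name', 'Hidden', 'Active', 'Type', 'Drillthrough Target Page']]  # column names may vary by version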

As a final confirmation, we can also view the Power BI report from within the notebook. That is what I’m doing with the launch_report() function. It provides a read-only view of the report in the notebook cell output.
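That cell is a one-liner along these lines. This is a sketch: I’m assuming launch_report() lives in the report submodule we imported and accepts the report name we resolved earlier, so check the signature in your version of semantic-link-labs:

# Embed a read-only view of the report below the cell
report.launch_report(report=var_reportname, workspace=w_workspace.value)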

Power BI report embedded in a notebook

More posts about Semantic Link Labs

So far, I’ve been exploring the report and admin subpackages of Semantic Link Labs. Below are some other blog posts I’ve written about them.

Finding fields used in a Power BI report in PBIR format with Semantic Link Labs

Get Power BI Report Viewing History using Semantic Link Labs

Want to learn more about Power BI Automation with Semantic Link Labs? Join our webinar this month covering this exact topic.

About ProcureSQL

ProcureSQL is the industry leader in providing data architecture as a service, enabling companies to harness their data and grow their business. ProcureSQL is 100% onshore in the United States and supports the four quadrants of data, including application modernization, database management, data analytics, and data visualization. ProcureSQL works as a guide, mentor, leader, and implementer to provide innovative solutions to drive better business outcomes for all businesses. Click here to learn more about our service offerings.

Do you have questions about leveraging AI, Microsoft Fabric, or the Microsoft Data Platform? You can chat with us for free one-on-one, or contact the team. We would love to share our knowledge and experience with you.

 

Earlier this month, we hosted T-SQL Tuesday and chose the topic of Growing the Younger Data Community and Speakers.  Today, I bring you a summary recap of everyone who shared their thoughts on this subject with links to their full content.

First, if you are not familiar with T-SQL Tuesday, it is a monthly blog party created by Adam Machanic in 2009 and now managed by Steve Jones.

 

T-SQL Tuesday Recap

Joe Fleming shares his ideas on how we can watch for those wallflowers, make them feel welcome, and help them create the next generation of in-groups.

Rob Farley shares his thoughts on mentoring various people, and some of that involves presenting. Still, more of it is about establishing them as experts, helping them learn what they want to know, and encouraging them to take steps to achieve their goals. If someone feels comfortable in their own skin as an expert, knowing that they are genuinely good at what they do, then they will often start to realize that they belong on the other side of the room.

Andy Yun flips the script and asks the readers to get involved. You do not have to be an expert to help. You can help by encouraging someone to share their ideas and become more involved in the data community.

Steve Jones shares his story of helping by finding people who inspire and educate their local community.

Andy Levy shares his experience meeting Courtney Woolum at the PASS Summit in 2023 with Steve Clement. They blogged about their experience and the importance of the hallway track (walking around and meeting people) at conferences.

Mala Mahadevan shares what has worked over the past two decades and what has changed. I think she is spot on in finding what already exists and participating in whatever way you can. Real growth comes from real human connection.

Robert Douglas shares his thoughts on building a legacy and how that involves presenting to the targeted audience of people at the start of their story.

TSQLTuesday #188 – Growing the Younger SQL Community and Speakers

This month, I am hosting T-SQL Tuesday for the very first time. T-SQL Tuesday is a monthly blog party created by Adam Machanic in 2009. I want to give Steve Jones a shout-out for allowing me to host this month’s edition.

Each month, a new topic is chosen and published on the first Tuesday of the month, and contributors post their takes on the following Tuesday. Anyone can participate by sharing their thoughts on their preferred forum. Please publish your post by July 8th at midnight CDT. Please leave a comment on this blog post with the link to your response so that we can include your thoughts in the roundup. Doing so will provide everyone with a centralized location to find all the responses and let me know which ones to include in the recap.

Growing the Younger Data Community and Speakers

ProcureSQL wouldn’t exist today if it weren’t for being involved in the community. Straight out of college, Dolph Santorine dragged me along to the local AITP monthly meetings in Wheeling, WV.  This led me to start a SQL Server user group in Wheeling and host SQL Saturday events in both Wheeling and Austin. During SQL Saturday Austin in May, I had a great conversation with Steve about our thoughts on the state of the SQL community post-COVID. We both noticed that the average speaker age wasn’t getting any younger. This leads me to ask this month’s question.

What are you doing, or what can we do to encourage younger people to get involved in the SQL community while increasing the number of younger speakers?

Anything is fair game. For example, here are some things I was thinking about:

  • Involving the local colleges in event planning for SQL Saturdays
  • Bringing interns and younger co-workers to user group meetings
  • Hosting lightning talks, where speakers focus on a single topic for five to ten minutes
  • Mentoring a speaker through building and delivering their first presentation
  • Allowing a new speaker to co-present with you
  • Doing a one-on-one review, giving a critique on how they can improve their session
  • Creating a budget for your young speakers to speak at events
  • Hosting a track or event that only allows new speakers the opportunity to present and share their knowledge
  • Making sure new local speakers can speak at your event, even if that means saying no to MVPs and Microsoft employees

You Never Know the Impact You Will Generate

As the host this month, I will go first. I want to share two brief stories about helping new speakers and the impact it had on them.

My first big presentation was at SQL Saturday DC, many years ago. If you are familiar with Amateur Night at the Apollo, I would have given myself the hook.

It might have been my last presentation if it weren’t for Allen White taking the time to sit with me one-on-one after my session and go through the things I did well and the areas where I could improve, ensuring my presentation was better the next time I gave it. ProcureSQL would most likely never have existed without Allen taking the time to help make me a better speaker. It’s fantastic to look back at how fifteen minutes had such a significant impact on my career. I would never have spoken at PASS Summit or become a Data Platform MVP, and I definitely wouldn’t have focused on helping new speakers myself.

Later on, I worked at RDX for Kon Melamud, one of the most intelligent people I’ve ever met. He had never given a community session before, even though he worked down the street from the Pittsburgh SQL User Group meeting location. One month, I was the speaker, and I talked him into going with me and standing next to me as I gave the presentation. I told him I would do the presentation and, whenever I finished a section, ask him to share his thoughts and experience from working with over 100 different customers. He was extremely nervous, but this was the perfect way to introduce him to the community. Doing so got him started and, over time, encouraged him to establish a budget and allow others at RDX to speak at community events. Today, Kon is the CTO at ProcureSQL, but more importantly, my best friend. Our relationship wouldn’t have grown without our involvement in the community together.

Instructions

Now that you have the topic, let’s recap the instructions:

  • Schedule your post to publish on Tuesday, July 8th.
  • Include the T-SQL Tuesday image.
  • Post a link to your blog post in the comments of this post, so I have an easy way to find it and include it in the recap.
  • Post it to social media if you can, and include the #tsqltuesday hashtag.
  • Link back to this blog post so that everyone can find a recap of all the blog posts on this topic.
  • Watch for my wrap-up the following week!

 

Microsoft Fabric Mirroring changes the game with Data Ingestion, giving you near real-time data with a no-code framework.

Microsoft’s Fabric Mirroring will change how you perform data ingestion. If you are using products to automate batch processes for data dumping, did you know that Fabric Mirroring might remove the need for these tools and provide you with near real-time access to the data as it changes in the source systems?

If you have not yet heard of the medallion architecture, it uses bronze, silver, and gold layers to describe how data moves from intake into your data hub to consumption by your reporting applications of choice. This multi-layered approach existed before I started my analytics career in the early 2000s. Think of it simply: bronze is your unprocessed data, silver is somewhat cleaned and organized data processed from your bronze layer, and gold is your aggregated and optimized data ready for prime-time business insights.

It’s essential to understand the evolution of data management. From the ’90s to the early 2000s, the process of getting data from each application (referred to as a spoke) into your data repository (the data hub) was complex. In the Microsoft world, multiple SSIS packages or other processes pulled data into tables full of varchar(max) columns, typically as a batch process that ran on a schedule, leading to potential issues. There were so many SSIS packages that we needed an automation language to build them all rather than building each one individually.

Many businesses’ analytics projects struggle to quickly integrate the correct data into their hub so that data transformations and validations can be effective. If you get this wrong, there is no point in collecting $200 and passing Go. Your data analytics project might end up going straight to jail.

How can we load data quickly and successfully?

I am introducing you to a no-code, near-real-time option for loading your data into your data lake (data hub) within Fabric. This new feature is known as Fabric Mirroring.

While I love the functionality of Fabric Mirroring, I am not a fan of the name. Because the names are similar, many people with SQL Server experience assume it works like Database Mirroring.

In my opinion, Fabric mirroring is similar to implementing Change Data Capture (CDC) on your SQL Server databases. CDC feeds data into a real-time streaming tool like Apache Kafka to copy data from your spoke (SQL Server application database) into your hub (Data Lake).

The benefit here is twofold. First, you don’t have to manage the Change Data Capture or Kafka implementations. Second, and most importantly, this is more than just an SQL Server solution. In the future, you can use Fabric Mirroring to ingest data from all your sources (spokes) into your data hub in near real-time, with minimal to no code required.

For example, here is how to use Fabric Mirroring to import Dynamics 365 or Power Apps data into Fabric. You can do the same for Azure Cosmos Database and Snowflake. SQL Server is coming soon.

Currently, the following databases are available:

Platform | Near real-time replication | Type of mirroring
Microsoft Fabric mirrored databases from Azure Cosmos DB (preview) | Yes | Database mirroring
Microsoft Fabric mirrored databases from Azure Databricks (preview) | Yes | Metadata mirroring
Microsoft Fabric mirrored databases from Azure Database for PostgreSQL flexible server (preview) | Yes | Database mirroring
Microsoft Fabric mirrored databases from Azure SQL Database | Yes | Database mirroring
Microsoft Fabric mirrored databases from Azure SQL Managed Instance (preview) | Yes | Database mirroring
Microsoft Fabric mirrored databases from Snowflake | Yes | Database mirroring
Microsoft Fabric mirrored databases from SQL Server (preview) | Yes | Database mirroring
Open mirrored databases | Yes | Open mirroring
Microsoft Fabric mirrored databases from Fabric SQL database (preview) | Yes | Database mirroring

Now I know I can use Fabric Mirroring to help me get near real-time data into my hub with no code required. Why else should Fabric Mirroring be a game-changer for my analytics projects?

Fabric Mirroring enables us to accomplish a lot more in less time.

Suppose you have an SLA for getting data into a data warehouse in 24 hours, and processing through all the layers takes you 24 hours (12 hours into bronze, 6 hours from bronze to silver, and 6 hours from silver to gold). If landing changes into bronze now happens in near real-time, say 90 seconds, that gives you back nearly 12 hours to improve data quality, data validation, and other processes upstream.

Centralized Data Management

Having a single hub that the applications (spokes) automatically push data to eliminates the need to install and manage additional software for every client and tool. You transition from pulling data from the spokes with batch processing to pushing data from the spokes in near real-time. It also simplifies data governance and enhances security, because combining the hub with Microsoft Purview lets you see which spokes the data comes from.

For example, suppose you must comply with GDPR, and Sarah in the UK requests that her data be removed. From the hub, you can quickly trace her data back to the spokes and determine what needs to be purged.

Simplified Data Ingestion

Instead of mixing and matching different data sources, your delta tables will be created across your Cosmos databases, Azure SQL databases, Dynamics 365, and other future Fabric Mirroring sources. You no longer need to worry about which sources are Excel, CSV, flat files, JSON, etc. They all land in the same format, ready for your transformations, data validation, and any business rules required for your silver layer.

Improved Query Performance

Those who know me know that I love discussing query performance tuning. I am passionate about making databases go just as fast as your favorite F1 race car. I also know that you have at least one group of people running reporting queries against your line-of-business application database or an availability group replica. This increases locking, which slows down the application database’s original purpose. Point those reports at your data hub instead, and that locking goes away.

The mirrored data is also stored in an analytics-ready format, such as delta tables, which enhances query performance across various tools within Microsoft Fabric, including Power BI.

What if you cannot use Fabric Mirroring?

The sources for Microsoft Fabric Mirroring to date are limited. If I had on-premises data sources or other sources that are not yet ready for Fabric Mirroring, I would still encourage this architectural approach: use change data capture, where available, to stream your data into your data hub of choice.

About ProcureSQL

ProcureSQL is the industry leader in providing data architecture as a service, enabling companies to harness their data and grow their business. ProcureSQL is 100% onshore in the United States and supports the four quadrants of data, including application modernization, database management, data analytics, and data visualization. ProcureSQL serves as a guide, mentor, leader, and implementer, providing innovative solutions to drive better business outcomes for all businesses. Click here to learn more about our service offerings.

In 2023, Microsoft announced its new platform, Microsoft Fabric, an innovative product that combined Microsoft Power BI, Azure Synapse Analytics, Azure Data Factory, Fabric SQL Databases, and Real-Time Intelligence into one platform.

Over 21,000 companies use Microsoft Fabric, and the success stories paint a promising future.

If your team hasn’t switched to Fabric yet, now is a great time to do so, as the transition has huge potential upside.

John Sterrett from ProcureSQL attends the 2025 Microsoft Fabric Conference

John Sterrett, ProcureSQL CEO, is attending the 2025 Microsoft Fabric Conference Partner Only Event.

The first significant benefit of the transition is a simplified work environment for all involved. Everything is integrated into one platform, eliminating headaches associated with handoffs and siloed workflow.

Data warehousing, data engineering, AI-powered analytics, data science, and business intelligence are now housed in one platform. A simpler platform means faster results for your company’s needs.

Moreover, as different teams collaborate, Fabric provides compliance features such as data tracking, version control, and role-based access, enabling multiple teams to work together simultaneously without compromising the integrity of your company’s data.

There is now an incredible amount of potential at your teams’ fingertips.

With Microsoft Fabric, forecasting future performance through AI-driven analysis gives your teams a competitive edge.

This enables your business to transition from a purely reactive model to a proactive one, where you can stay one step ahead of what’s to come.

In terms of cost, you’ll be pleased to know that, despite all these new additions, the pricing model for Fabric remains scalable and flexible, depending on how you utilize it.

Microsoft provides a Fabric Capacity Estimator, allowing you to take full advantage of the new platform by understanding the up-front cost.

If you have already been using products from Azure and Microsoft, switching to Fabric is a no-brainer.

Transitioning to Microsoft Fabric

One of Fabric’s most valuable and convenient aspects is that its data and analytics can be shared across all teams, including the non-technical parts of your business: the decision-makers and subject matter experts. The data is easily interpreted, so minimal training is needed even if your users are not tech-savvy.

On top of that, with the involvement of AI, the data allows you to see patterns ahead of time so that everyone is on board if any anomalies or spikes in activity come up.

Transitioning to a new platform can be incredibly challenging, but ProcureSQL is here to help.

We’re ready to help your organization make this transition smoothly and with immediate impact.

Whether you’re still using Microsoft Power BI or a mix of analytics tools, our team can guide you through a phased implementation that minimizes disruption and maximizes value from the start.

Don’t wait to catch up; let ProcureSQL help you lead the way. Contact us today to get started on your Microsoft Fabric journey.


With the release candidate of SQL Server 2025, which came out last week, I want to discuss a valuable feature you will not see in the Microsoft press release: SQL Server 2025 Developer Standard Edition.

Microsoft is finally addressing a long-standing headache for database professionals by including a Standard Developer edition in SQL Server 2025, fixing the mismatch between development and production environments. The new Standard Developer edition allows teams to build, test, and validate their database solutions using a free, fully licensed copy of Standard edition for the first time!

SQL Server 2025 Developer Standard Edition eliminates costly licensing for non-production use while ensuring feature parity with production.

Previously, organizations used the Developer edition, functionally equivalent to the Enterprise edition, for development and testing. If you also used Enterprise edition in production, this wasn’t a problem. Problems occur when you try to save money by running Developer edition (Enterprise edition) features in development or testing while running Standard edition in production. This mismatch often led to surprises during deployment: features that worked in development but failed in production due to missing or restricted capabilities in Standard edition, or worse, code that works and returns the same results but performs abnormally because Enterprise edition features change the execution plans.

For example, Intelligent Query Processing’s batch mode for row store is only available in Enterprise and Developer editions. It cannot be used in Standard edition environments, leading to cases where performance is good in development and testing, with the same data and transactional load footprint as production, yet worse in production running Standard edition.

In the past, we would have had to use the Developer edition, which opened this window for utilizing enterprise features in dev and test. With SQL Server 2025, you can select the Standard Developer edition or Enterprise Developer edition during the installation, ensuring your development environment mirrors production as closely as possible. This is especially valuable for teams whose production workloads run on the Standard edition.

Standard Developer edition gives you the ability to develop and test only against the standard features. You can pick enterprise or standard editions of Developer edition with SQL Server 2025

With SQL Server 2025, you can pick Enterprise or Standard for your free Developer edition.

With SQL Server performance, edition matters. Below is a chart showing that the majority of the performance-based features are Enterprise edition-only. This article focuses on two of them: online index rebuilds and batch mode for row store queries.

A breakdown of SQL Server 2025 performance features by edition so you can see which features are enterprise only. If you couldn't tell its most of them. You can now use standard developer edition to match production if you are going to use standard edition in production.

A breakdown of SQL Server 2025 performance features by edition lets you see which features are enterprise-only. If you couldn’t tell, it’s most of them. 🙂
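Before you test, you can confirm which edition an instance is actually running; SERVERPROPERTY reports it:

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;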

Error Example: Online Index Rebuilds

To illustrate the practical impact, consider the scenario where a developer attempts to use the ALTER INDEX ... REBUILD WITH (ONLINE = ON) command. This command works flawlessly in a Developer (Enterprise) environment, allowing users to rebuild indexes without downtime. However, if the production environment is Standard, the same command will fail with an error, since online index rebuilds are not supported in Standard edition.
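As a sketch, with a hypothetical table and index name, the statement looks like this:

-- Succeeds on Enterprise (and Enterprise Developer) editions;
-- fails on Standard, where online index rebuilds are not supported
ALTER INDEX IX_Orders_CustomerID ON Sales.Orders
REBUILD WITH (ONLINE = ON);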

Standard developer edition allows you to test your code against standard edition only features so your index rebuild online will fail as it's an enterprise edition only feature.

Online index rebuild fails on Standard Developer edition but succeeds on Enterprise Developer edition.

While this is not too difficult to catch in testing, you may be surprised at how often it is missed.

Let’s look at one more example that doesn’t cause an error but changes the performance and execution plans between the standard and enterprise editions. Because the developer edition before SQL Server 2025 used enterprise features, you would benefit from batch mode for your row store queries without knowing it.

SQL 2025 Standard Developer Edition: Different Plan and Performance

We will examine an example using the SQL Server Standard Developer Edition and the SQL Server Enterprise Developer Edition.

USE WideWorldImporters;
GO

SELECT 
    ol.OrderID,
    ol.StockItemID,
    ol.Description,
    ol.OrderLineID,
    o.Comments,
    o.CustomerID
FROM 
    Sales.OrderLines ol
INNER JOIN 
    Sales.Orders o ON ol.OrderID = o.OrderID
WHERE 
    ol.StockItemID = 168
GO

With the SQL Server Enterprise Developer edition, we get an Adaptive Join, which can switch join strategies depending on whether the filter returns a low or a high number of rows.

enterprise developer edition gets you the adaptive join as this is an enterprise edition feature to use batch mode for row mode queries.

The Enterprise Developer edition includes the adaptive join, an Enterprise edition feature that uses batch mode for row store queries.

With the Standard Developer edition in SQL Server 2025, we observe the same execution plan in development and testing that we get in production when production runs Standard edition. In this case, we don’t have batch mode, and you will see we use a hash join, which is not ideal when the filter returns a small number of records.

Standard Developer edition doesn't use the adaptive join because it's an Enterprise edition-only feature.

The takeaway is that edition features can change functionality and how you get your data. This example would be harder to catch in your development pipeline, most likely leaving a bad taste in your mouth when development and testing are fast but performance suffers once you release changes to production.

SQL Server 2025’s Standard Developer edition is a vital tool for any organization that values consistency and reliability across its database environments. Using the much more affordable standard edition of SQL Server empowers developers to test confidently, knowing that what works in development will also work in production. No more unpleasant feature surprises at go-live.

If you like our blog posts, subscribe to our newsletter. We will share all kinds of great stuff for FREE! If you’d like to chat about this feature or anything else database-related, contact us!



If you're looking to master SQL Server, Power BI, or Microsoft Fabric, attending the SQL Saturday event in Austin, Texas, is one of the smartest moves you can make for your career. Note: Austin, Texas, is hosting its event on May 2nd and 3rd. Here's why:

Free, High-Quality Training

SQL Saturday events are renowned for offering a full day of technical sessions free of charge (you pay only for lunch). Whether you are a beginner or an experienced professional, you will find sessions tailored to your skill level, led by Microsoft employees, industry experts, and Microsoft MVPs passionate about sharing their knowledge. This includes all-day hands-on workshops (usually a paid add-on) and in-depth explorations of the latest features of SQL Server, Power BI, and Microsoft Fabric, ensuring you stay current with the rapidly evolving Microsoft data platform.


Learn from The Experts!

Austin Texas SQL Saturday session
Speakers at SQL Saturday events are practitioners who solve real business problems with these technologies on a daily basis. You will gain practical insights, best practices, and tips you can immediately apply to your job to add value instantly. You will see how other companies and consultants leverage SQL Server, Power BI, and Microsoft Fabric to drive their business success.

Networking Opportunities in Austin, Texas

Experts share their knowledge at SQL Saturdays because of their desire to connect, share, and learn together. These connections lead to mentorship, job opportunities, and lasting professional relationships. SQL Saturdays are more than just technical content; they are a community gathering. You will connect with fellow data professionals, speakers, and recruiters. The supportive, grassroots atmosphere makes it easy for newcomers to feel at home and get involved. You never know, your next boss might be sitting next to you in a session.

Career and Community Growth

Attending SQL Saturday is a proven way to invest in your professional development. My company, ProcureSQL, is a living example. We would not exist without the technical and professional development at SQL Saturdays. It is a key reason why we continue to invest time and money to help these events succeed.

John Sterrett teaching performance tuning

You will sharpen your technical skills and gain exposure to leadership and volunteering opportunities that can accelerate your career. Additionally, you will become part of a global network of data professionals passionate about learning and sharing knowledge. In short, if you want to learn SQL Server, Power BI, or Microsoft Fabric, SQL Saturday offers an unbeatable combination of free training, expert guidance, and community support. Do not miss your chance to level up. Join us at SQL Saturday Austin on May 2nd and 3rd, 2025.
PS: If you cannot attend SQL Saturday in Austin and still would like help with your Microsoft Data Platform problems, I am happy to chat one-on-one.

Modernizing Legacy Applications

Older legacy systems and legacy applications are crucial to a business’s functionality. They’re the backbone that holds a business together, whether they are a bank’s mainframe holding customer accounts, a factory’s SAP system handling inventory and payroll, or a retail company’s POS system for in-store sales.

As technology advances, these applications must be upgraded or replaced to keep up with the competition.

Legacy Applications Data: things to know.

Upgrading your legacy systems will make your company more efficient, resilient to threats, and scalable. This results in a significant bump in cost savings from reduced hardware and maintenance expenses, plus a better user experience for both employees using the system and customers.

To upgrade, though, you need to make a big decision. Do you migrate your entire system or upgrade/modernize the one you currently have?

Each has significant pros and cons, and it’s key to understand both before you make your decision.

Legacy Application Migrations

First, let’s talk about migrating your legacy applications.

Migrating means moving your data to a new environment, such as a cloud platform or a new system. Having a data architect on hand for this engagement helps reduce risk and improves the rate of a successful migration.

The immediate benefit of doing this is getting a fresh start with all the benefits the new platform has to offer.

It’s a complete overhaul that will jumpstart your company and allow it to benefit from cutting-edge technology immediately.

Because it’s a complete overhaul, though, there can be some pitfalls that you must be aware of.

First and foremost, migrating your data and systems is a considerable project. Old data often needs to be reformatted for new systems because it was written in old code or stored in outdated relational databases. Your data migration can be accomplished by refactoring or completely rewriting the data, which may take longer.

Add to this the investment in new software and training, and you have a more expensive, long-term project ahead of you.

Fortunately, the long-term benefits of migrating your system outweigh any cost you may have upfront.

The return on investment from a core business system migration has been shown to pay itself off quickly after adoption. For instance, organizations that migrated to Atlassian’s cloud technology reported an ROI of 155% over three years, with an initial payback period of only six months.

Furthermore, with detailed planning, you can prevent business disruptions and data loss during migration.

To prevent business disruption, the new system should be rolled out gradually during non-peak hours to minimize the impact. Several layers of data backup and heightened security now exist to guarantee the safety of customer and company data. A data architect is crucial to executing your go-live and rollback scenarios smoothly.

Application Modernization for Legacy Applications

When it comes to modernization for legacy applications, you'll have a quicker turnaround: you upgrade your system without a complete overhaul, though also without the full benefits of a migration.

Think of migration as buying a new car, while modernization is tuning up your old one.

Because of the project’s smaller scale, modernization mitigates any setbacks that may accompany a migration.

You’re keeping your old system, so you’ll save money on new software and training because it is familiar. Modernization focuses on small, gradual changes that allow you to keep your systems running while improvements are made.

The major drawback is that modernization has a smaller return on investment than migration. You won’t have access to all the benefits a totally new platform can offer.

In addition, as your system becomes increasingly outdated, you may eventually need to migrate to a new system. Modernization can delay this, but migration may one day become inevitable.

Making Your Decision Between the Two

Regardless of your choice in managing your legacy applications, data architects or database administrators should be part of your team to ensure your selection is executed correctly.

To make your decision between migration and modernization, see where you lie on the following two questions:

  1. What are your current needs? If your current system can’t keep up with your growth or you eventually want to scale, migration would be your way forward. Modernization is your choice if your current situation works but needs minor improvements.
  2. What’s your long-term plan and budget? Migration will require more upfront costs, but it has proven to return much more on its investment than modernization. Modernization is less cost-heavy upfront, can be implemented faster, and will deliver short-term improvements.

If you are unsure of any of these questions and need better insight into the best path forward, ProcureSQL can help. We are happy to chat with you one-on-one, or you can contact us today by clicking here to get started on your path to migration or modernization.

I attended the Microsoft Fabric conference for the first time last week. I wanted to provide a guide that CIOs and CEOs could leverage to understand how they could utilize these new announcements at the 2025 Fabric Conference to obtain a competitive advantage. To be transparent, I was skeptical because Microsoft consistently changes or rebrands its analytics platform every three to five years. We have gone from Parallel Data Warehouse (PDW) to Analytics Platform System (APS), Azure SQL Data Warehouse, and Azure Synapse Analytics, bringing us to Microsoft Fabric.

John Sterrett from ProcureSQL attends the 2025 Microsoft Fabric Conference

John Sterrett from ProcureSQL attends the 2025 Microsoft Fabric Conference.

To my surprise, after this conference, I have gone from seeing Fabric as Microsoft’s current take on Analytics to how it will stand out as an analytics platform of choice for people who want a simple, quick, and easy way to do analytics with the tools they already love using.

Artificial Intelligence (AI) will only be as practical as the quality of your data. Garbage in still equals garbage out, or as I like to call it, building a trusted dumpster fire. Preparing your data for AI will be the key to success with your AI projects. Microsoft clearly understands this by focusing on preparing your data for AI with Fabric mirroring, Fabric databases, and SQL Server 2025. My takeaway: if you stay ready, you won’t have to get ready.

Copilot for all Fabric SKUs

As a commitment to this, Microsoft is giving more people access to its AI tools. In the coming weeks, users on F2 Fabric compute and above will be able to utilize Copilot. Additionally, you can use Fabric Copilot capacity, a new feature that simplifies setup, user management, and access to Copilot across different tiers.

Why Fabric Mirroring Is A Game Changer

Those following us aren’t new to the concept and advantages of fabric mirroring. One of the biggest mistakes we see that multiplies the odds of your analytics projects failing is incorrectly landing your data into your analytics platform of choice. Either the data is missing, has been transformed incorrectly, or is no longer being received.

Microsoft provides a feature called “mirroring” to help solve the problem of getting your data into your landing zone. With Azure SQL Databases and fabric databases, it’s as easy as a few clicks. Coming soon, you will have similar experiences with PostgreSQL in Azure, Oracle, SQL Server in VMs, and on-premises. What about other apps/data stores? Open mirroring is coming soon, and you can leverage it to get your other data into the Fabric landing zone.

Multi-Cloud Shortcuts

Microsoft has partnered with Snowflake to provide iceberg-formatted data across Fabric, eliminating data movement and duplication. You can use a shortcut to point directly to an Iceberg table written using Snowflake in Azure. Snowflake has also added the ability to write Iceberg tables directly into OneLake.

Apache Iceberg tables can be used with Fabric due to a feature called metadata virtualization. Behind the scenes, this feature utilizes Apache XTable.

The key takeaway is that users can now work on the same data using both Snowflake and Fabric, without requiring data movement or duplication. Letting your data professionals utilize the tools they use best is a huge win.

Fabric Databases

Microsoft Fabric databases are the new kid on the block, and they are already seeing traction as the first fully SaaS-ified database offering. Fabric databases are built for ease of use as part of a unified data platform. You can create databases in just a few clicks and have zero maintenance to worry about, as Microsoft fully manages them. Fabric database data is automatically mirrored into OneLake for analytics.

The key takeaway is that you can utilize Microsoft Fabric for application development and eliminate the need for a database infrastructure-as-a-service MSP/partner. You can eliminate this cost because you should always get exponential value from your data MSP (which is what we built our practice focusing on), not just a body for monitoring or keeping the lights on.

SQL Server 2025

Microsoft announced some updates to SQL Server 2025 at the keynote and in other breakout sessions. While it is still in private preview, it was easy to see how anyone who can write T-SQL could leverage models and vectors without needing extensive knowledge of vectors or algorithms. GraphQL will enable developers to access API endpoints and consume data, similar to most other APIs. JSON will be treated as a first-class citizen, with its own data type and indexes, to help developers access their JSON data quickly and easily.
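As a sketch of what that JSON support could look like (the table here is hypothetical, and the preview syntax may change before release):

-- Hypothetical table using the announced native json data type (preview)
CREATE TABLE dbo.Events
(
    EventID int IDENTITY(1,1) PRIMARY KEY,
    Payload json NOT NULL
);

-- Long-standing JSON functions such as JSON_VALUE query the column
SELECT JSON_VALUE(Payload, '$.customer.id') AS CustomerID
FROM dbo.Events;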

With SQL Server 2025, you can mirror your data to Microsoft Fabric with zero ETL and zero code: near real-time mirroring into OneLake at no additional cost, without requiring change data capture. This will help reduce your total cost of ownership. There will be no additional compute costs for Availability Groups; you continue to utilize your Fabric compute.

The key takeaway is that Microsoft continues investing in making SQL Server more accessible from the ground to the cloud. SQL Server will continue to make it easier to utilize your data inside and outside the relational platform.

Other notable features

Autoscale Billing for Spark: optimizes Spark job costs by offloading your data’s extraction, load, and transformation to a serverless billing model.

Command-line interface: the Fabric CLI is now in preview. Built on Fabric APIs, it is designed for automation. There will be less clicky-clicky and more scripts that you can version control.

API and Terraform integration: automate key aspects of your Fabric platform by utilizing Terraform. If you have used it with Azure, get ready to use it with Fabric as well.

CI/CD enhancements: with Fabric’s git integration, multiple developers can frequently make incremental workspace updates. You can also utilize variable libraries and deployment pipelines to get your changes vetted and tested quickly across your various testing environments.

User data functions: Fabric user data functions is a platform that allows you to host and run applications on Fabric. Data engineers can write custom business logic and embed it into the Fabric ecosystem.

Statistics That Caught My Attention

  • Microsoft Fabric supports over 19,000 organizations, including 74% of Fortune 500 companies.
  • Power BI has over 275k users, including 95% of Fortune 500 companies
  • 45k consultants trained, 23k partner certifications in its first year
  • One billion new apps will be built in the next five years.
  • 87% of leaders believe AI will give their organization a competitive edge
  • 30,000+ fabric certifications completed in twelve months

I will be back next year and will provide you with another write-up, similar to the one I produced this week, in case you are unable to attend.

About ProcureSQL

ProcureSQL is the industry leader in providing data architecture as a service, enabling companies to harness their data and grow their business. ProcureSQL is 100% onshore in the United States and supports the four quadrants of data, including application modernization, database management, data analytics, and data visualization. ProcureSQL works as a guide, mentor, leader, and implementer to provide innovative solutions to drive better business outcomes for all businesses. Click here to learn more about our service offerings.

Do you have questions about leveraging AI, Microsoft Fabric, or the Microsoft Data Platform? You can chat with me for free one-on-one, or contact the team. We would love to share our knowledge and experience with you.

In this six-minute video, Allen Kinsel shares multiple ways to quickly determine whether SQL Server is the root cause or a symptom of your performance problem.

 

Demo Code


/* See all requests, including those without SQL text available */
SELECT *
FROM sys.dm_exec_requests r
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS st;

/* Filter out the noise: CROSS APPLY keeps only requests that
   resolve to SQL text, dropping idle/background rows */
SELECT *
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st;