Azure SQL DB performance issues

Hemanth Kumar Kotha 0 Reputation points
2026-02-03T12:18:06.17+00:00

Query latency differs across environments (Dev vs UAT) – same stored procedure, different performance

I’m facing a performance issue where the same stored procedure and identical query logic behave differently across environments.


Problem Summary

@Erland Sommarskog @Alberto Morillo

Dev environment

Stored procedure executes quickly

No noticeable latency issues

UAT environment

The same stored procedure takes significantly longer

Execution plan differs from Dev

Key observation

When I restore the same BACPAC as a brand-new database in UAT, the stored procedure starts performing normally again

No code changes are required for it to work fast in the newly restored database

Question

Why does the same stored procedure show high latency in UAT, but works fine when the same BACPAC is restored as a new database?

What could be the possible:

Root causes

Workarounds

Reverse-engineering or diagnostic steps to identify and permanently fix this issue in the existing UAT database?


Additional context

Query text and stored procedure definition are identical

Schema is deployed via DACPAC/BACPAC

Issue appears to be environment-specific, not code-specific

Recreating the database resolves the issue temporarily

This is the scenario: we have two Azure SQL databases, one in each environment (Dev and Prod). The tier of the Dev database is lower than that of the Prod database.

In Prod, an Azure Data Factory job is scheduled every 5 minutes that syncs data from an on-premises MySQL database into the Azure SQL database (via incremental load and merge operations), whereas data in Dev is loaded on an ad-hoc basis.

The amount of data in the Dev and Prod databases is the same.

Issue: the stored procedure in Dev takes ~5 seconds, whereas the same stored procedure in Prod takes ~2 minutes.

Note: we tried disabling the five-minute Prod job and testing the stored procedure again, but performance was the same: ~2 minutes.

We have already tried these steps:

Compared the execution plans across both databases; they were different, so we tried executing the stored procedure with OPTION (RECOMPILE)

Restarted the Azure SQL database

Republished the DACPAC

Rebuilt indexes

Updated the statistics

But we still have the same issue.

We then restored the .BACPAC of the prod database as another database and the performance improved.

We want to know what it is that gets resolved by restoring the .BACPAC file.

We now have another stored procedure with a similar issue; the only difference from the scenario above is that the execution plans are the same, yet there is still a performance difference. The database on the higher tier is the one having the issue.

Azure SQL Database

2 answers

Sort by: Most helpful
  1. Q&A Assist
    2026-02-03T12:18:21.66+00:00
    AI-generated content. This question contains an answer created with AI from Q&A Assist.

    The performance issue you're experiencing with the stored procedure in the UAT environment compared to the Dev environment can be attributed to several factors related to query execution plans and database state. Here are some possible root causes, workarounds, and diagnostic steps:

    Root Causes

    1. Different Execution Plans: The SQL Query Optimizer may generate different execution plans for the same stored procedure based on the database state, statistics, or available resources. If the execution plan in UAT is suboptimal, it can lead to increased latency.
    2. Stale Statistics: If the statistics in the UAT environment are outdated or not representative of the underlying data distribution, the optimizer may make poor cardinality estimates, resulting in inefficient query plans.
    3. Resource Availability: Differences in server resources (CPU, memory, I/O) between the Dev and UAT environments can influence the execution plan chosen by the optimizer. For example, if UAT has fewer resources, it may lead to slower performance.
    4. Database Compatibility Level: If the compatibility level of the UAT database differs from that of the Dev database, it could affect the behavior of the query optimizer and the execution plans generated.
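    As a quick check on the stale-statistics hypothesis, a query along these lines (a sketch; the `dbo` schema filter is an assumption, adjust to the tables your procedure touches) reports when each statistic was last updated and how many rows have changed since:

    ```sql
    -- Check statistics freshness in the slow (UAT/Prod) database
    SELECT  OBJECT_NAME(s.object_id) AS table_name,
            s.name                   AS stats_name,
            sp.last_updated,
            sp.rows,
            sp.modification_counter  -- rows modified since the last update
    FROM    sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE   OBJECT_SCHEMA_NAME(s.object_id) = 'dbo'  -- adjust schema as needed
    ORDER BY sp.modification_counter DESC;
    ```

    A large `modification_counter` relative to `rows`, or an old `last_updated`, points at statistics the optimizer is estimating from badly.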

    Workarounds

    1. Update Statistics: Regularly update statistics in the UAT environment to ensure that the optimizer has accurate information about data distribution. This can be done using the UPDATE STATISTICS command.
    2. Recompile the Stored Procedure: You can force the stored procedure to recompile using the WITH RECOMPILE option or by altering the procedure to include this option. This may help generate a more optimal execution plan based on current statistics.
    3. Use Query Hints: If you identify a specific execution plan that performs better, you can use query hints to influence the optimizer's choices without changing the stored procedure code.
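    In T-SQL the first two workarounds look like this (table, procedure, and parameter names are placeholders, not from the original question):

    ```sql
    -- 1. Update statistics with a full scan on a suspect table
    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

    -- 2. Force a fresh plan for a single execution of the procedure
    EXEC dbo.YourProcedure @Param = 1 WITH RECOMPILE;

    -- 2b. Or recompile only the problem statement inside the procedure body
    SELECT Col1, Col2
    FROM   dbo.YourTable
    WHERE  Col1 = @Param
    OPTION (RECOMPILE);
    ```

    OPTION (RECOMPILE) at the statement level is usually preferable to WITH RECOMPILE on the whole procedure, since only the sensitive statement pays the recompilation cost.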

    Diagnostic Steps

    1. Compare Execution Plans: Capture and compare the actual execution plans of the stored procedure in both environments. Look for differences in the operators used and the estimated vs. actual row counts.
    2. Check for Missing Indexes: Use tools like the Database Advisor or query execution plans to identify any missing indexes that could improve performance in the UAT environment.
    3. Monitor Resource Usage: Analyze the resource utilization during the execution of the stored procedure in both environments to identify any bottlenecks in the UAT environment.
    4. Database Compatibility Level: Verify and compare the database compatibility levels between Dev and UAT to ensure they are aligned.
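    The compatibility-level check in step 4 is a one-line query; run it in each database and compare the values:

    ```sql
    -- Run in both Dev and UAT; the two values should match
    SELECT name, compatibility_level
    FROM   sys.databases
    WHERE  name = DB_NAME();
    ```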

    By following these steps, you should be able to identify the underlying causes of the performance differences and implement solutions to improve the stored procedure's execution time in the UAT environment.



  2. Erland Sommarskog 132K Reputation points MVP Volunteer Moderator
    2026-02-03T22:36:28.0666667+00:00

    For reference, this was the answer I posted to your other post about the same question:

    The only suggestion I have directly about the code is that

    ```sql
    DECLARE group_cursor CURSOR FOR
    SELECT DISTINCT [group] FROM @FilterTable;
    ```
    Should be, out of principle:

    ```sql
    DECLARE group_cursor CURSOR STATIC LOCAL FOR
    SELECT DISTINCT [group] FROM @FilterTable;
    ```

    Else, set up an extended-events session that you filter on the spid you run the procedure from. In the session, include the event sp_statement_completed. Use Watch Live Data to see the output, and add Duration to the column list. That should help you narrow down which statement is slow.
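    As a sketch of such a session (the session name is illustrative, and the `session_id` value must be replaced with the actual spid of your connection; note that in Azure SQL Database event sessions are database-scoped, hence ON DATABASE):

    ```sql
    -- Database-scoped Extended Events session capturing per-statement
    -- completions for one spid
    CREATE EVENT SESSION [TraceSlowProc] ON DATABASE
    ADD EVENT sqlserver.sp_statement_completed
        (WHERE sqlserver.session_id = 53)   -- replace 53 with your spid
    ADD TARGET package0.ring_buffer;

    ALTER EVENT SESSION [TraceSlowProc] ON DATABASE STATE = START;
    ```

    In SSMS you can then right-click the session and choose Watch Live Data; each sp_statement_completed event carries a duration column showing how long that statement of the procedure took.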

