Executing the same simple SELECT statement or stored procedure on SQL Azure takes a long time or times out



























I have two Azure SQL Database instances, both Standard S2 (50 DTUs). When I run the same simple select statement on the two instances, one of them takes far longer than the other or times out. The slower instance is the one with more records in its tables.

Both instances have the same table schema. On the slower instance, the LogEvidence table has 1,324,928 records and the LogItem table has 649,391. On the faster instance, LogEvidence has 89,504 records and LogItem has 89,496.

Below is the simple select statement:

select count(*) from logitem

This statement takes 0 s on the faster instance and 138 s on the slower one. Any stored procedure I execute likewise takes longer on the slower instance or times out.

I would expect the time taken by both instances to be almost the same.










sql stored-procedures azure-sql-database

asked Jan 3 at 2:31 by Dheeraj, edited Jan 10 at 21:40 by marc_s

3 Answers






Those simple queries perform big scans of the table and read every row. If the table has a clustered index, you don't have to run a SELECT COUNT(*) to find out how many records it has; the following metadata query should get the answer much faster:

SELECT OBJECT_NAME(ps.object_id) AS table_name, i.name AS index_name, ps.row_count
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i
    ON ps.index_id = i.index_id AND ps.object_id = i.object_id
WHERE OBJECT_NAME(ps.object_id) = 'logitem'  -- filter on the table name
  AND ps.index_id <= 1                       -- heap or clustered index only, so rows aren't counted once per index

If the table does not have an Id column, add an identity column and make it the clustered index.

You can also try adding a trivial WHERE clause, like the one below, and you may get better performance:

SELECT count(*)
FROM logitem
WHERE id > 0

where Id is the identity column.
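The "add an identity column and make it the clustered index" step above would look roughly like this (a minimal sketch; the column name Id and schema dbo are assumptions, since the question does not show the table definition):

```sql
-- Assumes dbo.logitem is currently a heap (no clustered index)
ALTER TABLE dbo.logitem ADD Id INT IDENTITY(1,1) NOT NULL;

-- Make the new identity column the clustered index
CREATE UNIQUE CLUSTERED INDEX CIX_logitem_Id ON dbo.logitem (Id);
```

Building the clustered index rewrites the whole table, so on a large table run this in a maintenance window.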






answered Jan 3 at 13:00 by Alberto Morillo (edited Jan 3 at 13:11)
I have some experience with Azure, and from your description I think one of the following applies:

1. Since you are only running COUNT, indexes play no role. I understand the other answer says to use WHERE id > 0, but Azure should be able to count one million rows without hitting a 30-second timeout. For other queries, though, you do need indexes, or Azure will struggle.

2. Check whether your server is under maintenance. The chance is low, but it does happen to us: we are on S4, and occasionally our server just gets slow, then works fine again after 10-30 minutes. Perhaps the underlying hardware gets into some state that slows it down.

3. This is the most important cause of slow execution, especially if a lot of writes and deletes happen on your server: check the database size. Azure databases fragment quickly; we have to defragment ours every 10 days. If your bacpac is around 100 MB but the database size shown in Azure is 5-6 GB, the database definitely needs optimization, because a lot of fragmentation has built up. MSDN provides queries to rebuild indexes and remove fragmentation; I don't remember the URL, but a simple Google search will find them. That should speed things up.

4. Azure has a feature that auto-generates indexes. Check whether both tables have the same indexes; maybe your faster database has an index Azure created by itself.
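The fragmentation check and rebuild mentioned in point 3 can be sketched as follows (a minimal example; dbo.logitem is taken from the question, and the 5%/30% thresholds are the commonly cited guidelines, not hard rules):

```sql
-- Report fragmented indexes on dbo.logitem
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.logitem'),
                                    NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 5;

-- Common guideline: REORGANIZE between ~5% and ~30% fragmentation,
-- REBUILD above ~30%
ALTER INDEX ALL ON dbo.logitem REORGANIZE;
-- ALTER INDEX ALL ON dbo.logitem REBUILD;  -- heavier option for worse fragmentation
```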







answered Jan 3 at 13:24 by Sumit Gupta
You should step back and question your assumption that "performance should be about the same": you have far more data in one case than in the other, so you should expect the larger database to be somewhat slower.



Now, let's go into why it can be slower and how you can investigate each case.

Step 1: Look at the query plan for each case and see what you have. Likely it will be something like:

StreamAgg <- Clustered Index Scan

(If you have other b-tree indexes, the optimizer might scan one of those instead, which can be faster: a narrower index has fewer pages to scan.)



Step 2: Look at the actual execution times and resource use for each query to see why they differ. One way is to run SET STATISTICS TIME ON and SET STATISTICS IO ON before the query; SSMS will then print extra timing and IO information when you run it. (You can read about this here: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-statistics-io-transact-sql?view=sql-server-2017)
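Concretely, that diagnostic session looks like this (the table name is taken from the question; the actual counters will vary per instance):

```sql
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT(*) FROM logitem;

-- The SSMS "Messages" tab then reports lines such as:
--   Table 'logitem'. Scan count 1, logical reads ..., physical reads ...
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.
-- Comparing logical vs. physical reads between the two instances shows
-- whether the slow one is doing extra IO because pages no longer fit in memory.
```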



If you review the output for each instance, you may find the reason the performance differs. One possible explanation is that memory on an S2 is limited, and you are right at the boundary where all the pages fit in memory in one case but not in the other. In that case, a COUNT(*) on the larger table has to cycle through all the pages and do far more IO than on the smaller one, where the pages may already be in memory.



Step 3: You can also examine the Query Store for insight into why one case is fast and the other is not. An overview of how to use it is here: https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017 Note that it is on by default in SQL Azure, so you can simply look at the time window in which you ran the queries to see what was happening in your database at that time.



Finally, if you need the query to be faster, you have options:

* Create a narrow b-tree index on the table. COUNT(*) doesn't return any columns and just needs a row count from any non-filtered index, so the narrowest index wins.

* Use a columnstore index (which requires S3 or above, for memory reasons). This kind of column-oriented index is optimized for exactly this sort of query and will stay much faster as the table grows.
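Those two suggestions can be sketched as follows (index and column names here are illustrative, not from the question):

```sql
-- Narrow b-tree index: any non-filtered index can satisfy COUNT(*),
-- and a single-column index has far fewer pages than the clustered index
CREATE NONCLUSTERED INDEX IX_logitem_id ON dbo.logitem (id);

-- Columnstore alternative (S3 tier or above)
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_logitem ON dbo.logitem (id);
```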



Hope that helps.






              share|improve this answer
























                Your Answer






                StackExchange.ifUsing("editor", function () {
                StackExchange.using("externalEditor", function () {
                StackExchange.using("snippets", function () {
                StackExchange.snippets.init();
                });
                });
                }, "code-snippets");

                StackExchange.ready(function() {
                var channelOptions = {
                tags: "".split(" "),
                id: "1"
                };
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function() {
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled) {
                StackExchange.using("snippets", function() {
                createEditor();
                });
                }
                else {
                createEditor();
                }
                });

                function createEditor() {
                StackExchange.prepareEditor({
                heartbeatType: 'answer',
                autoActivateHeartbeat: false,
                convertImagesToLinks: true,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: 10,
                bindNavPrevention: true,
                postfix: "",
                imageUploader: {
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                },
                onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                });


                }
                });














                draft saved

                draft discarded


















                StackExchange.ready(
                function () {
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f54015601%2fexecuting-same-simple-select-statement-or-stored-procedure-on-sql-azure-takes-lo%23new-answer', 'question_page');
                }
                );

                Post as a guest















                Required, but never shown

























                3 Answers
                3






                active

                oldest

                votes








                3 Answers
                3






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes









                0














                Those simple queries perform big scans on the table and involve reading all rows. If the table has a clustered index you don't have to perform a SELECT COUNT(*) to know the number of records the table has. The following query should to that faster:



                SELECT OBJECT_NAME(ps.object_id) , i.name , row_count 
                FROM sys.dm_db_partition_stats AS ps INNER JOIN sys.indexes AS i
                ON ps.index_id = i.index_id AND ps.object_id = i.object_id
                WHERE i.name like '%logitem%'


                If the table does not have an Id please add an autoid on the table and make it the clustered index.



                You can also try to add a useless WHERE clause like below to the query, and you may get a better performance.



                SELECT count(*) 
                FROM logitem
                WHERE id > 0


                Where Id is the autoid column.






                share|improve this answer






























                  0














                  Those simple queries perform big scans on the table and involve reading all rows. If the table has a clustered index you don't have to perform a SELECT COUNT(*) to know the number of records the table has. The following query should to that faster:



                  SELECT OBJECT_NAME(ps.object_id) , i.name , row_count 
                  FROM sys.dm_db_partition_stats AS ps INNER JOIN sys.indexes AS i
                  ON ps.index_id = i.index_id AND ps.object_id = i.object_id
                  WHERE i.name like '%logitem%'


                  If the table does not have an Id please add an autoid on the table and make it the clustered index.



                  You can also try to add a useless WHERE clause like below to the query, and you may get a better performance.



                  SELECT count(*) 
                  FROM logitem
                  WHERE id > 0


                  Where Id is the autoid column.






                  share|improve this answer




























                    0












                    0








                    0







                    Those simple queries perform big scans on the table and involve reading all rows. If the table has a clustered index you don't have to perform a SELECT COUNT(*) to know the number of records the table has. The following query should to that faster:



                    SELECT OBJECT_NAME(ps.object_id) , i.name , row_count 
                    FROM sys.dm_db_partition_stats AS ps INNER JOIN sys.indexes AS i
                    ON ps.index_id = i.index_id AND ps.object_id = i.object_id
                    WHERE i.name like '%logitem%'


                    If the table does not have an Id please add an autoid on the table and make it the clustered index.



                    You can also try to add a useless WHERE clause like below to the query, and you may get a better performance.



                    SELECT count(*) 
                    FROM logitem
                    WHERE id > 0


                    Where Id is the autoid column.






                    share|improve this answer















                    Those simple queries perform big scans on the table and involve reading all rows. If the table has a clustered index you don't have to perform a SELECT COUNT(*) to know the number of records the table has. The following query should to that faster:



                    SELECT OBJECT_NAME(ps.object_id) , i.name , row_count 
                    FROM sys.dm_db_partition_stats AS ps INNER JOIN sys.indexes AS i
                    ON ps.index_id = i.index_id AND ps.object_id = i.object_id
                    WHERE i.name like '%logitem%'


                    If the table does not have an Id please add an autoid on the table and make it the clustered index.



                    You can also try to add a useless WHERE clause like below to the query, and you may get a better performance.



                    SELECT count(*) 
                    FROM logitem
                    WHERE id > 0


                    Where Id is the autoid column.







                    share|improve this answer














                    share|improve this answer



                    share|improve this answer








                    edited Jan 3 at 13:11

























                    answered Jan 3 at 13:00









                    Alberto MorilloAlberto Morillo

                    7,09011018




                    7,09011018

























                        0














                        I had some experience with azure, and from your description I think there is one of following things you can do:




                        1. Since you are using only count, then indexes play no role. Though I understand other answer says to use where id>0, but azure should count 1M rows without 30 second timeout. But for other queries you need Indexes, or Azure will fail.


                        2. Check if your server is not under maintenance, it is low chance but it does happen with us, we are on s4 and occasionally our server just get slow, but after 10-30 minute it works fine. Maybe the actual hardware get in some process that slows it down.


                        3. This is most important reason for slow execution, especially if you have lot of write and delete happen on your server. Check the database size. Azure database got fragmented too quickly, we have to optimize it's data fragmentation every 10 days, if your bacpac size is 100MB and your database size in Azure shows like 5-6 GB, then it definitely need optimization as lot of fragments were generated. MSDN has given some queries to recreate indexes and remove fragmentation, I don't remember them URL, but simple google search will bring that. It should speed things up.


                        4. Azure has feature that auto generate indexes, check if both table share same indexes, maybe your faster version has some index Azure create by itself.







                        share|improve this answer




























                          0














                          I had some experience with azure, and from your description I think there is one of following things you can do:




                          1. Since you are using only count, then indexes play no role. Though I understand other answer says to use where id>0, but azure should count 1M rows without 30 second timeout. But for other queries you need Indexes, or Azure will fail.


                          2. Check if your server is not under maintenance, it is low chance but it does happen with us, we are on s4 and occasionally our server just get slow, but after 10-30 minute it works fine. Maybe the actual hardware get in some process that slows it down.


                          3. This is most important reason for slow execution, especially if you have lot of write and delete happen on your server. Check the database size. Azure database got fragmented too quickly, we have to optimize it's data fragmentation every 10 days, if your bacpac size is 100MB and your database size in Azure shows like 5-6 GB, then it definitely need optimization as lot of fragments were generated. MSDN has given some queries to recreate indexes and remove fragmentation, I don't remember them URL, but simple google search will bring that. It should speed things up.


                          4. Azure has feature that auto generate indexes, check if both table share same indexes, maybe your faster version has some index Azure create by itself.







                          share|improve this answer


























                            0












                            0








                            0







                            I had some experience with azure, and from your description I think there is one of following things you can do:




                            1. Since you are using only count, then indexes play no role. Though I understand other answer says to use where id>0, but azure should count 1M rows without 30 second timeout. But for other queries you need Indexes, or Azure will fail.


                            2. Check if your server is not under maintenance, it is low chance but it does happen with us, we are on s4 and occasionally our server just get slow, but after 10-30 minute it works fine. Maybe the actual hardware get in some process that slows it down.


                            3. This is most important reason for slow execution, especially if you have lot of write and delete happen on your server. Check the database size. Azure database got fragmented too quickly, we have to optimize it's data fragmentation every 10 days, if your bacpac size is 100MB and your database size in Azure shows like 5-6 GB, then it definitely need optimization as lot of fragments were generated. MSDN has given some queries to recreate indexes and remove fragmentation, I don't remember them URL, but simple google search will bring that. It should speed things up.


                            4. Azure has feature that auto generate indexes, check if both table share same indexes, maybe your faster version has some index Azure create by itself.







                            share|improve this answer













                            I had some experience with azure, and from your description I think there is one of following things you can do:




                            1. Since you are using only count, then indexes play no role. Though I understand other answer says to use where id>0, but azure should count 1M rows without 30 second timeout. But for other queries you need Indexes, or Azure will fail.


                            2. Check if your server is not under maintenance, it is low chance but it does happen with us, we are on s4 and occasionally our server just get slow, but after 10-30 minute it works fine. Maybe the actual hardware get in some process that slows it down.


                            3. This is most important reason for slow execution, especially if you have lot of write and delete happen on your server. Check the database size. Azure database got fragmented too quickly, we have to optimize it's data fragmentation every 10 days, if your bacpac size is 100MB and your database size in Azure shows like 5-6 GB, then it definitely need optimization as lot of fragments were generated. MSDN has given some queries to recreate indexes and remove fragmentation, I don't remember them URL, but simple google search will bring that. It should speed things up.


                            4. Azure has feature that auto generate indexes, check if both table share same indexes, maybe your faster version has some index Azure create by itself.








                            share|improve this answer












                            share|improve this answer



                            share|improve this answer










                            answered Jan 3 at 13:24









                            Sumit GuptaSumit Gupta

                            1,65642435




                            1,65642435























                                0














                                You should step back and ponder your assumption:
                                1. "performance should be about the same" - you have more data in one case vs. the other. In the limit, you should expect the performance of the second one to potentially be somewhat slower than the original one.



                                Now, let's go into the "why" it can be slower and how you can investigate each case:
                                Step 1: Look at the query plans for each case and see what you have. Likely, you will have something like:
                                StreamAgg <- Clustered Index Scan
                                (if you have other b-tree indexes, you might scan one of them and it might be faster since the index would not be as wide and thus the index will have fewer pages to scan)



                                Step 2: You can look at the actual execution times and resource use for each query to see why they are different. One way to do this is to run "set statistics time on", then "set statistics io on", then run your query. it will dump out extra information into SSMS when you run the query from there. (You can read about this here: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-statistics-io-transact-sql?view=sql-server-2017)



                                If you review the output from each one, you may find reasons for why the performance is different. One possible explanation is that the amount of memory is limited in an S2 and you are just at the boundary for where all the pages fit in memory vs. not for the two examples. In that case, doing a count(*) query would need to cycle through all the pages and do much more IO than in the smaller case where they might all be in memory already.



                                Step 3: You can also potentially examine the query store to get insight into why one case is fast and one case is not. An overview of how to use it is here:
                                https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
                                Note: it is on-by-default in SQL Azure so you can just go look at the time window when you ran the queries to get insight into what was happening at that time in your database.



                                Finally, you might consider ways to make the query go faster if you need it to be faster.
                                * creating a narrow b-tree index on the table may help for that one query (count(*) doesn't return any columns and just needs a count of rows from some non-filtered index).
                                * you could use a Columnstore (which requires an S3 or above for memory reasons). This kind of column-oriented index is optimized for this kind of query and would be much faster as the size of the table increases in the future.



                                Hope that help






                                share|improve this answer




























                                  0














                                  You should step back and ponder your assumption:
                                  1. "performance should be about the same" - you have more data in one case vs. the other. In the limit, you should expect the performance of the second one to potentially be somewhat slower than the original one.



                                  Now, let's go into the "why" it can be slower and how you can investigate each case:
                                  Step 1: Look at the query plans for each case and see what you have. Likely, you will have something like:
                                  StreamAgg <- Clustered Index Scan
                                  (if you have other b-tree indexes, you might scan one of them and it might be faster since the index would not be as wide and thus the index will have fewer pages to scan)



                                  Step 2: You can look at the actual execution times and resource use for each query to see why they are different. One way to do this is to run "set statistics time on", then "set statistics io on", then run your query. it will dump out extra information into SSMS when you run the query from there. (You can read about this here: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-statistics-io-transact-sql?view=sql-server-2017)



                                  If you review the output from each one, you may find reasons for why the performance is different. One possible explanation is that the amount of memory is limited in an S2 and you are just at the boundary for where all the pages fit in memory vs. not for the two examples. In that case, doing a count(*) query would need to cycle through all the pages and do much more IO than in the smaller case where they might all be in memory already.



                                  Step 3: You can also potentially examine the query store to get insight into why one case is fast and one case is not. An overview of how to use it is here:
                                  https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
                                  Note: it is on-by-default in SQL Azure so you can just go look at the time window when you ran the queries to get insight into what was happening at that time in your database.



                                  Finally, you might consider ways to make the query go faster if you need it to be faster.
                                  * creating a narrow b-tree index on the table may help for that one query (count(*) doesn't return any columns and just needs a count of rows from some non-filtered index).
                                  * you could use a Columnstore (which requires an S3 or above for memory reasons). This kind of column-oriented index is optimized for this kind of query and would be much faster as the size of the table increases in the future.



                                  Hope that help






                                  share|improve this answer


























                                    0












                                    0








                                    0







                                    You should step back and ponder your assumption:
                                    1. "performance should be about the same" - you have more data in one case vs. the other. In the limit, you should expect the performance of the second one to potentially be somewhat slower than the original one.



                                    Now, let's go into the "why" it can be slower and how you can investigate each case:
                                    Step 1: Look at the query plans for each case and see what you have. Likely, you will have something like:
                                    StreamAgg <- Clustered Index Scan
                                    (if you have other b-tree indexes, you might scan one of them and it might be faster since the index would not be as wide and thus the index will have fewer pages to scan)



                                    Step 2: You can look at the actual execution times and resource use for each query to see why they are different. One way to do this is to run "set statistics time on", then "set statistics io on", then run your query. it will dump out extra information into SSMS when you run the query from there. (You can read about this here: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-statistics-io-transact-sql?view=sql-server-2017)



If you review the output from each one, you may find reasons why the performance differs. One possible explanation is that memory is limited on an S2 and you are right at the boundary where all the pages fit in memory in one case but not in the other. In that case, the count(*) query on the larger database has to cycle through all the pages and do much more IO than in the smaller case, where they may all be in memory already.
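To see whether the slower database is IO-bound while the query runs, you can also query the sys.dm_db_resource_stats DMV (available in SQL Azure; it reports resource use in roughly 15-second intervals for about the last hour):

```sql
-- Recent resource usage, newest first
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If avg_data_io_percent spikes toward 100 while the count runs, the query is paying for physical reads that the smaller database avoids.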



Step 3: You can also examine the Query Store to get insight into why one case is fast and the other is not. An overview of how to use it is here:
https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
Note: it is on by default in SQL Azure, so you can just look at the time window when you ran the queries to get insight into what was happening in your database at that time.
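A hedged sketch of a Query Store lookup (the catalog view and column names are from the documentation linked above; adjust the TOP and ordering to taste):

```sql
-- Slowest queries by average duration, with their IO cost
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_duration,
       rs.avg_physical_io_reads,
       rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```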



Finally, you might consider ways to make the query go faster if you need it to be:
* Creating a narrow b-tree index on the table may help for this one query (count(*) doesn't return any columns and just needs a row count from any non-filtered index).
* You could use a Columnstore index (which requires an S3 or above for memory reasons). This kind of column-oriented index is optimized for this kind of query and will stay much faster as the table grows.
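A sketch of both options; the index names are made up, and the Id column is an assumption - substitute a small, non-nullable column from your own schema:

```sql
-- Option 1: a narrow nonclustered b-tree index; COUNT(*) can
-- scan this instead of the wide clustered index
CREATE NONCLUSTERED INDEX IX_LogItem_Narrow
    ON dbo.LogItem (Id);

-- Option 2: a nonclustered columnstore index (S3 or above)
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_LogItem
    ON dbo.LogItem (Id);
```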



Hope that helps.




                                    answered Jan 3 at 13:32









Conor Cunningham MSFT

                                    1,243149