Rahman Agoro, the frontline SQL DBA

Just another WordPress.com weblog

Archive for the ‘SQL 2008’ Category

Database ‘DatabaseName’ is in transition. Try the statement later (Msg 952, Level 16, State 1, Line 1)

Posted by rahmanagoro on April 4, 2016


I bumped into this error message when our monitoring tool started to report failures in a SQL Agent job. On taking a closer look at the error, I found that whenever I tried to connect to the database, I simply got:

Msg 952, Level 16, State 1, Line 1
Database 'DatabaseName' is in transition. Try the statement later.

Digging deeper into the issue and the error information, I made some interesting discoveries. It appeared that we had a blocking chain within the database; our environment is such that database owners can choose to detach databases and so on.

I ran the following code to check for blocking:

select
 l.resource_type,
 l.request_mode,
 l.request_status,
 l.request_session_id,
 r.command,
 r.status,
 r.blocking_session_id,
 r.wait_type,
 r.wait_time,
 r.wait_resource,
 request_sql_text = st.text,
 s.program_name,
 most_recent_sql_text = stc.text
from sys.dm_tran_locks l
left join sys.dm_exec_requests r
on l.request_session_id = r.session_id
left join sys.dm_exec_sessions s
on l.request_session_id = s.session_id
left join sys.dm_exec_connections c
on s.session_id = c.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) st
outer apply sys.dm_exec_sql_text(c.most_recent_sql_handle) stc
where l.resource_database_id = db_id('DatabaseName')

It turned out that one user wanted to detach the database while another user had just started to issue a query against it; this was the reason for the error and for the database being stuck in a transition state. One way to avoid this before detaching a database is to take the following steps.

  • Personally, I prefer using T-SQL to run commands rather than the good old GUI: I can see what I am doing and whether my actions are getting stuck, rather than wondering if my PC is slow.
  • Secondly, when detaching a database, other connections can be rolled back first by issuing the code below.
ALTER DATABASE DatabaseName SET OFFLINE WITH ROLLBACK IMMEDIATE
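
If the intention is to detach rather than just take the database offline, a rough sketch along the same lines (the database name is only a placeholder) is to force exclusive access first and then detach:

-- Roll back any open transactions and take exclusive access (placeholder database name)
ALTER DATABASE DatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- Detach once no other sessions can connect
EXEC sp_detach_db @dbname = N'DatabaseName'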

The issue

I could see the blocking chain from the SQL query above, which checks the locks. Multiple sessions wanted to connect to a database that was about to be taken offline and then detached.

What I simply did was kill the session that was causing the blocking.
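
The kill itself is a one-liner. The session id below is purely an example; in practice you would take it from the request_session_id / blocking_session_id columns returned by the locking query above.

-- 53 is a hypothetical session id identified as the head of the blocking chain
KILL 53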

Posted in Management, Scripts, SQL 2008, SQL Administration | Leave a Comment »

Error: 17053, Severity: 16, State: 1 what does it mean ?

Posted by rahmanagoro on October 25, 2012


E:\DATA\Database.ndf: Operating system error 112(failed to retrieve text for this error. Reason: 15100) encountered.

I have seen this error message on one of our production systems and wondered what the issue was. The error is caused by trying to expand the size of the ndf file when there simply isn’t enough disk space left on the drive or mount point.

To recreate the problem in SQL 2008, take an existing database and then try to expand one of its files beyond the free capacity available on the drive.
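
As a rough illustration (the database, file name and size below are made up), growing a secondary data file well past the free space on its drive reproduces operating system error 112:

-- Assumes a database TestDB with a secondary data file named TestDB_Data2 on the affected drive
ALTER DATABASE TestDB
MODIFY FILE (NAME = N'TestDB_Data2', SIZE = 500000MB)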

Posted in Management, SQL 2008, Tips | Leave a Comment »

Statistics again and we see same query with two different plans

Posted by rahmanagoro on October 25, 2012


Following on from my earlier post on the importance of keeping statistics up to date, I have come across another issue recently, and this time I have the luxury of actually showing the query plans too. Please note that I have used SQL Sentry Plan Explorer, which you can download for free from http://www.sqlsentry.com

I had a call from one of my clients today complaining about bad performance on their database. The irony of this particular case is that I could see it for myself: running the query on the database server, it was taking a lot of time to complete for a fairly trivial query that shouldn’t really take that long. I was lucky in this particular case that I also had a UAT system to run the query on; as it was a report which wasn’t really changing data, rerunning it was slightly less intrusive.

I proceeded onto UAT, ran the query and, hey presto, it completed within 9 seconds as opposed to the 3 hours it was taking on production. In my normal fashion, I started taking a look at the query plans, and one thing which became obvious is that the plans were somewhat different on the two systems. The way the joins in the query had been written wasn’t the best I have seen in my career, but these things do happen. I looked through one of the tables and used the SQL Sentry plan explorer tool to observe that the estimated number of rows on the table was somewhat wrong. I could also see that the wait stats on the query were CXPACKET waits; the query uses parallelism, but I can’t really say that’s a bad thing.

The section of the plan below shows the table with the right number of row estimates for the statistics.

The query below shows the same query but with the statistics somewhat wrong.

Looking at the query, the optimizer also recommended that an index be created on the table, but I was hesitant to create it: the query had been working fine before, so why would it start to behave badly all of a sudden? After updating the statistics on the table, clearing the procedure cache and rerunning the query, everything was fine once more. This just proves the importance of having up-to-date statistics, and also that when the optimizer recommends you create indexes, it is not always the answer to every performance problem.
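
For reference, the fix boils down to a couple of statements; the table name below is a placeholder, and clearing the whole procedure cache is heavy-handed on a busy production server, so treat this as a sketch rather than a recipe:

-- Refresh the statistics on the affected table with a full scan
UPDATE STATISTICS dbo.AffectedTable WITH FULLSCAN

-- Clear cached plans so the query gets recompiled against the new statistics
DBCC FREEPROCCACHE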

Posted in Management, SQL 2008, SQL Administration, Tips | Leave a Comment »

Statistics and the effect on query performance

Posted by rahmanagoro on October 25, 2012


I have always known about issues with table statistics on a SQL server database, but actually seeing it cause performance problems is a bit of a different experience. I got called from the support teams saying that a report which they normally run on the database has been running for 3 hours now, and shows no sign of completing.

I then logged onto the system, ran some diagnostic queries to pull up the query plan for the running query, and suddenly something didn’t quite make sense to me. One of the tables which I’m familiar with seemed to be returning an estimated number of rows of one. At this stage I knew this wasn’t right, as I’m quite familiar with the table and know that it contains millions and millions of rows.

Even looking at a section of the query plan, I just knew it wasn’t right at all.

Straight away, for all the tables which had an estimate of one row, I updated the statistics with a full scan and re-ran the query; this time it completed in around 6 minutes. One thing to learn from this post is that when you see an estimated number of rows of 1, and you know the table certainly has more than one row, it is usually a pointer that the statistics are out of date. After updating the statistics, we can see from the screenshot below that the pipes which move from object to object are a lot bigger, which means the number of rows being worked on is significantly higher.
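
A quick way to see how stale the statistics on a suspect table are is to check when they were last updated; the table name below is just an example:

-- When was each statistic on the table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.SuspectTable')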

Conclusion

 

  • Always check statistics on the database.
  • Ensure that auto update statistics is on for the database, unless you choose to run statistics updates manually, or the database is so large that a manual statistics update strategy is essential.
  • Watch out for one-row estimates in the query plan, especially for large tables where you know the number of rows that ought to be returned is more than one.
  • Updating statistics with a full scan may not be suitable for every environment; normally a sampling rate would also work, but tests will need to be carried out to establish which sample rate is suitable.

Posted in Database development, SQL 2008, SQL Administration, Tips | Tagged: | Leave a Comment »

Configuring and Installing SSIS on a SQL 2008 R2 failover cluster infrastructure.

Posted by rahmanagoro on October 23, 2011


SSIS isn’t cluster-aware out of the box, so there are a few configuration changes to make in order to get it working on a cluster.


Select just the Integration service.

On the Client Access Point screen, type in the name and the IP address, referring to the requirements spreadsheet.

Click on Next

Since the SSIS service has been installed in its own resource group, we now need to set the dependencies.

Set the disk dependency for the SSIS service

Add the disk resource dependency

Requirement: Change SSIS configuration, custom configuration

SSIS Custom Configuration

Navigate to the path: C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn

Create a folder on the root of the SSIS cluster disk called SSIS_Config

Create a folder on the SSIS cluster disk called Packages

Copy the file from the location C:\Program Files\Microsoft SQL Server\100\DTS\Binn\MsDtsSrvr.ini.xml to S:\SSIS_Config\



Right-click the SSIS service, select Properties, go to the Registry Replication tab and add the registry key below.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS\ServiceConfigFile\


Click on OK and then Apply.

Take the resource offline and bring it back online for the changes to take effect.



Perform the following steps for SSIS on the second node.

Edit the MsDtsSrvr.ini.xml file in the location defined above and change the packages folder to point to S:\Packages.

Make the registry entry changes on the second node as well to finish off the configuration.

Test the failover by failing the SSIS resource group over from NodeA to NodeB.

Copy a .dtsx file into the S:\Packages folder and open up Management Studio.

In the illustration below, you can see that we have connected to the SSIS instance and we can see the packages.



Posted in Clustering, SQL 2008, Tips, Uncategorized | 5 Comments »

Drop SQL server replication after a server name change

Posted by rahmanagoro on August 4, 2011


After spending some time brushing up my photography skills, it’s back to the real world of work. Coming into work, I had a request from a developer to set up SQL Server replication on a publisher with one subscriber; I have a routine of PowerShell scripts that I normally run to do this. After completing the steps, I noticed some errors, and upon investigating it became apparent that the identity of the server wasn’t what it was expected to be: the server name was different for some reason.

After running a SELECT to view the server name, I found that it was different from the server I had logged in to.

I get the error

SELECT @@SERVERNAME

sp_dropserver WRONGSERVER
Msg 20582, Level 16, State 1, Procedure sp_MSrepl_check_server, Line 31
Cannot drop server WRONGSERVER because it is used as a Publisher in replication.

-- Trying to drop SQL replication, I get the following errors

 

exec master..sp_dropdistributor @no_checks = 1

Msg 208, Level 16, State 1, Procedure sp_MSrepl_dropsubscription, Line 103
Invalid object name 'dbo.syssubscriptions'.
Msg 266, Level 16, State 2, Procedure sp_MSrepl_dropsubscription, Line 103
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
Msg 266, Level 16, State 2, Procedure sp_dropsubscription, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
Msg 266, Level 16, State 2, Procedure sp_MSpublishdb, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
Msg 266, Level 16, State 2, Procedure sp_replicationdboption, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
Msg 266, Level 16, State 2, Procedure sp_dropdistributor, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.

In this case, I can tell that there has been an exception during the execution of the stored procedures. What needs doing is for the replication markers on the databases to be removed so that replication can be dropped cleanly.

Step 1

 

-- Remove the replication marker on the databases

EXEC master..sp_MSforeachdb '
USE [?]
IF ''?'' <> ''master'' AND ''?'' <> ''model'' AND ''?'' <> ''msdb'' AND ''?'' <> ''tempdb''
BEGIN
    SELECT ''?''

 IF OBJECT_ID(''tempdb..#dbproperty'') is not null
 drop table #dbproperty

 create table #dbproperty (OptionName   varchar(20), CurrentSetting varchar(10))

 insert into #dbproperty
 exec sp_dboption ''?'',''published''

 IF EXISTS (Select * from #dbproperty where CurrentSetting = ''ON''  )
 exec sp_removedbreplication @dbname = ''?'', @type = ''tran''

 IF OBJECT_ID(''tempdb..#dbproperty'') is not null
 drop table #dbproperty

END
'

This removes the replication markers, after which replication can be dropped cleanly.


exec master..sp_dropdistributor @no_checks = 1

It should now work.

Change the servername now.

sp_dropserver WRONGSERVER

-- Create the new server name

sp_addserver 'RIGHTSERVER', local

Restart the SQL service on the publisher.

The correct new servername should now be reflected on the SQL instance.

Now proceed to setting up SQL replication.

 

Posted in Scripts, SQL 2008, SQL replication, Tips | 2 Comments »

SQL Agent alerts not being sent! It’s weird! Here is the fix for [264] An attempt was made to send an email when no email session has been established

Posted by rahmanagoro on June 13, 2011


I recently set up some SQL Agent alerts, but when I tested them to make sure they actually worked, guess what: they weren’t working and I wasn’t getting any alerts.

In view of this, I recreated the Database Mail settings on the database server. I have come across this problem before and the fix then was to restart the SQL Agent service, but this one was slightly different: I had restarted a few times to no avail.

One thing that I did see in the SQL agent error log is as follows.

[264] An attempt was made to send an email when no email session has been established

After checking online and not finding a solution that fixed my problem, I decided to do the following.

Open up management studio >> SQL server agent >> Right click properties >> Alert System

Then untick the Enable mail profile.

Restart SQL agent and click OK.

 

Repeat the steps

Open up management studio >> SQL server agent >> Right click properties >> Alert System

Then tick the Enable mail profile.

Restart SQL agent and click OK.
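
Once the profile is re-enabled and the agent restarted, it is also worth confirming that Database Mail itself is healthy; the profile name and address below are placeholders for your own settings:

-- Send a test message through Database Mail
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA_Mail_Profile',       -- placeholder profile
    @recipients = N'dba.team@example.com',     -- placeholder recipient
    @subject = N'Database Mail test',
    @body = N'Test message from SQL Server Database Mail.'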

Posted in SQL 2008, SQL Administration, Tips | 3 Comments »

Setting up a WMI query and getting the error SQLServerAgent Error: WMI error: 0x80041003.

Posted by rahmanagoro on June 13, 2011


I was trying to set up database mirroring alerts on one of my production database servers and looked at the article on mirroring alerting best practices on the Microsoft site (see http://technet.microsoft.com/en-us/library/cc966392.aspx). I then proceeded to set up the same, running the script below.

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Mirroring_Alert',
  @enabled=1,
  @delay_between_responses=1800,
  @include_event_description_in=0,
  @wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
  @wmi_query=N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE State = 5 ',
  @job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Mirroring_Alert', @operator_name=N'DBA_GROUP', @notification_method = 1
GO

 

I then got the error message below.


Msg 22022, Level 16, State 1, Line 0
SQLServerAgent Error: WMI error: 0x80041003.
Msg 14511, Level 16, State 1, Procedure sp_verify_alert, Line 300
The @wmi_query could not be executed in the @wmi_namespace provided. Verify that an event class selected in the query exists in the namespace and that the query has the correct syntax.

 

I ran the same code on my desktop and it was fine, so I was left wondering what the problem could be here.

I checked everything to make sure the syntax was right and there wasn’t anything wrong, and came across a few KB articles about the same problem, but those were primarily caused by server names longer than 15 characters and applied to SQL 2005; my instance was SQL 2008, so I was left wondering what could be going wrong. After spending an hour battling this (I have a rule that if any problem takes longer than an hour to fix, I need to cool my head, pop out for a 10-minute break and some fresh air, and then give it another go; it seems to work for me time and time again), I came back and thought: hang on a minute, let’s check the SQL error log to see if there are any pointers to the root cause. There I saw errors as follows.


Date  6/13/2011 11:45:33 AM
Log  SQL Server (Current - 6/13/2011 10:01:00 AM)

Source  Logon

Message
Login failed for user 'NT AUTHORITY\SYSTEM'. Reason: The account is disabled. [CLIENT: <local machine>]


Ah, this could be it. When I set up SQL servers I often disable the BUILTIN\Administrators login as it is a security risk. It looked like the WMI service was using the local system account, which was disabled, and this was causing the problem highlighted.
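
A quick way to confirm that suspicion from within SQL Server is to check whether the login in question is disabled:

-- Is the local system login disabled on this instance?
SELECT name, is_disabled
FROM sys.server_principals
WHERE name = N'NT AUTHORITY\SYSTEM'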

I changed the WMI service account to a domain account, granted it permissions on SQL Server and, presto, the problem was solved.

Posted in Database Mirroring, Management, SQL 2008, SQL High Availability, Tips, WMI | 1 Comment »

Be careful! SSAS Analysis Services cannot be uninstalled from a SQL cluster once installed

Posted by rahmanagoro on June 6, 2011



Posted in Clustering, SQL 2008, Tips | 1 Comment »

SQL 2008 Management Datawarehouse Overview

Posted by rahmanagoro on April 13, 2011


 

With the release of SQL 2008, Microsoft has developed a data warehouse framework for database administrators to get an idea of what is actually going on within SQL Server. Many times during my career I have heard the phrase “The database was slow between 3AM and 4AM”. This is especially true if you work within a global setup where the database server is serving requests 24/7. I would have users in the Asia-Pacific region using the database from around midnight GMT (UK time); as they finish, the UK users start around 8AM, and as they wrap up, the US users kick in around 1PM BST. So it’s a never-ending circle.

To buttress the point, management data warehouse gives me both a high-level and a detailed view of what is happening on the database server. I like to use the waits and queues methodology to see what SQL Server is waiting for, and to keep tabs on my queues as well. If a process is queuing up waiting for resources, there must be a reason for it, and as a database administrator I want to check and understand why we have those waits and queues on the database server.
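
The raw numbers behind that view come from the wait statistics DMV; a minimal sketch of the kind of query the warehouse rolls up looks like this:

-- Top waits by total wait time since the instance last restarted
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC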

This post assumes that management data warehouse has already been installed on the database server. I am still running SQL 2008 SP1 cumulative update 5 on my systems, and to date there is no way to remove management data warehouse once it has been installed. I installed management data warehouse in the dedicated DBA database on the production system; the database schema gets installed in that database as well.

To launch management data warehouse, follow the steps below.

Management data warehouse works by collecting information from a number of DMVs, using data collection sets to upload the data into the warehouse database, where it is stored in data-warehouse-style tables.

 

Memory section

 

If you click on the memory graph, it drills down to show you the key memory counters. Here you can see things like page life expectancy and SQL Server internal memory consumption by type.

Reviewing waits: within management data warehouse you can check your wait statistics and drill further into what percentage each wait type makes up.

Memory counters based on page life expectancy.
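
If you want to compare the chart with the live value, page life expectancy can also be read straight from the performance counter DMV:

-- Current page life expectancy, in seconds
SELECT pc.counter_name,
       pc.cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters pc
WHERE pc.counter_name = N'Page life expectancy'
AND pc.object_name LIKE '%Buffer Manager%'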

Disk IO statistics information can also be found below.

On the IO-specific information, you can drill down into the top queries based on duration, total IO, physical reads and logical writes.
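
Outside of the MDW reports, the same file-level IO numbers can be pulled directly from the virtual file stats DMV:

-- Cumulative read/write activity per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs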

Posted in Management, SQL 2008, Tips, Uncategorized | Leave a Comment »