Rahman Agoro, the frontline SQL DBA

Just another WordPress.com weblog

Archive for the ‘SQL Administration’ Category

Database ‘DatabaseName’ is in transition. Try the statement later. Msg 952, Level 16, State 1, Line 1

Posted by rahmanagoro on April 4, 2016


I bumped into this error message when our monitoring tool started to report failures in a SQL Agent job. On taking a closer look, I found that whenever I tried to connect to the database, I simply got the error:

Msg 952, Level 16, State 1, Line 1
Database 'DatabaseName' is in transition. Try the statement later.
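Before digging in, a quick way to confirm what state the database is actually in (a small check I find handy; 'DatabaseName' is a placeholder):

```sql
-- Check the current state of the database (ONLINE, OFFLINE, RESTORING, etc.)
SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = 'DatabaseName';
```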

Digging deeper into the issue and the error information, I made some interesting discoveries. It appeared that we had a blocking chain within the database; our environment is such that database owners can choose to detach databases themselves.

I ran the following code to check for blocking:

select
 l.resource_type,
 l.request_mode,
 l.request_status,
 l.request_session_id,
 r.command,
 r.status,
 r.blocking_session_id,
 r.wait_type,
 r.wait_time,
 r.wait_resource,
 request_sql_text = st.text,
 s.program_name,
 most_recent_sql_text = stc.text
from sys.dm_tran_locks l
left join sys.dm_exec_requests r
on l.request_session_id = r.session_id
left join sys.dm_exec_sessions s
on l.request_session_id = s.session_id
left join sys.dm_exec_connections c
on s.session_id = c.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) st
outer apply sys.dm_exec_sql_text(c.most_recent_sql_handle) stc
where l.resource_database_id = db_id('DatabaseName')

It turned out that one user wanted to detach the database whilst another user had just started to issue a query against the same database; this was the reason for the error and for the database being stuck in transition. To avoid this kind of issue before detaching a database, take the following steps.

  • Personally, I like using code/T-SQL to run commands rather than the good old GUI. I can see exactly what I am doing, and I can also see if my actions are getting stuck, rather than wondering if my PC is slow.
  • Secondly, when detaching a database, existing connections can be rolled back first by issuing the code below.
ALTER DATABASE DatabaseName SET OFFLINE WITH ROLLBACK IMMEDIATE
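Before taking the database offline, it is also worth seeing who is connected. A small sketch using sys.sysprocesses (an older view, but it works on SQL 2008 and filters by database):

```sql
-- List sessions currently using the database before taking it offline
SELECT spid, loginame, hostname, program_name, status
FROM sys.sysprocesses
WHERE dbid = DB_ID('DatabaseName');
```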

The issue

From the SQL query above, which checks the locks, I could see the blocking chain. Multiple sessions wanted to connect to a database that was about to be taken offline and then detached.

What I did was simply kill the session that was causing the blocking.
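For reference, killing a blocking session looks like this (53 is a hypothetical session ID taken from the blocking_session_id column of the query above):

```sql
-- Kill the blocking session (53 is a hypothetical session_id
-- identified from the blocking query's blocking_session_id column)
KILL 53;
```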

 

 

 


Posted in Management, Scripts, SQL 2008, SQL Administration | Leave a Comment »

Statistics again and we see same query with two different plans

Posted by rahmanagoro on October 25, 2012


Following on from my earlier post on the importance of keeping statistics up to date, I have come across another issue recently and this time, I have the luxury of actually showing the query plans too. Please note that I have used SQL Sentry plan explorer which you can download for free from http://www.sqlsentry.com

I had a call from one of my clients today complaining about bad performance on their database. The irony of this particular case is that I could see it myself by running the query on the database server: it was taking a very long time to complete for a fairly trivial query that shouldn’t really take that long. I was lucky that I also had a UAT system to run the query on, and as it was a report which wasn’t changing data, rerunning it was slightly less intrusive.

I proceeded onto UAT, ran the query and hey presto, it completed within 9 seconds as opposed to the 3 hours it was taking on production. In my normal fashion, I started looking at the query plans, and one thing which became obvious is that the plans were somewhat different on the two systems. The way the joins in the query had been written wasn’t the best I have seen in my career, but these things do happen. I looked through one of the tables and used the SQL Sentry plan explorer tool to observe that the estimated number of rows on the table was somewhat wrong. I could also see that the wait stats on the query were CXPACKET waits; the query uses parallelism, but I can’t really say that’s a bad thing in itself.

The section of the plan below shows the table with the right number of row estimates for the statistics.

The query below shows the same query but with the statistics somewhat wrong.

Looking at the query, the optimizer also recommended that an index be created on the table, but I was hesitant to create it: the query was working fine before, so why would it suddenly start to behave badly? After updating the statistics on the table, clearing the procedure cache and rerunning the query, everything was fine once more. This just proves the importance of having up-to-date statistics, and also that when the optimizer recommends you create indexes, it’s not always the answer to every performance problem.
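For completeness, the fix sketched in T-SQL (the table name is hypothetical; note that DBCC FREEPROCCACHE with no arguments clears the whole server's plan cache, so use it with care on production):

```sql
-- Refresh statistics on the affected table with a full scan
UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;

-- Clear the plan cache so the bad plan gets recompiled
-- (server-wide; a single plan_handle can also be passed on SQL 2008)
DBCC FREEPROCCACHE;
```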

Posted in Management, SQL 2008, SQL Administration, Tips | Leave a Comment »

Statistics and the effect on query performance

Posted by rahmanagoro on October 25, 2012


I have always known about issues with table statistics on a SQL Server database, but actually seeing them cause performance problems is a bit of a different experience. I got a call from the support teams saying that a report which they normally run on the database had been running for 3 hours and showed no sign of completing.

I then logged onto the system and ran some diagnostic queries to pull up the plan for the running query, and suddenly something didn’t quite make sense to me. One of the tables which I’m familiar with seemed to have an estimated number of rows of one. At this stage I knew this wasn’t right, as I’m quite familiar with the table and I know that it contains millions and millions of rows.

Even looking at a section of the query plan, I just knew it wasn’t right at all.

Straight away, for all the tables which had an estimate of one, I updated the statistics with a full scan and re-ran the query; this time it completed in around 6 minutes. One thing to learn from this post is that when you see an estimated number of rows of 1, and you know that the table certainly has more than one row, it is usually a pointer that the statistics are out of date. After updating the statistics we can see from the screenshot below that the pipes which move from object to object are a lot bigger, meaning the number of rows being worked on is significantly higher.
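A quick way to confirm the suspicion before updating is to check when the statistics were last refreshed (the table name here is just an example):

```sql
-- When were the statistics on a table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.SomeLargeTable');
```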

Conclusion

 

  • Always check statistics on the database.
  • Ensure that auto update statistics is on for the database, unless you choose to run updates manually, or the database is very large and a manual statistics update is essential.
  • Watch out for one-row estimates on the query plan, especially for large tables when you know the number of rows that ought to be returned is more than one.
  • Update statistics with full scan may not be suitable for every environment; normally a sampling rate would also work, but tests will need to be carried out to establish which sample rate is suitable.

Posted in Database development, SQL 2008, SQL Administration, Tips | Tagged: | Leave a Comment »

SQL Agent alerts not being sent! It’s weird! Here is the fix for [264] An attempt was made to send an email when no email session has been established

Posted by rahmanagoro on June 13, 2011


I recently set up some SQL Agent alerts, but when I decided to test them out and make sure they actually worked, guess what: they weren’t working and I wasn’t getting any alerts.

In view of this, I recreated the Database Mail settings on the database server. I have come across this problem before, and the fix then was to restart the SQL Agent service, but this one was slightly different; I had restarted a few times to no avail.

One thing that I did see in the SQL agent error log is as follows.

[264] An attempt was made to send an email when no email session has been established

After checking online and not finding a solution that does fix my problem, I decided to do the following.

Open up management studio >> SQL server agent >> Right click properties >> Alert System

Then untick the Enable mail profile.

Restart SQL agent and click OK.

 

Repeat the steps

Open up management studio >> SQL server agent >> Right click properties >> Alert System

Then tick the Enable mail profile.

Restart SQL agent and click OK.

Posted in SQL 2008, SQL Administration, Tips | 3 Comments »

Move System model database in SQL 2008 R2 Failover Cluster

Posted by rahmanagoro on April 12, 2011


Move System database on a SQL 2008 R2 Failover Cluster

In order to do this, run the following steps

  • Modify the location of the system databases
  • Open up the failover cluster manager.
  • Run the code below with the SQL resource online

Alter database msdb modify file (name=msdbdata,filename='F:\Database\msdbdata.mdf')

Alter database model modify file (name=modellog,filename='L:\Logs\modellog.ldf')

  • Bring the SQL resource offline

  • Confirmation from the failover cluster manager

  • The resource has now been taken offline


  • Now move the files to the new destination, i.e. the location where the system data and log files should be.
  • Bring the SQL Server and SQL Agent resources online.
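Once the resources are back online, it is worth confirming that the catalog reflects the new paths; a quick check:

```sql
-- Confirm the new file locations recorded in the system catalog
SELECT DB_NAME(database_id) AS database_name, name, physical_name
FROM sys.master_files
WHERE database_id IN (DB_ID('msdb'), DB_ID('model'));
```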

Posted in Clustering, Management, SQL 2008, SQL Administration, SQL High Availability, Tips | Leave a Comment »

SQL Failover Clustering Requirements

Posted by rahmanagoro on April 12, 2011


Installing a SQL failover cluster is a complicated process, and we DBAs are always looking for ways to make our lives easier. A SQL failover installation requires cooperation between the following groups of people.

  1. Database Administrators
  2. Windows Administrators
  3. Network Administrators
  4. Storage Administrators.

I find that creating a small spreadsheet and sending it around for everyone to fill in their details, or state their requirements, often helps and avoids delays and ambiguity.

I have put together a spreadsheet with the same information, as it might be of use to others.

SQL Failover Cluster_Requirements_v1.0

Posted in Clustering, Management, SQL 2008, SQL Administration, SQL High Availability | Leave a Comment »

SQL 2008 Failover Cluster Installation Pre-Staging

Posted by rahmanagoro on April 12, 2011


As a DBA, I can find myself caught between being able to do my job and a certain amount of wrangling with system administrators; they seem to believe that anything to do with installations ought to be the work of a system administrator. I do not believe that life has to operate in that fashion. I think DBAs should be allowed to do what they are very good at and what they were hired for, and system administrators should focus on core operating system tasks. I have encountered crossroads which have led to disagreements; once I had a system administrator saying that a project would be stalled unless he installed SQL 2008.

Anyway, away from the ranting. I had to install and configure a SQL 2008 R2 failover cluster the other day, and ran into some difficulties when it came to registering the virtual SQL network name in Active Directory; of course I am not a domain administrator, and I do not need to be one to do my job. On speaking to one of the system administrators, he seemed to think that it was the job of a system administrator to install SQL on a failover cluster while the DBA sits back and watches. I disagree: I think DBAs and system administrators should work together, but each should focus on the areas in which they specialise.

I managed to speak to the head of the Windows administrators, who agreed to pre-stage the Windows cluster name for me in Active Directory so that the installation could go ahead. This seemed a sensible approach to me: the system administrators had already clustered the underlying Windows operating system, where DBAs can take a back seat, and in turn they should let DBAs focus on SQL Server, which is their area of specialism. The Windows cluster name and the SQL network name were pre-staged in Active Directory following the steps below.

The specific error message appearing is as follows

Cluster network name resource ‘SQL Network Name (SQLCLUSTER1)’ failed to create its associated computer object in domain ‘domain.local’ for the following reason: Unable to create computer account.
The text for the associated error code is: Access is denied.

Please work with your domain administrator to ensure that:
– The cluster identity ‘WIN-CLUSTER-1’ can create computer objects. By default all computer objects are created in the ‘Computers’ container; consult the domain administrator if this location has been changed.
– The quota for computer objects has not been reached.
– If there is an existing computer object, verify the Cluster Identity ‘WIN-CLUSTER-1$’ has ‘Full Control’ permission to that computer object
A second error followed:

Cluster resource ‘SQL Network Name (SQLCLUSTER1)’ in clustered service or application ‘SQLCLUSTER1-SQL’ failed.

And a third:

The Cluster service failed to bring clustered service or application ‘SQLCLUSTER1-SQL’ completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.

To prestage a cluster name account

  1. Make sure that you know the name that the cluster will have, and the name of the user account that will be used by the person who creates the cluster. (Note that you can use that account to perform this procedure.)
  2. On a domain controller, click Start, click Administrative Tools, and then click Active Directory Users and Computers. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
  3. In the console tree, right-click Computers or the default container in which computer accounts are created in your domain. Computers is located in Active Directory Users and Computers/domain node/Computers.
  4. Click New and then click Computer.
  5. Type the name that will be used for the failover cluster, in other words, the cluster name that will be specified in the Create Cluster wizard, and then click OK.
  6. Right-click the account that you just created, and then click Disable Account. If prompted to confirm your choice, click Yes.

The account must be disabled so that when the Create Cluster wizard is run, it can confirm that the account it will use for the cluster is not currently in use by an existing computer or cluster in the domain.

  7. On the View menu, make sure that Advanced Features is selected.

When Advanced Features is selected, you can see the Security tab in the properties of accounts (objects) in Active Directory Users and Computers.

  8. Right-click the folder that you right-clicked in step 3, and then click Properties.
  9. On the Security tab, click Advanced.

10.  Click Add, click Object Types and make sure that Computers is selected, and then click OK. Then, under Enter the object name to select, type the name of the computer account you just created, and then click OK. If a message appears, saying that you are about to add a disabled object, click OK.

11.  In the Permission Entry dialog box, locate the Create Computer objects and Read All Properties permissions, and make sure that the Allow check box is selected for each one.

12.  Click OK until you have returned to the Active Directory Users and Computers snap-in.

13.  If you are using the same account to perform this procedure as will be used to create the cluster, skip the remaining steps. Otherwise, you must configure permissions so that the user account that will be used to create the cluster has full control of the computer account you just created:

  1. On the View menu, make sure that Advanced Features is selected.
  2. Right-click the computer account you just created, and then click Properties.
  3. On the Security tab, click Add. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
  4. Use the Select Users, Computers, or Groups dialog box to specify the user account that will be used when creating the cluster. Then click OK.
  5. Make sure that the user account that you just added is selected, and then, next to Full Control, select the Allow check box.

In our case, we specifically followed the step below.

Steps for prestaging an account for a clustered service or application

It is usually simpler if you do not prestage the computer account for a clustered service or application, but instead allow the account to be created and configured automatically when you run the High Availability wizard. However, if it is necessary to prestage accounts because of requirements in your organization, use the following procedure.

Membership in the Account Operators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at http://go.microsoft.com/fwlink/?LinkId=83477.

To prestage an account for a clustered service or application

  1. Make sure that you know the name of the cluster and the name that the clustered service or application will have.
  2. On a domain controller, click Start, click Administrative Tools, and then click Active Directory Users and Computers. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
  3. In the console tree, right-click Computers or the default container in which computer accounts are created in your domain. Computers is located in Active Directory Users and Computers/domain node/Computers.
  4. Click New and then click Computer.
  5. Type the name that you will use for the clustered service or application, and then click OK.
  6. On the View menu, make sure that Advanced Features is selected.

When Advanced Features is selected, you can see the Security tab in the properties of accounts (objects) in Active Directory Users and Computers.

  7. Right-click the computer account you just created, and then click Properties.
  8. On the Security tab, click Add.
  9. Click Object Types and make sure that Computers is selected, and then click OK. Then, under Enter the object name to select, type the cluster name account, and then click OK. If a message appears, saying that you are about to add a disabled object, click OK.

10.  Make sure that the cluster name account is selected, and then, next to Full Control, select the Allow check box.

There is an excellent KB article which can be found at the link below.

http://technet.microsoft.com/en-us/library/cc731002(WS.10).aspx


Posted in Clustering, SQL Administration, SQL High Availability | Leave a Comment »

Troubleshooting using default trace information

Posted by rahmanagoro on March 11, 2011


The default trace is built into SQL Server; it often tells you about operations happening on the database server that are not recorded in the SQL error logs.

I regularly query the default trace to pick up issues that might be arising on the database server; the scripts that I use can be found below. The default trace also allows me to create policies using policy-based management within SQL 2008. We once had a deployment using Visual Studio, which, for those who have used it, creates databases as a project used for storing database schema, database deployment, integration with SVN and so on. At the end of the day the repository was not up to date, and I found that Auto Shrink had been turned on by Visual Studio. This was revealed within the default trace, and I was able to create a policy to check for this across the database estate.

To get the default trace file currently in use, run the query below.

SELECT *
FROM fn_trace_getinfo(default);
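Alternatively, the same information is available from sys.traces, where the is_default column flags the default trace:

```sql
-- The default trace file path straight from sys.traces
SELECT path
FROM sys.traces
WHERE is_default = 1;
```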

To group the various events to see what has been happening on the database server, you can run the query below.

--Auditing various event types in the trace file.

SELECT
trc_evnt.name, trc_evnt.category_id
,count(*) as occurence
FROM fn_trace_gettable('y:\databases\log_5170.trc', NULL) AS dflt_trc
INNER JOIN sys.trace_events AS trc_evnt
ON dflt_trc.EventClass = trc_evnt.trace_event_id
group by trc_evnt.name, trc_evnt.category_id
order by trc_evnt.name

You can change the trace file name to query the last five default traces. For example, if y:\databases\log_5170.trc is the current trace, to check the one before it simply decrement the number by one and query y:\databases\log_5169.trc.

I get information similar to the output below, showing event name, category ID and occurrence count:

Audit DBCC Event    8    9006
Audit Server Alter Trace Event    8    2
Hash Warning    3    2
Object:Created    5    644
Object:Deleted    5    201
Sort Warnings    3    1
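To drill into who created or deleted those objects, the same trace table can be filtered by event name (the trace path is the example from above; substitute the path returned by fn_trace_getinfo on your server):

```sql
-- Who created or deleted objects, according to the default trace?
SELECT dflt_trc.StartTime, dflt_trc.LoginName, dflt_trc.DatabaseName,
       dflt_trc.ObjectName, trc_evnt.name AS event_name
FROM fn_trace_gettable('y:\databases\log_5170.trc', NULL) AS dflt_trc
INNER JOIN sys.trace_events AS trc_evnt
    ON dflt_trc.EventClass = trc_evnt.trace_event_id
WHERE trc_evnt.name IN ('Object:Created', 'Object:Deleted')
ORDER BY dflt_trc.StartTime DESC;
```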

I have used this method to troubleshoot various issues on the database, such as the following:

Missing statistics: I subsequently created an index or statistics on the column.

Transaction logs growing every night: I changed the growth settings and expanded the size so as to remove the need for the growth.

Posted in Management, Scripts, SQL 2008, SQL Administration, Tips, Uncategorized | Leave a Comment »

SQL server DBA development code review checklist

Posted by rahmanagoro on October 4, 2010


As the database administrator, I often have to deal with production problems. Some of these problems can be classed under different categories, including the following.

  • Performance issues.
  • Issues as a result of a release.
  • Bugs within the SQL engine
  • Faulty SQL code
  • Proactive/reactive issues

The issue of performance problems and faulty SQL code can often be attributed to bad coding. Whilst I know that the job of a DBA can itself be hard, the code review process is one which is often ignored in most shops. I don’t share that approach: I like to know what code is being written on the database servers which I manage. After all, if there are problems, the database administrator will be the one called at 02:45 in the early hours of the morning. I like to avoid such problems and ensure that my environment is as stable as possible. The art of performance tuning is not one that can be learnt from text books or just reading on the internet; it is one that you gain from experience. I often gain such experience by dealing with production issues, attending seminars and of course a lot of reading, but I find that some of these problems can be avoided in the first place. That’s where DBA code reviews come into play. Imagine a situation where one developer was shown how best to write a SQL query, and another developer is making the same mistake: what do you do then? Do you go to every developer and start correcting them?

At a previous job, what I used to do is organise presentations once a month with my development team, often speaking about issues that I have either noticed or topics that the development team want me to speak about. As for code reviews, I came up with a checklist of items that I often look out for. They are as follows.

  • Consistent naming convention should be used at all times, there are loads of naming conventions out there, pick one and stick to it.
  • All objects especially stored procedures should have comments, information can include author, date, description of the object, changes etc.
  • Error handling using TRY..CATCH. TRY..CATCH was introduced in SQL 2005 and is by far the best error handling method built into SQL.
  • Use of schema names when referencing tables: specify Database.schemaname.objectname. This ensures that the SQL engine doesn’t have to go looking for the schema which holds the object.
  • Avoid the use of Select *, all column names must be explicitly specified. This will avoid unnecessary IO.
  • Ensure that all tables have a primary key (preferably of integer type), and all foreign keys are indexes with a non clustered index.
  • Avoid the placing of duplicate indexes on tables. As simple as it might sound, I have seen this in various places.
  • Developers should make efforts to ensure that databases are normalised, a relationship diagram for the database would also be useful.
  • Avoid server side cursors and maintain the use of SET based operation.  If a cursor really needs to be used, use a WHILE loop instead and specify a primary key in your table, a very easy way to do this is using an identity column.
  • Temporary tables and table variable, as a general rule of thumb use a table variable for relatively small datasets, and use a temporary table for fairly large datasets. Temp tables maintain statistics and can have indexes, but often recompile on the other hand as a downside, table variables don’t maintain statistics but they don’t force a recompilation. This was demonstrated in the SQL BITS 2009 session.
  • When using nested stored procedures, ensure that the temporary table names are unique/different. I have seen this affect query plans in a very catastrophic manner.
  • Avoid the use of functions in joins within SQL 2005/2008; from experience, performance tends to suffer as the dataset grows, mainly because the SQL optimizer doesn’t know which index to use. Use a CTE or a derived table instead.
  • Embrace the use of functions like OBJECT_ID to check for the existence of an object within stored procedures/scripts etc. This option is better than querying sysobjects, where it can cause locking.
  • When using ORDER BY within SQL, avoid ordering by ordinals, for example select name,surname from tblStudents order by 1,2. Ensure that the column names are specified; alias names should also be avoided, as this doesn’t work in SQL 2008.
  • Stored procedures that take in different combination of parameters should use local variables in order to avoid parameter sniffing. When having branching IF statements within stored procedures that process complicated logic, each IF statement should call another sub procedure rather than a direct SQL statement. This will allow stored procedures to make use of the procedure cache, and prevent stored procedures generating inconsistent query plans due to parameters.
  • Tables that hold parameter information or lookups should have not null constraints defined.
  • Developers should make efforts in identifying heavy queries within their applications, and review accordingly on how to optimise the query if needed.
  • New tables to be sized appropriately with respect to the use of the table and the nature of the data it will store.
  • Avoid the use of optimiser hints; in SQL 2008, hints should only be used in exceptional circumstances, i.e. maintaining a consistent plan guide etc. At all times the optimiser should be left to determine the best plan for all queries.
  • Use of SET NOCOUNT ON at the beginning of stored procedures, this improves performance by reducing network traffic.
  • Developers should be careful with the use of custom data types, native SQL data types should be used where possible at all times.
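A minimal stored-procedure skeleton pulling a few of these points together (SET NOCOUNT ON, TRY..CATCH, schema-qualified names, explicit columns, named ORDER BY columns); all object names are hypothetical:

```sql
-- Author:      <name>    Date: <date>
-- Description: Returns students for a given class. Illustrative only.
CREATE PROCEDURE dbo.spGetStudentsByClass
    @ClassID int
AS
BEGIN
    SET NOCOUNT ON;  -- suppress rowcount messages, reducing network traffic

    BEGIN TRY
        SELECT s.StudentID, s.Name, s.Surname    -- explicit columns, no SELECT *
        FROM dbo.tblStudents AS s                -- schema-qualified reference
        WHERE s.ClassID = @ClassID
        ORDER BY s.Name, s.Surname;              -- named columns, not ordinals
    END TRY
    BEGIN CATCH
        -- Re-raise with the original error message
        DECLARE @ErrMsg nvarchar(4000) = ERROR_MESSAGE();
        RAISERROR(@ErrMsg, 16, 1);
    END CATCH
END
```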

Posted in Database development, Professional Development, SQL Administration | Leave a Comment »

Granting execute permissions to a database login permanently using db_executor role

Posted by rahmanagoro on August 30, 2010


I often find that some logins need execute permissions on objects in the database. In some cases a development environment might exist where users don’t have a GUI to interact with the database, i.e. every operation is done through stored procedures. In such cases, it might be acceptable to grant execute permissions on all objects in the database.

In SQL Server the role db_datareader has always existed, but how come we don’t have a role called db_executor? In this blog, I will show you how to create a custom role which gives execute permissions on the stored procedures and functions in the database.

For the purpose of demonstration I run the following scripts with SA privileges on the database  

--Create a database and a login named test, grant the test login read only access to the DEMO database
CREATE DATABASE [DEMO] ON  PRIMARY
( NAME = N'DEMO', FILENAME = N'C:\Database\DEMO.mdf' , SIZE = 3072KB , FILEGROWTH = 1024KB )
 LOG ON
( NAME = N'DEMO_log', FILENAME = N'C:\Database\DEMO_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO

USE [master]
GO
CREATE LOGIN [test] WITH PASSWORD=N'test', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
USE [DEMO]
GO
CREATE USER [test] FOR LOGIN [test]
GO
USE [DEMO]
GO
EXEC sp_addrolemember N'db_datareader', N'test'
GO

Whilst still logged in as myself, I create the following objects under the DEMO database.
 


use DEMO
GO
create table tblUser
(
UserID int identity(1,1),
UserName varchar(30)
)

insert into tblUser (UserName)
	Select 'Tom'
union
	Select 'Jen'

GO

Create procedure spGetUserName
as
select UserID, UserName
from tblUser

GO
exec spGetUserName

Now log on as the test user to the DEMO database and run the following scripts.  

 
select * from tblUser 
1 Tom
2 Jen 

Trying to execute the stored procedure, we get the error below.
 
Msg 229, Level 14, State 5, Procedure spGetUserName, Line 1 
The EXECUTE permission was denied on the object ‘spGetUserName’, database ‘DEMO’, schema ‘dbo’. 
 
I could grant execute permissions on this single stored procedure, but imagine that we had over a thousand stored procedures. One easy way is to create a role and grant execute permission on the schema itself.

use DEMO
go
CREATE ROLE [db_executor]
GO
GRANT EXECUTE ON SCHEMA::dbo TO db_executor
GO
--Then we’ll add the Test user in the database to the db_executor role in the database:
USE DEMO
GO
EXEC sp_addrolemember N'db_executor', N'test'
GO

After running the above, we execute the stored procedure again and it completes without any problems.

execute spGetUserName 

1 Tom
2 Jen
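As a final sanity check, the role membership we just granted can be verified from the catalog views (a small sketch; run it in the DEMO database):

```sql
-- Verify which users belong to the db_executor role
SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members AS drm
INNER JOIN sys.database_principals AS r
    ON drm.role_principal_id = r.principal_id
INNER JOIN sys.database_principals AS m
    ON drm.member_principal_id = m.principal_id
WHERE r.name = 'db_executor';
```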

Posted in Day to day queries, Scripts, Security, SQL 2008, SQL Administration | Leave a Comment »