
Failover all mirrored databases as a group

Ok, plain old database mirroring (I know, it's almost obsolete by now) has no group failover, so there is no built-in way to fail over all databases together when one of them goes. What I did was set up an alert that fires when one database fails over, which then starts a simple Agent job that fails over the rest.

There are a lot of ways to configure this; read up on it at https://technet.microsoft.com/en-us/library/cc966392.aspx

We opted to check whether a manual failover had been done, that is, mirroring session state change 7. If your database mirroring is set up with automatic failover, then also add a check for state 8.
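As a quick sanity check before setting up the alert, you can see which role and state each mirrored database is currently in by querying sys.database_mirroring (this is just for inspection, not part of the alert itself):

```sql
-- List mirroring role and state for all mirrored databases on this instance
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```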

I’ll just copy in an example below where we have three databases named DB1, DB2 and DB3. Yeah I know, awesome imagination, right? :-)

The alert checks for a manual failover, thus state 7.

Create the alert and job on test servers first, and do a couple of manual failovers to verify that all databases fail over. They fail over serially, in the order set in the Agent job, so depending on their size and the amount of unrestored transaction log it can take a while.
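For reference, a manual failover for testing is issued from the current principal with the same command the job uses, e.g. for the first example database:

```sql
-- Run on the current principal to manually fail over one database
ALTER DATABASE DB1 SET PARTNER FAILOVER;
```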


-- First we create the alert on both servers holding the mirrored databases

USE [msdb]
GO

/****** Object: Alert [Failover Databases] Script Date: 02/17/2016 10:32:05 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Failover Databases',
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=0,
@include_event_description_in=0,
@category_name=N'[Uncategorized]',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE State = 7 AND
(DatabaseName = ''DB1'' OR DatabaseName = ''DB2'' OR DatabaseName = ''DB3'')',
@job_id=N'f5e58bbe-b574-477f-a3ec-b596c6766a22' -- replace with the job_id of your own Failover Databases job
GO

-- Next we create the job, also on both servers.

USE [msdb]
GO

/****** Object: Job [Failover Databases] Script Date: 02/17/2016 10:33:15 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]]] Script Date: 02/17/2016 10:33:15 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Failover Databases',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Failover databases',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Failover] Script Date: 02/17/2016 10:33:15 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Failover',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'IF EXISTS (SELECT 1 FROM sys.database_mirroring WHERE db_name(database_id) = N''DB1''
AND mirroring_role_desc = ''PRINCIPAL'')
ALTER DATABASE DB1 SET PARTNER FAILOVER
GO

IF EXISTS (SELECT 1 FROM sys.database_mirroring WHERE db_name(database_id) = N''DB2''
AND mirroring_role_desc = ''PRINCIPAL'')
ALTER DATABASE DB2 SET PARTNER FAILOVER
GO

IF EXISTS (SELECT 1 FROM sys.database_mirroring WHERE db_name(database_id) = N''DB3''
AND mirroring_role_desc = ''PRINCIPAL'')
ALTER DATABASE DB3 SET PARTNER FAILOVER
GO',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

Good luck!
//Richard


Powershell delete old backups/files

Can be run in a SQL Agent job as a PowerShell job step.

This script deletes .bak and .trn files in all subfolders under Y:\Backup that are older than one day. Modify as needed, and consider testing first with -WhatIf on Remove-Item before letting it actually delete anything.

Get-ChildItem 'Y:\Backup' -include *.bak,*.trn -Recurse | Where {$_.creationtime -lt (Get-Date).AddDays(-1)} | Remove-Item -Force

//Richard


A new company on the consulting scene!

Coduo AB

Together with my brother-in-law Anders Björklund I have started a new consulting company. We are called Coduo AB and are predicted a bright future.

I will continue working as a DBA consultant, and Anders is a Java expert/programmer extraordinaire, simply brilliant! You can still find me here on LinkedIn, and Anders here.

We will both be working for Coduo AB from June 2015, and now we are looking for other nice people who want to work with us. We are primarily looking to hire system developers with experience from the Java or .Net world, but if you have worked as a DBA for SQL Server or Oracle that is also very interesting.

Coduo wants to grow, but not much; we want to keep the company fairly small, with short decision paths, so that everyone can know everyone else in the company. Besides, that makes for much simpler and more fun conferences.

Contact me at richard@coduo.se or Anders at anders@coduo.se for more information.

Hope to see you!
//Richard


Powershell copy database backups

Needed to set up an automatic copy of the latest SQL Server database backup from a production database to a test server. The twist was that the production database was in an AlwaysOn availability group with backups taken on the secondary replica. So, how do we find the latest database backup? I solved it by querying msdb on both replicas and comparing the backup dates. Ugly, but it works…

# Script to setup automatic copy of database backup from AlwaysOn
# production servers to test server
# I use it on a separate admin server
# Run this script as a Scheduled Task or on demand,
# the user needs to be able to access the backup files remotely
# and also the destination server folder
# This is just an example script, needs to be edited to suit your environment
# It works by logging in to both AlwaysOn instances and
# checking the file path and time stamp
# for the latest backup of the database you want to copy.
# You need a simple user to log in to the production instances,
# no special rights granted,
# public can read this information in msdb, I use the login RemoteFilecopy
# I use diskshares for backups in the form of \\servername\drive$
# eg. \\192.168.0.2\z$
# edit to suit your need

########## Common variables for all copying ##########

$user = "RemoteFilecopy" # login used to read msdb on the production instances
$password = "YOUR PASSWORD FOR THIS LOGIN"
$db = "msdb"
$DestinationCatalog = "\\TESTSERVER\DISKSHARE$\DESTINATIONCATALOG\"

########## GET LATEST PRODUCTION DATABASE BACKUP ##########

$ServerInstance1 = "PRODUCTION_SERVER_1"
$ServerInstance2 = "PRODUCTION_SERVER_2"
$SourceDatabase = "DATABASE_NAME"

# Check latest backup from each server and get the latest one
$sqlstring = @"
SELECT TOP 1 bmf.physical_device_name [Filename], bs.backup_finish_date [BackupDate]
FROM msdb.dbo.backupmediafamily bmf
INNER JOIN msdb.dbo.backupset bs ON bmf.media_set_id = bs.media_set_id
WHERE bs.type = 'D'
AND bs.database_name = '$SourceDatabase'
ORDER BY bs.backup_finish_date DESC
"@

# I use a simple dummy date below just to have a non-null date to compare to
# If your backups are older than 2010, then you have other problems :-)

$Result1 = Invoke-Sqlcmd -Username $user -Password $password -ServerInstance $ServerInstance1 -Database $db -Query $sqlstring
$Filename1 = $Result1.Filename
$BackupDate1 = Get-Date "1/1/2010"
if ($Filename1.length -gt 0) {
    $BackupDate1 = $Result1.BackupDate
}

$Result2 = Invoke-Sqlcmd -Username $user -Password $password -ServerInstance $ServerInstance2 -Database $db -Query $sqlstring
$Filename2 = $Result2.Filename
$BackupDate2 = Get-Date "1/1/2010"
if ($Filename2.length -gt 0) {
    $BackupDate2 = $Result2.BackupDate
}

if ( $BackupDate1 -gt $BackupDate2 ) {
$SourceBaseCatalog = "\\PRODUCTION_SERVER_1\DISKSHARE$\"
$Filename = $SourceBaseCatalog + $Filename1.Remove(0,3)
}
if ( $BackupDate1 -lt $BackupDate2 ) {
$SourceBaseCatalog = "\\PRODUCTION_SERVER_2\DISKSHARE$\"
$Filename = $SourceBaseCatalog + $Filename2.Remove(0,3)
}
if ( $BackupDate1 -eq $BackupDate2 ) { Write-Host "Found no backup or same date, break!!"; Break; }

################### START COPYING BACKUP ################################

# Change from sqlps to normal filesystem usage
Set-Location C:

# Create destination filename, I use the original name plus "FULL_LATEST"
$DestinationFilename = "$DestinationCatalog" + $SourceDatabase + "_FULL_LATEST.bak"

# Check if file exists, exit otherwise
IF(!(Test-Path -path $Filename))
{Write-Host "$Filename not found!"; exit}

# Check if destination catalog exists, exit otherwise
IF(!(Test-Path -path $DestinationCatalog)) {
Write-Host "$DestinationCatalog not found, exiting`r`n"
exit
}

# Copy file from PROD server to TEST server
Write-Host "Start copying file $Filename to $DestinationFilename"

# Uncomment line below to actually copy the file, but ONLY after testing, there are numerous possibilities to screw up here :-)
# Copy-Item -Path $Filename -Destination $DestinationFilename
########## COPY BACKUP DONE ##########


Database missing owner

Sometimes a database does not have an owner, and when you right-click on it and choose Properties, the GUI just pops up a message saying that, sorry, it can't fulfill your request.

Just open a new query window, change context to the database in question and set an owner, e.g. set the owner to sa:
sp_changedbowner 'sa'

And then you can right-click as usual.
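Worth noting: sp_changedbowner is deprecated, and on newer versions the documented equivalent is ALTER AUTHORIZATION (the database name below is a placeholder):

```sql
-- Modern equivalent of sp_changedbowner 'sa'
ALTER AUTHORIZATION ON DATABASE::YourDatabaseName TO sa;
```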


Unused indexes

Indexes are only really useful for finding rows. When they are not being used they just take up space and take time to update. Use something like the script below to find indexes that are never used for what they are good at.

SELECT o.name AS object_name,
i.name AS index_name,
i.type_desc,
u.user_seeks,
u.user_scans,
u.user_lookups,
u.user_updates
FROM sys.indexes i
JOIN sys.objects o ON i.object_id = o.object_id
LEFT JOIN sys.dm_db_index_usage_stats u ON i.object_id = u.object_id
AND i.index_id = u.index_id
AND u.database_id = DB_ID()
WHERE o.type <> 'S'
AND i.type_desc <> 'HEAP'
AND (u.index_id IS NULL
     OR (u.user_updates > 0 AND u.user_seeks < 10 AND u.user_scans < 10 AND u.user_lookups < 10))
ORDER BY u.user_updates DESC
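Once you have confirmed an index really is unused (check over a full business cycle, since the usage stats in this DMV reset when the instance restarts), dropping it is straightforward; the names below are placeholders:

```sql
-- Drop a confirmed-unused nonclustered index
DROP INDEX IX_SomeIndex ON dbo.SomeTable;
```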


When were the statistics updated?

Run the script below after adding your table name in “ON o.name = ‘YourTableName’ ”.
It shows you the date and time when index statistics were last updated.

SELECT 'Index Name' = i.name,
'Statistics Date' = STATS_DATE(i.object_id, i.index_id)
FROM sys.objects AS o WITH (NOLOCK)
JOIN sys.indexes AS i WITH (NOLOCK)
ON o.name = 'YourTableName'
AND o.object_id = i.object_id
ORDER BY STATS_DATE(i.object_id, i.index_id);
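If the statistics turn out to be stale, they can be refreshed manually; a minimal example (the table name is a placeholder):

```sql
-- Refresh all statistics on one table with a full scan
UPDATE STATISTICS dbo.YourTableName WITH FULLSCAN;
```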


SPID -2 in the instance

Sometimes you come upon an instance with some sessions that have a session id of -2.

A -2 spid is an orphaned session, that is, a session that no longer has an outer connection, perhaps from a DTC transaction that went wrong or something else. Whatever the reason, the problem is that these sessions hang around for a long while, and if they are locking resources, those resources will never be released until we either kill the -2 spid or restart the instance.

But we can't use the KILL command with the session id for this; it won't work. And how do we find them in the first place?

First, finding them, run:

SELECT request_owner_guid 
FROM sys.dm_tran_locks 
WHERE request_session_id = -2 
and request_owner_guid <> '00000000-0000-0000-0000-000000000000'

They will probably need to be killed, but I leave that decision to you. Do your homework before just killing them; I'm showing you how, not saying you should do it blindly.

To kill them, use the UOW number (the long alphanumeric GUID) instead of the session id, e.g.:

KILL 'DEF12078-0199-1212-B810-CD46A73F2498'

Or if you have a lot:

SELECT 'kill ''' + convert(varchar(100), request_owner_guid) + '''' 
FROM sys.dm_tran_locks 
WHERE request_session_id = -2 
and request_owner_guid <> '00000000-0000-0000-0000-000000000000'
order by 1

After taking care of these, talk to the application/web group to find out how you can fix this problem.

//Richard


Find all logins with sysadmin role

Having more than just one or a few logins with the sysadmin role is potentially a big security problem. Try to keep the list as short as possible; find all of them with this script:

SELECT p.name AS [Name], p.type_desc, p.is_disabled, p.create_date, p.modify_date, p.default_database_name
FROM sys.server_principals r
INNER JOIN sys.server_role_members m ON r.principal_id = m.role_principal_id
INNER JOIN sys.server_principals p ON p.principal_id = m.member_principal_id
WHERE r.type = 'R'
AND r.name = N'sysadmin'
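If a login in the list should not be there, it can be removed from the role; on SQL Server 2012 and later the syntax is (the login name below is a placeholder):

```sql
-- Remove a login from the sysadmin fixed server role
ALTER SERVER ROLE sysadmin DROP MEMBER [SomeLogin];
-- On older versions: EXEC sp_dropsrvrolemember 'SomeLogin', 'sysadmin';
```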

//R


Top 50 queries with sql text and plan

The script below shows you the top 50 queries with SQL text and plan, ordered by total worker time.

Total worker time is the total CPU time a plan has consumed since it was compiled, so even a relatively simple plan that runs fast but very often can have a high total worker time.

select
eqp.query_plan,
highest_cpu_queries.total_worker_time,
db_name(q.dbid) as dbname,
q.objectid,
highest_cpu_queries.execution_count,
highest_cpu_queries.last_execution_time,
highest_cpu_queries.creation_time,
q.[text]
from
(select top 50
qs.plan_handle,
qs.total_worker_time,
qs.last_execution_time,
qs.execution_count,
qs.creation_time
from
sys.dm_exec_query_stats qs
order by qs.total_worker_time desc) as highest_cpu_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
cross apply sys.dm_exec_query_plan(plan_handle) eqp
order by highest_cpu_queries.total_worker_time desc

//R

