AlwaysOn RECOVERY PENDING – SQL Server Bugs & Enhancement Requests (T-SQL Tuesday)

by Muthukkumaran kaliyamoorthy Published on: January 11, 2017
Categories: AlwaysON, SQL party

This month's T-SQL Tuesday topic is "SQL Server Bugs & Enhancement Requests", hosted by my favorite and inspirational Brent Ozar.

 

 

Here is my bug report:

https://connect.microsoft.com/SQLServer/feedback/details/3022019 – and it is still active.

It happened on one of my production servers: when I tried to remove a database from AlwaysOn, other important databases went into RECOVERY PENDING and became inaccessible. I had no idea what had happened to the other databases until I found out it is a bug.

 

The issue: when a database is removed from the primary replica while the secondary is disconnected, the databases with higher database IDs on the secondary go into a "NOT SYNCHRONIZED / RECOVERY PENDING" state, while the databases with lower database IDs stay healthy and synchronized.

http://www.sqlserverblogforum.com/2016/08/alwayson-database-not-synchronized-and-recovery-pending/
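If you hit this, a quick way to see which databases on the secondary are affected is to query the AlwaysOn DMVs. This is only a rough sketch of the check I use; run it on the secondary replica.

-- List local availability databases that are not synchronized, with their database state
SELECT  DB_NAME(drs.database_id)        AS database_name,
        drs.synchronization_state_desc, -- e.g. NOT SYNCHRONIZING
        d.state_desc                    AS database_state  -- e.g. RECOVERY_PENDING
FROM    sys.dm_hadr_database_replica_states AS drs
JOIN    sys.databases AS d
        ON d.database_id = drs.database_id
WHERE   drs.is_local = 1
ORDER BY d.database_id;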

Muthukkumaran kaliyamoorthy

I'm currently working as a SQL Server DBA at one of the top MNCs. I'm passionate about SQL Server and specialize in administration and performance tuning. I'm an active member of the SQL Server Central and MSDN forums, and I also write articles for SQL Server Central. For more, click here.

VLDB very large database DBCC checkDB

by Muthukkumaran kaliyamoorthy Published on: December 31, 2016
Categories: VLDB

Database corruption – DBCC CHECKDB for a very large database

We know SQL Server data lives on file-system storage. There is constant input/output (I/O) interaction between SQL Server and the storage subsystem, both in memory and on disk. The I/O subsystem plays a major role here: 99% of the time, database corruption originates in the I/O subsystem (controllers, disks, drivers and so on).

In this post, I am sharing a few things:

1. How important CHECKDB is.

2. How to fine-tune and run CHECKDB for VLDBs.

3. Methods for troubleshooting corruption issues.

Storage/VM admin: Sends a graph showing that IOPS for the server spikes sharply once a week, between certain days.

DBA: Yes, we run the CHECKDB job for the VLDB once a week on those days. That is likely the cause, since CHECKDB reads every allocated page in the database and consumes a lot of IOPS.

Storage/VM admin: It is a huge spike for the VMs. Can you disable it for next week to see if it reduces the IOPS, and then run it monthly instead?

DBA: No, this is very important for data consistency and integrity checking.

The CHECKDB schedule was changed to monthly. Everything went well, until one day the database reported corruption.

Now what? Restore the latest full backup under a different database name and run CHECKDB – surprise, that copy was corrupted too. The corruption was severe: either restore from a clean backup or run REPAIR_ALLOW_DATA_LOSS. The application would not tolerate REPAIR_ALLOW_DATA_LOSS, so we used another application-level method – that is a different story.

The point is that CHECKDB is very important. Run it at least before taking a full backup; it gives us a minimum level of protection from corruption.

 

Best options for a very large database (VLDB): the database is 10 TB+ and CHECKDB runs for more than two days – how do we reduce the run time? I had this situation with one of my databases. I used a different approach and got some good pointers from Paul Randal.

Me: CHECKDB on my 10 TB database runs for more than two days; it takes 8+ hours if I exclude the non-clustered indexes. I hope I can go with that, since I can recreate an NCI if it gets corrupted.

A response from Paul Randal: Sure – you can do that, but you won’t know when your indexes are corrupt until queries start failing or getting wrong results. I don’t recommend it. Backup, copy, restore, checkdb is the way to do it, or split the checks up using DBCC CHECKTABLE.

 

1. Initially, I skipped the non-clustered indexes; they can be dropped and recreated if they get corrupted. This definitely reduces the run time – in my case, from two days to eight hours. (A NOINDEX sketch follows after this list.)

2. Use Ola Hallengren's integrity-check script. It exposes many parameters you can make use of.

If you have a 10 TB database with a 500 GB table that is not critical – say, very old dump data that can be re-imported from the original source file – you can skip that table. (That is a special case.)

3. Splitting the CHECKDB workload is a good option for VLDBs.

There are two scenarios: 1. the database has multiple files and filegroups; 2. the default layout with a single MDF/primary filegroup.

Method 1: With multiple files and filegroups, you can run DBCC CHECKFILEGROUP. It is the easy option; just make sure the filegroups you check each day are of comparable size. If the sizes are not even, plan to combine filegroups per day accordingly.
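A minimal sketch of the per-filegroup approach, assuming a database and filegroup names that are purely hypothetical, with one Agent job step per day:

USE MyVLDB;   -- hypothetical database name
GO
-- Day 1: check the PRIMARY filegroup
DBCC CHECKFILEGROUP (N'PRIMARY') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- Day 2: check the next filegroup, and so on through the week
DBCC CHECKFILEGROUP (N'FG_2016') WITH NO_INFOMSGS, ALL_ERRORMSGS;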

Method 2: For single-file VLDBs, split the checks by table.

“Figure out your largest tables (by number of pages) and split the total number into 7 buckets, such that there are a roughly equal number of database pages in each bucket.”

Example: put the largest tables in the first bucket and distribute the remaining tables across buckets 2 to 6, then run DBCC CHECKTABLE per bucket as described in the post above. If your largest table has 100,000 pages, each bucket should total roughly 100,000 pages, one bucket per day.
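A rough sketch of two of those buckets, with made-up table names; each night's job step runs DBCC CHECKTABLE for the tables in that bucket:

-- Bucket 1 (day 1): the single largest table
DBCC CHECKTABLE (N'dbo.BigFactTable') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- Bucket 2 (day 2): smaller tables that together add up to a similar page count
DBCC CHECKTABLE (N'dbo.Orders')       WITH NO_INFOMSGS, ALL_ERRORMSGS;
DBCC CHECKTABLE (N'dbo.OrderDetails') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- Once a week, also run DBCC CHECKALLOC and DBCC CHECKCATALOG to cover the allocation and metadata checks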

4. One more interesting case, from Argenis Fernandez: a non-clustered index on a sparse column can make the CHECKDB run time much worse.

5. A post by Aaron Bertrand covers trace flag usage and more.

You can combine methods 1 to 5 for VLDBs to reduce the run time. Test to see which option works for your business and use it, but do not skip running CHECKDB.
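The NOINDEX option mentioned in point 1 skips the checks of non-clustered indexes on user tables; a minimal sketch (the database name is a placeholder):

-- Full logical checks of heaps and clustered indexes, skipping non-clustered index checks
DBCC CHECKDB (N'MyVLDB', NOINDEX) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- Even lighter: physical-only checks (page checksums, torn pages); still run a full check periodically
DBCC CHECKDB (N'MyVLDB') WITH PHYSICAL_ONLY;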

 

Steps to identify the corruption:

Step 1: Run DBCC CHECKDB at least once a week via an Agent job; this will report any corruption.

Step 2: Check the error logs daily. If you have a centralized server, automate an email that checks the error logs every hour for critical errors, or create an alert notification using an operator.

If there is corruption, note the database name from the error message.
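For the alert route in step 2, here is a sketch of an Agent alert that emails an operator on severity 24 errors; the alert and operator names are placeholders, and it assumes Database Mail and the operator already exist.

USE msdb;
GO
-- Alert for severity 24 (fatal hardware/corruption) errors
EXEC dbo.sp_add_alert
     @name = N'Severity 24 - fatal error',
     @severity = 24,
     @enabled = 1,
     @delay_between_responses = 60,
     @include_event_description_in = 1;   -- include the error text in the email
GO
-- Send the alert to an existing operator by email (notification method 1 = email)
EXEC dbo.sp_add_notification
     @alert_name = N'Severity 24 - fatal error',
     @operator_name = N'DBA Team',
     @notification_method = 1;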

Step 3: If you find any error, run a consistency check with options that return the exact corruption messages – DBCC CHECKDB ('DBName') WITH NO_INFOMSGS, ALL_ERRORMSGS.

CHECKDB will report the errors along with a hint about the minimum repair option that can fix the corruption – but that is only a suggestion from SQL Server. In some memory-level corruption cases, a service restart fixes the problem without an actual REPAIR_ALLOW_DATA_LOSS run. You should know which cases need a restart.

http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2130-corruption-can-be-fixed-by-restarting-sql-server/

https://www.sqlskills.com/blogs/paul/misconceptions-around-database-repair/

Step 4: If you have good experience with the error and think you can fix it without data loss, try that first – for example, non-clustered index corruption can be fixed by dropping and recreating the index, and some memory-level corruption by recycling the SQL Server service.
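For the non-clustered index path, a hedged sketch (table, index and column names are made up): CHECKDB reports the object ID and index ID of the damaged index; after confirming it is a non-clustered index, drop and recreate it and re-run CHECKDB.

-- Map the object ID / index ID from the CHECKDB error to an index name
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.index_id, i.type_desc
FROM   sys.indexes AS i
WHERE  i.object_id = 245575913   -- object ID from the CHECKDB output (example value)
  AND  i.index_id  = 2;          -- index ID greater than 1 means a non-clustered index

-- Drop and recreate the corrupt non-clustered index (names are hypothetical)
DROP INDEX IX_Orders_CustomerID ON dbo.Orders;
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- Verify
DBCC CHECKDB (N'DBName') WITH NO_INFOMSGS, ALL_ERRORMSGS;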

There are cases where a recycle of the SQL Server service fixes the inconsistency.

http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2130-corruption-can-be-fixed-by-restarting-sql-server/

My case: I had a database that showed ONLINE in sys.master_files, and the data and log files were present on the physical file system, but I could not see any tables.

 

Msg 1823, Level 16, State 2, Line 1

A database snapshot cannot be created because it failed to start.

Msg 7928, Level 16, State 1, Line 1

The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.

Msg 5030, Level 16, State 12, Line 1

The database could not be exclusively locked to perform the operation.

Msg 7926, Level 16, State 1, Line 1

Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.

Msg 9001, Level 21, State 1, Line 1

The log for database ‘DB’ is not available. Check the event log for related error messages. Resolve any errors and restart the database.

The operating system returned error 21(The device is not ready.) to SQL Server during a read at offset 0x00001e49c26000 in file ‘F:\Microsoft SQL Server\DATA\BI.mdf’. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

We had some glitches in the storage subsystem; after they were fixed, the SQL Server service was restarted.

 

If you have no idea about the error, or need help from the SQL database corruption masters, you can get it here –> https://twitter.com/#sqlhelp. I have, many times.

Read Gail Shaw’s post http://www.sqlservercentral.com/articles/Corruption/65804/

What matters most is that we have good, non-corrupted, up-to-date backups in hand for all production servers.

We have identified database corruption – what basic steps can we take? Different levels of corruption call for different steps; here are some basic ones you can try.

1. Restore the database on a different server and storage subsystem and run CHECKDB there. For VLDBs this option is often impractical, since it needs a lot of additional storage and restore time.

2. If you have up-to-date backups (including a tail-log backup), restore them in sequence.

3. If you have no backup and the minimum repair level cannot fix it, then as a last resort use REPAIR_ALLOW_DATA_LOSS, which will repair the database at the cost of losing data (see the sketch after this list).

4. Some corruption cannot be fixed even by REPAIR_ALLOW_DATA_LOSS, so the only good way out is to restore a good backup. It is very important to back up the database after CHECKDB and to do test restores frequently.
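For the last-resort path in point 3, a minimal sketch (the database name is a placeholder); REPAIR_ALLOW_DATA_LOSS needs single-user mode and, as the name says, can delete data, so take a backup of the damaged database first if you can:

ALTER DATABASE [DBName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB (N'DBName', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
ALTER DATABASE [DBName] SET MULTI_USER;
GO
-- Re-check after the repair
DBCC CHECKDB (N'DBName') WITH NO_INFOMSGS, ALL_ERRORMSGS;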

 

How can we prevent database corruption? There is no way to prevent it entirely, but we can be proactive DBAs:

Run a weekly CHECKDB, before the full backup

Do rotational test restores of the databases

Enable checksum page verification and backup checksums (see the sketch below)

Schedule a DBCC CHECKDB :-)
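A sketch of the checksum settings from the list above (the database name and backup path are placeholders): PAGE_VERIFY CHECKSUM catches I/O-level damage when a page is read back, and WITH CHECKSUM on backups verifies those page checksums while the backup runs.

-- Enable page checksums for the database (new and changed pages get a checksum)
ALTER DATABASE [DBName] SET PAGE_VERIFY CHECKSUM;
GO
-- Verify page checksums during the backup and checksum the backup itself
BACKUP DATABASE [DBName]
TO DISK = N'\\share\Backup\DBName_full.bak'
WITH CHECKSUM, STATS = 10;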

 


AlwaysOn database not synchronizing – suspect mode

by Muthukkumaran kaliyamoorthy Published on: September 17, 2016
Categories: AlwaysON

 

I got a call that a database was not in an online/available state and corruption was suspected. One of my AlwaysOn secondary databases had gone into SUSPECT mode because the log file and its drive were full. I tried to resume the database: ALTER DATABASE dbname SET HADR RESUME;

It went into the "In Recovery" phase and failed, since there was not even 1 MB of free space in the log file or on the drive to complete the recovery phase.

The drive normally has plenty of capacity for the log, but there was a huge transaction load from the application and, more importantly, the transaction log backup had been failing for five hours. I take a transaction log backup every five minutes, since this is a heavily loaded, critical OLTP database. The full log was caused by the log backup failures combined with massive active open transactions; we had issues with the NetBackup storage.

Error 3414 is a generic error raised by SQL Server when a database goes into SUSPECT mode. It does not necessarily mean we have corruption and must follow the corruption procedure. All I want to say is: read the error log, understand the issue, and act accordingly. This case was fixed by adding storage space to the drive and resuming HADR, after which the log showed "7407 transactions rolled forward in database".

In the primary:

Error: 9002, Severity: 17, State: 2.

The transaction log for database 'DB' is full due to 'LOG_BACKUP'.

 

From MSDN: https://msdn.microsoft.com/en-us/library/ff877972.aspx

SUSPEND_FROM_REDO = An error occurred during the redo phase

SUSPEND_FROM_APPLY = An error occurred when writing the log to file (see error log)

In the secondary:

Error: 3414, Severity: 21, State: 1.

An error occurred during recovery, preventing the database ‘DBName’ (7:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

For more on 3414: https://support.microsoft.com/en-in/kb/2015741

In our case it was a different issue, and the corruption guidance for error 3414 was not applicable.

Errors from the error log

Error: 18210, Severity: 16, State: 1.

BackupVirtualDeviceFile::RequestDurableMedia: Flush failure on backup device ‘VNBU0-94692-3252-1473459870′. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).

Error: 17053, Severity: 16, State: 1.

L:\SQL_Log\DBNAME_LOG.LDF: Operating system error 112(There is not enough space on the disk.) encountered.

Error: 5149, Severity: 16, State: 3.

MODIFY FILE encountered operating system error 112(There is not enough space on the disk.) while attempting to expand the physical file ‘L:\SQL_Log\DBNAME_LOG.LDF’.

AlwaysOn Availability Groups data movement for database ‘DBName’ has been suspended for the following reason: “system” (Source ID 4; Source string: ‘SUSPEND_FROM_APPLY‘). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.

Error: 3041, Severity: 16, State: 1.

ALTER DB param option: RESUME

AlwaysOn Availability Groups data movement for database ‘DBName’ has been resumed. This is an informational message only. No user action is required.

AlwaysOn Availability Groups connection with primary database established for secondary database ‘DBName’ on the availability replica ‘SERVER’ with Replica ID: {3284e6a0-ca68-41ed-92ca-759757477e54}. This is an informational message only. No user action is required.

Error: 35285, Severity: 16, State: 1.

The recovery LSN (256072:1024000:1) was identified for the database with ID 7. This is an informational message only. No user action is required.

 

Error: 3313, Severity: 21, State: 1.

During redoing of a logged operation in database ‘DBName’, an error occurred at log record ID (256072:1023498:1). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.

AlwaysOn Availability Groups data movement for database ‘DBName’ has been suspended for the following reason: “system” (Source ID 2; Source string: ‘SUSPEND_FROM_REDO’). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.

Error: 3414, Severity: 21, State: 1.

An error occurred during recovery, preventing the database ‘DBName’ (7:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

Error: 926, Severity: 14, State: 1.

Database ‘DBName’ cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.

 

ALTER DB param option: RESUME

AlwaysOn Availability Groups data movement for database ‘DBName’ has been resumed. This is an informational message only. No user action is required.

Nonqualified transactions are being rolled back in database DBName for an AlwaysOn Availability Groups state change. Estimated rollback completion: 100%. This is an informational message only. No user action is required.

State information for database 'DBName' – Hardened Lsn: '(256072:1024000:1)'    Commit LSN: '(0:0:0)'    Commit Time: 'Jan  1 1900 12:00AM'

AlwaysOn Availability Groups connection with primary database terminated for secondary database ‘DBName’ on the availability replica ‘SERVER’ with Replica ID: {3284e6a0-ca68-41ed-92ca-759757477e54}. This is an informational message only. No user action is required.

State information for database 'DBName' – Hardened Lsn: '(256072:1024000:1)'    Commit LSN: '(0:0:0)'    Commit Time: 'Jan  1 1900 12:00AM'

Starting up database ‘DBName’.

Recovery of database ‘DBName’ (7) is 0% complete (approximately 903 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.

Recovery of database ‘DBName’ (7) is 0% complete (approximately 902 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.

Recovery of database ‘DBName’ (7) is 0% complete (approximately 902 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.

 

After adding storage space to the partition/drive, the database went through the recovery phase and came back online with ALTER DATABASE dbname SET HADR RESUME;

AlwaysOn Availability Groups connection with primary database established for secondary database ‘DBName’ on the availability replica ‘SERVER’ with Replica ID: {3284e6a0-ca68-41ed-92ca-759757477e54}. This is an informational message only. No user action is required.

Error: 35285, Severity: 16, State: 1.

The recovery LSN (256072:1024000:1) was identified for the database with ID 7. This is an informational message only. No user action is required.

7407 transactions rolled forward in database ‘DBName’ (7:0). This is an informational message only. No user action is required.

Recovery completed for database DBName (database ID 7) in 24 second(s) (analysis 1561 ms, redo 13390 ms, undo 0 ms.) This is an informational message only. No user action is required.

AlwaysOn Availability Groups connection with primary database established for secondary database ‘DBName’ on the availability replica ‘SERVER’ with Replica ID: {3284e6a0-ca68-41ed-92ca-759757477e54}. This is an informational message only. No user action is required.

Error: 35285, Severity: 16, State: 1.

The recovery LSN (256072:1024000:1) was identified for the database with ID 7. This is an informational message only. No user action is required.

CHECKDB for database ‘DBName’ finished without errors on 2016-09-03 04:56:19.650 (local time). This is an informational message only; no user action is required.

 

At the same time I had a WSFC issue: my secondary node showed offline in Failover Cluster Manager, even though the server itself was up and responding to ping. I tried to bring it online, but no luck. There were some automatic failovers and the node kept going up and down. Failover Cluster Manager was not working and did not even show the cluster name. I tried to connect to the cluster manually (Connect to Cluster –> typed the name), but it timed out and could not connect.

I checked the cluster status and node votes with PowerShell. The secondary node's vote was zero, which means it could not talk to the cluster. The cluster itself kept working, since it is Windows Server 2012 with dynamic quorum.

I tried to validate the cluster (Validate Configuration –> Next –> Enter name –> Browse): the secondary node could not be added, while the other nodes were added.

See the image: the cluster name does not appear in Failover Cluster Manager, and validation does not add the secondary node – it only added the primary and the witness/quorum node.

I had no clue, and the cluster errors were generic, so I rebooted the secondary server – after which the cluster name appeared and all the nodes showed online in the WSFC. There was some glitch on the secondary node that caused the disconnection from the WSFC.

Errors from the AlwaysOn health check:

Message: A connection timeout has occurred while attempting to establish a connection to availability replica ‘server’ with id [3284E6A0-CA68-41ED-92CA-759757477E54]. Either a networking or firewall issue exists, or the endpoint address provided for the replica is not the database mirroring endpoint of the host server instance.

Statement: ALTER AVAILABILITY GROUP [Groupname] FAILOVER;

From Cluster log

The state of the local availability replica in availability group 'GroupName' has changed from 'SECONDARY_NORMAL' to 'RESOLVING_PENDING_FAILOVER'.

The state changed because of a user initiated failover.  For more information, see the SQL Server error log, Windows Server Failover Clustering (WSFC) management console, or WSFC log.

AlwaysOn Availability Groups connection with primary database terminated for secondary database 'DB' on the availability replica 'server' with Replica ID: {3284e6a0-ca68-41ed-92ca-759757477e54}. This is an informational message only. No user action is required.

AlwaysOn: The local replica of availability group 'GroupName' is preparing to transition to the primary role in response to a request from the Windows Server Failover Clustering (WSFC) cluster. This is an informational message only. No user action is required.

The state of the local availability replica in availability group 'GroupName' has changed from 'RESOLVING_PENDING_FAILOVER' to 'RESOLVING_NORMAL'.

The state changed because the availability group is coming online.  For more information, see the SQL Server error log,

Windows Server Failover Clustering (WSFC) management console, or WSFC log.

AlwaysOn Availability Groups connection with primary database terminated for secondary database ‘DB’ on the availability replica ‘server’ with Replica ID: {3284e6a0-ca68-41ed-92ca-759757477e54}. This is an informational message only. No user action is required.

Conclusion: the AlwaysOn database went into SUSPECT mode because the log file filled up, due to the transaction log backup failure on the primary server; there was no database-level corruption.


TempDB database is Full and Optimization

What is TempDB, and what are the best practices for TempDB?

TempDB is a system database, one per instance. It is common to, and shared by, all other databases. All temporary activity happens there, and yes, TempDB will definitely fill up and occupy more space depending on the temporary tasks we are running. Many kinds of activity take place in TempDB.

The old best practice was to create TempDB on a separate disk with an estimated initial file size, but those were the days of dedicated spindles; most of us now use disk arrays. The spindles and HDDs/SSDs are striped through RAID and shared across LUNs and pools, so check with your infra team about how the storage is configured and how the disk is mounted or presented to the server: is it a dedicated drive (common with DAS) or a disk array (common with SAN)? SAN is the most common industry standard, though we had both DAS and SAN, depending on the database application. Some RAID level will certainly be in use – check that as well; the common recommendation is RAID 1+0, which is costly. It is always good to know what kind of storage (and which storage vendor) you are using, so you can plan and test accordingly. Some servers use advanced automated tiered storage, which is good; some have no tiering, and so on. DBAs have limited knowledge of storage, but it is good to learn from our SAN and VM admins :-).

Issue 1:

What activities happen in TempDB, and what is occupying the space?

The list is long: user objects, internal objects and version stores are all stored temporarily in TempDB. Read the TechNet article for more.

If it fills up often, we need to capture the tasks that are hitting TempDB and plan accordingly.

Find out the Tempdb physical file usage

SELECT SUM(unallocated_extent_page_count) AS [free pages],
SUM(unallocated_extent_page_count
+ user_object_reserved_page_count
+ internal_object_reserved_page_count
+ mixed_extent_page_count
+ version_store_reserved_page_count) * (8.0/1024.0/1024.0) AS [Total TempDB SizeInGB]
, SUM(unallocated_extent_page_count * (8.0/1024.0/1024.0)) AS [Free TempDB SpaceInGB]
,unallocated_extent_page_count
,user_object_reserved_page_count
 ,SUM(version_store_reserved_page_count  * (8.0/1024.0/1024.0)) AS [version_store_GB]
,internal_object_reserved_page_count
,mixed_extent_page_count
FROM tempdb.sys.dm_db_file_space_usage
--where [FreeTempDBSpaceInGB]>50
group by unallocated_extent_page_count,user_object_reserved_page_count,internal_object_reserved_page_count,mixed_extent_page_count;

We then work based on the results of the query above – whether the space is going to the version store, internal objects, and so on.

TempDB DMV:

Monitor the disk space used by the user objects, internal objects, and version stores in the tempdb files.

sys.dm_db_file_space_usage – Returns space usage information for each file in the database.

Allocation or deallocation activity in tempdb at the session or task level

sys.dm_db_session_space_usage - Returns the number of pages allocated and deallocated by each session for the database.

sys.dm_db_task_space_usage – Returns page allocation and deallocation activity by task for the database.

These views can be used to identify large queries, temporary tables, or table variables that are using a large amount of tempdb disk space.

For more: see the TechNet article "Diagnosing tempdb Disk Space Problems".

 

Useful queries to find out who is using my TempDB

The following code is originally from Gianluca Sartori and Deepak Biswal; I have just copied it here. The MSDN article has more code – have a look and use what fits your case. This code is really cool and has helped me a lot.

The following query only shows active requests, joining to the sys.dm_exec_requests DMV. Once a query has finished, you cannot capture it this way.

;WITH task_space_usage AS (
-- SUM alloc/delloc pages
SELECT session_id,
request_id,
SUM(internal_objects_alloc_page_count) AS alloc_pages,
SUM(internal_objects_dealloc_page_count) AS dealloc_pages
FROM sys.dm_db_task_space_usage WITH (NOLOCK)
WHERE session_id <> @@SPID
GROUP BY session_id, request_id
)
SELECT TSU.session_id,
TSU.alloc_pages * 1.0 / 128 AS [internal object MB space],
TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
EST.text,
-- Extract statement from sql text
ISNULL(
NULLIF(
SUBSTRING(
EST.text,
ERQ.statement_start_offset / 2,
CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset THEN 0 ELSE( ERQ.statement_end_offset - ERQ.statement_start_offset ) / 2 END
), ''
), EST.text
) AS [statement text],
EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests ERQ WITH (NOLOCK)
ON TSU.session_id = ERQ.session_id
AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP
WHERE EST.text IS NOT NULL OR EQP.query_plan IS NOT NULL
ORDER BY 3 DESC, 5 DESC

The following query shows space usage per session. Once a session is closed, you cannot capture it this way.

SELECT DES.session_id AS [SESSION ID],
Db_name(DDSSU.database_id) AS [DATABASE Name],
host_name AS [System Name],
program_name AS [Program Name],
login_name AS [USER Name],
status,
( user_objects_alloc_page_count * 8 ) AS
[SPACE Allocated FOR USER Objects (in KB)],
( user_objects_dealloc_page_count * 8 ) AS
[SPACE Deallocated FOR USER Objects (in KB)],
( internal_objects_alloc_page_count * 8 ) AS
[SPACE Allocated FOR Internal Objects (in KB)],
( internal_objects_dealloc_page_count * 8 ) AS
[SPACE Deallocated FOR Internal Objects (in KB)],
cpu_time AS [CPU TIME (in milisec)],
total_scheduled_time AS
[Total Scheduled TIME (in milisec)],
total_elapsed_time AS
[Elapsed TIME (in milisec)],
( memory_usage * 8 ) AS [Memory USAGE (in KB)],
CASE is_user_process
WHEN 1 THEN 'user session'
WHEN 0 THEN 'system session'
END AS [SESSION Type],
row_count AS [ROW COUNT]
FROM tempdb.sys.dm_db_session_space_usage AS DDSSU
INNER JOIN sys.dm_exec_sessions AS DES
ON DDSSU.session_id = DES.session_id
ORDER BY [space allocated for internal objects (in kb)] DESC

Once you find the session ID that is using the most TempDB space, pass it to the following query to find the code responsible. In one of my other cases, a poorly written query used more than 100 GB of space.

SELECT TEXT
FROM sys.dm_exec_connections
CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle)
WHERE session_id = (181)

 

Issue 2:

How to solve TempDB contention and improve performance

This is another issue we commonly hit with TempDB.

An easy way to find TempDB contention is with sp_WhoIsActive, plus some additional TempDB analysis.

You can also see the overall system wait type.

Latch contention can occur on the TempDB allocation pages: GAM, SGAM and PFS. It is common, but when there are a lot of hits it becomes a performance issue, since the wait queue grows.

If you see many waits of type PAGELATCH or PAGEIOLATCH against TempDB PFS, GAM and SGAM pages ([wait_type]:[database_name]:[file_id](page_type)), then you have contention that needs to be fixed to improve performance.
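Besides sp_WhoIsActive, a quick way to see the contention live is to sample the waiting tasks; this is only a sketch, filtering on PAGELATCH waits against database ID 2 (TempDB), where the resource description is in dbid:fileid:pageid form:

SELECT  owt.session_id,
        owt.wait_type,
        owt.wait_duration_ms,
        owt.resource_description      -- e.g. 2:1:1 (PFS), 2:1:2 (GAM), 2:1:3 (SGAM)
FROM    sys.dm_os_waiting_tasks AS owt
WHERE   owt.wait_type LIKE N'PAGELATCH%'
  AND   owt.resource_description LIKE N'2:%';   -- database ID 2 = tempdb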

 

We can fix the contention by creating more data files of equal size; more data files mean more allocation pages (GAM, SGAM and PFS per data file).

If you cannot keep the files equally sized (because one file is already very big), you can use trace flag 1117, which forces all the files in a filegroup to grow together. Note that it applies to other databases on the instance as well.

The easiest way to alleviate tempdb allocation contention is to enable trace flag 1118 and to add more tempdb data files. 

http://www.sqlskills.com/blogs/paul/correctly-adding-data-files-tempdb/

 

How many data files? It depends. The recommendation from Paul Randal:

Then if you have less than 8 logical cores, create the same number of data files as logical cores. If you have more than 8 logical cores, create 8 data files and then add more in chunks of 4 if you still see PFS contention. Make sure all the tempdb data files are the same size too.
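A sketch of the fix, assuming a hypothetical T:\TempDB path and an instance older than SQL Server 2016 (where trace flags 1117/1118 still apply; from 2016 onwards tempdb behaves this way by default):

-- Add data files of the same size as the existing ones (repeat per file needed)
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 10240MB, FILEGROWTH = 512MB);
GO
-- Pre-2016 only: uniform extents (1118) and grow-all-files-together (1117)
DBCC TRACEON (1118, -1);
DBCC TRACEON (1117, -1);
-- Add -T1117 -T1118 as startup parameters so they persist across restarts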

 

Temporary and permanent fix

We can reduce the file size by shrinking the files. Try to shrink the files first; if they do not shrink, free the procedure cache and shrink again. You can run DBCC FREEPROCCACHE more than once until the space used by the files drops to where you want it.

Note: FREEPROCCACHE clears the plan cache, so plans will have to be compiled and cached again.

This is a temporary fix and does not require a restart of the SQL Server service. Shrinking TempDB occasionally is fine, but not too frequently, since it may lead to external disk fragmentation.

USE TEMPDB;
GO
DBCC SHRINKFILE (TEMPDEV,10000) -- Initial size with 10 GB
DBCC FREEPROCCACHE
DBCC SHRINKFILE (TEMPDEV,10000)

When we run FREEPROCCACHE with a shrink repeatedly, you can sometimes see the following error. I am not sure exactly why – it looks like an internal file allocation issue – but it can be worked around by increasing the file size by a few MB; for example, a 1000 MB file can be increased to 1005 MB.

File ID 1 of database ID 2 cannot be shrunk as it is either being shrunk by another process or is empty.

Msg 0, Level 11, State 0, Line 0

A severe error occurred on the current command. The results, if any, should be discarded.

Sometimes freeing the cache will not help if there is no free space in the file. Use the following query to check the free space.

USE [tempdb]
SELECT
[name]
,CONVERT(NUMERIC(10,2),ROUND([size]/128.,2)) AS [Size]
,CONVERT(NUMERIC(10,2),ROUND(FILEPROPERTY([name],'SpaceUsed')/128.,2)) AS [Used]
,CONVERT(NUMERIC(10,2),ROUND(([size]-FILEPROPERTY([name],'SpaceUsed'))/128.,2)) AS [Unused]
FROM [sys].[database_files]

In another case, row versioning was enabled for a database and its version store prevented TempDB from shrinking; I disabled it temporarily, shrank TempDB, and re-enabled it. That helped.

--Check which databases have RCSI / snapshot isolation enabled
select name, is_read_committed_snapshot_on, snapshot_isolation_state, snapshot_isolation_state_desc, * from sys.databases
--where is_read_committed_snapshot_on = 1
order by 1
--Temporarily disable read committed snapshot (RCSI) for the database
alter database DBname set READ_COMMITTED_SNAPSHOT off with rollback after 30 seconds
go
use tempdb
go
dbcc shrinkfile (tempdev, 10000) --Shrink the data file to 10 GB (10000 MB)
go
select (size*8)/1024.0 as FileSizeMB from sys.database_files --check the new size
go
--Re-enable read committed snapshot (RCSI)
alter database DBname set READ_COMMITTED_SNAPSHOT on with rollback after 30 seconds
go

The permanent fix is the hard one: we need to identify the queries or tasks that are hitting TempDB and tune them. In my case it was an 8 TB database, and when the predefined maintenance plan ran it occupied all of the TempDB space. I switched to Ola Hallengren's script – a fantastic script that gives much more control than a maintenance plan – and skipped the non-clustered indexes and a historical, non-critical very large table. That reduced both the TempDB usage and the job run time. I only had minor contention, so I did not create more data files.

Note: my case was special – the very large table was pure dump data, so I removed it from CHECKDB. Be careful before excluding any of your tables. Skipping non-clustered indexes is not always a good idea; instead, you can run a split CHECKDB: http://www.sqlskills.com/blogs/paul/checkdb-from-every-angle-consistency-checking-options-for-a-vldb/

In another case, a developer's code used all of the TempDB space and also caused heavy contention; that did improve after creating more data files. Use sp_WhoIsActive and the other scripts to find the exact issue and fix it based on what you see in your system – I have mixed a couple of different issues together in this post.

 


How to solve an LSN mismatch in SQL Server

 

Many times we face an LSN mismatch issue in AlwaysOn and other HA technologies. It is a bit hard to find the missing transaction log backup to apply, since hundreds or thousands of log backups are generated (depending on the log backup frequency), and the backup job can run on any secondary AlwaysOn replica.

Think about a VLDB whose backups live in a different data center, with the database out of sync at the DR site because of an LSN mismatch. For an 8 TB VLDB we cannot simply take a full or differential backup to fix this, since it would take far too long, and restoring a backup stored on a CIFS share in another data center over the WAN would kill both performance and time.

 

LSN: every record in the SQL Server transaction log is uniquely identified by a log sequence number (LSN). Database restores only work when backups are applied in LSN order, with no break in the log chain.

It is easy once we understand the LSN chain a bit internally. Let me try to show that in this post.

Example:

For the first log backup, the first LSN is 100 and the last LSN is 200; for the second log backup, the first LSN should be 200 and the last LSN 300; for the third, the first LSN should be 300 and the last LSN some further number – and so the chain goes on.
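You can see that chain directly in msdb; here is a small sketch (the database name is a placeholder) that lists recent log backups with their first/last LSNs, so you can spot where a first_lsn stops matching the previous last_lsn:

SELECT TOP (50)
       b.database_name,
       b.type,                 -- D = full, I = differential, L = log
       b.backup_start_date,
       b.backup_finish_date,
       b.first_lsn,
       b.last_lsn
FROM   msdb.dbo.backupset AS b
WHERE  b.database_name = N'dba3'
  AND  b.type = 'L'
ORDER BY b.backup_finish_date DESC;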

How do we track where the LSN chain breaks, and how do we fix it? There are several methods I have used; which one fits depends on the issue and situation. Let me show the methods so you can try them out.

Try this in a test environment, not in production: an easy way to break the LSN chain for an AlwaysOn database.

Remove a database from the AlwaysOn group on a secondary:

ALTER DATABASE [dba3] SET HADR OFF;

Take a couple more log backups on the primary:

DECLARE @MyFileName varchar(200)
SELECT @MyFileName='\\Sharepatht\dba3_' + REPLACE(convert(nvarchar(20),GetDate(),120),':','-') + '.trn'
--select @MyFileName
BACKUP log dba3 TO DISK=@MyFileName

Try to add the database back on the secondary:

ALTER DATABASE [dba3] SET HADR AVAILABILITY GROUP = [AG-Test];

Error:

Msg 1478, Level 16, State 211, Line 1

The mirror database, has insufficient transaction log data to preserve the log backup chain of the principal database.  This may happen if a log backup from the principal database has not been taken or has not been restored on the mirror database.

Use the backup tables in msdb to retrieve the backup history and build dynamic restore commands.

Note: the WHERE-clause filters are important and will be modified in the steps below; for now, all the optional conditions are left commented out.

Dynamic Backup script:

 

SELECT  'restore database dba3 from disk= ''' +f.physical_device_name+''' with norecovery',
b.backup_finish_date,b.first_lsn,b.backup_size /1024/1024 AS size_MB,b.type,b.recovery_model,
b.server_name ,b.database_name,b.user_name, f.physical_device_name,b.*
FROM MSDB.DBO.BACKUPMEDIAFAMILY F
JOIN MSDB.DBO.BACKUPSET B
ON (f.media_set_id=b.media_set_id)
WHERE database_name like'dba3'
--and b.backup_finish_date>='2016-08-29 00:00:00.000'
--and b.first_lsn<=40000000007600001
-- and last_lsn= 39000000047800001
--and b.checkpoint_lsn =39000000047800001
--and database_backup_lsn =39000000047800001
--AND B.type='L'
--ORDER BY b.backup_finish_date desc
--order by b.first_lsn  desc

 

First method – a little tough, but good for understanding first/last LSNs and the restore failures caused by LSN mismatch. This can be used for any other DR LSN issue, not only AlwaysOn.

Step 1:

Run the dynamic backup script to find the latest transaction log backup on the server. Run it on all replica servers in the AlwaysOn group and make sure you have found the latest one.

Uncomment "ORDER BY b.backup_finish_date desc" so the latest backup shows in the first row. Take that backup file and pass it to the following restore command.

restore database dba3 from disk= '\\share\dba3_2016-08-29 09-43-58.trn' with norecovery

It will fail and display the required LSN – "includes LSN 41000000017300001 can be restored" – which means we need the backup whose LSN range includes this LSN.

 

To understand this better, below are example restore errors: one from restoring an old log backup and one from a very recent log backup.

An Old log backup error:

Msg 4326, Level 16, State 1, Line 1

The log in this backup set terminates at LSN 39000000023900001,

which is too early to apply to the database. A more recent log backup that includes LSN 41000000017300001 can be restored.

Msg 3013, Level 16, State 1, Line 1

RESTORE DATABASE is terminating abnormally.

The Latest log backup error:

Msg 4305, Level 16, State 1, Line 1

The log in this backup set begins at LSN 41000000017600001,

which is too recent to apply to the database. An earlier log backup that includes LSN 41000000017300001 can be restored.

Msg 3013, Level 16, State 1, Line 1

RESTORE DATABASE is terminating abnormally. 

I ran the two restore commands with the old and the latest log backups: one says "too early to apply", meaning its LSNs come before the required one, and the other says "too recent to apply", meaning its LSNs come after it.

Example:

39000000023900001 – Before/early LSN

41000000017300001 – Required/includes LSN

41000000017600001 – After/Recent LSN

 

It can be a little hard to find which backup set includes the required LSN.

If we know the time when the chain broke, you can run the restore commands one by one from that time (or just before) and find the right backup file. Alternatively, run "restore headeronly from disk = '\\share\dba3_2016-08-28 07-22-32.bak'" and compare its FirstLSN with the required LSN.

Again, this is tedious when lots of transaction log backups have been generated. In the example above, the LSNs from "restore headeronly" and from the "restore database" error are the same.

Step 2:

We know the required LSN, 41000000017300001, from the restore error.

In the dynamic backup script, add the condition "and b.first_lsn<=41000000017300001" to get the backups whose first LSN is earlier than (or includes) the required LSN, together with the "order by b.first_lsn desc" clause.

Copy the first_lsn from the first row, "41000000016700001". That is the breaking point. (The error said: a log backup that includes LSN "41000000017300001" can be restored.)

 

Step 3:

In the dynamic backup script, add "and b.first_lsn>=41000000016700001" with "ORDER BY b.backup_finish_date". This returns all the backup files that need to be applied, in sequence.

 

Another method: a bit easier, specifically for AlwaysOn.

Use the following query; it displays the exact required LSN in "s.truncation_lsn", which is the same as the first_lsn above (41000000016700001). The "s.recovery_lsn" and "s.last_hardened_lsn" columns show the recovery LSN, which matches the "includes LSN" (41000000017300001).

Find the required LSN using the AlwaysOn DMVs and note the truncation_lsn.

 

select s.database_id, db_name(s.database_id) as database_name, g.name as ag_name, gr.replica_server_name,
s.synchronization_health_desc, s.synchronization_state_desc,
s.truncation_lsn, s.recovery_lsn, s.last_hardened_lsn, s.last_sent_time, s.last_received_time, s.last_redone_lsn,
s.end_of_log_lsn, s.last_commit_lsn, s.last_sent_lsn, s.last_received_lsn
from sys.dm_hadr_database_replica_states s
join sys.availability_groups g on (s.group_id = g.group_id)
join sys.availability_replicas gr on (gr.group_id = g.group_id and gr.replica_id = s.replica_id) -- match the replica, not just the group
where db_name(s.database_id) = 'dba3'
and s.synchronization_health_desc = 'NOT_HEALTHY'
-- change the DB name and health status as needed
--and s.synchronization_health_desc <> 'HEALTHY'

For more about columns read the MS DMVs link:

https://msdn.microsoft.com/en-us/library/ff877972.aspx

 

Run the dynamic backup script with a filter on the "s.truncation_lsn" value, "41000000016700001":

SELECT  'restore database dba3 from disk= ''' +f.physical_device_name+''' with norecovery',
b.backup_finish_date,b.first_lsn,b.last_lsn,b.database_backup_lsn,b.checkpoint_lsn,
b.type,b.is_copy_only,b.recovery_model,b.backup_size /1024/1024 AS size_MB,
b.server_name ,b.database_name,b.user_name, f.physical_device_name,b.*
FROM MSDB.DBO.BACKUPMEDIAFAMILY F
JOIN MSDB.DBO.BACKUPSET B
ON (f.media_set_id=b.media_set_id)
WHERE database_name like'dba3'
--and b.backup_finish_date>='2016-08-29 05:50:00.000'
AND b.first_lsn>=41000000016700001
AND B.type='L'
ORDER BY b.backup_finish_date

Copy the restore script and execute it on the server where the LSN chain broke, if you know the log backup job is configured and runs only on that server (commonly the case in a two-node AlwaysOn setup). If the log backups run on more than one server, see the following section and use that approach.

Join the database back into the AlwaysOn group:

ALTER DATABASE [dba3] SET HADR AVAILABILITY GROUP = [AG-Test];

 

How to fix it when the log backup runs on more than one secondary replica at the time of the break.

I had a case with more than one replica server: two in the primary data center and two in the DR data center.

We can use the commands and steps above, run them on all the available replica servers, compare the results and prepare the restore commands. But that takes time and can get confusing, so I used xp_cmdshell to list the files in the backup folder, insert them into a table, and generate the restore commands from there.

To find the required transaction log backup file, we need the required LSN:

1. Get the required LSN either from the AlwaysOn DMV query or from the restore error.

2. Get the backup file name from the dynamic backup script by passing in the required LSN, then filter and compare it with the xp_cmdshell results.

This is from a different test, so the LSNs differ from the previous ones.

1. Use the AlwaysOn DMV query above to find the required LSN; it will show all the replica servers, so just note the required LSN ("s.truncation_lsn") for the unhealthy database/server.

2. Use the dynamic backup script, filter it with "AND b.first_lsn>=41000000021500001" and "ORDER BY b.backup_finish_date", and note the "f.physical_device_name". Here that is "\\share\dba3_2016-08-30 05-10-00.trn".

Step 1: Enable the xp_cmdshell

 

EXEC sp_configure 'show advanced options', 1; 
RECONFIGURE; 
EXEC sp_configure 'xp_cmdshell', 1; 
RECONFIGURE;

Step 2: Create a table and load the backup file list into it

--drop table tbl_backup_filename
create table tbl_backup_filename (id int identity, Bak_filename varchar(500))
insert into tbl_backup_filename (Bak_filename) -- explicit column list, since id is an identity column
exec xp_cmdshell 'dir \\share\SQL-Backup\QA\DBA_Test /b /O:D' -- bare format, sorted by date

Step 3: Compare the backup file name (from the dynamic backup script) with the exported file names and note the identity ID.

select * from tbl_backup_filename
--\\share\dba3_2016-08-30 05-10-00.trn
select * from tbl_backup_filename where Bak_filename like '%dba3_2016-08-30 05-10-00%'

Step 4: Run the dynamic SQL to generate the restore script, filtering on the identity number you got from the previous step.

select 'restore headeronly from disk=''\\sclfilip13\MFGProcess\SQL-Backup\QA\DBA_Test\'+Bak_filename+'''', *
from tbl_backup_filename where id >=76
-- DB
select 'restore database dba3 from disk=''\\sclfilip13\MFGProcess\SQL-Backup\QA\DBA_Test\'+Bak_filename+''' with norecovery', *
from tbl_backup_filename where id >=76

Step 5: Disable the xp_cmdshell

 

EXEC sp_configure 'xp_cmdshell', 0; 
RECONFIGURE; 
EXEC sp_configure 'show advanced options', 0; 
RECONFIGURE;

 

If each replica server has its own backup folder, you need to export the file lists from all of them and compare across all the replica servers.

If anyone is interested in the test scripts, please drop me a note. It is always great to share my findings and learning with you all.

Happy learning!

 


