SQL Server Blog Forum - Helping SQL Server DBAs and Developers

31 Dec 2016

VLDB (very large database) DBCC CHECKDB

Database corruption: DBCC CHECKDB for a very large database

We know SQL Server data ultimately lives on filesystem storage, and there is constant I/O (input/output) interaction between SQL Server and the storage subsystem, both in memory and on disk. The I/O subsystem plays a major role here: the vast majority of database corruption originates in the I/O path (controllers, disks, drivers and so on).

In this post, I am sharing a few things:

1. How important CHECKDB is.

2. How to fine-tune and use CHECKDB for VLDBs.

3. Methods of troubleshooting corruption issues.

Storage / VM Admin: Sent a graph showing that IOPS for the server spikes sharply once a week, between certain days.

DBA: Yes, we run the CHECKDB job for the VLDB once a week on those days. That is the likely cause: CHECKDB reads every allocated page in the database, which generates a lot of IOPS.

Storage / VM Admin: It is a huge spike on the VMs. Can you disable it next week to confirm the IOPS drop, and then run it monthly instead?

DBA: No, this is very important for data consistency and integrity checking.

The CHECKDB schedule was changed to monthly. All went well, until the day the database reported corruption.

Now what? Restore the latest full backup under a different database name and run CHECKDB against it. Surprise: that copy was corrupted too. The corruption was severe, so the options were restoring from an older backup or running REPAIR_ALLOW_DATA_LOSS, and the application could not tolerate a repair with data loss. We recovered through another application-level method, but that is a different story.

The point is that CHECKDB is very important. Run it at least before taking a full backup; it gives us a minimum level of protection from corruption.

 

Best options for a very large database (VLDB). The database is 10 TB+ and CHECKDB runs for more than two days; how do we reduce the run time? I faced this with one of my databases. I tried a different approach and got some good pointers from Paul Randal.

Me: My 10 TB database's CHECKDB runs for more than two days; it took 8+ hours when I excluded the non-clustered indexes. I hope I can go with that, since I can recreate a non-clustered index if it gets corrupted.

A response from Paul Randal: Sure - you can do that, but you won't know when your indexes are corrupt until queries start failing or getting wrong results. I don't recommend it. Backup, copy, restore, checkdb is the way to do it, or split the checks up using DBCC CHECKTABLE.

 

1. Initially, I skipped the non-clustered indexes; they can be dropped and recreated if they get corrupted. This will definitely reduce the run time. My case: from 2 days to 8 hours.

2. Use Ola Hallengren's DatabaseIntegrityCheck script. It has many useful parameters; a sketch follows below.

If your 10 TB database has a 500 GB table that is not critical - say it is very old dump data that can be re-imported from the original source file - you can skip that table. (That is a special case.)
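A minimal sketch of calling Ola Hallengren's DatabaseIntegrityCheck procedure; the database name and the exact parameter values are assumptions for illustration, so verify them against the script's documentation:

-- Weekly CHECKDB for a VLDB, skipping non-clustered index checks (as in point 1)
EXECUTE dbo.DatabaseIntegrityCheck
    @Databases = 'MyVLDB',      -- hypothetical database name
    @CheckCommands = 'CHECKDB',
    @NoIndex = 'Y',             -- skip non-clustered indexes
    @TimeLimit = 28800;         -- stop issuing new check commands after 8 hours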

3. Splitting the CHECKDB work is a good option for VLDBs.

There are two methods: 1. file and filegroup checks; 2. for the default single-filegroup (one MDF) database, a split table-by-table check.

Method 1: With files and filegroups, you can run DBCC CHECKFILEGROUP per filegroup. It is the easy option, but you need to check the sizes of the filegroups you plan to check each day; if the sizes are not similar across files, plan to combine them and run accordingly, as sketched below.
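A minimal sketch, assuming a database named MyVLDB with an archive filegroup named FG_Archive (both hypothetical names), checking one filegroup per night:

USE MyVLDB;
GO
-- Night 1: check a single filegroup; rotate to the next filegroup each night
DBCC CHECKFILEGROUP (N'FG_Archive') WITH NO_INFOMSGS, ALL_ERRORMSGS;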

Method 2: For single-file VLDBs, use a split CHECKDB.

“Figure out your largest tables (by number of pages) and split the total number into 7 buckets, such that there are a roughly equal number of database pages in each bucket.”

Example: put the largest table in the first bucket, distribute the remaining tables across the other buckets, and run DBCC CHECKTABLE per bucket as described in the post above. If your largest table has 100,000 pages, each daily bucket should total roughly 100,000 pages, as in the sketch below.
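A sketch of planning the buckets: rank the user tables by allocated pages, then check one bucket of tables per day (the table name is hypothetical):

-- Rank user tables by allocated pages to build roughly equal daily buckets
SELECT t.name AS table_name,
       SUM(ps.used_page_count) AS used_pages
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.tables AS t
    ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY used_pages DESC;

-- Day 1 bucket: the single largest table; per the sqlskills post, also run
-- DBCC CHECKALLOC and DBCC CHECKCATALOG once during the week
DBCC CHECKTABLE (N'dbo.BigFactTable') WITH NO_INFOMSGS, ALL_ERRORMSGS;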

4. One more interesting case, from Argenis Fernandez: a non-clustered index on a sparse column made the CHECKDB run time far worse.

5. A post by Aaron Bertrand covers trace flag usage and more.

You can use and combine methods 1 to 5 for VLDBs; they will reduce the run time. Test and choose whichever option suits your business, but do not stop running CHECKDB.

 

Steps to identify the corruption:

Step 1: Run DBCC CHECKDB at least once a week from an Agent job; this will report any corruption.

Step 2: Check the error logs daily. If you have a centralised server, automate an email that scans the error logs every hour for critical errors, or create an alert notification using an operator, as in the sketch below.

If there is corruption in a database, note the database name from the error message.
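A minimal sketch of the operator-based alert; the operator name is hypothetical and must already exist in msdb:

USE msdb;
GO
-- Fire an alert on severity 24 (fatal hardware/corruption) errors
EXEC dbo.sp_add_alert
    @name = N'Severity 24 errors',
    @severity = 24;
-- E-mail the operator when the alert fires (notification method 1 = e-mail)
EXEC dbo.sp_add_notification
    @alert_name = N'Severity 24 errors',
    @operator_name = N'DBA_Team',
    @notification_method = 1;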

Step 3: If you find any error, run a consistency check with options to get the exact corruption messages: DBCC CHECKDB ('DBname') WITH NO_INFOMSGS, ALL_ERRORMSGS

The CHECKDB output includes a hint about the minimum repair option that could fix the corruption. It is only a suggestion from SQL Server; in some memory-level corruption cases, a service restart clears the problem without an actual REPAIR_ALLOW_DATA_LOSS run. But you should know which cases a reboot actually helps.

http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2130-corruption-can-be-fixed-by-restarting-sql-server/

https://www.sqlskills.com/blogs/paul/misconceptions-around-database-repair/

Step 4: If you have good experience with the error and you think you can fix it without data loss, you can try. For example: non-clustered index corruption - drop and recreate the index (sketched below); some memory-level corruption - restart the SQL service; and so on.
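A minimal sketch for the non-clustered index case; the index and table names are hypothetical, and the definition must match the original index:

-- Recreate a corrupt non-clustered index in place
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (DROP_EXISTING = ON);
-- If the rebuild fails because the index cannot be read,
-- DROP INDEX and CREATE it fresh instead.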

There are cases where a restart of the SQL service fixes the inconsistency.


My case: I had a database that showed ONLINE in sys.master_files, and the data and log files were present in the physical filesystem, but I could not see any tables, and CHECKDB failed with the errors below.

 

Msg 1823, Level 16, State 2, Line 1

A database snapshot cannot be created because it failed to start.

Msg 7928, Level 16, State 1, Line 1

The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.

Msg 5030, Level 16, State 12, Line 1

The database could not be exclusively locked to perform the operation.

Msg 7926, Level 16, State 1, Line 1

Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.

Msg 9001, Level 21, State 1, Line 1

The log for database 'DB' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

The operating system returned error 21(The device is not ready.) to SQL Server during a read at offset 0x00001e49c26000 in file 'F:\Microsoft SQL Server\DATA\BI.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

We had a glitch in the storage subsystem; after it was fixed, the SQL service was restarted and the database came back healthy.

 

If you have no idea about the error, or need help from the SQL database corruption masters, you can get it at https://twitter.com/#sqlhelp. I have, many times.

Read Gail Shaw’s post http://www.sqlservercentral.com/articles/Corruption/65804/

What matters most is that we have a good, non-corrupted, up-to-date backup in hand for all production servers.

We have identified database corruption; what basic steps can we run? Different levels of corruption happen, and each needs its own steps. Here are some basic ones you can try.

1. Restore the database to a different server and storage subsystem and run CHECKDB there. For VLDBs this option often does not work, since it needs a large amount of additional storage and restore time.

2. If you have an up-to-date backup chain (including a tail-log backup), restore it in sequence (see the sketch after this list).

3. If you have no backup and the minimum-level repair options are not supported, then as a last resort use REPAIR_ALLOW_DATA_LOSS, which will repair the database with data loss (also sketched below).

4. Some corruption cannot be fixed by REPAIR_ALLOW_DATA_LOSS, so the only good way out is restoring a good backup. It is very important to back up the database after CHECKDB and to do test restores regularly.
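A minimal sketch of options 2 and 3; the database and file names are hypothetical, and any differential/log backups in the chain would be restored between the two RESTORE statements:

-- Option 2: capture the tail of the log, then restore the chain in sequence
BACKUP LOG MyDB TO DISK = N'G:\Backup\MyDB_tail.trn' WITH NO_TRUNCATE;
RESTORE DATABASE MyDB FROM DISK = N'G:\Backup\MyDB_full.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'G:\Backup\MyDB_tail.trn' WITH RECOVERY;

-- Option 3 (last resort): repair with possible data loss
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDB', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
ALTER DATABASE MyDB SET MULTI_USER;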

 

How can we prevent database corruption? We cannot truly prevent it, but we can do proactive DBA work:

Run CHECKDB weekly, before the full backup

Do rotational test restores of your databases

Enable CHECKSUM page verification (see the sketch below)

Schedule DBCC CHECKDB as an Agent job 🙂
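A one-line sketch for the checksum item, assuming a database named MyDB; with CHECKSUM page verification, SQL Server checksums each page on write and validates it on read, surfacing I/O-path corruption early:

ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;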

 


13 Sep 2016

TempDB database is Full and Optimization

What is TempDB, and what are the best practices for it?

TempDB is a system database, one per instance. It is common to, and shared by, all other databases. All temporary activity happens there, so yes, TempDB can definitely fill up and occupy more space, depending on the temporary tasks we run. Many kinds of activity land in TempDB.

The old best practice was to create TempDB on a separate disk with an estimated initial file size; those were the old days, and now most of us use disk arrays. The spindles and HDDs/SSDs are striped through RAID and shared across LUNs and pools, so check with your infra team about the disk configuration: how the disk is mounted or presented to the server, and whether it is a dedicated drive (common in DAS) or a disk array (common in SAN). SAN is the most common industry standard; we had both DAS and SAN, depending on the database application. Some RAID level will certainly be in use, so check that as well; a common recommendation is RAID 1+0, which is costly. It is always good to know what kind of storage, and which storage vendor, we run on, so we can plan and test properly. Some servers use advanced automated tiered storage, which is good; some have no tiering, and so on. DBAs have limited knowledge of storage, but it is good to learn from our SAN and VM admins :-).

Issue 1:

Which activities run in TempDB, and which of them is occupying the space?

The list is very long: user objects, internal objects and version stores are all held temporarily in TempDB. Read the TechNet article for more.

If it fills up often, we need to capture the tasks hitting TempDB and plan accordingly.

Find out the TempDB physical file usage:

-- Page counts are 8 KB pages; * 8.0/1024/1024 converts pages to GB
SELECT SUM(unallocated_extent_page_count) AS [free pages],
       SUM(unallocated_extent_page_count
           + user_object_reserved_page_count
           + internal_object_reserved_page_count
           + mixed_extent_page_count
           + version_store_reserved_page_count) * (8.0/1024.0/1024.0) AS [Total TempDB SizeInGB],
       SUM(unallocated_extent_page_count) * (8.0/1024.0/1024.0)       AS [Free TempDB SpaceInGB],
       SUM(user_object_reserved_page_count) * (8.0/1024.0/1024.0)     AS [user_objects_GB],
       SUM(internal_object_reserved_page_count) * (8.0/1024.0/1024.0) AS [internal_objects_GB],
       SUM(version_store_reserved_page_count) * (8.0/1024.0/1024.0)   AS [version_store_GB],
       SUM(mixed_extent_page_count) * (8.0/1024.0/1024.0)             AS [mixed_extents_GB]
FROM tempdb.sys.dm_db_file_space_usage;

We need to act based on the result of the above code: version store growth, internal objects, and so on.

TempDB DMV:

Monitor the disk space used by the user objects, internal objects, and version stores in the tempdb files.

sys.dm_db_file_space_usage - Returns space usage information for each file in the database.

Allocation or deallocation activity in tempdb at the session or task level

sys.dm_db_session_space_usage - Returns the number of pages allocated and deallocated by each session for the database.

sys.dm_db_task_space_usage - Returns page allocation and deallocation activity by task for the database.

These views can be used to identify large queries, temporary tables, or table variables that are using a large amount of tempdb disk space.

For more: the TechNet article on diagnosing tempdb disk space problems.

 

Useful queries to find out who is using my TempDB

The following code is originally from Gianluca Sartori and Deepak Biswal; I just copied it here. The MSDN article has more code; have a look and use what fits your case. This code is really cool and has helped me a lot.

The following query shows only active requests, joining to the sys.dm_exec_requests DMV. If the query has already finished, you cannot catch it this way.

;WITH task_space_usage AS (
    -- Sum allocated/deallocated pages per session and request
    SELECT session_id,
           request_id,
           SUM(internal_objects_alloc_page_count)   AS alloc_pages,
           SUM(internal_objects_dealloc_page_count) AS dealloc_pages
    FROM sys.dm_db_task_space_usage WITH (NOLOCK)
    WHERE session_id <> @@SPID
    GROUP BY session_id, request_id
)
SELECT TSU.session_id,
       TSU.alloc_pages * 1.0 / 128   AS [internal object MB space],
       TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
       EST.text,
       -- Extract the active statement from the full SQL text
       ISNULL(
           NULLIF(
               SUBSTRING(
                   EST.text,
                   ERQ.statement_start_offset / 2,
                   CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset
                        THEN 0
                        ELSE (ERQ.statement_end_offset - ERQ.statement_start_offset) / 2
                   END
               ), ''
           ), EST.text
       ) AS [statement text],
       EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests AS ERQ WITH (NOLOCK)
    ON  TSU.session_id = ERQ.session_id
    AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP
WHERE EST.text IS NOT NULL OR EQP.query_plan IS NOT NULL
ORDER BY 3 DESC, 5 DESC;

The following query shows session-level space usage. If the session has already closed, you cannot catch it this way.

SELECT DES.session_id                            AS [SESSION ID],
       DB_NAME(DDSSU.database_id)                AS [DATABASE Name],
       host_name                                 AS [System Name],
       program_name                              AS [Program Name],
       login_name                                AS [USER Name],
       status,
       (user_objects_alloc_page_count * 8)       AS [SPACE Allocated FOR USER Objects (in KB)],
       (user_objects_dealloc_page_count * 8)     AS [SPACE Deallocated FOR USER Objects (in KB)],
       (internal_objects_alloc_page_count * 8)   AS [SPACE Allocated FOR Internal Objects (in KB)],
       (internal_objects_dealloc_page_count * 8) AS [SPACE Deallocated FOR Internal Objects (in KB)],
       cpu_time                                  AS [CPU TIME (in ms)],
       total_scheduled_time                      AS [Total Scheduled TIME (in ms)],
       total_elapsed_time                        AS [Elapsed TIME (in ms)],
       (memory_usage * 8)                        AS [Memory USAGE (in KB)],
       CASE is_user_process
            WHEN 1 THEN 'user session'
            WHEN 0 THEN 'system session'
       END                                       AS [SESSION Type],
       row_count                                 AS [ROW COUNT]
FROM tempdb.sys.dm_db_session_space_usage AS DDSSU
INNER JOIN sys.dm_exec_sessions AS DES
    ON DDSSU.session_id = DES.session_id
ORDER BY [SPACE Allocated FOR Internal Objects (in KB)] DESC;

If you find a session ID that is using a lot of temp space, pass it to the following query to find the code consuming TempDB. In one of my cases, a poorly written query used more than 100 GB of space.

SELECT text
FROM sys.dm_exec_connections
CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle)
WHERE session_id = 181; -- the session ID found above

 

Issue 2:

How to solve TempDB contention and improve performance

This is the other TempDB issue we commonly get.

An easy way to find tempdb contention is using Whoisactive, plus some further digging into TempDB.

You can also check the overall system wait types.

Latch contention can occur on the tempDB allocation pages: GAM, SGAM and PFS. It is common, but with a high volume of hits it becomes a performance issue, since the wait queue grows.

If you see many waits of type PAGELATCH or PAGEIOLATCH on tempDB PFS, GAM and SGAM pages (the resource format is [wait_type]:[database_name]:[file_id](page_type)), then we have contention that needs to be fixed to improve performance, as the check below sketches.
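A minimal sketch for spotting the contention live; tempdb is database ID 2, and the allocation pages sit at well-known page numbers (2:1:1 is the first PFS page, 2:1:2 GAM, 2:1:3 SGAM):

-- Tasks currently waiting on tempdb page latches (allocation contention)
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
  AND resource_description LIKE '2:%';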

 

We can fix the contention by creating more data files of equal size; more data files give more allocation pages (GAM, SGAM and PFS per data file).

If you cannot create files of equal size (because one file is already very big), you can use trace flag 1117, which forces all files in a filegroup to grow together. Note it applies to every database on the instance, not just tempdb.

The easiest way to alleviate tempdb allocation contention is to enable trace flag 1118 and to add more tempdb data files. 

http://www.sqlskills.com/blogs/paul/correctly-adding-data-files-tempdb/
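A minimal sketch of enabling both flags instance-wide at runtime; on SQL Server 2016 and later this behavior is built into tempdb, so the flags only matter on older versions:

-- 1118: allocate uniform extents; 1117: grow all files in a filegroup together
DBCC TRACEON (1117, 1118, -1);
-- For a permanent setting, add -T1117 and -T1118 as startup parameters instead.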

 

How many data files? It depends. The recommendation from Paul Randal:

Then if you have less than 8 logical cores, create the same number of data files as logical cores. If you have more than 8 logical cores, create 8 data files and then add more in chunks of 4 if you still see PFS contention. Make sure all the tempdb data files are the same size too.
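A minimal sketch of adding one more data file; the logical name, path and sizes are assumptions, and every tempdb data file should end up the same size:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, -- hypothetical logical name
          FILENAME = N'F:\Microsoft SQL Server\DATA\tempdev2.ndf',
          SIZE = 10GB,
          FILEGROWTH = 512MB);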

 

Temporary and permanent fixes

We can reduce the file size by shrinking the files. Try to shrink the files first; if they will not shrink, free the procedure cache and shrink again. You can run FREEPROCCACHE more than once, until the space used by the files drops as far as you want.

Note: FREEPROCCACHE clears the procedure cache, so plans will be compiled and cached afresh.

It is a temporary fix, but it does not need a restart of the SQL service. Shrinking TempDB is fine, but not too frequently, since it may lead to external disk fragmentation.

USE tempdb;
GO
DBCC SHRINKFILE (tempdev, 10000); -- target size 10000 MB (10 GB)
DBCC FREEPROCCACHE;
DBCC SHRINKFILE (tempdev, 10000);

When we run repeated FREEPROCCACHE-plus-shrink cycles, you can sometimes see the following error. It looks like an internal file allocation issue, and it can be worked around by increasing the file size by a few MB, e.g. growing a 1000 MB file to 1005 MB.

File ID 1 of database ID 2 cannot be shrunk as it is either being shrunk by another process or is empty.

Msg 0, Level 11, State 0, Line 0

A severe error occurred on the current command. The results, if any, should be discarded.

Sometimes freeing the cache will not help, when there is no free space inside the file. Use the following query to check the free space.

USE [tempdb];
SELECT [name],
       CONVERT(NUMERIC(10,2), ROUND([size]/128., 2)) AS [Size (MB)],
       CONVERT(NUMERIC(10,2), ROUND(FILEPROPERTY([name],'SpaceUsed')/128., 2)) AS [Used (MB)],
       CONVERT(NUMERIC(10,2), ROUND(([size] - FILEPROPERTY([name],'SpaceUsed'))/128., 2)) AS [Unused (MB)]
FROM [sys].[database_files];

In another case, row versioning was enabled for a database and it prevented TempDB from shrinking; I disabled it temporarily, shrank the files, and re-enabled it. That worked for me.

--Check which DBs have row versioning / snapshot isolation enabled
SELECT name, is_read_committed_snapshot_on,
       snapshot_isolation_state, snapshot_isolation_state_desc
FROM sys.databases
--WHERE is_read_committed_snapshot_on = 1
ORDER BY 1;
GO
--Disable row versioning temporarily
ALTER DATABASE DBname SET READ_COMMITTED_SNAPSHOT OFF WITH ROLLBACK AFTER 30 SECONDS;
GO
USE tempdb;
GO
DBCC SHRINKFILE (tempdev, 10000); --shrink to 10000 MB (10 GB)
GO
SELECT (size*8)/1024.0 AS FileSizeMB FROM sys.database_files; --check the new size
GO
--Re-enable row versioning
ALTER DATABASE DBname SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK AFTER 30 SECONDS;
GO

The permanent fix is harder: we need to find the queries and tasks hitting TempDB and tune them. In my case it was an 8 TB database, and when the predefined maintenance plan ran, it occupied all the TempDB space. I switched to Ola Hallengren's scripts, which are fantastic and give much more control than a maintenance plan. I skipped the non-clustered indexes and a huge historical table that was not critical. That reduced both the TempDB usage and the job run time. I had only minor contention, so I did not create more data files.

Note: my case was special; the very large table was pure dump data, so I removed it from CHECKDB. Be sure before you exclude any of your tables. Skipping non-clustered indexes is not always good either; instead you can run a split CHECKDB: http://www.sqlskills.com/blogs/paul/checkdb-from-every-angle-consistency-checking-options-for-a-vldb/

In another case, a developer's code used all the TempDB space and there was heavy contention as well; it did improve after creating more data files. Use Whoisactive and the other scripts above to find the exact issue and fix what your system actually has, since I have mixed a couple of different issues into this post.

 

Muthukkumaran Kaliyamoorthy

I’m currently working as a SQL Server DBA in one of the top MNCs. I’m passionate about SQL Server, specializing in administration and performance tuning. I’m an active member of the SQL Server Central and MSDN forums, and I also write articles for SQL Server Central. For more, click here.
