I am not a DBA and we don't have a dedicated one at our company, so when I found myself with a database file containing a huge table of 80 million rows taking up 35 GB, it fell to me to deal with it.
I re-planned my usage of the table, deciding that it would only keep a day's worth of data instead of everything going back three years, and re-jigged some code to suit.
I then set up a nightly job with a TRUNCATE TABLE statement to remove the data quickly at midnight each day.
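The nightly step itself is a single statement; a minimal sketch, assuming a hypothetical table name of dbo.DailyStats:

```sql
-- Empty the table by deallocating its pages; much faster than DELETE
-- and minimally logged. dbo.DailyStats is a made-up name for illustration.
TRUNCATE TABLE dbo.DailyStats;
```

Note that TRUNCATE TABLE cannot be used on a table referenced by a foreign key, in which case a DELETE would be needed instead.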
However, doing this alone does not reduce the size of the database file (MDF); you are left with reserved but unused space that you may want to reclaim.
The size of the database file before running any clean-up was 35 GB, and as the database uses the simple recovery model the size of the transaction log file was negligible. After much research I went through the following routine to reduce the size of the data file.
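If you are unsure which recovery model a database is using, it can be checked from the system catalog:

```sql
-- Lists every database with its recovery model
-- (SIMPLE, FULL or BULK_LOGGED)
SELECT name, recovery_model_desc
FROM sys.databases;
```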
A lot of people will warn you not to shrink a database file, as it will usually just grow again and cause disk fragmentation. That is correct, but if you have just removed a large table, as I had, this is one case where a DBCC SHRINKFILE command is useful.
Therefore this is the approach I followed this morning.
I first ran this SQL to find the current size of the tables and indexes, plus any reserved and unused space within each table. It also lists the tables in order of reserved size, so I could see which had the most space to reclaim and whether it matched the table I thought was causing the problem (it did).
CREATE TABLE #t (name SYSNAME, rows CHAR(11), reserved VARCHAR(18),
                 data VARCHAR(18), index_size VARCHAR(18), unused VARCHAR(18))

DECLARE @Sizes TABLE (name SYSNAME, rows INT, reserved INT,
                      data INT, index_size INT, unused INT)

-- ensure correct values are returned by using @updateusage
-- see http://msdn.microsoft.com/en-us/library/ms188776.aspx
EXEC sp_msforeachtable 'INSERT INTO #t EXEC sp_spaceused ''?'', @updateusage = N''TRUE'';'

-- strip the trailing " KB" from each sp_spaceused value and convert to INT
INSERT INTO @Sizes
SELECT name,
       rows,
       CAST(SUBSTRING(reserved, 1, LEN(reserved) - 3) AS INT),
       CAST(SUBSTRING(data, 1, LEN(data) - 3) AS INT),
       CAST(SUBSTRING(index_size, 1, LEN(index_size) - 3) AS INT),
       CAST(SUBSTRING(unused, 1, LEN(unused) - 3) AS INT)
FROM   #t

DROP TABLE #t

SELECT CAST(SUM(data) AS VARCHAR) + ' KB'       AS 'Data Size',
       CAST(SUM(reserved) AS VARCHAR) + ' KB'   AS 'Reserved Size',
       CAST(SUM(index_size) AS VARCHAR) + ' KB' AS 'Index Size',
       CAST(SUM(unused) AS VARCHAR) + ' KB'     AS 'Unused Size'
FROM   @Sizes

SELECT * FROM @Sizes ORDER BY reserved DESC
I then ran this script, which I found online, to check whether there was space that could be reclaimed.
SELECT name,
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS AvailableSpaceInMB
FROM   sys.database_files;
This showed me that there was indeed a good 30 GB or so that could be removed.
Once I had worked out the new size in megabytes that I wanted to shrink the MDF file down to, I ceased all activity on the database to help speed up the process and prevent any errors.
I then ran the following command: DBCC SHRINKFILE (details here)
To find the logical file name of the MDF (data) or LDF (log) file that you need to shrink, run this command:
sp_helpdb [database name]
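If you prefer not to use sp_helpdb, the same information is available from the sys.database_files catalog view for the current database:

```sql
-- type_desc is ROWS for the data file and LOG for the log file
SELECT name AS LogicalName, physical_name, type_desc
FROM sys.database_files;
```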
Once everything was ready I ran the following command to shrink the data file down to 6 GB (the value is passed in as MB), which left some room for the database to grow into.
DBCC SHRINKFILE([logical file name], 6144);
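It is also worth knowing that if you only want to release the free space at the end of the file, without moving any pages (and therefore without causing fragmentation), SHRINKFILE has a TRUNCATEONLY option. The logical file name below is a placeholder:

```sql
-- Release unused space from the end of the file; no pages are moved,
-- so no target size can be specified with this option
DBCC SHRINKFILE (MyDB, TRUNCATEONLY);
```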
After running the command I did not see an immediate difference when I checked the properties of the database in MS Management Studio.
Therefore I ran a job to re-index my tables and rebuild the statistics.
If you don't already have a nightly/weekly MS Agent job set up to do this for you then the following script of mine might be useful for you: ReIndex or ReOrganize your MS SQL indexes.
This script allows you to set a percentage, e.g. 30%; if an index is fragmented above that threshold a full REBUILD of the index is carried out, otherwise a REORGANIZE is performed.
After the rebuilding is done, a call to EXEC sp_updatestats is made to update the system's statistics.
If you are going to use this SQL code, check that the parameters within the stored procedure are to your liking, and turn the DEBUG flag on if you require it.
The script will show you the fragmentation level before and after the script has run so that you can see how much difference it has made.
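The heart of any such script is a fragmentation check followed by a rebuild-or-reorganise decision; here is a rough sketch, assuming a 30% threshold and made-up table and index names:

```sql
-- Report fragmentation for every index in the current database
SELECT OBJECT_NAME(ips.object_id)       AS TableName,
       i.name                           AS IndexName,
       ips.avg_fragmentation_in_percent AS FragPercent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN   sys.indexes i
       ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Rebuild if badly fragmented, otherwise just reorganise
-- (dbo.DailyStats and IX_DailyStats are hypothetical names;
--  index_id 1 is the clustered index)
IF (SELECT avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.DailyStats'), 1, NULL, 'LIMITED')) > 30
    ALTER INDEX IX_DailyStats ON dbo.DailyStats REBUILD;
ELSE
    ALTER INDEX IX_DailyStats ON dbo.DailyStats REORGANIZE;
```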
After doing this, when I went to Database > Properties > Files, I could see the new sizes.
| Logical Name | File Type | Filegroup | Initial Size (MB) | Autogrowth | Path | File Name |
| MyDB | Rows Data | PRIMARY | 6144 | By 1 MB, unrestricted growth | C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\DATA | MyDB.mdf |
| MyDB_log | Log | Not Applicable | 7377 | By 10 Percent, restricted growth | C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data_logs | MyDB_log.ldf |
This clearly shows that my database MDF file has been reduced from 35 GB to 6 GB.
If anyone else has any comments or notes on best practice, I would be glad to hear them.