Category: Microsoft SQL Server

Solving SQL Server High CPU with IIS Request Filtering

The other day I was troubleshooting 100% CPU utilization on a SQL Server 2008 database server. The server hosted 100 or so databases of varying sizes, none larger than a few hundred MB, and each database had a corresponding web site on a separate web server. Since the server hosted quite a few databases, the high CPU needed to be resolved quickly because it was causing issues for everyone. High CPU on a database server can often be symptomatic of an issue occurring outside the server. In this case the real issue was in fact being caused by a SQL Injection attack on a web server.

Top Queries by Total CPU Time

The knee-jerk reaction when experiencing high CPU may be to stop it immediately, either by restarting services or recycling app pools, but letting it run temporarily will help you isolate the cause. SQL Server 2008 has some great built-in reports to help track down CPU utilization. On this occasion I used the Top Queries by Total CPU Time report. You can get to this report by right-clicking the server name in SQL Server Management Studio and then selecting Reports.

The Top Queries by Total CPU Time report takes a few minutes to run, but once it completes it provides a wealth of information. You get a Top 10 report clearly showing which queries and databases are consuming the most CPU on the server at that moment. Using this report I was able to see that one of the databases on the server had 4 different queries running that were contributing to the high CPU. Now I could focus my attention on this one problematic database and hopefully resolve the high CPU.

SQL Profiler and Database Tuning Advisor

Now that I knew which database was causing the problems, I fired up SQL Profiler for just a few minutes to get a better understanding of the activity occurring within the database. Looking at the high number of Reads coming from the app named "Internet Information Services", I was starting to realize that web site activity was hitting the database pretty hard. I could also see plaintext data being inserted into the database, and it was clearly spam.

Before I turned my attention to the web site, however, I wanted to see if there could be any performance improvement using the Database Engine Tuning Advisor, since I had the valuable profiler trace data. The DTA analyzes the database activity and provides a SQL script with optimizations using indexes, partitioning, and indexed views. Usually with DTA I'll see a 5-10% performance improvement, so I was excited to see an estimated 97% improvement!


Preventing SQL Injection with IIS Request Filtering

After I applied the optimization script from the Database Engine Tuning Advisor, the CPU utilization on the database server improved considerably. However, I knew the web site was experiencing suspicious activity, so I used Log Parser to pull some reports from the site's traffic log. Using the query below I could see the most frequently used querystring values, and it was obvious the site was experiencing a SQL Injection attack.

logparser.exe -i:iisw3c "select top 20 count(*), cs-uri-query from ex140702.log
group by cs-uri-query order by count(*) desc" -rtp:-1 > file.txt


With attacks like this a natural inclination is to start blocking IP addresses. Unfortunately, sophisticated attacks use a variety of IP addresses, so as soon as you block a few addresses, malicious requests from new ones take over. The best solution is to block the malicious requests with Request Filtering, so I quickly added a few rules to block keywords I had seen in my Log Parser reports.

Implementing the IIS Request Filtering rules stymied the SQL Injection attack. Using the Log Parser query below I could see the HTTP status codes of all the requests hitting the site with the new rules in place.

SELECT STRCAT(TO_STRING(sc-status), STRCAT('.', TO_STRING(sc-substatus))) AS Status, COUNT(*) AS Total
INTO TopStatusCodes.txt FROM w3svc.log
GROUP BY Status ORDER BY Total DESC


Request Filtering uses HTTP substatus 404.18 when a query string sequence is denied. Looking at the Log Parser report, you can see that 50,039 requests were blocked by the new Request Filtering rules.
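For reference, deny rules like these can also be defined directly in web.config under Request Filtering. This is only a sketch; the sequences shown are hypothetical examples, not the actual keywords from this attack:

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <denyQueryStringSequences>
          <!-- Hypothetical examples; use the sequences seen in your own logs -->
          <add sequence="DECLARE" />
          <add sequence="CAST(" />
        </denyQueryStringSequences>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```

Requests whose query string contains a denied sequence are rejected with the 404.18 substatus.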

An Ounce of Prevention…

The web site that had been attacked hosted free cooking recipes and allowed visitors to submit their own. Unfortunately, the owner's goodwill was easily exploited: there was no form field validation on the site's submission page, and new recipes were automatically displayed on the site without being approved. This is a dangerous site design and should never have been deployed without basic security measures in place.

I did a quick select count(*) on the recipe table in the database and was amused by all the "delicious" recipes I found.
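The missing safeguard here is simple: never build SQL by concatenating user input. A minimal sketch of a parameterized insert, using Python's built-in sqlite3 module as a stand-in for the site's real data layer (the recipes table and submit_recipe helper are hypothetical):

```python
import sqlite3

# In-memory database standing in for the real recipe database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recipes (title TEXT, body TEXT)")

def submit_recipe(conn, title, body):
    # Parameterized INSERT: the values travel separately from the SQL
    # text, so hostile input is stored as plain data, never executed.
    conn.execute("INSERT INTO recipes (title, body) VALUES (?, ?)",
                 (title, body))

# A malicious submission is stored harmlessly as text.
submit_recipe(conn, "tasty'; DROP TABLE recipes; --", "spam")
count = conn.execute("SELECT COUNT(*) FROM recipes").fetchone()[0]
```

Combined with server-side validation and a moderation step before new recipes go live, this closes the hole that the Request Filtering rules were only papering over.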

In Summary

SQL Server 2008 has several built-in reports, like Top Queries by Total CPU Time, to help investigate high CPU utilization. Running SQL Profiler provides a detailed view of database activity, and running the profiler output through the Database Engine Tuning Advisor can yield significant performance improvements. IIS Request Filtering is a powerful tool to block SQL Injection attacks against a web site. However, SQL Injection can be easily mitigated up front with basic data validation. Thanks for reading.

 

Reference Link – http://www.peterviola.com/solving-sql-server-high-cpu-with-iis-request-filtering/#comment-4458

Posted by Sheikvara

+919840688822, +919003270444

Script to take backups of all databases at a time in SQL Server

You can use the script below to take a backup of all databases on a SQL Server instance.

DECLARE @DBName varchar(255)

DECLARE DATABASES_CURSOR CURSOR FOR
SELECT DATABASE_NAME = db_name(s_mf.database_id)
FROM sys.master_files s_mf
WHERE
    -- ONLINE databases only
    s_mf.state = 0

    -- Only look at databases to which we have access
    AND has_dbaccess(db_name(s_mf.database_id)) = 1

    -- Exclude tempdb (it cannot be backed up)
    AND db_name(s_mf.database_id) NOT IN ('tempdb')
GROUP BY s_mf.database_id
ORDER BY 1

OPEN DATABASES_CURSOR

FETCH NEXT FROM DATABASES_CURSOR INTO @DBName

WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @DBFileName varchar(256)
    SET @DBFileName = datename(dw, getdate()) + ' - ' +
        replace(replace(@DBName, ':', '_'), '\', '_') + '.bak'

    EXEC ('BACKUP DATABASE [' + @DBName + '] TO DISK = N''D:\2008r2_backups\' +
        @DBFileName + ''' WITH NOFORMAT, INIT, NAME = N''' +
        @DBName + '-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 100')

    FETCH NEXT FROM DATABASES_CURSOR INTO @DBName
END

CLOSE DATABASES_CURSOR
DEALLOCATE DATABASES_CURSOR
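The @DBFileName expression above names each backup after the day of the week plus a sanitized database name. As a rough sketch of that naming logic in Python (the backup_file_name helper is hypothetical, just mirroring the T-SQL string handling):

```python
from datetime import date

def backup_file_name(db_name, today):
    # Mirrors the T-SQL: datename(dw, getdate()) + ' - ' + <name> + '.bak',
    # with ':' and '\' in the database name replaced by '_'.
    safe = db_name.replace(":", "_").replace("\\", "_")
    return f"{today.strftime('%A')} - {safe}.bak"
```

For example, `backup_file_name("MyDB", date(2014, 7, 2))` returns `"Wednesday - MyDB.bak"`. Because the name only carries the weekday, a week of daily full backups can coexist in the folder before INIT starts overwriting them.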


Script to get the estimated completion time of Backup, Restore, Index Rebuild or Reorg

Hi Friends, the script below will give you the estimated completion time of a Backup, Restore, Index Rebuild, etc.

SELECT command,
    s.text,
    start_time,
    percent_complete,
    CAST((DATEDIFF(s, start_time, GETDATE()) / 3600) AS varchar) + ' hour(s), '
        + CAST((DATEDIFF(s, start_time, GETDATE()) % 3600) / 60 AS varchar) + ' min, '
        + CAST((DATEDIFF(s, start_time, GETDATE()) % 60) AS varchar) + ' sec' AS running_time,
    CAST((estimated_completion_time / 3600000) AS varchar) + ' hour(s), '
        + CAST((estimated_completion_time % 3600000) / 60000 AS varchar) + ' min, '
        + CAST((estimated_completion_time % 60000) / 1000 AS varchar) + ' sec' AS est_time_to_go,
    DATEADD(second, estimated_completion_time / 1000, GETDATE()) AS est_completion_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) s
WHERE r.command IN ('RESTORE DATABASE', 'BACKUP DATABASE', 'RESTORE LOG', 'BACKUP LOG')
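The est_time_to_go column is plain integer arithmetic on estimated_completion_time, which sys.dm_exec_requests reports in milliseconds. The same breakdown, sketched in Python for clarity (the est_time_to_go function name is just borrowed from the column alias):

```python
def est_time_to_go(ms):
    # Mirrors the T-SQL expression: whole hours, then leftover minutes,
    # then leftover seconds, all via integer division on milliseconds.
    hours = ms // 3_600_000
    minutes = (ms % 3_600_000) // 60_000
    seconds = (ms % 60_000) // 1_000
    return f"{hours} hour(s), {minutes} min, {seconds} sec"
```

For example, an estimate of 3,723,000 ms formats as "1 hour(s), 2 min, 3 sec".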


SQL Server High Availability Features (Brief Description)

LOG SHIPPING:

Log shipping is a database-level high availability feature. Basically it works using log backup and restore.

  • It is a simple method of keeping an entire database on a different server.
  • It works with backup, copy, and restore jobs.
  • We need 3 servers:
    * Primary server
    * Secondary server
    * Monitor server (optional)
  • On the primary server the database is in the online state, where the applications or users are connected.
  • On the secondary server the database is in standby (read-only) mode, where we can read data.
  • On the secondary server T-Log backups are applied with either
    * Norecovery or
    * Standby
  • With the default job schedules (the backup, copy, and restore jobs each run every 15 minutes), it can take up to about 45 minutes for changes to reach the secondary database.
  • We can have multiple secondary servers.

Advantages:

  • To reduce downtime for the applications or users in case the primary server fails.
  • To implement load balancing, i.e. we can allow reads from the secondary server and manipulations (insert, update, delete) on the primary server.
  • To maintain multiple secondary databases.

Disadvantages:

  • No automatic failover; it supports only manual failover.
  • It can take up to about 45 minutes (with the default job schedules) to apply changes from the primary to the secondary database.
  • If the primary database crashes there might be data loss.

MIRRORING:

Mirroring is another high availability feature, available from SQL Server 2005. Previous versions support only the simpler high availability feature, Transaction Log Shipping. Log shipping has its own limitations: it doesn't support automatic failover, and there might be data loss. All of these limitations can be overcome with database mirroring.

Database mirroring supports automatic failover, and transactions are applied to the standby (mirror) server immediately once they are committed at the principal server. Unlike log shipping, there is no need for backup, copy, and restore operations and jobs.

  • On the principal server the database is in the ONLINE state.
  • On the mirror server the database is in a restoring state, which means it is not available for incoming requests.

Advantages and benefits:

  • Protection against database failures
  • Automatic failure detection and failover
  • Support for easy manual failover
  • Automatic client redirection
  • Multiple operating modes
  • No special hardware requirements
  • Minimized chance of data loss
  • Relative ease of setup and configuration

Note: Using the secondary database for reading purposes is not possible with mirroring, because the mirror database is in a restoring state.

Replication:

Replication allows you to maintain multiple copies of the same database at different locations. Log shipping and mirroring maintain complete database redundancy, whereas replication can maintain just part of the database (a set of required objects) at the users' locations. Changes made at the different user locations are synchronized back to the main server. It is an object-level high availability feature.

Advantages :

  • Improved performance
  • Reduced locking conflicts when multiple users are working
  • Improved availability
  • Easy maintenance
  • Sites can work independently, so each location can set up its own rules and procedures for working with its copy of the data.
  • Data is moved closer to the user.

  1. Snapshot Replication

  • The snapshot process is commonly used to provide the initial set of data and database objects for transactional and merge publications.
  • It copies and distributes data and database objects exactly as they appear at the current moment in time.
  • Snapshot replication is used to provide the initial data set for transactional and merge replication.
  • It can also be used when complete refreshes of data are appropriate (BOL).
  • Scenarios:
    * When the data is not changing frequently.
    * When we want to replicate a small amount of data.
    * To replicate lookup tables which are not changing frequently.

Note: We can go with snapshot replication if the data is not changing frequently; note also that there is a delay in replicating the data from the publisher to the subscriber (secondary database).


 

Creation of SQL Server Database (For Beginners)


We can create a database in 2 ways:

  1. Using GUI
  2. Using T-SQL Script

Using GUI:

Step 1: From the Windows Start Menu, select "Microsoft SQL Server", and then "SQL Server Management Studio". Once Management Studio starts, connect to the SQL instance, then right-click the Databases folder and select New Database.

Step 2: Enter a name in the "Database name" text box. For this example, we'll use the name "MyNewDB".

  1. Logical Name:

MyNewDB – the data file, where the actual data is saved on the hard drive.

MyNewDB_log – the log file, which keeps track of any changes to that data.

  2. File Type:

The data file is defined as ROWS and the log file is defined as LOG.

  3. Filegroup:

Primary: The primary data file contains the start-up information for the database and points to the other files in the database. User data and objects can be stored in this file or in secondary data files. Every database has one primary data file. The recommended file name extension for the primary data file is .mdf.

Transaction Log: The transaction log files hold the log information that is used to recover the database. There must be at least one log file for each database. The recommended file name extension for transaction logs is .ldf. We can add multiple log files to a database.

Secondary: Secondary data files are optional, user-defined, and store user data. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow.

  4. Initial Size (MB):

It is always recommended to give a user-defined database some initial size based on its average growth. SQL Server files can grow automatically from their originally specified size: when you define a file, you can specify a growth increment, and every time the file fills up it increases its size by that increment. If there are multiple files in a filegroup, they will not auto-grow until all the files are full; growth then occurs using a round-robin algorithm. The default value can be set to 100MB for the .mdf file and 1024MB for the .ldf file. Later you can analyze the average monthly DB growth and adjust the initial size of the .mdf file.

  5. Autogrowth:

It is always recommended to set autogrowth for both the .mdf and .ldf files. Initially we can set autogrowth to 512MB and later change the value based on the growth of the DB.

For the .mdf file it should be given in MB (recommended).

For the .ldf file it should be given in MB or as a percentage (e.g. 10%), depending on the expected size of your log file.

  6. Path:

Select the path where the .MDF and .LDF files should be placed, based on your requirements.

  7. File Name:

We do not need to provide a file name, since the files are generated automatically based on the logical name once you click OK. If you want to give a separate name, you can provide one.

Using T-SQL Script:

Use the below T-SQL script and execute it in master:

USE master
GO

CREATE DATABASE MyNewDB
ON
( NAME = MyNewDB,
  FILENAME = 'c:\program files\microsoft sql server\mssql\data\MyNewDB.mdf',
  SIZE = 102400KB,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 10MB )
LOG ON
( NAME = 'MyNewDB_log',
  FILENAME = 'c:\program files\microsoft sql server\mssql\data\MyNewDB_log.ldf',
  SIZE = 102400KB,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 10% )
GO

 

Reference Link – https://sqldbahouse.blogspot.in/2017/03/creation-of-sql-server-database-for.html?spref=fb

 
