Solving SQL Server High CPU with IIS Request Filtering

The other day I was troubleshooting 100% CPU utilization on a SQL Server 2008 database server. The server hosted 100 or so databases of varying sizes, though none was larger than a few hundred MB, and each database had a corresponding web site on a separate web server. Since the server hosted quite a few databases, the high CPU needed to be resolved quickly because it was causing issues for everyone. High CPU on a database server can often be symptomatic of an issue occurring outside the server. In this case the real issue was in fact being caused by a SQL Injection attack on a web server.

Top Queries by Total CPU Time

The knee-jerk reaction when experiencing high CPU may be to stop it immediately, either by restarting services or recycling app pools, but letting it run temporarily will help you isolate the cause. SQL Server 2008 has some great built-in reports to help track down CPU utilization. On this occasion I used the Top Queries by Total CPU Time report. You can get to this report by right-clicking on the server name in SQL Server Management Studio and then selecting Reports.

The Top Queries by Total CPU Time report will take a few minutes to run, but once it completes it provides a wealth of information. You'll get a Top 10 report clearly showing which queries and databases are consuming the most CPU on the server at that moment. Using this report I was able to see that one of the databases on the server had 4 different queries running that were contributing to the high CPU. Now I could focus my attention on this one problematic database and hopefully resolve the high CPU.
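For reference, the report draws its numbers from the plan cache, so you can get roughly the same picture from a query window using the DMVs. This is a sketch, not the exact query the report runs:

SELECT TOP 10
    DB_NAME(st.dbid) AS database_name,            -- can be NULL for some ad hoc batches
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,  -- total_worker_time is in microseconds
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;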

SQL Profiler and Database Tuning Advisor

Now that I knew which database was causing the problems I fired up SQL Profiler for just a few minutes. I wanted to get a better understanding of the activity occurring within the database. Looking at the high number of Reads coming from the app named "Internet Information Services", I was starting to realize that web site activity was hitting the database pretty hard. I could also see plaintext data being inserted into the database, and it was clearly spam.

Before I turned my attention to the web site, however, I wanted to see if there could be any performance improvement using the Database Engine Tuning Advisor, since I had the valuable profiler trace data. The DTA will analyze the database activity and provide a SQL script with optimizations using indexes, partitioning, and indexed views. Usually with DTA I'll see a 5-10% performance improvement. This time I was excited to see a 97% improvement!
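As a side note, the same analysis can be driven from the command line with the dta utility that ships with SQL Server, which is handy if you want to script it. The server, database, and file names below are placeholders; a minimal sketch:

dta -S MyServer -E -D MyDatabase -if MyProfilerTrace.trc -s TuningSession1 -of recommendations.sql

Here -E uses Windows authentication, -if points at the saved profiler trace to use as the workload, and -of writes the recommendations to a script you can review before applying.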


Preventing SQL Injection with IIS Request Filtering

After I applied the optimization script from the Database Engine Tuning Advisor, the CPU utilization on the database server improved considerably. However, I knew the web site was experiencing suspicious activity, so I used Log Parser to get some reports from the site's traffic log. Using the query below I could see the most frequently used querystring values, and it was obvious the site was experiencing a SQL Injection attack.

logparser.exe -i:iisw3c "select top 20 count(*), cs-uri-query from ex140702.log
group by cs-uri-query order by count(*) desc" -rtp:-1 > file.txt


With attacks like this a natural inclination is to start blocking IP addresses. Unfortunately, sophisticated attacks will use a variety of IP addresses, so as soon as you block a few addresses, malicious requests from new ones will take over. The best solution is to block the malicious requests with Request Filtering, so I quickly added a few rules to block keywords I had seen in my Log Parser reports.

Implementing the IIS Request Filtering rules stymied the SQL Injection attack. Using the Log Parser query below I could see the HTTP status codes of all the requests hitting the site with the new rules in place.

SELECT STRCAT(TO_STRING(sc-status), STRCAT('.', TO_STRING(sc-substatus))) AS Status, COUNT(*) AS Total
FROM w3svc.log TO TopStatusCodes.txt
GROUP BY Status ORDER BY Total DESC


Request Filtering uses the HTTP substatus 404.18 when a query string sequence is denied. From the Log Parser report I could see that 50,039 requests were blocked by the new Request Filtering rules.
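For reference, query string rules like these live under system.webServer/security/requestFiltering in web.config, so they can also be added without the IIS Manager UI. A minimal sketch; the denied sequences below are illustrative, not the exact keywords from this attack — take the real ones from your own Log Parser reports:

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <denyQueryStringSequences>
          <!-- illustrative sequences only; use the keywords from your logs -->
          <add sequence="DECLARE" />
          <add sequence="CAST(" />
          <add sequence="exec(" />
        </denyQueryStringSequences>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

Any request whose query string contains one of these sequences is rejected with the 404.18 substatus described above.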

An Ounce of Prevention…

The web site that had been attacked hosted free cooking recipes and allowed visitors to submit their own. Unfortunately, the owner's goodwill was easily exploited because there was no form field validation on the site's submission page, and new recipes were automatically displayed on the site without being approved. This is a dangerous site design and should never have been deployed without basic security measures in place.

I did a quick select count(*) on the recipe table in the database and was amused by all the delicious recipes I found.
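The cheapest fix on the database side is to stop concatenating submitted text into SQL statements. A minimal sketch in T-SQL, assuming a hypothetical dbo.Recipes table; with sp_executesql the visitor's input travels as a parameter value, never as executable SQL:

-- hypothetical table and values, for illustration only
DECLARE @Title nvarchar(200) = N'Chocolate Cake';
DECLARE @Body  nvarchar(max) = N'whatever the visitor typed, spam included';

EXEC sp_executesql
    N'INSERT INTO dbo.Recipes (Title, Body) VALUES (@p1, @p2);',
    N'@p1 nvarchar(200), @p2 nvarchar(max)',
    @p1 = @Title, @p2 = @Body;

The same principle applies in the web application itself: use parameterized commands rather than building SQL strings from form fields.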

In Summary

SQL Server 2008 has several built-in reports, like Top Queries by Total CPU Time, to help investigate high CPU utilization. Running SQL Profiler will provide detailed analysis of database activity. Running the profiler output through the Database Engine Tuning Advisor can yield significant performance improvements for the database. IIS Request Filtering is a powerful tool to block SQL Injection attacks against a web site. And of course, SQL Injection can be easily mitigated using basic data validation. Thanks for reading.

 

Reference Link – http://www.peterviola.com/solving-sql-server-high-cpu-with-iis-request-filtering/#comment-4458

Posted by Sheikvara

+919840688822, +919003270444

Splitting the Log Files by Day in MongoDB

https://docs.mongodb.com/v3.2/tutorial/rotate-log-files/


Rotate the log file by issuing the logRotate command from the admin database in a mongo shell:

use admin
db.runCommand( { logRotate : 1 } )

 

Note – If anyone has an automated script for the above command, please share it to my mail. A rough sketch of one approach follows below.
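As a starting point, here is a minimal sketch of such automation, assuming a Linux host with the mongo shell installed and mongod listening locally; the path and schedule are placeholders:

#!/bin/sh
# /etc/cron.daily/mongod-logrotate (hypothetical path; cron runs it once a day)
# Asks the local mongod to rotate its log, equivalent to running
# db.runCommand( { logRotate : 1 } ) from the admin database.
mongo admin --quiet --eval 'db.runCommand( { logRotate : 1 } )'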

 

Posted by Sheikvara

+919840688822, +919003270444

mail id – ahmedonmail@gmail.com

 

 

 

Enterprise Baked Privileged Access for Microsoft Cloud (no-Hybrid)


Introduction

Many organizations, in order to comply with information security standards, maintain baseline security safeguards, which include privileged access management (PAM) and monitoring. In legacy pre-cloud environments, many PAM vulnerabilities, i.e. weak passwords or creeping privileges, were identified with rather low/medium risk. With emerging cloud computing, legacy security boundaries like firewalls, on-premises directories, etc. do not reduce the risk of weak or simply missing PAM processes. Microsoft Cloud has attracted maybe not all, but definitely a majority of, customers with large and complex environments. Some of the customers decided to maintain a Hybrid solution with Azure Active Directory Connect (AADConnect) synchronizing on-premises Active Directory Domain Services (AD DS) with cloud-based Azure Active Directory (Azure AD or WAAD). However, there are companies that decided to manage Azure AD as a separate target system, a security boundary with no on-premises systems impact (i.e. AD DS). These companies use pure-cloud (no-Hybrid) solutions (see Microsoft Cloud – Azure AD: doing it right) where…

View original post 2,098 more words

Security: Auditing and HIPAA Compliance for the Health Domain

HIPAA stands for the Health Insurance Portability and Accountability Act, a set of mandates introduced to protect sensitive patient information. Any US organisation that deals with protected health information (PHI), including subcontractors and business associates, must comply with HIPAA.

There are two main rules associated with HIPAA: the Privacy Rule and the Security Rule.

The Privacy Rule relates to the saving, accessing and sharing of personal information, whereas the Security Rule defines the standards for protecting health information that is created, received, or transmitted electronically. Such information is otherwise referred to as electronic protected health information (ePHI).

Organisations may wish to host their data with a third-party provider. These hosting providers must also comply with the HIPAA regulations. There must be physical safeguards in place, which restrict access, and provide policies relating to workstations and electronic media. Likewise, there must also be technical safeguards put in place which prevent unauthorised access, encrypt data transmission and provide audit reports about relevant system changes. There must also be policies in place to deal with disaster recovery and offsite backup.

Failure to comply with the HIPAA regulations could result in hefty fines being issued by the Office for Civil Rights (OCR). Likewise, lawsuits may be filed in the event of an ePHI breach. Organisations must report any breach to both the OCR and the patients involved.

To comply with HIPAA, it is very important that you use a sophisticated suite of auditing tools to monitor system activity. You will need to be able to audit and provide reports about who logged in to your system, where they logged in from, when they logged in, and what protected health information (PHI) was accessed when they logged in. You will also need to audit password changes and failed login attempts. Additionally, you will need to know when your system software was last updated, what programs have been installed, by whom, and when.

Below are some examples which illustrate the importance of such auditing operations:

Scenario 1: It is common for attackers to try to guess a system password by trying out a large number of username and password combinations. Your system can record these attempts, and look for patterns which suggest that the activity is suspicious. For example, if the system recorded 1,000 failed password attempts within 5 minutes, you can be fairly sure that it was an attack. Auditing solutions, such as those provided by Lepide, use real-time threshold alerting to quickly identify such attacks.
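As one concrete illustration using Microsoft Log Parser: a sketch that counts failed Windows logons (event ID 4625) in five-minute buckets, assuming the Security event log is readable on the machine; dedicated auditing products essentially wrap this kind of check in real-time alerting:

logparser.exe -i:EVT "SELECT QUANTIZE(TimeGenerated, 300) AS Period, COUNT(*) AS FailedLogons
FROM Security WHERE EventID = 4625
GROUP BY Period HAVING COUNT(*) > 1000 ORDER BY Period" -rtp:-1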

Scenario 2: Back in 2013, Walgreens – the second-largest pharmacy store chain in the United States – was ordered to pay $1.4 million as a result of a breach of confidential patient information. Had they been able to quickly determine who was accessing the confidential information, and when, the damage caused by this kind of breach could have been mitigated.

Scenario 3: The HIPAA auditor will request evidence that you have retained your organisation's audit log for six years or more. They will want verification that all crucial information is included, and that daily reviews have been carried out. Relying on the native system logs to provide such verification would be a very cumbersome routine. Most modern auditing tools enable administrators to carry out such tasks with ease.

Scenario 4: Finally, in the event of a security breach, auditing the system logs will help forensic investigators find out the cause of the attack, and propose a strategy for strengthening the security policy, thus mitigating future attacks. Without the ability to clearly analyse the logs, investigators would have to assume that all records were subject to the breach.

SCOMpercentageCPUTimeCounter Causes CPU Spikes


*Originally posted at adatum.no

To be honest, this has existed for years and was written about back in 2014. Now, in 2017, with SCOM 2016 UR2 released, the problem remains, perhaps with greater consequence due to virtualization.

If you're unfamiliar with the problem: SCOMpercentageCPUTimeCounter.vbs (.ps1 in SCOM 2016) is a script included in the "System Center Core Monitoring" management pack, and is used as the data source for a rule and a monitor that determine agent health by gathering 'HealthService' CPU usage. The rule and monitor are set to run at a fixed interval of 321 seconds (I assume the person who wrote the MP just tapped 3-2-1 on their numpad 🙂 ) with sync time set to 00:00.

If you want to look at the actual code, you will find the data source on SystemCenterCore.com.

Running this script every 5 minutes isn't exactly a problem when you have physical servers or a small number of virtual machines on your hypervisor. But if you run 100 or 300 VMs on one host and every single VM starts this script simultaneously, it creates unnecessary load on your host. If the host is overcommitted as well, CPU wait time could cause a 'freeze' on your tenant machines too.

To illustrate the problem, I have attached a graph that clearly shows spikes during script execution.

[Graph: vCenter host CPU spikes during SCOM script execution]

On a monitored computer you will see a cscript.exe process executing the following command line: "c:\windows\system32\cscript.exe" /nologo "SCOMpercentageCPUTimeCounter.vbs"

[Screenshot: cscript.exe running the SCOM CPU percentage script]

Unfortunately, out of the box there isn't much to do. Sync time and interval are the only overridable parameters, and these will only help reduce the load on the agent machine itself. So if you experience CPU utilization peaks due to this script, I see only two options:

  • Disable the rule and monitor
    • You will then have to rely on the CPU utilization monitor from the operating system management pack
  • Create a new rule and monitor using the SpreadInitializationOverInterval scheduler parameter (see the sketch below)
    • Reduces load, as executions occur randomly within the set interval
    • Requires authoring skills, but it is possible. Some information here.
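A minimal sketch of the scheduler portion of such a custom data source in management pack XML; this assumes a composite data source built on System.Scheduler from the System Library (note the schema's own 'Reccuring' spelling) and is not the shipped data source:

<DataSource ID="Custom.HealthServiceCPU.DS" TypeID="System!System.Scheduler">
  <Scheduler>
    <SimpleReccuringSchedule>
      <Interval Unit="Seconds">321</Interval>
      <!-- start each agent at a random point within the interval so all
           VMs on one host do not execute the script simultaneously -->
      <SpreadInitializationOverInterval Unit="Seconds">321</SpreadInitializationOverInterval>
    </SimpleReccuringSchedule>
    <ExcludeDates />
  </Scheduler>
</DataSource>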

To not let this go into oblivion, I have left feedback on the Operations Manager User Voice. Hopefully, Microsoft will make some changes in the future. If you have suggestions or other experiences, please let me know and I will update accordingly.

 

Posted by Sheikvara

+919840688822, +919003270444