Tuesday 9 December 2014

SQL Denial of Service Attacks

SQL Denial of Service (DOS) attacks - Using SQL for DOS Attacks

By Strictly-Software

Updated Article - First written in 2008

You will have heard about SQL injection attacks and almost certainly about DOS (Denial of Service) or DDOS (Distributed Denial of Service) attacks, but you may not have heard of SDOS (SQL Denial of Service) attacks as something to be on the lookout for when considering your website's or database's security.

The idea behind the SQL Denial of Service attack is pretty simple: use any search form or remote request (maybe a JSON or SOAP request to a web service) that accesses your SQL database to execute a number of long running and CPU intensive queries so that the site becomes unusable to other users, i.e. a SQL Denial of Service attack, as your requests block other users from using the service.

Running multiple attacks from proxies or BOTNETS, or even firing many requests in quick succession from a single computer, could be considered an SQL Denial of Service attack.

These queries cause such an effect for any one of these reasons:

  • The CPU on the database server becoming maxed out running these intensive queries.
  • All connections to the database are consumed by the attacker so no new connections can be spawned, locking the site up for other users.
  • Bad database design causing locking on the tables being scanned. Data is waiting to be updated/deleted/inserted into the table but the attacker's query/queries lock the table up as every row in it is scanned looking for results.



Who is vulnerable to an SQL Denial of Service attack?

Most websites have an SQL backend (MS SQL, MySQL, Oracle etc) and the majority have some user interface that allows visitors to enter textual keywords which are then used as criteria in an SQL SELECT statement to return matching records.

A few examples of search forms are:

  • Searching for content in a CMS system.
  • Searching for jobs, members of a site or homes and properties to rent.
  • Searching for news articles or comments posted on a message board.


If you are using a database rather than a textual index to return these results you may be at risk if you are using the LIKE keyword or in SQL 2005 a CLR regular expression function to pattern match.

As an example I will use one of my own jobboard sites and imagine that we used the database to handle keyword searches instead of a NoSQL index.

We have a search criteria form that allows the user to enter search terms which we will try and match within the job description column of our JOB table using the LIKE keyword.

( N.B. The database is SQL 2005 running on Windows 2003 and the table in question has 646,681 rows. )

If, like a lot of sites, we treat spaces in the search term as separating multiple terms that are then combined in an OR search, then a user entering "ASP SQL" in the keywords box would cause us to run the following SQL:


SELECT   TOP 10 JobPK,JobDescription
FROM     JOBS as j
WHERE    JobDescription LIKE '%ASP%'
        OR JobDescription LIKE '%SQL%'
ORDER BY JobPK

This particular search returned me 10 rows worth of data in under 1 second.

Now if the user had entered something like this for their search keywords

_[^|?$%"*[(Z*m1_=]-%RT$)|[{34}\?_]||%TY-3(*.>?_!]_

Then the SQL will be


SELECT   TOP 10 JobPK,JobDescription
FROM     JOBS as j
WHERE    JobDescription LIKE '%_[^|?$%"*[(Z*m1_=]-%RT$)|[{34}\?_]||%TY-3(*.>?_!]_%'    
ORDER BY JobPK

This SQL returned 0 rows and took 2 minutes 12 seconds to complete!

This means that every single row in the table has been scanned, which is exactly the aim of the SQL DOS attack.

Also, during the execution of this one query, the CPU on the SQL server was maxed out at 100%!

You can test this yourself on your own databases by using Task Manager or, if you have a decent understanding of it, the SQL Activity Monitor.

Whilst this one query was running other users of the system had problems connecting to the database server and I logged the following errors:



[DBNETLIB][ConnectionRead (recv()).]General network error. Check your network documentation.
[Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionRead (recv()).[Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionWrite (send()).


These errors are down to the SQL Server being so busy that it cannot open new TCP/IP ports.

This mayhem was due to 1 query causing the SQL server to max out. 

Now imagine the attacker spawning multiple requests that consume all available pooled connections, each running a query taking 2+ minutes, and you can see the scale of the problem and how a Distributed SQL Denial of Service attack could be a disaster for your site.

Any part of your website or web service that used a database connection would be out of action for some time.


Changing the effectiveness of an attack

The SQL LIKE DOS attack can be modified in a number of ways:

  1. The longer the string being matched the longer the query execution time.
  2. Query plans get cached and re-used to increase performance so changing the pattern by a character or two each time will cause a recompile and prevent a cached plan being used.
  3. Combining patterns in an OR statement will increase the execution time. Make sure each pattern is different.
  4. Adding % around the search string will increase the effectiveness especially at the beginning of the string as it will prevent any available index from being used.

Solutions to the SQL DOS attack

In general you should always set a timeout limit for

  1. Connections (the length of time allowed for connecting to a database)
  2. Commands (the length of time allowed for a query/procedure to run)


The defaults for ADO are 30 seconds each.
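
As a rough illustration (the server, database, table and query here are placeholders, not the real site's), this is how you would set both limits explicitly in ADO.NET so a runaway LIKE search is aborted rather than holding a pooled connection open indefinitely:

using System;
using System.Data.SqlClient;

class SearchTimeoutExample
{
    static void Main()
    {
        // "Connect Timeout" caps how long we wait for a connection to open
        string connStr = "Server=MYSERVER;Database=JOBBOARD;Integrated Security=true;Connect Timeout=10";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT TOP 10 JobPK, JobDescription FROM JOBS WHERE JobDescription LIKE @term", conn))
        {
            // cap how long the search itself may run before ADO.NET throws and frees the connection
            cmd.CommandTimeout = 10; // seconds

            cmd.Parameters.AddWithValue("@term", "%ASP%");

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["JobPK"]);
                }
            }
        }
    }
}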

If you must use an SQL database LIKE search for your search forms then validate the user input in the following ways (a minimal sketch of this kind of validation follows the list):
  • Strip all non alphanumerics from the input, or if you know what types of input you should be searching for, test the search term against a regular expression first to make sure it matches a legitimate term.
  • Set a maximum length for search terms that can be entered. If you are allowing long strings make sure there is an appropriate number of spaces for the length e.g one space every 15 characters. If you have a string of 100 characters and only one space then something is wrong!
  • If allowing non alpha characters then make sure characters such as % _ [ ] that are also used as wildcards in LIKE statements are either escaped or stripped out.
  • Also strip out runs of consecutive non alphanumeric characters.
  • Make sure each word consists of 3 or more consecutive alphanumeric characters.
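
Here is a rough C# sketch of those rules (the method name and the 50 character limit are my own choices, not from any framework); it trims the term, caps its length and strips out the LIKE wildcards and other non alphanumerics before the term gets anywhere near the database:

using System;
using System.Text.RegularExpressions;

class SearchTermCleaner
{
    // Cap the length and strip LIKE wildcards (% _ [ ]) and any other non alphanumerics,
    // then collapse the runs of whitespace left behind into single spaces.
    public static string Clean(string term, int maxLength = 50)
    {
        if (string.IsNullOrWhiteSpace(term)) return string.Empty;

        term = term.Trim();
        if (term.Length > maxLength) term = term.Substring(0, maxLength);

        // keep letters, digits and spaces only - this also removes the % _ [ ] wildcards
        term = Regex.Replace(term, @"[^A-Za-z0-9 ]+", " ");

        // collapse multiple spaces and trim again
        term = Regex.Replace(term, @"\s{2,}", " ").Trim();

        return term;
    }

    static void Main()
    {
        Console.WriteLine(Clean("_[^|?$%\"*[(Z*m1_=]-%RT$)")); // prints "Z m1 RT"
        Console.WriteLine(Clean("ASP   SQL"));                  // prints "ASP SQL"
    }
}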

You could also try and prevent automated search requests by using one of the following:
  • Only allow logged in registered users to use the search form.
  • Use session variables to limit the number of searches a user can carry out in a set time period. If the user does not have sessions enabled prevent them from searching.
  • Use JavaScript to run the search. Most bots will not be able to run script so the search will be unavailable to them.
  • Use session variables again so that only one search at a time per user can be requested. Set a flag on search submission and clear it when the results are returned. If another search is requested within that time prevent it from executing. Again, if sessions are disabled prevent searching. (A minimal sketch of this kind of throttling follows the list.)
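
A very simple way to implement those last two points is to keep a timestamp per user key (session ID, logged in user ID or IP address) and reject searches that arrive too quickly. This is only an illustrative in-memory sketch, not code from any particular framework:

using System;
using System.Collections.Concurrent;

class SearchThrottle
{
    // last search time per user key (session ID, user ID or IP address)
    private static readonly ConcurrentDictionary<string, DateTime> LastSearch =
        new ConcurrentDictionary<string, DateTime>();

    // minimum gap allowed between searches from the same user
    private static readonly TimeSpan MinGap = TimeSpan.FromSeconds(5);

    // returns true if this user is allowed to run a search right now
    public static bool AllowSearch(string userKey)
    {
        DateTime now = DateTime.UtcNow;
        DateTime last = LastSearch.GetOrAdd(userKey, DateTime.MinValue);

        if (now - last < MinGap)
        {
            return false; // too soon - make them wait
        }

        LastSearch[userKey] = now;
        return true;
    }

    static void Main()
    {
        Console.WriteLine(AllowSearch("session-abc")); // True  - first search
        Console.WriteLine(AllowSearch("session-abc")); // False - fired too quickly
    }
}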

A better option is not to use LIKE statements at all but to use a full text index to handle searching.

As well as being more secure, due to the fact that symbols are treated as noise characters and therefore ignored, you can offer your users a more feature rich search function. You can use SQL Server's inbuilt full text indexing or a 3rd party index system such as DTSearch.

Setting up full text indexing on a table in SQL 2012 is very easy for example
  1. Make sure the Full Text Index service is installed on the server.
  2. Right click the table you wish to create an index for and select "Define Full-Text Index"
  3. Select the columns you wish to create the index on. In my jobboard example it will be the JobDescription column.
  4. Select how you want changes to the underlying data to be updated within the index. I chose "Automatic" as it means that whenever the data in the JOB table changes the index will get updated at the same time. You could set manually and then create a schedule to update the index.
  5. Give the full text catalog a name. I create a new Full Text Catalog per table that needs an index. If you have multiple indexes that are quite small then you can share a catalog.
  6. If you have not chosen to automatically update the index when the data changes then create a schedule to do this for you.
  7. Create the index!

Once created you can access the data in a variety of ways by joining your index to your other database tables. However this is not an article on Full Text indexing, so here is a quick example that will replace our earlier search SQL that used LIKEs and ORs.


SELECT    TOP 10 JobPK, JobDescription,Rank
FROM      JOBS as j
JOIN      FREETEXTTABLE(JOBS,JobDescription,'SQL ASP') as K
 ON      K.[KEY] = j.JobPK
ORDER BY Rank DESC


This query will search the index I just created for the search term "SQL ASP" and return the top 10 results ordered by the Rank value SQL gives each result depending on how close it matches the term.

There are many features built into SQL Servers Full Text Indexing implementation which I won't cover here. However this serves as an example of how easy it is to convert a LIKE based search to an index based search.


Conclusion

As you can see it could be very easy, depending on how you validate any search terms entered by your users, for a malicious user or BOT to create havoc by executing long running and CPU intensive queries against your database.

If crafted correctly an effective SQL DOS attack could cause your system to become unavailable for a period of time.

Any part of your site that offers a search facility should be tested for vulnerabilities by entering various search terms and then measuring the effect.

Although deadly at the time of execution these attacks are temporary and can be prevented with good application design and input validation.


Further Information

If you want to create a rich Google-like search system for your site I suggest reading the following article, which is very detailed:

http://www.sqlservercentral.com/articles/Full-Text+Search+(2008)/64248/

For more details about SQL Denial of Service attacks read the following article:

http://www.slideshare.net/fmavituna/dos-attacks-using-sql-wildcards/

For more information about pattern matching in general and the problems with regular expressions and CPU read my blog article:

http://blog.strictly-software.com/2008/10/dangers-of-pattern-matching.html

Wednesday 12 November 2014

Twitter Rush caused by Tweet BOTs visiting your site

Twitter Traffic can cause a major hit on your server

If you are using Twitter to post tweets to whenever you blog or post an article you should know that a large number of BOTS will immediately hit your site as soon as they see the link.

This is what I call a Twitter Rush as you are causing a rush of traffic from posting a link on Twitter.

I did a test some months back, and I like to regularly test how many hits I get whenever I post a link so that I can weed out the chaff from the wheat and set up rules to ban any BOTS I think are just wasting my money.

Most of these BOTS are also stupid.

If you post the same link to multiple Twitter accounts, e.g. by using my WordPress plugin Strictly Tweetbot, then the same BOT will come to the same page multiple times.

Why? Who wrote such crap code, and why don't they check before hitting a site that they haven't just crawled that link? I cannot believe the developers at Yahoo cannot write a BOT that works out it has just crawled a page before doing it two more times.

Some BOTS are obviously needed, such as the major SERP search engines e.g Google or Bing, but many are start-up "social media" search engines and other such content scrapers, spammers, hackbots and bandwidth wasters.

Because of this I now 403 a lot of these BOTS or even send them back to the IP address they came from with an ISAPI rewrite rule as they don't provide me with any benefit and just steal bandwidth and cost me money.

RewriteCond %{HTTP_USER_AGENT} (?:Spider|MJ12bot|seomax|atomic|collect|e?mail|magnet|reaper|tools\.ua\.random|siphon|sweeper|harvest|(?:microsoft\surl\scontrol)|wolf) [NC]
RewriteRule .* http://%{REMOTE_ADDR} [L,R=301]

However if you are using my Strictly TweetBOT plugin, which can post multiple tweets to the same or multiple accounts, the new version allows you to pre-load the page so that it hopefully gets cached by the caching plugin you should be using (WP SUPER CACHE or W3 TOTAL CACHE etc) before the article is made public and the BOTS watching Twitter for URLs to scrape can get to it.

The aim is to get the page cached BEFORE the wave of BOTS hits it. If the page is already cached then the load on your server should be a lot less than if every BOT arriving at roughly the same time caused the page to be built and cached simultaneously.
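
The same idea applies outside WordPress too: fetch the new URL yourself once, and give the cache a moment, before announcing it anywhere, so the rush hits a cached page. A minimal sketch of that general pattern (the URL and delay are placeholders):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CacheWarmer
{
    static async Task Main()
    {
        string newPostUrl = "http://www.example.com/my-new-post/"; // placeholder URL

        using (var client = new HttpClient())
        {
            // this first hit lets the server build the page and prime any page cache
            HttpResponseMessage response = await client.GetAsync(newPostUrl);
            Console.WriteLine("Warm-up request returned " + (int)response.StatusCode);
        }

        // give the caching layer a few seconds to finish writing the cache entry
        await Task.Delay(TimeSpan.FromSeconds(5));

        // ...now announce the link and let the BOT rush hit the cached copy
        Console.WriteLine("Safe to announce the link now.");
    }
}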

However if you are auto blogging and using my TweetBOT you might be interested in Strictly TweetBOT PRO as it has extra features for people who are tweeting to multiple accounts or posting multiple tweets in different formats to the same account. These new features are all designed to reduce the hit from a Twitter Rush.

The paid for version allows you to do the following:

  • Make an HTTP request to the new post before Tweeting anything. If you have a caching plugin on your site then this should put the new post into the cache so that when the Twitter Rush comes they all hit a cached page and not a dynamically created one.
  • Add a query-string to the URL of the new post when making an HTTP request to aid caching. Some plugins like WP Super Cache allow you to force an uncached page to be loaded with a querystring. So this will enable the new page to be loaded and re-cached.
  • Delay tweeting for N seconds after making the HTTP request to cache your post. This will help you ensure that the post is in the cache before the Twitter Rush.
  • Add a delay between each Tweet that is sent out. If you are tweeting to multiple accounts you will cause multiple Twitter Rushes. Therefore staggering the hits aids performance.


Buy Now


I did a test this morning to see how much traffic was generated by a test post. I got almost 50 responses within 2 seconds!

50.18.132.28 - - [30/Nov/2011:07:04:41 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 471 "-" "bitlybot"
50.57.137.74 - - [30/Nov/2011:07:04:43 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "EventMachine HttpClient"
50.57.137.74 - - [30/Nov/2011:07:04:43 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "EventMachine HttpClient"
184.72.47.46 - - [30/Nov/2011:07:04:43 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "UnwindFetchor/1.0 (+http://www.gnip.com/)"
204.236.150.14 - - [30/Nov/2011:07:04:44 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 403 471 "-" "JS-Kit URL Resolver, http://js-kit.com/"
50.18.121.55 - - [30/Nov/2011:07:04:45 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "UnwindFetchor/1.0 (+http://www.gnip.com/)"
184.72.47.71 - - [30/Nov/2011:07:04:47 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "UnwindFetchor/1.0 (+http://www.gnip.com/)"
50.18.121.55 - - [30/Nov/2011:07:04:48 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "UnwindFetchor/1.0 (+http://www.gnip.com/)"
184.72.47.71 - - [30/Nov/2011:07:05:11 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "UnwindFetchor/1.0 (+http://www.gnip.com/)"
199.59.149.31 - - [30/Nov/2011:07:04:43 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "Twitterbot/0.1"
107.20.160.159 - - [30/Nov/2011:07:04:43 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "http://unshort.me/about.html"
46.20.47.43 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 403 369 "-" "Mozilla/5.0 (compatible"
199.59.149.165 - - [30/Nov/2011:07:04:43 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28862 "-" "Twitterbot/1.0"
173.192.79.101 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 403 471 "-" "-"
50.18.121.55 - - [30/Nov/2011:07:05:11 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "UnwindFetchor/1.0 (+http://www.gnip.com/)"
46.20.47.43 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 403 369 "-" "Mozilla/5.0 (compatible"
199.59.149.31 - - [30/Nov/2011:07:05:11 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "Twitterbot/0.1"
65.52.0.229 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28863 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
66.228.54.132 - - [30/Nov/2011:07:04:43 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 106183 "-" "InAGist URL Resolver (http://inagist.com)"
199.59.149.165 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28863 "-" "Twitterbot/1.0"
107.20.42.241 - - [30/Nov/2011:07:05:11 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "PostRank/2.0 (postrank.com)"
65.52.0.229 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28862 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
65.52.0.229 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28862 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
110.169.128.180 - - [30/Nov/2011:07:05:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28862 "http://twitter.com/" "User-Agent:Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_4; en-us) AppleWebKit/533.18.1"
74.112.131.128 - - [30/Nov/2011:07:05:15 +0000] "GET /some-test-url-I-posted/ HTTP/1.0" 200 106203 "-" "Mozilla/5.0 (compatible; Butterfly/1.0; +http://labs.topsy.com/butterfly/) Gecko/2009032608 Firefox/3.0.8"
74.112.131.131 - - [30/Nov/2011:07:05:16 +0000] "GET /some-test-url-I-posted/ HTTP/1.0" 200 106203 "-" "Mozilla/5.0 (compatible; Butterfly/1.0; +http://labs.topsy.com/butterfly/) Gecko/2009032608 Firefox/3.0.8"
74.112.131.127 - - [30/Nov/2011:07:05:16 +0000] "GET /some-test-url-I-posted/ HTTP/1.0" 200 106203 "-" "Mozilla/5.0 (compatible; Butterfly/1.0; +http://labs.topsy.com/butterfly/) Gecko/2009032608 Firefox/3.0.8"
74.112.131.128 - - [30/Nov/2011:07:05:16 +0000] "GET /some-test-url-I-posted/ HTTP/1.0" 200 106203 "-" "Mozilla/5.0 (compatible; Butterfly/1.0; +http://labs.topsy.com/butterfly/) Gecko/2009032608 Firefox/3.0.8"
74.112.131.131 - - [30/Nov/2011:07:05:17 +0000] "GET /some-test-url-I-posted/ HTTP/1.0" 200 106203 "-" "Mozilla/5.0 (compatible; Butterfly/1.0; +http://labs.topsy.com/butterfly/) Gecko/2009032608 Firefox/3.0.8"
74.112.131.128 - - [30/Nov/2011:07:05:17 +0000] "GET /some-test-url-I-posted/ HTTP/1.0" 200 106203 "-" "Mozilla/5.0 (compatible; Butterfly/1.0; +http://labs.topsy.com/butterfly/) Gecko/2009032608 Firefox/3.0.8"
107.20.78.114 - - [30/Nov/2011:07:05:18 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
65.52.54.253 - - [30/Nov/2011:07:05:19 +0000] "GET /2011/11/hypocrisy-rush-drug-test-welfare-benefit-recipients HTTP/1.1" 403 470 "-" "-"
107.20.78.114 - - [30/Nov/2011:07:05:28 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
65.52.62.87 - - [30/Nov/2011:07:05:30 +0000] "GET /2011/11/hypocrisy-rush-drug-test-welfare-benefit-recipients HTTP/1.1" 403 470 "-" "-"
107.20.78.114 - - [30/Nov/2011:07:05:43 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
107.20.78.114 - - [30/Nov/2011:07:06:15 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
199.59.149.165 - - [30/Nov/2011:07:04:45 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28860 "-" "Twitterbot/1.0"
107.20.160.159 - - [30/Nov/2011:07:04:59 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "http://unshort.me/about.html"
107.20.78.114 - - [30/Nov/2011:07:06:15 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
199.59.149.31 - - [30/Nov/2011:07:04:46 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "Twitterbot/0.1"
107.20.42.241 - - [30/Nov/2011:07:05:01 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "PostRank/2.0 (postrank.com)"
107.20.42.241 - - [30/Nov/2011:07:05:07 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "PostRank/2.0 (postrank.com)"
107.20.78.114 - - [30/Nov/2011:07:06:15 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
107.20.78.114 - - [30/Nov/2011:07:06:17 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
107.20.78.114 - - [30/Nov/2011:07:06:17 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 403 - "-" "MetaURI API/2.0 +metauri.com"
50.16.51.20 - - [30/Nov/2011:07:06:21 +0000] "HEAD /some-test-url-I-posted/ HTTP/1.1" 200 - "-" "Summify (Summify/1.0.1; +http://summify.com)"
74.97.60.113 - - [30/Nov/2011:07:07:11 +0000] "GET /some-test-url-I-posted/ HTTP/1.1" 200 28889 "-" "Mozilla/5.0 (Windows NT 5.1; rv:8.0) Gecko/20100101 Firefox/8.0"



Buy Now

Tuesday 11 November 2014

Turn off WordPress HeartBeat to reduce bandwidth and CPU

Turn off WordPress HeartBeat to reduce bandwidth and CPU

By Strictly-Software

I recently noticed a spike in bandwidth and costs on my Rackspace server. The cost had jumped up a good $30 from normal months.

Now I am still in the process of finding out why this has happened, but one thing I did come across was a lot of calls to a script called /wp-admin/admin-ajax.php, happening every 15 seconds.

Now this is a sign of WordPress's HeartBeat functionality, which allows the server and browser to communicate. To quote the inmotionhosting.com website:

"HeartBeat allows WordPress to communicate between the web-browser and the server. It allows for improved user session management, revision tracking, and auto saving. The WordPress Heartbeat API uses /wp-admin/admin-ajax.php to run AJAX calls from the web-browser. Which in theory sounds awesome, as WordPress can keep track of what's going on in the dashboard. However this can also start sending excessive requests to admin-ajax.php which can lead to high CPU usage. Anytime a web-browser is left open on a page using the Heartbeat API, this could potentially be an issue."

Therefore I scanned my log files and found that my own IP address was making calls to the page every 15 seconds, e.g:

62.21.14.247 - - [11/Nov/2014:15:00:20 +0000] "POST /wp-admin/admin-ajax.php HTTP/1.1" 200 98 "http://www.mysite.com/wp-admin/post.php?post=28968&action=edit&message=1" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36" 0/799585
62.21.14.247 - - [11/Nov/2014:15:00:35 +0000] "POST /wp-admin/admin-ajax.php HTTP/1.1" 200 98 "http://www.mysite.com/wp-admin/post.php?post=28968&action=edit&message=1" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36" 25/25540888

I checked my browser (Chrome), in which I do leave lots of windows open for ages (multi-tasking :) ), and I found that I had left a post edit window open in WordPress. This was causing the HeartBeat to call the script every 15 seconds.

Now I don't know if this is the ONLY reason for my increase in bandwidth, and obviously CPU due to all the HTTP requests, but I am guessing it made up a big part of it.

Therefore I decided to turn off the HeartBeat functionality.

I have already disabled auto saving and revisions as I don't need that functionality so I am going to see what happens - hopefully my costs will go down!

Turning Off WordPress HeartBeat

To turn off the HeartBeat functionality go to your theme's functions.php file and put the following code at the top of it.


// stop heartbeat code
add_action( 'init', 'stop_heartbeat', 1 );

function stop_heartbeat() {
        wp_deregister_script('heartbeat');

}


So I will see what happens with this turned off. So far not a lot.

However if you do notice a spike in your Bandwidth or CPU and you use WordPress check you haven't left a page open in your browser that would be causing the HeartBeat function to call the /wp-admin/admin-ajax.php every 15 seconds!

Wednesday 22 October 2014

Windows 8 and Windows 8 RT - Issues and Workarounds to get Windows 8.1 behaviour

Getting around Window 8's issues

By Strictly-Software

I use Windows 8.1 (the proper version - thank god) but if you are stuck on Windows 8 or the Windows 8 RT version you should upgrade to Windows 8.1 ASAP, as Windows 8 was designed so that Microsoft could use the same display format for phones, tablets, PCs and laptops.

The problem was that the techies didn't want features numptified and hidden away behind a design obviously meant for touch screens and not for coders.

So they introduced Windows 8 RT, which was a sort of halfway house between Win 8.1 and Win 8, as they realised they were going to have another Vista issue where people hated the OS and many skipped over it or stayed with the previous version, Windows XP (I know I did!).

So when they realised they had another Vista/XP issue on their hands they rolled out Win 8 RT, which tried to get rid of the main tablet screen and have an option to get back to a normal desktop screen.

Windows 8.1 was a total reverse of Windows 8, with the desktop as the main screen and the horrible tablet screen accessible via the home button.

As the article says:

If your PC is currently running Windows 8 or Windows RT, it's free to update to Windows 8.1 or Windows RT 8.1. And unlike previous updates to Windows, you'll get this update from the Windows Store.

by following this Windows Knowledge Base article: upgrade from Windows 8.0 or 8.1 RT to Windows 8.1.

Making some changes

The first thing I hated about the OS was that the mouse would change the size of the screen without me asking it to. In Chrome or elsewhere I was constantly, manually, putting the screen size back to the normal 100% ratio.

In fact the mouse was moving so fast I could hardly see it, and I needed to make some changes. The changes listed below are those I made.

Increasing the size and changing the colour of the mouse

I changed the colour of the mouse pointer using the "Pointers" tab, where you can pick the group of pointers you want to use. In the top selection I chose "Windows Black (extra large)" and in the bottom I chose "Normal Select".

     

Now I could see the cursor much more easily due to the colour change, as most programs have a white background so a white cursor with a black border is not very helpful.

Changing the auto drag that the mouse does

One of the annoying things about Windows 8+ is the auto drag ability of the mouse which enables you to highlight or drag features without using the CTRL key.

You can turn this off in the "Buttons" > "Click Lock" panel by de-selecting the "Turn on Click Lock" feature.

This was one of the major pains I had when I first used Windows 8.1 especially when I was remote desktopped into my work PC. Then I found a way to turn it off.


Changing the mouse controlling the size of the screen

This is one thing that really pissed me off. When I was surfing the net or just writing, a swipe of the mouse would change the whole screen size.

Other Mouse Changes

I also made the following changes in the "Pointer Options" tab. I changed the pointer speed to a slower one.

With a bigger black pointer it helps a lot. I also did the following.


  • I chose to "Automatically move pointer to default selection in a dialogue box". No point hunting for CTRL keys is there! 
  • I chose to "Show location of the pointer when the CTRL key is hit"; a nice circle appears helping you find the cursor.
  • I disabled the "Hide pointer when typing" option so that I can always see the cursor.

Other Windows 8.1 Programs

With all the numptyfication going on with the Windows tablet format there are some actual applications that can help you without going to the command line.

Even though the start bar is pretty crap you can either use the "Search" function to find an app or just go into the app page and start typing. The words will just start to appear in a search box in the top right of the page e.g "Paint" to get the Paint application.




The Windows App Store is one of the great features that Windows 8 has introduced and there are lots of great programs you can download for free, including "Live TV" to get TV from freeview channels as well as US and European TV shows.

With WiFi and network connectivity you want to check that your broadband speed is fast and everything is okay. One of these programs is "Network Speed Test" which you can get from the App Store.

This application tests your broadband or WiFi speed and tells you whether you would have the capabilities of downloading high or low quality video, your internet status, network details and other ISP related information.

So now you know how to get Windows 8.1 features, change some annoying ones in the OS related to mouse usage and use the Windows App Store to download apps to test your WiFi for you.

You should now be on your way.

Monday 13 October 2014

Debugging Issues with Strictly AutoTags or Strictly TweetBOT

Debugging Issues with Strictly AutoTags or Strictly TweetBOT

By Strictly-Software

You can purchase the pro versions (and download the free versions with fewer features) of these plugins here:

http://www.strictly-software.com/plugins/strictly-tweetbot
http://www.strictly-software.com/plugins/strictly-auto-tags

This article is about debugging issues you may occasionally get with these two plugins.

I usually find (as my code hasn't changed) that if the plugins stop working the issue is down to one of these scenarios:
  • Another plugin causing issues.
  • WordPress code / update (bugs) - I usually wait until the first point release is out before upgrading due to having too many problems upgrading to 3.7, 3.8 and 3.9, so I am waiting until 4.1 is released before upgrading this time.
  • Not enough memory on your server. Lots of tags equals lots of regular expressions to run which means memory usage.
  • Too many tags for the post save job to loop through on save. Trim them down regularly.
  • If you are on a shared server / VPS another user maybe taking up all your memory and preventing your site from running correctly - contact your hosting provider.
  • A hacker has caused issues on your server. Check that your SSH startup script /etc/init.d/ssh has not been compromised by viewing its contents from the console.

Strictly AutoTags

Specific things to try with this plugin include:

  • De-activating any auto-feed importer tools that create articles from RSS feeds, then saving a draft / published article using the test article in the Readme.txt file.
  • Ensuring you keep your tag list trim. If you have 25,000+ articles you shouldn't have many tags that relate to only one article, or even three. So use the delete feature to remove tags with few articles related to them. The fewer the tags, the more relevant they are and the less memory is used.
  • If Auto Discovery is enabled and the test article in the Readme.txt file works and tags are added then the plugin is working, which means the issue has another cause.
  • Disable Jetpack; I have found this plugin to cause many issues.
  • Disable Post Revisions and set the Auto Save time so far in the future it won't cause problems. Depending on your settings you could be re-tagging your article every X seconds due to Auto Save and Post Revisions.
  • The code for disabling these WordPress features in the wp-config.php file is below.

/* turn off post revisions */
define ('WP_POST_REVISIONS', false);

/* set auto saves so far in future they shouldn't get saved on auto posts */
define( 'AUTOSAVE_INTERVAL', 10000 );

// increase memory limit
define( 'WP_MEMORY_LIMIT', '120M' );


Buy Now


Strictly TweetBot

Specific things to try with this plugin include:
  • Disable Jetpack Sharing or Jetpack altogether. Having multiple tweet systems can cause issues such as duplicate tweets and Twitter thinking you are sending automated messages. Jetpack has also been found to cause issues with many plugins.
  • Ensure you don't have too many Twitter BOT accounts set up. If you do, increase your memory limit in wp-config.php (see the code above for Strictly AutoTags).
  • Constantly check your Admin panel to see if you are getting messages back from Twitter such as "duplicate tweet" - "this tweet looks like an automated message" - "this account needs re-authenticating". If you get these errors then Twitter is blocking your tweets. Change the formats of your messages and prevent duplicates from being sent.
  • If you need to re-authenticate you can try to delete the account, then re-add it and see if it works straight away without requiring a new PIN code.
  • If this doesn't work then you may need to re-install the plugin and all the accounts one by one and re-authenticate the OAuth with new PIN codes.
  • If you are linked automatically with Strictly AutoTags then ensure that the AutoTag plugin is working correctly. Make sure the tag list is not too large, there is enough memory to run the tagging and that when a post is made the POST META value strictlytweetbot_posted_tweet has been added to the post. If it hasn't and the post has tags but no tweets went out then you need to find out why. Check the Strictly AutoTags help list for information on fixing this. Memory and the size of the tag list are two major issues.

Important!

Always remember that if the code in the plugin hasn't changed but the plugin suddenly stops working then LOGICALLY it is likely NOT to be the plugin's fault. Therefore:
  • Have you just done something on the server or website? If so roll it back to see if that affected the plugin.
  • Have you just upgraded or installed a new plugin? If so rollback or remove it.
  • Check other plugins by disabling or removing them.
  • Ensure your host has not had issues e.g memory / CPU is too high.
  • Ensure any new plugins have not caused issues.
  • If you have just upgraded WordPress try rolling back to the last working version.
  • Try re-starting Apache, then if that fails try a reboot (soft).

Buy Now

Thursday 18 September 2014

Tricks to make your code independent of machine and server. DLL Writing and portability

Tricks to make your code independent of machine and server. DLL Writing and portability.

By Strictly-Software

Lately I have been working on moving a project that was a web service tied to my machine at work to a new version, due to an upgrade in the API I had to use (Betfair's).

They were moving from SOAP to JSON, which meant going from a few lines of simple SOAP code to having to write thousands of lines of code, interfaces, and classes for every object. To call it a pain is putting it mildly.

However by doing this I have learned some tricks that have helped me make the DLL code totally independent of any PC or Server.

It can be run from a windows service on my laptop, work PC or a server. It can be used by a console app to do one simple task repetitively or many tasks at timed intervals using threading and timer callback functions.

I thought I would share some of the things I have learned in-case you find them useful.

Locking, Logging and preventing multiple threads from initiating the same process from the same or different computers.

Now if the DLL is running from multiple machines and doing the same task you don't want it to do the same task multiple times from multiple computers.

Examples in my BOT would be running certain stored procedures or sending out certain emails.

Therefore I use Database Locks to get round the problem of multi threading where different jobs are initiated within my Windows Service by timers.

For example, once my service has started I have multiple timers with callback functions that run methods in my DLL at various intervals, like below.


// set up timers in Windows Service class when running these timers control the time the jobs run 
// e.g RunPriceCheckerJob checks for current prices in all markets, GetResultsJob gets latest results
this.PriceTimer = new Timer(new TimerCallback(RunPriceCheckerJob), null, Timeout.Infinite, Timeout.Infinite);
this.GetResultsTimer = new Timer(new TimerCallback(GetResultsJob), null, Timeout.Infinite, Timeout.Infinite);
this.SystemJobsTimer = new Timer(new TimerCallback(SystemJobsJob), null, Timeout.Infinite, Timeout.Infinite);

// set timer limits
this.PriceTimer.Change(0, 60000); // every minute
this.GetResultsTimer.Change(0, 300000); // every 5 mins
this.SystemJobsTimer.Change(0, 1800000); // every 30 mins but only after 11pm


To prevent a new thread spawning a job to send emails, for instance, when one is already running from this or another machine, I use a simple LOCK system controlled by a database.

  1. A table called JOB_STEPS with a datestamp and a column to hold the step/lock.
  2. A method with two parameters. A string with the name of the Job Step OR LOCK and the mode e.g "INSERT" or "DELETE". This method calls a stored procedure that either inserts or removes the record for that day.
  3. A method with one parameter. A string with the name of the Job Step or LOCK. If I want to check if the process I am about to run is already locked and in progress OR has finished then I use this method.
  4. Before each important method I don't want to have multiple instances running I do the following.
1. I build up the name of the Job Step or LOCK record using the computer/machine name e.g

// Use the computer name to help build up a unique job step / lock record
string logfileComputerName = "ARCHIVELOGFILE_V2_" + System.Environment.MachineName.Replace(" ", "_").ToUpper();

2. I check that a Job Step record doesn't already exist to say the job has already been completed for the day.
3. I also check that a LOCK file doesn't exist to say that it's already being run.
4. After the job has finished I always remove the lock file.
5. If successful I add in a Job Step record so future processes skip over this code altogether.

The total code for a computer specific example is below.

This is when I need to archive the daily log file for that machine.

You can tell what each method does by the comments.


// As a log file will exist at /programdata/myservice/logfile.log on each machine this runs on, we need to archive it at midnight and create a new file.
// As it's computer specific we use the machine name in the Job Step file as other machines will have log files to archive as well.
string logfileComputerName = "ARCHIVELOGFILE_V2_" + System.Environment.MachineName.Replace(" ", "_").ToUpper();

// if no Job Step record exists to say the job has been completed and no _LOCK file exists to say it is currently running we attempt the job
if (!this.BetfairBOT.CheckJobStep(logfileComputerName) && !this.BetfairBOT.CheckJobStep(logfileComputerName + "_LOCK"))
{
    bool success = false;

    // add a lock file record in so other processes calling this method know it's locked
    if (this.LockFile(logfileComputerName + "_LOCK", "LOCK"))
    {
        success = this.ArchiveLogFile();
    }

    // unlock whatever happened as the attempt has finished - remove the LOCK file
    this.LockFile(logfileComputerName + "_LOCK", "UNLOCK");

    // if it was successful we add in our actual Job Step record to say it's complete for this computer
    if (success)
    {
        this.LockFile(logfileComputerName, "LOCK");
    }
}
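
The LockFile and CheckJobStep methods themselves are not shown above. As a rough sketch of how they could work against the JOB_STEPS table described earlier - assuming JobStepName and DateStamp columns and using inline SQL where the real service calls stored procedures - they might look something like this:

// Hedged sketch only: assumes a JOB_STEPS table with JobStepName and DateStamp columns
// and that these methods live in a class exposing DefaultConnectionString.
public bool LockFile(string jobStepName, string mode)
{
    // "LOCK" inserts today's record, anything else deletes it
    string sql = (mode == "LOCK")
        ? "INSERT INTO JOB_STEPS (JobStepName, DateStamp) VALUES (@name, GETDATE())"
        : "DELETE FROM JOB_STEPS WHERE JobStepName = @name AND DATEDIFF(day, DateStamp, GETDATE()) = 0";

    try
    {
        using (var conn = new System.Data.SqlClient.SqlConnection(this.DefaultConnectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", jobStepName);
            conn.Open();
            cmd.ExecuteNonQuery();
            return true;
        }
    }
    catch
    {
        // with a unique constraint on the job step name per day, a failed INSERT
        // simply means another machine grabbed the lock first
        return false;
    }
}

public bool CheckJobStep(string jobStepName)
{
    string sql = "SELECT COUNT(*) FROM JOB_STEPS WHERE JobStepName = @name AND DATEDIFF(day, DateStamp, GETDATE()) = 0";

    using (var conn = new System.Data.SqlClient.SqlConnection(this.DefaultConnectionString))
    using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@name", jobStepName);
        conn.Open();
        return (int)cmd.ExecuteScalar() > 0;
    }
}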


I also use database locks because just setting a flag in a global class that handles things like logging or archiving etc isn't going to cut it when new processes creating the class are being spawned all the time.

I can then ensure that when I am trying to archive the log file any calls to output log messages are disabled and the LogMsg method is exited ASAP by checking the this.Locked property.

Otherwise you will run into lots of I/O errors due to the log file being locked by "another process" as you try to archive it.

public Helper()
{
    // get current locked status from database so any concurrent systems have the same value
    this.Locked = this.CheckLoggingDisabled();
}

public void LogMsg(string msg, string debugLevel = "HIGH")
{
    // we are prevented from logging at this point in time from this process
    if (this.Locked)
    {
        return;
    }

    bool doLog = false;

    // if debug level is not the same as or below the system level don't output
    if (this.DebugLevel == "HIGH") // log everything passed to us
    {
        doLog = true;
    }
    // only log medium and important messages
    else if (this.DebugLevel == "MEDIUM" && (debugLevel == "MEDIUM" || debugLevel == "LOW"))
    {
        doLog = true;
    }
    // only log important messages
    else if (this.DebugLevel == "LOW" && (debugLevel == "LOW"))
    {
        doLog = true;
    }
    else
    {
        doLog = false;
    }

    // if doLog then output the message to our log file
}

I tend to wrap code that might fail due to I/O errors in my DB lock code AND multiple TRY/CATCH statements with an increasing Thread.Sleep(30000); wait in-between each failure.

If the process doesn't work the first time, the DB LOCK file is removed and after 5 minutes (or however long your timer is set for) it runs again, until you either stop trying or it eventually succeeds.

I found with my old non DLL related service that the Event Log was full of I/O errors at midnight due to failed attempts to transfer the log file. However with this new outer wrapper of DB locks it works first time no matter how many other processes run the DLL.
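
A stripped-down version of that retry wrapper (the attempt count is arbitrary and ArchiveLogFile stands in for whatever I/O-prone call you are protecting) looks something like this:

// try the I/O-prone operation a few times, sleeping longer after each failure,
// and only give up after the last attempt
bool archived = false;
int maxAttempts = 3;

for (int attempt = 1; attempt <= maxAttempts && !archived; attempt++)
{
    try
    {
        archived = this.ArchiveLogFile(); // the call being protected
    }
    catch (System.IO.IOException)
    {
        // log the failure (e.g via LogMsg) then wait before the next attempt,
        // waiting longer each time e.g 30s, then 60s, then 90s
        if (attempt < maxAttempts)
        {
            System.Threading.Thread.Sleep(30000 * attempt);
        }
    }
}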

Levels of Logging

As you can see in the above LogMsg method I not only pass in the message to be logged but a Debug Level parameter that is either HIGH, MEDIUM or LOW.

I also have a system wide setting that says the level of debug I want to store. This is broken down like so:
  • HIGH = Log everything passed to the LogMsg function. The default value as you can see is set to HIGH so if no parameter is passed it will revert to it anyway.
  • MEDIUM = Important method calls, Return values and other important information such as when a job starts or finishes.
  • LOW = Very important messages only such as SQL Errors, Exceptions and other issues when the code cannot be run.

Testing Connectivity

Along with my service I have a little Windows Forms application that starts with my PC and sits in the system tray. It has a Start and Stop button on it which enables me to stop and start the service from the application.

It also has a couple of labels that tell me information such as my current balance, so I don't have to check Betfair, and whether the API is up and running.

This is done by a timer in the form class that calls a method in the DLL that tests connectivity. It tests whether the Betfair API can be reached as well as if the database server is up and running. It then shows me the status on the form.

Testing API connectivity is done by creating a BetfairAPI class instance which tries logging in with a saved session (I save any session value to a text file so I don't have to keep getting new ones), and ensuring I have an Account and Client object (2 objects needed to run methods on the Betfair API).

This method is also useful if you experience an exception halfway through a method that had been running okay. I have seen this happen on many occasions when I am getting and saving Market or Price data. An exception will suddenly be thrown with an error like:

"The underlying connection was closed", "An error occurred on a receive", "An error occurred on a send", or even a sudden "Object reference not set to an instance of an object".

I have no idea why these errors suddenly pop up during a process that has been running okay for minutes, but what I do is re-call the method if the exception message matches one of a number of error messages I want to retry on.

So what I do is:
  1. All the methods I want to retry on such a failure have a parameter called retry with a default value of FALSE.
  2. Wrapped in a Try/Catch, if an exception is caught I pass the name of the method and the exception to a function called HandleError.
  3. If the error is one I want to retry I check if it's database related or API related and if so I kill existing objects like the Data object or BetfairAPI object, re-set them, then call my TestConnectivity method to ensure everything is set up and working.
  4. I then call a switch statement with my job name and if it is found I retry the method call, passing in TRUE for the retry parameter.

So a TestConnectivity function that can handle lost objects and data connections and re-set them up is ideal not just for checking your application is up and running but for handling unexpected errors and re-setting everything so it works again.
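
A rough sketch of that retry-once pattern is below. The GetLatestPrices method, the IsRetryableError helper and the argument passed to TestDatabaseConnection are illustrative names of mine; the real code routes this through a HandleError method and a switch on the job name, but the flow is the same:

// a job method that retries itself once after resetting connectivity
public bool GetLatestPrices(bool retry = false)
{
    try
    {
        // ... call the API and save the price data here ...
        return true;
    }
    catch (Exception ex)
    {
        // only retry once, and only for error messages known to be transient
        if (!retry && this.IsRetryableError(ex.Message))
        {
            // kill the objects that may have gone stale, then check everything still works
            this.DataAccess = null;
            if (this.TestDatabaseConnection("DEFAULTDATABASE"))
            {
                return this.GetLatestPrices(retry: true);
            }
        }

        this.LastErrorMessage = ex.Message;
        return false;
    }
}

// assumed helper: does the exception message match one we are happy to retry on?
private bool IsRetryableError(string message)
{
    return message.Contains("The underlying connection was closed")
        || message.Contains("error occurred on a receive")
        || message.Contains("error occurred on a send");
}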

Obviously your own TestAPI function will be related to the API or code you need to check for but an example function to test if you have connectivity to your database is below.

Any exception is just logged. The error is also stored in a global property called this.LastErrorMessage so that the Windows Service can access it and write it to the event log and show it on my Windows Form (if open).

 
public bool TestDatabaseConnection(string connectionType)
{
    bool success = false;

    if (this.DataAccess == null)
    {
        this.DataAccess = new SqlDataAccess();
    }

    try
    {
        string connectionString = this.HelperLib.GetSetting.GetSettingValue("DEFAULTDATABASE");

        this.DataAccess.ConnectionString = connectionString;

        string sql = "SELECT TOP 1 1 as DBExists FROM sys.sysDatabases";

        DataTable sqlRecordset = null;
        int numDBs = 0;

        sqlRecordset = this.DataAccess.GetDataTable(sql);

        numDBs = sqlRecordset.Rows.Count;

        // got a record then the DB exists
        if (numDBs > 0)
        {
            success = true;
        }
        else
        {
            success = false;
        }

        sqlRecordset.Dispose();
    }
    catch (Exception ex)
    {
        // store in global property so Windows Service can access and write to event log or show on form
        this.LastErrorMessage = "SQL Error Testing Connectivity: " + ex.Message.ToString();

        // Log error to log file
        this.HelperLib.LogMsg(this.LastErrorMessage);
    }

    return success;
}


Handling Configuration Settings

At first when I tried copying my code from my old service to a DLL I was stuck on the fact that DLL's don't have app.config XML files to hold constants and connection strings etc.

However I read a few articles and it all seemed like overkill to me. Converting the app.config file into an XML file and then using convoluted methods to obtain the values and so on that involved finding out the location of the DLL and then other paths etc.

So I thought why bother?

If you are storing important information such as mail settings or paths in a config file, why not just make things easy and create a simple GetSetting() class that has one method with a switch statement in it that returns the value you are looking for?

Put this in your top most class so the values are always loaded if they don't already exist and you are basically just obtaining hard coded values which is the same as a config file anyway.

For example:


// HelperLib constructor
public HelperLib()
{

 if(this.GetSetting == null)
 {
  this.GetSetting = new GetSetting();

  // get and store values
  if (this.IsEmpty(this.DefaultConnectionString))
  {
   this.DefaultConnectionString = GetSetting.GetSettingValue("DEFAULTDATABASE");
  }
   
 }
}

// GetSetting Class
public class GetSetting
{       
 // empty constructor
 public GetSetting()
 {
    
 }
 
 // get the right value and ensure its upper case in case a mixed case value is passed in
 public string GetSettingValue(string configValue)
 {          
     string settingValue = "";           

     if (!String.IsNullOrWhiteSpace(configValue))
     {
  // ensure upper case
  configValue = configValue.ToUpper();                               
  
  switch (configValue)
  {      
      case "DEFAULTDATABASE":
   settingValue =  "SERVER=BORON;DATABASE=MYDB;uid=myuserlogin;pwd=hu46fh7__dd7;";
   break;                                       
      case "LOGFILE":
   settingValue =  "Logfile.log";
   break;
      case "LOGFILEARCHIVE":
   settingValue =  "LogfileArchived";
   break;    
      /* Mail settings */
      case "MAILHOST":
   settingValue = "generic.smtp.local";
   break;
      case "MAILPORT":
   settingValue = "25"; // port to relay
   break;     
      default:                        
   settingValue =  "";
   break;
  }
     }   

     return settingValue;
 }
}

So these are just a few things I have done to convert my old Windows Service into a DLL that is consumed by a much smaller Windows Service, Console Applications and Windows Form applications.

It shows you how to use a database to handle concurrent access and how important a TestConnectivity method is to ensure that your systems are all up and working correctly.

Hope this has been helpful to at least someone!

Monday 8 September 2014

Rebuilding a Stored Procedure From System Tables MS SQL

Rebuilding a Stored Procedure From System Tables MS SQL

By Strictly-Software

Quite often I find "corrupted" stored procedures or functions in MS SQL that cannot be opened in the visual editor.

The usual error is "Script failed for StoredProcedure [name of proc] (Microsoft.SqlServer.Smo)"

This can be due to comments in the header of the stored procedure that confuse the IDE or other issues that you may not be aware of.

However if you get this problem you need to rebuild the stored procedure or function ASAP if you want to be able to edit it visually again in the IDE.

The code to do this is pretty simple and uses the sys.syscomments table which holds all the text for user-defined objects. We join on to sys.sysobjects so that we can reference our object by its name.

When you run this with the output set to "Results To Grid" you may get anything from one to several rows returned, because syscomments stores the object text in chunks of up to 4,000 characters, and the data isn't formatted usefully for you to just copy, paste and rebuild.

Therefore always ensure you choose "Results To Text" when you run this code.

Make sure to change the stored procedure name from "usp_sql_my_proc" to the name of the function or stored procedure you need to rebuild!


SELECT com.text
FROM sys.syscomments as com
JOIN sys.sysobjects as sys
 ON com.id = sys.id
WHERE sys.name='usp_sql_my_proc'
ORDER BY colid
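
If you need to do this regularly, a small C# helper can run the same query, stitch the chunks back together in colid order and write the result to a .sql file. The connection string and object name below are placeholders:

using System;
using System.Data.SqlClient;
using System.IO;
using System.Text;

class ScriptOutProcedure
{
    static void Main()
    {
        // placeholder connection string and object name - change to suit
        string connStr = "Server=MYSERVER;Database=MYDB;Integrated Security=true";
        string objectName = "usp_sql_my_proc";

        string sql = @"SELECT com.text
                       FROM sys.syscomments as com
                       JOIN sys.sysobjects as sys ON com.id = sys.id
                       WHERE sys.name = @name
                       ORDER BY com.colid";

        var definition = new StringBuilder();

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", objectName);
            conn.Open();

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // each row is a chunk of up to 4,000 characters - stitch them back together
                while (reader.Read())
                {
                    definition.Append(reader.GetString(0));
                }
            }
        }

        File.WriteAllText(objectName + ".sql", definition.ToString());
        Console.WriteLine("Wrote " + objectName + ".sql");
    }
}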

Tuesday 12 August 2014

Why is it every new WordPress update causes me nightmares!

Why is it every new WordPress update causes me nightmares!

By Strictly-Software

Why is it that every time I upgrade to the latest version of either WordPress or Jetpack (a tool built by WP) I end up with a big bag of shit lumped all over me?

I never upgrade on the first release of a new main version, but by 3.9.2 you would think it would be safe to upgrade. However the problem is that even though WordPress may have updated a load of their code, plugin authors may have no idea of the changes until it is too late to make their own.

Unless they have the urge to run round like headless chickens re-writing code and trying to work out what has gone on they might be left with their own plugins not working or causing issues.

The number of times I have updated WordPress or Jetpack to find:
- Statistics not showing; another plugin's dashboard shows instead.
- Omnisearch breaking the site.
- Share buttons causing JS errors due to cross protocol scripting and other issues.
- Cloudflare's "SmartErrors" interfering with the automatic plugin update functions and causing /404 URLs instead of nice "update plugin" options.

I could go on.

This weekend was one of those weekends, as I stupidly decided to upgrade on Saturday morning. Ever since, I have had:
- Emails not being sent out and posted on the site through the Postie plugin.
- My own tagging plugin not tagging for some unknown reason. I think it is a memory issue where, if you have over 5,000 tags, the number of regular expressions run could be a problem. I always now keep the number of tags under that limit with a clean out now and then.
- Incorrectly formatted content in posts.

It has been a nightmare and in future I am thinking of just keeping to the versions of the plugins and WordPress codebase that I know work instead of constantly upgrading, unless there are security issues, which should be easily sorted with some decent .htaccess rules, a firewall, CloudFlare and some tools on your server like DenyHosts and Fail2Ban.

Plus the odd plugin like Limit Login Attempts helps prevent those bad boys knocking on your admin door all the time!

As a WordPress developer actually said to me once whilst chatting over a discussion on the WP forums: "they are not going to re-write the code from the ground up as there are not enough developers to do it". Instead they spend their time making funky widgets and little whizbang "ooh look at that" type tools.

Even a big team like Automattic, with such a widely used program, is stuck with a codebase full of crap code, and if you have looked at the WordPress codebase you will know how crap it is. This comes from the mouth of a WordPress developer as well as myself, after a long discussion on the forums.

Apparently they won't create a new version alongside the old one, whilst carrying on with the current one, due to not having enough developers who know the code well enough to build it from scratch.

It seems to have started off as a small CMS project, like a lot of systems, then grew like a monster out of all control, and now it is unmanageable so to speak.

So I am fed up with debugging other people's plugins, hacking WordPress and trying to work out what is going on with serious analysis of mail / error / access logs. So I might just stick to what is working and safe, plus it keeps me sane!

It seems WordPress themselves have been hacked recently with a mobile porn vector: BaDoink Website Redirect (Malicious Redirections to Porn Websites on Mobile Devices), and I cannot even get to the plugin site at the moment to complain about the 4 things I have on my current list, so maybe their site is down as well.

Who knows. If it wasn't free and such a commonly used tool people wouldn't use it if they knew what code lay beneath the surface!

Tuesday 5 August 2014

Cloudflare SmartErrors Causes 404 Errors On WordPress Plugin Updates

Cloudflare's SmartErrors Causes 404 Errors On WordPress Plugin Updates

By Strictly-Software

The other week I wrote a message on the forum on WordPress about an error I was getting when I tried to do an automatic plugin update from one of my sites: Getting A 404 When I Try To Auto Update A Plugin.

The question was:

Hi
I am using the latest software and I haven't had to update a plugin for a while now but tonight I noticed Jetpack was out of date on one site and not on another (both on same server, same plugins and behind CloudFlare with WP Super Cache - which has been having issues lately??)
Anyway when I try to auto update a plugin on this one site the following happens.
1. The password field is full to the brim with *********** on my other site I just have the same number of **** as the word beneath it (inspect and change type="password" to "text" to prove it.
2. When I hit update it goes straight to a /404 page e.g mysite.com/404
3. It then shows a greenish page that says "sorry we couldn't find what you were looking for" here are some suggestions. A cloudflare page.
4. I have asked Cloudflare about this but I have had problems before with Omnisearch being enabled causing similar issues and also I cannot use any JetPack stats as the URL /wp-admin/admin.php?page=stats just takes me to WP-O-Matics dashboard.
I have checked WP-O-Matics source code and CANNOT see anything about going to that URL.
Is this to do with something WP has done or Cloudflare or both. The password box full to the brim with ****** on one site but not the other suggest something with WP is going on.
Thanks for your help
Rob



Now that I have just gone through Cloudflare's settings to check what I had turned on, I have realised that the site I didn't have the problem on (also behind Cloudflare) didn't have SmartErrors enabled.

The site that didn't let me auto update plugins from WordPress DID have SmartErrors enabled.

I disabled the SmartErrors app and then tried an auto update on the broken site. It worked!

So that is the cause of any problems you may get if you try to update your plugins from within WordPress and get a /404 error and a greenish Cloudflare page with some links to other pages.

I don't know if it's a Cloudflare issue or a WP issue but if SmartErrors are on it's an issue!

Thursday 24 July 2014

Customer Reviews of Strictly AutoTags version 2.9.7

Customer Reviews of Strictly AutoTags version 2.9.7

Here is what people are saying about it!

By Strictly-Software

The new PRO version of Strictly AutoTags, version 2.9.7, is out and you can buy it for just £40 from either my site: www.strictly-software.com/plugins/strictly-auto-tags or on etsy.com.

With Etsy.com I have to constantly check an old email account OR the site for sales or messages.

Sometimes this can cause issues if you are using Etsy's own message system, as I don't get the messages straight away; they don't get emailed to my main account, so I won't know about them immediately.

If you are going to buy the plugin I would prefer you did it from my own site: www.strictly-software.com/plugins/strictly-auto-tags as I get notified straight away. Plus if there are any problems I can help solve them ASAP.

If you cannot find my email address or the contact link doesn't work you can always contact me via the Email me link.


Buy Now


Reviews of Strictly AutoTags

Here is what some of the people who have used my plugin have said about it. A quick Google search will give you more than enough sites talking about it. Here are some of the best reviews.

Manage Your WordPress Tags with Strictly Auto Tags
Tagging was such a chore for me before that I am going to get the paid version of Rob’s Strictly Auto Tags plugin so I can run it with all the extra features and avoid the tagging problems which caused me to get rid of them all before. I did find tags to be a good thing and I do see that they add to traffic and the chance for my blog posts to be found. So, doing away with tags was a good experiment, but I’m bringing them back now. Very glad to have found a plugin to do a lot of the work for me. Sure, I could have ignored all the past posts and just started tagging from here, but that would bug me. I am a bit all or nothing in that way.
http://wordgrrls.com/2014/04/manage-your-wordpress-tags-with-strictly-auto-tags/


I'm so happy I received the paid version. It was a giant goof up and neither of our faults :(.
WOW is all I can say, the extra features in the paid version are just, GREAT!
If you like the free plugin, you will LOVE the paid version!
YAH! 
Dugg Brown
http://wordpress.org/support/topic/paid-version-is-downloading-free-version?replies=5

From BlackHatWorld
This is from a forum about the best tagging plugin to use in WordPress.

Best one is "Strictly auto tags"
Works just like a charm and its hell simple to use.
BlackRat

I find 'strictly auto tags' better than simple or tagpig. Fast and superefficient.
theindiaphile


I am using two plugins at same time. Auto tag plugin and strictly auto tag plugin. I have set some fixed number of tags from both...Auto tag plugin is good as it fetch some keywords from Yahoo and tagthenet while strictly auto tag is really cool...once you will use...you will love it...
chdsandeep

From the BlackHatSEO Forum
http://www.blackhatworld.com/blackhat-seo/blogging/261512-best-auto-tag-plugin-wordpress.html

I need to start out by saying that no automated tool is going to be able to match the abilities of a good editor or content architect. There are tools, however, that will help streamline the process and take a large amount of that potentially heavy load off your back.
My go to solution for this scenario is the Strictly Auto Tags plugin.
Geoffrey Fortier
http://itswordpress.com/tips-tricks/tag-you-are-it-part-3-auto-tagging-for-the-high-volume-blogger/#sthash.q9C3ghFv.dpuf


WordPress Reviews on Strictly AutoTags
Simple, Powerful and Effective... a MUST have!!!
I love how easy it is to auto-generate relevant tags on all posts or just the ones that are not tagged already.
There are some valuable options in the settings area to fine-tune what you want the plugin to do for your particular site. Great work!
I wish more free plugins were this powerful and effective at doing what they are designed to do.
By lazy_sunday, January 30, 2013 for WP 3.5
http://wordpress.org/support/view/plugin-reviews/strictly-autotags

Great Tagging Plugin
Am using this on a sports site and it's picking out player names, team names, organisations etc no problems.

By VATCalculatorPlus, May 22, 2014

Great plugin 
it does what it says. 
thanks for such a nice plugin for tagging... 
By anuragk098, October 28, 2013 for WP 3.7                                                                                                                                                                                                                                                               
User Friendly
User friendly, save lot of time, Increase SEO,INCREASE TRAFFIC
By muqeetsoomro, May 4, 2013
http://wordpress.org/support/view/plugin-reviews/strictly-autotags

Other sites
Top must have WordPress plugins > http://www.justin.my/2011/11/top-must-have-plugins-for-your-wordpress/
Best SEO WordPress plugins > http://robertmstanley.com/tag/strictly-auto-tags/


Now that you have seen some of my reviews I hope that eases your mind about the quality of the plugin, its usefulness and its ability to increase your site's SEO.

However, it works even better when used in conjunction with another one of my PRO plugins, Strictly TweetBOT, which has automatic configuration so that the Strictly TweetBOT plugin is linked to the latest edition of Strictly AutoTags.

This means that when a post is being tagged no tweets are sent until the tagging is completed so that post tags and categories are available to be used as #hashtags in your tweets!

The PRO TweetBOT version has an option to disable this auto configuration feature so that tweets are sent out immediately after an article is posted.

However I don't recommend turning this on unless you are having problems and are willing to always use default hashtags within your tweets.

I use these two plugins in conjunction with each other on all my sites, and with the new PRO Strictly TweetBot's delay options you can now put your new post into a caching plugin and stagger out your tweets to prevent Twitter Rushes.

With the great dashboard feature and "Test Configuration" option it really is a handy tool for autobloggers, and when used in conjunction with Strictly AutoTags it makes AutoBlogging a great way to make your site look like it has regular unique content as well as unique tweets with relevant #hashtags.

Buy Now




Check both plugins out on my site:

Strictly AutoTags
Strictly TweetBOT

Or on Etsy.com

Strictly Software Etsy.com Shop

Monday 21 July 2014

Problems with CloudFlare

Problems with CloudFlare

By Strictly-Software

Recently I moved a couple of my sites behind the free CloudFlare proxy option and set the DNS so it pointed to their servers rather than the 123reg.co.uk ones I had been using.

However, before I did this I tested out whether there was much difference between two set-ups:

1. I used CloudFlare and WP Super Cache.
2. I used WP Super Cache, Widget Cache and WP Minify.

I actually found that the 2nd set-up gave me better results. Why, I don't know.

However, in the end, due to all the spam and the blocking CloudFlare claimed to be able to do, I set the other site up behind it as well.

However, after a while I noticed a few things you might want to be aware of.

1. I had a number of email scripts that sent out thousands of emails and I used a

set_time_limit(5000);

at the top of the file to ensure it didn't get timed out by the Virtual Server's standard 30 second limit.

Also, in-between each email (each of which I appended to a log file) I added a wait command with:

sleep(2);

So that I didn't overkill the server.

However you should be aware that when you use set_time_limit and then call functions like sleep, file_put_contents or file_get_contents, the time spent waiting, accessing files and retrieving data is not counted towards the time limit (on Linux the limit only counts time spent executing the script itself, not time spent sleeping or waiting on I/O).
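
To make that concrete, here is a rough sketch of the kind of mailout loop I mean. This is just an illustration, not my actual script, and the log file path and the get_recipients() helper are made up for the example.

<?php
// Rough sketch of a long-running mailout script (illustration only -
// the log file path and the get_recipients() helper are hypothetical).

set_time_limit(5000); // raise the standard 30 second limit set on the virtual server

$logFile    = '/var/log/mysendemailjob.log';   // hypothetical log file
$recipients = get_recipients();                // hypothetical helper returning an array of addresses

foreach ($recipients as $email) {
    $sent = mail($email, 'Newsletter', 'Hello from my site');

    // Append the result to the log file. Time spent writing the file
    // is not counted towards the execution limit (on Linux).
    file_put_contents($logFile, date('c') . " {$email} " . ($sent ? 'OK' : 'FAILED') . "\n", FILE_APPEND);

    // Wait between emails so the server isn't hammered; sleep() time
    // is also not counted towards the limit.
    sleep(2);
}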

Also, as I was using CloudFlare, my PHP script, which is web based so I can easily call it by hand if I need to, was being requested through the standard domain e.g http://www.mysite.com/mysendemailjob.php.

However CloudFlare, it seems, has its own timeout limit of about 60 seconds which overrides anything you set in Apache or in your PHP file.

I noticed this because my script kept bombing out after around 60 seconds and returned a Cloudflare 524 error which you can read about here.


CloudFlare Timeout Error

So to get round this problem I used the "direct" sub-domain they set up by default to bypass CloudFlare.

I have obviously renamed mine, as you should too for security's sake, but once I called the script through that direct sub-domain instead of http://www.mysite.com/mysendemailjob.php it didn't bomb out any-more and carried on until the end.

Another thing you have to be careful about with CloudFlare is if you have spent ages filling your IP table up with IP addresses that you want banned due to spam, hack attempts or just overuse.

Because all the IP addresses in the Apache Access Log are now CloudFlare addresses, those bans won't be hit and you are now relying on CloudFlare's own security measures to block dangerous IPs.

The same goes for your .htaccess file. If you have banned a whole country's range, say China or Russia (biggest hackers on the earth - apart from the NSA of course), then those ranges won't mean jack, as the user from China will be coming through a CloudFlare proxy IP address to your site, so any IP ban you had on him will now be useless.

The only thing left to do is ban by user-agent; blank agents are a good one and so are very short ones (less than 10 characters, as they are usually gibberish).

I ban most of the standard HTTP libraries like CURL, WGET, WIN HTTP, Snoopy and so on as most script kiddies download a library, and don't even bother changing the user-agent before crawling and spamming. Therefore if someone isn't going to tell me who they really are then they can get a 403!
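
My blocking is done in .htaccess, but if you cannot touch your Apache config you could do roughly the same thing at the top of a PHP script. This is only a sketch of the idea, using the same blank/short/library rules I mention above.

<?php
// Sketch: block blank, very short or known HTTP library user-agents in PHP.
// My own rules live in .htaccess; this is just the equivalent idea.

$agent = isset($_SERVER['HTTP_USER_AGENT']) ? trim($_SERVER['HTTP_USER_AGENT']) : '';

$badLibraries = array('curl', 'wget', 'winhttp', 'snoopy'); // libraries script kiddies rarely rename

$blocked = ($agent === '' || strlen($agent) < 10); // blank or gibberish-short agents

foreach ($badLibraries as $lib) {
    if (stripos($agent, $lib) !== false) {
        $blocked = true;
        break;
    }
}

if ($blocked) {
    header('HTTP/1.1 403 Forbidden');
    exit('403 Forbidden');
}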

So those are a few things to watch out for with CloudFlare.

I know you can get Apache modules that replace the CloudFlare IP with the original user's IP, but if you are on an old Debian Lenny box there is no support for them.

They must be supplying X-Forwarded-For or other headers, as when I did a scan using the bypass URL I got back my original IP, but with a www.mysite.com scan I got back CloudFlare IPs e.g 104.28.25.11 etc.

The only thing you can do, if you cannot take a modern module and reverse engineer it to work on older code, is use the WordPress CloudFlare plugin, which restores the real visitor IP addresses (so tools like Akismet see the correct IP) and means you can still ban them.
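
If you cannot install one of those modules, CloudFlare does pass the original address along in its CF-Connecting-IP header (and in X-Forwarded-For), so a rough PHP work-around is to read that header yourself before logging or banning. A sketch of the idea, not production code, and you should only trust these headers on requests that really did come from CloudFlare's IP ranges:

<?php
// Sketch: recover the real visitor IP when the request has come through CloudFlare.
// Only trust these headers if the request genuinely originated from a CloudFlare IP range.

function get_real_ip() {
    if (!empty($_SERVER['HTTP_CF_CONNECTING_IP'])) {
        return $_SERVER['HTTP_CF_CONNECTING_IP'];
    }
    if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
        // X-Forwarded-For can be a comma separated list; the first entry is the original client.
        $parts = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
        return trim($parts[0]);
    }
    return $_SERVER['REMOTE_ADDR'];
}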

It is a pain, and I am debating whether to return to the days before CloudFlare, where my own security measures meant I banned over 50% of traffic and my server bills didn't go up and up.

"CloudFlare is supposed to save me bandwidth but ever since I have installed it although it claims it has saved me loads my Rackspace bill for bandwidth use has gone up and up!"

So just be careful when using CloudFlare. It may seem like a magic tool, but all those rocket scripts they add to your code are just "async" attributes, and you can get many a plugin to minify your source and compress it on the server without using PHP to do so.

The choice is yours but be warned!

Friday 11 July 2014

Introducing Strictly TweetBOT PRO

Introducing Strictly TweetBOT PRO

By Strictly-Software

Today I created a premium version of my popular WordPress plugin Strictly TweetBOT.

This is the plugin I use on ALL my sites to automatically send out Tweets to various Twitter accounts based on the content of the article.

Buy Now


For example, on my horse racing site UK Horse Racing Tipster I set up multiple TweetBOT accounts that are linked to various Twitter accounts e.g @ukhorseracetips and @ukautobot, and when an article is posted the system uses the words in each account's content analysis box to decide whether to send the Tweet out or not.

You can choose to
-Always send the Tweet.
-Only send it if one or more words are in the article.
-Only send it if ALL of the words are in the article.
-Never send the Tweet if any of the words are in the article.

You can also prevent Tweets being sent if they contain "noise" words. This can be a list of swear words or other words you would never want tweeted.
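
I won't paste the plugin's internals here, but the decision it makes for each account boils down to something like this hypothetical function (the function name and mode values are made up for illustration):

<?php
// Hypothetical illustration of the per-account content analysis decision,
// NOT the plugin's actual code. $mode is one of 'always', 'any', 'all' or 'none'.

function should_tweet($articleText, array $words, $mode, array $noiseWords) {
    $text = strtolower($articleText);

    // Never tweet if a noise word (e.g. a swear word) appears in the article.
    foreach ($noiseWords as $noise) {
        if ($noise !== '' && strpos($text, strtolower($noise)) !== false) {
            return false;
        }
    }

    $matches = 0;
    foreach ($words as $word) {
        if ($word !== '' && strpos($text, strtolower($word)) !== false) {
            $matches++;
        }
    }

    switch ($mode) {
        case 'always': return true;                       // always send the Tweet
        case 'any':    return $matches > 0;                // one or more words found
        case 'all':    return $matches === count($words);  // every word found
        case 'none':   return $matches === 0;              // none of the words found
    }
    return false;
}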

Linking your accounts with Twitter is easy and uses the new Twitter OAuth 1.1 API.

To link an account you set it up, save it, then when the page reloads a link will appear saying "Click Here To Authenticate". When you click it you will be taken to Twitter where you log in and are given a PIN number.

You go back to the plugin admin page and enter the PIN number in the relevant box. Hit save at the bottom and if the details are correct your account will now be linked to Twitter.

If you have already linked a Twitter account then adding another is easy as it will remember your credentials and authenticate it automatically.
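
The plugin handles all of this for you, but if you are curious what that PIN (out of band) flow looks like under the hood, here is a rough sketch using the open source abraham/twitteroauth library. This is not the plugin's own code, and the consumer key and secret are whatever your Twitter application gives you.

<?php
// Sketch of Twitter's PIN based OAuth flow using the abraham/twitteroauth library.
// Illustration only - not the plugin's internal code.

require 'vendor/autoload.php';

use Abraham\TwitterOAuth\TwitterOAuth;

$consumerKey    = 'YOUR_CONSUMER_KEY';    // from your Twitter application
$consumerSecret = 'YOUR_CONSUMER_SECRET';

// Step 1: get a request token using the "oob" (PIN) callback.
$connection   = new TwitterOAuth($consumerKey, $consumerSecret);
$requestToken = $connection->oauth('oauth/request_token', array('oauth_callback' => 'oob'));

// Step 2: send the user off to Twitter to log in and read back a PIN number.
$authUrl = $connection->url('oauth/authorize', array('oauth_token' => $requestToken['oauth_token']));
echo "Visit {$authUrl} and note the PIN\n";

// Step 3: exchange the PIN for a permanent access token.
$pin         = '1234567'; // the PIN the user typed into the admin page
$connection  = new TwitterOAuth($consumerKey, $consumerSecret,
                                $requestToken['oauth_token'],
                                $requestToken['oauth_token_secret']);
$accessToken = $connection->oauth('oauth/access_token', array('oauth_verifier' => $pin));

// $accessToken['oauth_token'] and $accessToken['oauth_token_secret'] are what get
// stored so the account stays linked to Twitter from then on.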

The Admin dashboard will contain information on your last posted tweets and there is a "Test Configuration" button that will test all your accounts, links with Twitter and any problems you may have. Excellent for checking your plugin is set-up correctly.

The best thing about this plugin is that you can use the article's post tags as #HashTags, and with its automatic configuration with Strictly AutoTags it will wait until the tagging is complete before Tweeting, rather than Tweeting on publish, which would mean some articles wouldn't have tags at that point.

This is possible because of the new finished_doing_tagging hook in Strictly AutoTags which is fired once Tagging is complete. Other plugin authors are free to use this hook as well to ensure actions are only carried out AFTER tagging and not on the "onsave" action/event.
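
For example, another plugin could latch onto it with a couple of lines like these. The callback name is made up, and I'm not documenting here what arguments (if any) the hook passes, so treat it as a sketch.

<?php
// Hypothetical example of hooking the finished_doing_tagging action so that
// your own code only runs once Strictly AutoTags has finished tagging a post.

add_action('finished_doing_tagging', 'my_after_tagging_callback');

function my_after_tagging_callback() {
    // Safe to read the post's tags here and tweet, ping, prime a cache etc.
}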


The PRO version of the plugin is only £25 and contains some very useful features especially if you are sending out a lot of Tweets or have an underpowered server. 

It can be bought on my site: Strictly TweetBOT WordPress plugin
Or on Etsy.com in my shop: The Strictly-Software shop.

The PRO version extends the performance features including those which enable you to send an HTTP request to the new article to put it into a cache (depending on the settings of your caching system e.g WP Super Cache, W3 Total Cache) and ensure when the first visitor hits it they get a cached version of the page.

In the PRO version you can do the following:

-Add a delay of X seconds so that after the HTTP request is made no Tweets are sent for a while. This ensures the caching system has time to build and cache the new post.
-Add a query-string to the URL as some caching plugins require special keys or query-string parameters to force the page to be cached.
-Use a special user-agent when making the cache HTTP request. This enables you to track the requests in your Apache Access Log as well as white-listing the user-agent to prevent it from being blocked by .htaccess rules.
-Specify a delay between each Tweet that is sent out from your site. This ensures not too many Tweets hit your account at once and not too much Twitter traffic hits your website all at once. Staggering out your Tweets prevents Twitter Rushes and stops too many new Apache processes from being spawned.
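
As a rough idea of what that cache-priming request boils down to, here is a sketch using WordPress' own HTTP API. The query-string key, user-agent string and 20 second delay are only examples, not the plugin's defaults.

<?php
// Sketch: prime the cache for a new post before any Tweets go out.
// $post_id is assumed to be the ID of the newly published post.

$postUrl  = get_permalink($post_id);
$primeUrl = add_query_arg('force_cache', '1', $postUrl); // example extra parameter some caching plugins want

wp_remote_get($primeUrl, array(
    'timeout'    => 10,
    'user-agent' => 'StrictlyTweetBOT-CacheWarm', // identifiable in the Access Log and easy to whitelist
));

// Give the caching plugin time to build the cached file before the first Tweet is sent.
sleep(20);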

Remember when one Tweet is sent out you will get at least 50 requests to the new post immediately from BOTS who have found the link on Twitter.

This is called a Twitter Rush and can cause major performance problems especially if you are sending out 6+ tweets (6 * 50 = 300 concurrent requests). You can read more about Twitter Rushes here.

Any messages back from Twitter such as posting duplicate Tweets or API errors will be saved and listed in your dashboard so that you can view the status of your last post and all the accounts it tried to tweet to. It will tell you whether the account matched the content analysis or was blocked and the result of any "sleep" or "caching" actions.

The PRO version also allows you to:

-Specify the maximum length a Tweet message can ever have e.g 137. This will ensure all Tweets are sent as sometimes when Tweets are exactly 140 characters long they still get rejected by Twitter.

-Turn off the link between Strictly AutoTags and Strictly TweetBot. If you are having issues with the combination OR just want Tweets to be fired ASAP on publishing whether or not tagging is complete you could turn this option on.

Reasons might be debugging, or the use of categories or default hashtags instead of post tags in your Tweets. If you don't need to use the tags against a post then there is no need to wait in the first place, so the link between the two plugins can be turned off.


AutoBlogging - Automatic Posting On WordPress

Combining two of my plugins, Strictly TweetBOT and Strictly AutoTags, makes a powerful force for any auto-bloggers out there.

If you are posting your articles by importing them from a feed then these plugins make your site a diamond in the dust.

Not only do you have to do nothing, but when your feed importer inserts a new article the Strictly AutoTags plugin will do its job, making your article a custom piece of work by:
-Basic content spinning by removing old HTML formatting such as B, I, FONT and SPAN tags.
-Wrapping important tagged content in strong tags to highlight them and tell search engine BOTs like GoogleBOT and YSLURP they are the important words in your article.
-Converting textual links like www.ukhorseracingtipster.com into real clickable links e.g www.ukhorseracingtipster.com.
-Converting a certain number of important tags into links to their relevant tag page.
-Adding rel="nofollow" to existing links in your article that don't already have them as well as those converted to links

Plus the SEO benefits are amazing (as my sites can testify) as the Premium version of the plugin allows you to find certain words and tag others. For example you could find words like al-Qaeda, ISIS, Taliban but add the tag Terrorism. You can add as many of these "Tag Equivalents" as you want.

Also for your "Site Keywords", the words you want associated with your site, for example for my racing site it would be Horse Racing, Racing, Tips, Betting etc, then you can set them to be ranked higher than any other words when it comes to relevancy.

And of course you can specify that words in the Title, H1-6 tags, Anchors, Strong Tags and other content be ranked higher than words outside special format tags.

So the AutoTag plugin does its magic (read more here) and then, once it has completed and provided you haven't turned off the link between the two plugins, the TweetBOT goes to work sending Tweets out to all relevant Twitter accounts.

The Strictly TweetBOT PRO version is even better than the free one because it allows you to stagger your tweeting so the Tweets don't all get blasted out at once causing a Twitter Rush, and because you can format each account differently you could have two TweetBOT accounts both going to the same Twitter account but with totally different messages.

Plus you can use different tweet shortening functions for each e.g Tweet Shrink or Text Shrink.

For example, if the article was about a horse race at Royal Ascot titled "Kingman destroys the pack at Royal Ascot" and the first account was using categories (Ascot, Racing News, Horse Racing) as hash tags with the format:

Another Ascot news story %title% which you can read about here %url% %hashtags%

You would get:

Another #Ascot news story Kingman destroys the pack at Royal Ascot which u can rd about here bit.ly/34f53 #RacingNews #HorseRacing

And another account that uses post tags (Kingman, Night of Thunder, Royal Ascot, Ascot and many more) as #hashtags with the format:

Racing news from UK Horse Racing Tipster %url% %title% %hashtags%

Would give you this tweet.

Racing news from UK Horse Racing Tipster bit.ly/34f53 #Kingman destroys the pack at Royal #Ascot #RoyalAscot #NightofThunder #JohnGosden
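
Under the hood a tweet format is really just a template with the placeholders swapped in. As a purely hypothetical illustration (not the plugin's actual code) of how %title%, %url% and %hashtags% could be expanded:

<?php
// Hypothetical expansion of a tweet format string - not the plugin's actual code.

function build_tweet($format, $title, $url, array $tags) {
    // Turn each tag into a #hashtag with the spaces removed.
    $hashtags = implode(' ', array_map(function ($tag) {
        return '#' . preg_replace('/\s+/', '', $tag);
    }, $tags));

    return str_replace(
        array('%title%', '%url%', '%hashtags%'),
        array($title, $url, $hashtags),
        $format
    );
}

echo build_tweet(
    'Racing news from UK Horse Racing Tipster %url% %title% %hashtags%',
    'Kingman destroys the pack at Royal Ascot',
    'bit.ly/34f53',
    array('Kingman', 'Royal Ascot', 'Night of Thunder')
);
// Racing news from UK Horse Racing Tipster bit.ly/34f53 Kingman destroys the pack at Royal Ascot #Kingman #RoyalAscot #NightofThunder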

Putting a time delay of 20 seconds in between each tweet would mean that the first tweet would get sent to your account causing 50+ BOTS and visitors to come and hit your new page (a Twitter Rush) and then 20 seconds later another tweet would be sent causing another lot of visitors. Staggering the tweets prevents Twitter Rushes.

Another new feature is the ability to send an HTTP GET request to your new post hopefully causing it to be cached by WP Super Cache / W3 Total Cache or any other caching plugin.

You have the ability to add another time delay after this request to give the caching plugin time to build the file as well as passing special parameters in the querystring (such as special keys or values required by caching plugins to force a cache).

You can also use a special user-agent to make this request so that you can prevent the request being blocked (for example if you ban blank user-agents like I do) or to identify the request in your log files.

If you don't want to use post tags as #HashTags you can use the article's categories or even specify default hash tags. The system will take the list of possible tags and then order them by size and try to fit as many into the Tweet as possible.

It will now even scan the title of the tweet looking for tags contained within the tweet itself and add a # in front of the word, as this saves room and allows more hash tags to be used in your %hashtags% parameter.
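
Again, purely as an illustration of the idea and not the plugin's actual code (I have assumed shortest-first ordering here, which may not be exactly what the plugin does), fitting as many tags as possible into the character budget could look like this:

<?php
// Illustration: squeeze as many #hashtags as possible into a tweet.
// Assumes shortest-first ordering; the plugin's real ordering rules may differ.

function append_hashtags($tweet, array $tags, $maxLength = 140) {
    // Sort the tags shortest first so more of them have a chance of fitting.
    usort($tags, function ($a, $b) {
        return strlen($a) - strlen($b);
    });

    foreach ($tags as $tag) {
        $hashtag = ' #' . preg_replace('/\s+/', '', $tag);
        if (strlen($tweet . $hashtag) <= $maxLength) {
            $tweet .= $hashtag;
        }
    }
    return $tweet;
}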


Buy Strictly TweetBOT PRO

So if you want to aid your WordPress site's performance consider buying the Strictly TweetBOT plugin from my site or Etsy.com, or if you download the free version please consider donating some money.

If you want to enhance your auto-blogging capabilities then combining the premium versions of both Strictly TweetBOT PRO and Strictly AutoTags Premium Version will do your site wonders as well as saving you time!

If everyone donated just a single pound for every plugin they had downloaded from WordPress then I would have made hundreds of thousands of pounds by now and could spend my time developing plugins full time AND FOR FREE!

You can also buy a voucher from Etsy.com for £15 that will let you hire me to set the plugin up for you if you are having difficulties or are not a WordPress or Twitter expert. All I would need is access to your admin area to do the work.

Also if you haven't checked lately Strictly AutoTags has a premium version which you can buy for £40 and has LOTS more features than the free WordPress version. It also can be bought with a set-up voucher that you can purchase from Etsy.com.

Also, please visit my facebook.com/strictlysoftware fan page and "like" it if you could. Leave comments and pass it on to your friends and fellow WordPress developers.



Buy Now