Tuesday, 16 July 2013

MySQL Server won't restart

Today I went to restart MySQL from my SSH console with the following command:

/etc/init.d/mysql restart

However, even though the database server stopped, it wouldn't start back up.

I tried opening another console and running the status command.

/etc/init.d/mysql status

But this just told me it was stopped and a start command kept failing.

Even when I went into the Virtualmin panel that manages my virtual server the service wouldn't restart.

I dug into the services and databases and tried accessing a database from Virtualmin, only to see a message saying the system couldn't retrieve a list of databases. Further digging gave me this error:

Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

Now I knew I had changed some settings in the my.cnf configuration file the other day for performance tuning. So I searched the web and found that I had added a directive that wasn't supported by my older version, i.e. MySQL 5.0.51.

The directive in question was:

skip-external-locking

Because the Java file manager had crashed I couldn't access the file to edit it easily.

I quickly ran a:

chmod 777 /etc/mysql/my.cnf

command to allow me to edit the file over FTP and then I swapped the new directive for the older one which my version of MySQL supported:

skip-locking

I copied the file back and then hey presto a start command got the server back up and running:

/etc/init.d/mysql start

I then made sure to chmod the file back so it couldn't be written to by the website.
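
For example, something like this restores sensible permissions (644 is a typical choice for my.cnf):

chmod 644 /etc/mysql/my.cnf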

However on checking my website I only got to see the theme but NO articles.

It was a Wordpress site and the problem was probably WP-Super-Cache caching a page without data. I needed to run a REPAIR command on my wp_posts table to ensure all posts were visible again.

This is something I have seen many times before with hard reboots. The system comes back up but no articles appear. I always REPAIR and OPTIMIZE my wp_posts and wp_postmeta tables to rectify this.
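
If you want to run the same fix from the MySQL console it goes something like this (assuming the default wp_ table prefix):

REPAIR TABLE wp_posts;
REPAIR TABLE wp_postmeta;
OPTIMIZE TABLE wp_posts;
OPTIMIZE TABLE wp_postmeta;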

This obviously locked the database up, as well as consuming 99% CPU whilst it ran - something which really annoys me, but afterwards the site was working.

So if you have been performance tuning your own MySQL database make sure you are not adding directives that your server version doesn't support. A failed restart is a sure sign of an unsupported configuration directive.

Wednesday, 10 July 2013

Apache Performance Tuning BASH Script

BASH Script to tune Apache Configuration Settings

As you might know a lot of the time I think the LAMP / Wordpress combo is a big bag of shite.

There are so many configuration options, at so many different levels, that need tuning to get optimal performance that it is a nightmare to find the right information. There are also too many people offering various solutions for Wordpress / Linux / Apache / MySQL configuration.

Different people recommend different sizes for your config values and just trying to link up server load with page/URL/script requests to find out the cause of any performance issue is a nightmare in itself.

I would have thought there would be a basic tool out there that could log server load, memory and disk swapping over time and then link that up with the MySQL slow query log and the Apache error AND access logs, so that whenever you had issues you could easily tell what processes were running, which URLs were being hit and how much activity was going on, and so identify culprits for tuning. I have even thought of learning PERL just to write one - not that I want to!

Even with all the MySQL tuning possible, caching plugins installed and memory limits on potentially intensive tasks it can be a nightmare to get the best out of a 1GB RAM, 40GB Virtual Server that is constantly hammered by BOTS, Crawlers and humans. I ban over 50% of my traffic and I still get performance issues at various times of the day - why? I have no FXXING idea!

Without throwing RAM at the problem you can try and set your APACHE values in the config file to appropriate values for your server and MPM fork type.

For older versions of Apache the Prefork MPM - a non-threaded, pre-forking Multi-Processing Module - is well suited as long as the configuration is correct. However it can consume lots of memory if not configured properly.

For newer versions (2+) the Worker MPM is better as each thread handles one connection at a time, which is considered better for high traffic servers due to the smaller memory footprint. However getting PHP working under Worker apparently needs a lot of configuration and you should read up about it before considering a change.

Read about Apache performance tuning here: Apache Performance Tuning.

To find out your current apache version from the console run

apache2 -v OR httpd -v (depending on your server type; if you run top and see apache2 threads then use apache2, otherwise use httpd)

You will get something like this.

Server version: Apache/2.2.9 (Debian)
Server built: Feb 5 2012 21:40:20

To find out your current module configuration from the console run

apache2 -V OR httpd -V

Server version: Apache/2.2.9 (Debian)
Server built: Feb 5 2012 21:40:20
Server's Module Magic Number: 20051115:15
Server loaded: APR 1.2.12, APR-Util 1.2.12
Compiled using: APR 1.2.12, APR-Util 1.2.12
Architecture: 64-bit Server
Server MPM: Prefork
  threaded: no
  forked: yes (variable process count)
etc etc etc...

There are lots of people giving "suitable" values for the various Apache settings, but one thing you need to do, if you run top and notice high memory usage and especially high virtual memory usage, is try to reduce disk swapping.

I have noticed that when Apache is consuming a lot of memory your virtual (disk based) memory usage will be high, and you will often experience either high server loads and long wait times for pages to load OR very low server loads e.g. 0.01-0.05, an unresponsive website and lots of "MySQL server has gone away" messages in your error log file.

You need to optimise your settings so that disk swapping is minimal, which means trying to optimise your MySQL settings using the various MySQL tuning tools I have written about, as well as working out the right size for your Apache configuration values.

One problem is that if you use up your memory by allowing MySQL to have enough room to cache everything it needs then you can find yourself with little left for Apache. Depending on how much memory each process consumes you can easily find that a sudden spike in concurrent hits uses up all available memory and starts disk swapping.

Therefore, apart from MySQL using the disk to carry out or cache large queries, you need to find the right number of clients to allow at any one time. If you allow too many and don't have enough memory to contain them all then the server load will go up, people will wait, and the amount of disk swapping will increase and increase until you enter a spiral of doom that only a restart fixes.

It is far better to allow fewer connections and serve them up quickly with a small queue and less waiting than to open more than your server can handle and create a massive queue with no hope of ending.

One of the things you should watch out for is Twitter Rushes caused by automatically tweeting your posts to Twitter accounts, as this can cause 30-50 BOTS to hit your site at once. If they consume all your memory it can cause a problem that I have written about before.

Working out your MaxClients value

To work out the correct number of clients to allow you need to do some maths and to help you I have created a little bash script to do this.

What it does is find the size of the largest Apache process and then restart Apache so that the correct amount of free memory can be measured.

It then divides that free memory by the Apache process size. The result should be roughly the right value for your MaxClients.
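
For example, with purely illustrative numbers: if your largest Apache process is 25MB and 700MB is free once Apache has been stopped, then MaxClients should be around 700 / 25 = 28.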

It will also show you how much disk swapped or virtual memory you are using as well as the size of your MySQL process.

I noticed on my own server that when it was under-performing I was using twice as much disk space as RAM. However when I re-configured my options and gave the system enough RAM to accommodate all the SQL / APACHE processes then it worked fine with low swapping.

Therefore if your virtual memory usage is greater than the size of your total RAM, e.g. if you are using 1.5GB of hard disk space as virtual memory but only have 1GB of RAM, it will show an error message.

Also, as a number of Apache tuners claim that your MinSpareServers should be 10-25% of your MaxClients value and your MaxSpareServers 25-50% of it, I have included the calculations for these settings as well.


#!/bin/bash
echo "Calculate MaxClients by dividing free memory by the size of the biggest Apache process"
# work out which Apache binary this distro uses
if [ -e /etc/debian_version ]; then
 APACHE="apache2"
elif [ -e /etc/redhat-release ]; then
 APACHE="httpd"
fi
# largest Apache process RSS in MB (column 8 of ps -yl output is RSS in KB)
APACHEMEM=$(ps -aylC $APACHE |grep "$APACHE" |awk '{print $8}' |sort -n |tail -n 1)
APACHEMEM=$(expr $APACHEMEM / 1024)
# largest MySQL process RSS in MB
SQLMEM=$(ps -aylC mysqld |grep "mysqld" |awk '{print $8}' |sort -n |tail -n 1)
SQLMEM=$(expr $SQLMEM / 1024)
echo "Stopping $APACHE to calculate the amount of free memory"
/etc/init.d/$APACHE stop &> /dev/null
TOTALFREEMEM=$(free -m |head -n 2 |tail -n 1 |awk '{print $4}')
TOTALMEM=$(free -m |head -n 2 |tail -n 1 |awk '{print $2}')
SWAP=$(free -m |head -n 4 |tail -n 1 |awk '{print $3}')
MAXCLIENTS=$(expr $TOTALFREEMEM / $APACHEMEM)
MINSPARESERVERS=$(expr $MAXCLIENTS / 4)
MAXSPARESERVERS=$(expr $MAXCLIENTS / 2)
echo "Starting $APACHE again"
/etc/init.d/$APACHE start &> /dev/null
echo "Total memory $TOTALMEM"
echo "Free memory $TOTALFREEMEM"
echo "Amount of virtual memory being used $SWAP"
echo "Largest Apache process size $APACHEMEM"
echo "Amount of memory taken up by MySQL $SQLMEM"
# warn if more swap is in use than the total size of RAM
if [ $SWAP -gt $TOTALMEM ]; then
      ERR="Virtual memory is too high"
else
      ERR="Virtual memory is ok"
fi
echo "$ERR"
echo "MaxClients should be around $MAXCLIENTS"
echo "MinSpareServers should be around $MINSPARESERVERS"
echo "MaxSpareServers should be around $MAXSPARESERVERS"


If you get 0 for either of the last two values then consider increasing your memory or working out what is causing your memory issues. Either that or set your MinSpareServers to 2 and MaxSpareServers to 4.

There are many other settings you can find appropriate values for, but adding indexes to your database tables and ensuring your database table/query caches can fit in memory rather than being swapped to disk is a good way to improve performance without resorting to yet more caching at all the various levels Wordpress/Apache/Linux users love adding.
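
As a rough sketch, on a small 1GB server the relevant my.cnf cache settings might look something like this - the numbers are purely illustrative and need testing against your own workload with the tuning tools mentioned above:

# illustrative cache sizes for a small 1GB server running MySQL 5.0 - tune to your own workload
key_buffer = 64M
table_cache = 512
query_cache_size = 32M
query_cache_limit = 1M
tmp_table_size = 32M
max_heap_table_size = 32M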

If you do use a caching plugin for Wordpress then I would recommend tuning it so that it doesn't cause you problems.

At first I thought WP SuperCache was a solution and pre-caching all my files would speed things up due to static HTML being served quicker than PHP.

However I found that the pre-cache stalled often, caused lots of background queries to rebuild the files which consumed memory and also took up lots of disk space.

If you are going to pre-cache everything then hold the files as long as possible as if they don't change there seems little point in deleting and rebuilding them every hour or so and using up SQL/IO etc.

I have also turned off gzip compression in the plugin and enabled it at Apache level instead. It seems pointless doing it twice, and compressing in PHP will use more resources than letting the server do it.
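
For reference, enabling it at Apache level is just a case of something like this in your Apache configuration (assuming mod_deflate is loaded):

<IfModule mod_deflate.c>
 # compress text based content once at the server level rather than in PHP
 AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
</IfModule>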

The only settings I have enabled in WP-Super-Cache at the moment are:


  • Don’t cache pages with GET parameters. (?x=y at the end of a url) 
  • Cache rebuild.
  • Serve a supercache file to anonymous users while a new file is being generated. 
  • Extra homepage checks. (Very occasionally stops homepage caching)
  • Only refresh current page when comments made. 
  • Cache Timeout is set to 100000 seconds (why rebuild constantly?)
  • Pre-Load - disabled.

Also I have left the Rejected User Agents box blank as I see no reason NOT to let BOTS like googlebot create cached pages for other people to use. As bots will most likely be your biggest visitors it seems odd not to let them create cached files.

So far this has given me some extra performance.

Hopefully the tuning I have done tonight will help the issue I am getting of very low server loads, MySQL gone away errors and high disk swapping. I will have to wait and see!

Thursday, 13 June 2013

Changes to Twitter API

Changes to Twitter API 1.1

As Twitter has changed their API from 1.0 to 1.1, which is totally reliant on OAuth and JSON, I have had to take down the links to the Twitter Hash Tag Scanner as it was reliant on the old search RSS feeds which are no longer available.

I have had a quick look but it will take some time to re-develop and will involve adding in consumer keys, access keys and so on. You would probably get blocked after a few scans anyway, as you would have to log in to your own Twitter account to make the scans and their rate limits would apply.

Therefore I don't think a new version will be forthcoming, and to anyone who has purchased a previous version I can only apologise. It's a shame as I wanted to extend it, but if I cannot make thousands of scans without being blocked the application just won't work with Twitter's new API.

As for the Strictly TweetBot Wordpress plugin I have updated this to use the new API and I have tested it on a couple of my own blogs and it seems to be working.

Today was the switch off day so if you were using the plugin you would have noticed either:
  • No tweets being sent out when you posted.
  • In the Twitter message console lots of error messages saying Tweet not sent or Authentication error.
However if you upgrade to version 1.1.3 then this should fix the problem. 

You can get the latest version from Wordpress.

Also I am pissed off!

And I only just wrote a Twitter Direct Message Responder in PHP the other day which was working fine up until tonight as well!

Damn bloody Twitter.

Even with me being logged in and authenticated I was trying to get a list of my followers and for some reason I kept getting a message like this:

{"errors":[{"message":"Bad Authentication data","code":215}]}

I did write a post to the developer discussion boards on Twitter but as always I have cracked the problem before I got a response.

Basically I am using a very common Twitter / OAuth class which is used by my Twitter plugin and by many other plugins as well.

To fix the problem I had to do the following:

Change line 29 in the Twitter class to:

 /* Set up the API root URL. */
 public $host = "https://api.twitter.com/1.1/";


This resolved the issue in my own Wordpress plugins and got normal tweets sending again, but to get my Direct Message Responder code working I needed to do one more thing.

Whereas before I was making use of a simple file_get_contents call to an XML feed, which Twitter has now abandoned in favour of JSON, I had to change this to use the built-in HTTP request functions in the Twitter class e.g

$response = $oauth->get($followers_url);

This returns 20 of your most recent followers (I have not worked out how to get more yet) as a JSON object.

You can either loop through the nested objects (see the sketch below) or you could use json_encode to convert the object to a string and run a simple regex to just get a list of screen_names e.g

$body = json_encode($response);

preg_match_all('@"screen_name":"([\s\S]+?)",@i',$body,$matches,PREG_SET_ORDER);

And that solved the problem!
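
For reference, the loop approach looks something like this - a sketch that assumes the decoded response has a users array containing each follower, which is how the v1.1 followers/list endpoint structures its JSON:

// walk the nested objects instead of using a regex
$screen_names = array();
if (isset($response->users)) {
 foreach ($response->users as $user) {
  $screen_names[] = $user->screen_name;
 }
}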

Monday, 20 May 2013

Some clever code for SEO that won't annoy your users

Highlighting words for SEO, turning them off for the users

You might notice in the right side bar I have two options under the settings tab "Un-Bold" and "Re-Bold".

If you try them out you will see what the options do. Basically unbolding any STRONG or BOLD tags or re-bolding them again.

The reason is simple. Bolding important words in either STRONG or BOLD tags is good for SEO. Having content in H1 - H6 tags is even better, and so are links - especially if they go to relevant and related content.

However, I don't claim to be the first person to start bolding important keywords and long tail sentences for SEO purposes but I was one of the first to catch on that the benefits for SEO were great.

Too much bolding and it looks like spam; too little and you might not get much benefit. But you have two areas to cater for.

1. The SERP crawlers (Googlebot, BingBot, Yandex etc etc) who see the original source code on the page. When they do they will just see words wrapped in normal STRONG and BOLD tags (See for yourself).

2. However if a user doesn't like the format and mix of bolded and non bolded wording then they can use the settings to add a class to all STRONG and BOLD tags that basically takes away the font-weight of the element. You would only see this in the generated source code. Running the "Re-Bold" function after the first "Un-Bold" will just remove the class that took away the font-weight in the first place, returning the element to its normal bolded state.

Therefore the code is aimed for both BOTS and users and you can see a simple test page on my main site here: example to unbold and rebold with jQuery.

I have used jQuery for this only because it was simple to write however it wouldn't be too hard to rewrite with plain old JavaScript.

Another extension, which I have lost since updating this blog format but which would be easy to add, is the use of a JavaScript created cookie to store the user's last preference so that they don't have to keep clicking the "un-bold" option when they visit the site - see the sketch below.
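
A minimal sketch of that cookie idea, assuming the unbold() function shown later in this post (saveBoldPref and loadBoldPref are hypothetical helper names):

<script>
/* store the user's last choice in a cookie for a year */
function saveBoldPref(pref){
 var d = new Date();
 d.setTime(d.getTime() + 365*24*60*60*1000);
 document.cookie = "boldpref=" + pref + "; path=/; expires=" + d.toUTCString();
}
/* on page load check the cookie and re-apply the un-bold setting if that was the last choice */
function loadBoldPref(){
 var match = document.cookie.match(/(?:^|; )boldpref=([^;]*)/);
 if(match && match[1] == "unbold"){
  unbold();
 }
}
</script>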

As Blogger won't let you add server side code to the blog you will need to do it all with JavaScript, but with the new Blogger layout (which I love by the way - unlike Google+) it is easy to add JavaScript (external and internal) plus CSS sections and link blocks to control the actions of your functions.

An example of the code is below and hopefully you can see how easy it is to use.

First I load in the latest version of jQuery from Google.

Then I use selectors to ensure I am only targeting the main content part of the page before I add or remove classes to STRONG or BOLD tags.

<style type="text/css">
.unbold{
 font-weight:normal;
}
</style>

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>

<script>
function unbold()
{
 $(".entry-content").each(function(){  
  $("strong",this).addClass("unbold");
  $("b",this).addClass("unbold");
 });
}

function bold()
{
 $(".entry-content").each(function(){
  $("strong",this).removeClass("unbold");
  $("b",this).removeClass("unbold");
 });
}
</script>

So not only are you benefiting from SEO tweaks but you are letting your users turn it off if they feel it's a bit too much. Hey Presto!

Saturday, 18 May 2013

Why I hate the new Google+ API

I absolutely hate the new Google+ API

Yes, Google+ has had a revamp, and if you are not already on it then you won't know what the old version was like if you join now.

To me it's as if someone has read too many books on the jQuery effects library and basically orgasmed code across the API.

If you go to type a new status message into a box the whole page shifts round so that your box moves to the centre of the screen and the rest of the messages and segments of the page do a little jig around it so that you are supposed to go "wow".

Not me. Too much API Jizz is something I hate. 

Not only does it repeatedly turn my PC into a helicopter as the CPU rises and falls like a coke head on the lash, but it is just too much for my ageing eyes.

It really seems to me as if someone is showing off by writing their "funky" API code. Hey boss look what I can do with a shit load of JavaScript that takes ages for all the page segments to load but makes non techies go "oooh" as they see it in action.

Whilst an API should be friendly and easy to use there is nothing "useful" about the whole screen moving around just so your current type box is in the middle of the screen.

Why not just put the "new message" box in the middle to start with?

Not only that, but the number of times I go to reply to a conversation down the right hand side and someone I have never seen before pops up in a box on top of the place I am trying to write is beyond annoying.

It means not only can I not hit the send button, but sometimes, even if I can find a way to get rid of the annoying box (and that's not 100% of the time), the message I was writing disappears!

I know writing the whole page in JavaScript stops (or at least limits) script kiddies from scraping easily, but there really is a limit. Personally I just think Google+ has crossed it, and there was nothing too wrong with their old API.

What do you think?


Tuesday, 14 May 2013

Handling unassigned local variable errors with struct objects in C#

Handling non assigned struct objects in C#

If you have ever used structs and had "use of unassigned local variable" errors from your editor, i.e. Visual Studio, then there is a simple solution.

The problem comes about because the compiler is not clever enough to realise that the struct object will always be initialised when used.

This is usually because the struct object is initialised within an IF statement or other code branch which makes the compiler believe that a similar situation to the "unreachable code" error has been detected.

As the compiler cannot tell for certain that the struct object will always be initialised before it gets used, it will raise a compile error.

In Visual Studio it will usually show up with a red line under the code in question with the error message "use of unassigned local variable ..."

Here is a simple example where the struct object is populated with a method and starts off in the main constructor method unassigned.

However because of the nature of the code and the fact that on the first loop iteration oldID will never be the same as currentID (as oldID starts off as 0 and currentID as 1) then the IF statement will always cause the this.FillObject method to run on each iteration.

Therefore the myvar variable which is based on a struct called myStructObj will always get populated with new values from the loop.

However the compiler cannot tell this from the code and will raise the "use of unassigned local variable myvar" error when I try to pass the object as a parameter into the this.OutputObject(myvar) method which just outputs the current property values from the object.
using System;

public class Test
{

 /* example of a constructor that the compiler believes won't always assign the struct object, even though due to the if statement it always will */
 public Test()
 {

  myStructObj myvar;
  int oldID = 0; 

  /* just a basic loop from 1 to 9 */
  for(int currentID = 1; currentID < 10; currentID++)
  {
   /* as oldID starts as 0 and currentID starts as 1, on the first loop iteration we will always populate the struct object with values */
   if(oldID != currentID)
   {
    /* populate our struct object using our FillObject method */
    myvar = this.FillObject(currentID, "ID: " + currentID.ToString());

    oldID = currentID;
   }

   /* try to pass our struct to a method to output the values - this is where we get the red line under the myvar parameter being passed into the OutputObject method e.g. "use of unassigned local variable myvar" */
   this.OutputObject(myvar);
  }

 }

 /* Simple method to output the properties of the object to the console */
 private void OutputObject(myStructObj myvar)
 {
  Console.WriteLine(myvar.prop1);
  Console.WriteLine(myvar.prop2);
 }

 /* Simple method to populate the struct object with an integer and a string value for its two properties */
 private myStructObj FillObject(int val1, string val2)
 {
  myStructObj myvar = new myStructObj();

  myvar.prop1 = val2;
  myvar.prop2 = val1;

  return myvar;
 }

 /* my struct object definition - using non nullable types */
 public struct myStructObj
 {
  public string prop1;

  public int prop2;
 }
}

Solution to use of unassigned local struct variable

The solution is either to always initialise the object before you start the loop or to just use the default keyword to ensure your struct variable is always set up with default values.

Example Fix

myStructObj myvar = default(myStructObj);

This will get rid of those annoying red lines and use of unassigned local variable errors.
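
The other option mentioned above - initialising before the loop - works just as well, since every struct has an implicit parameterless constructor:

myStructObj myvar = new myStructObj();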

If the type in question is a value type then default gives you an instance with every field zero-initialised, and if it's a reference type you get a null that you can then test for before using it.

Simples!

Tuesday, 7 May 2013

Internet Explorer virus used to attack US nuclear weapons researchers

Internet Explorer virus used to attack US nuclear weapons

By Dark Politricks

From the popular alternative news site darkpolitricks.com comes the news that the "most popular browser in the USA - yes IE 8!" has been used by hackers to infiltrate the computers of US nuclear weapons researchers.

Apparently zero-day exploits were used, as well as a virus planted on a popular website frequented by members of the nuclear weapons industry.

The hack was only discovered after an unknown number of computers became infected with a backdoor Trojan that was reportedly installed on the machines of web surfers who used IE 8 to navigate to a specific page on the US Department of Labor website.

"The Department of Labor site was rigged to redirect users to another site that infected computers with an iteration of the infamous "Poison Ivy" Trojan, which was able to avoid detection by all but two major anti-virus products,” Ben Weitzenkorn wrote Monday for TechNews Daily."
According to Microsoft, "The vulnerability may corrupt memory in a way that could allow an attacker to execute arbitrary code in the context of the current user within Internet Explorer."

Why IE 8 is still the most popular browser in the USA I have no idea. Have they not heard of Chrome, FireFox or even IE 9?

We all know IE 6 was a danger to itself, its users and everyone else around it.

This was due to the sheer number of security holes in its code and the large number of hacks that both CSS designers and JavaScript developers had to use to make websites work in it - hacks which gave rise to the many frameworks we are now left with. All just to make a standards compliant webpage work in IE as well as normal browsers.

Just think: if it wasn't for IE 5 and IE 6 we probably would never have heard of jQuery, Prototype, addEvent functions, browser detection hacks like window.opera, or user-agent strings that are so full of shit they have lost all meaning to anyone.

You can view the full article US nuclear weapons researchers targeted with Internet Explorer virus at the popular #altnews site darkpolitricks.com.