Monday, 13 October 2014

Debugging Issues with Strictly AutoTags or Strictly TweetBOT


By Strictly-Software

You can purchase the pro versions (and download the free versions with fewer features) for these plugins here.

http://www.strictly-software.com/plugins/strictly-tweetbot
http://www.strictly-software.com/plugins/strictly-auto-tags

This article is about debugging issues you may occasionally get with these two plugins.

As my code hasn't changed, I usually find that if the plugins stop working the issue is down to one of these scenarios.
  • Another plugin causing issues.
  • WordPress code / updates (bugs). I usually wait until the first point release before upgrading, having had too many problems upgrading to 3.7, 3.8 and 3.9, so this time I am waiting until 4.1 is released before upgrading.
  • Not enough memory on your server. Lots of tags equals lots of regular expressions to run which means memory usage.
  • Too many tags for the post save job to loop through on save. Trim them down regularly.
  • If you are on a shared server / VPS, another user may be taking up all the memory and preventing your site from running correctly - contact your hosting provider.
  • A hacker has caused issues on your server. Check that your SSH startup script has not been compromised by viewing the contents of /etc/init.d/ssh from the console.

Strictly AutoTags

Specific things to try with this plugin include:

  • De-activating any auto-feed importer tools that create articles from RSS feeds, then saving a draft / published article using the test article in the Readme.txt file.
  • Ensuring you keep your tag list trim. If you have 25,000+ articles you shouldn't have many tags related to only a single article, or even three. So use the delete feature to remove tags with few articles related to them. The fewer rarely-used tags you keep, the more relevant the tag list and the less memory is used.
  • If Auto Discovery is enabled and the test article in the Readme.txt file gets tags added when saved, the plugin is working, which means the issue has another cause.
  • Disable Jetpack; I have found this plugin to cause many issues.
  • Disable Post Revisions and set the Auto Save interval so far in the future that it won't cause problems. Also turn off the automatic WP Cron job, which runs on EVERY page load, and replace it with a REAL cron job that runs every 10 minutes or so. The cron command for my job looks like this (I use the user-agent StrictlyCron so I can identify it in my log files and know it is running): wget -U StrictlyCron -q -O /dev/null http://www.mysite.com/wp-cron.php?doing_wp_cron
  • The code for disabling these WordPress features in the wp-config.php file is below.

/* disable WP-CRON from running all the time on every page load! */
define('DISABLE_WP_CRON', true);

/* turn off post revisions */
define ('WP_POST_REVISIONS', false);

/* set auto saves so far in future they shouldn't get saved on auto posts */
define( 'AUTOSAVE_INTERVAL', 10000 );

// increase memory limit
define( 'WP_MEMORY_LIMIT', '120M' );
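
With DISABLE_WP_CRON set as above, the replacement cron job can be registered in your crontab. A minimal sketch, assuming wget is installed and substituting your own domain for www.mysite.com:

```shell
# crontab entry: run the WordPress cron runner every 10 minutes
# with an identifiable user-agent so you can spot it in your access logs
*/10 * * * * wget -U StrictlyCron -q -O /dev/null "http://www.mysite.com/wp-cron.php?doing_wp_cron"
```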


Strictly TweetBot

Specific things to try with this plugin include:
  • Disable Jetpack Sharing or Jetpack altogether. Having multiple tweet systems can cause issues such as duplicate tweets and Twitter thinking you are sending automated messages. Jetpack has also been found to cause issues with many plugins.
  • Ensure you don't have too many Twitter BOT accounts set up. If you do, increase your memory in wp-config.php (see the above code for Strictly AutoTags).
  • Regularly check your Admin panel to see if you are getting messages back from Twitter such as "duplicate tweet" - "this tweet looks like an automated message" - "this account needs re-authenticating". If you get these errors then Twitter is blocking your tweets. Change the formats of your messages and prevent duplicates from being sent.
  • If you need to re-authenticate you can try to delete the account, then re-add it and see if it works straight away without requiring a new PIN code.
  • If this doesn't work then you may need to re-install the plugin and all the accounts one by one and re-authenticate the OAuth with new PIN codes.
  • If you are linked automatically with Strictly AutoTags then ensure that the AutoTag plugin is working correctly. Make sure the tag list is not too large, that there is enough memory to run the tagging, and that when a post is made the POST META value strictlytweetbot_posted_tweet has been added to the post. If it hasn't, and the post has tags but no tweets went out, then you need to find out why. Check the Strictly AutoTags help list for information on fixing this. Memory and the size of the tag list are the two major issues.
Always remember that if the code in the plugin hasn't changed but the plugin suddenly stops working, it is likely NOT to be the plugin's fault.
  • Check other plugins by disabling or removing them.
  • Ensure your host has not had issues, e.g. memory / CPU usage being too high.
  • Ensure any new plugins have not caused issues.
  • If you have just upgraded WordPress try rolling back to the last working version.

Thursday, 18 September 2014

Tricks to make your code independent of machine and server. DLL Writing and portability


By Strictly-Software

Lately I have been working on moving a project that was a web service tied to my machine at work to a new version, due to an upgrade in the API I had to use (Betfair's).

They were moving from a few lines of simple SOAP code to thousands of lines of code, interfaces and classes for every object, due to the move to JSON. To call it a pain is putting it mildly.

However by doing this I have learned some tricks that have helped me make the DLL code totally independent of any PC or Server.

It can be run from a windows service on my laptop, work PC or a server. It can be used by a console app to do one simple task repetitively or many tasks at timed intervals using threading and timer callback functions.

I thought I would share some of the things I have learned in case you find them useful.

Locking, Logging and preventing multiple threads from initiating the same process from the same or different computers.

Now if the DLL is running from multiple machines and doing the same task, you don't want that task to be carried out multiple times from multiple computers.

Examples in my BOT would be running certain stored procedures or sending out certain emails.

Therefore I use database locks to get round the problem of multi-threading, where different jobs are initiated within my Windows Service by timers.

For example in my service once started I have multiple timers with callback functions that run methods on my DLL at various intervals like below.


// set up timers in the Windows Service class; when running, these timers control the times the jobs run
// e.g RunPriceCheckerJob checks for current prices in all markets, GetResultsJob gets latest results
this.PriceTimer = new Timer(new TimerCallback(RunPriceCheckerJob), null, Timeout.Infinite, Timeout.Infinite);
this.GetResultsTimer = new Timer(new TimerCallback(GetResultsJob), null, Timeout.Infinite, Timeout.Infinite);
this.SystemJobsTimer = new Timer(new TimerCallback(SystemJobsJob), null, Timeout.Infinite, Timeout.Infinite);

// set timer limits
this.PriceTimer.Change(0, 60000); // every minute
this.GetResultsTimer.Change(0, 300000); // every 5 mins
this.GetSystemJobsTimer.Change(0, 1800000); // every 30 mins (the job itself only runs after 11pm)


To prevent a new thread spawning a job (to send emails, for instance) when one is already running, either from this or another machine, I use a simple LOCK system controlled by a database.

  1. A table called JOB_STEPS with a datestamp and a column to hold the name of the step/lock.
  2. A method with two parameters: a string with the name of the Job Step OR LOCK, and the mode, e.g "LOCK" or "UNLOCK". This method calls a stored procedure that either inserts or removes the record for that day.
  3. A method with one parameter: a string with the name of the Job Step or LOCK. If I want to check whether the process I am about to run is already locked and in progress OR has finished, then I use this method.
  4. Before each important method I don't want to have multiple instances of running, I do the following.
1. I build up the name of the Job Step or LOCK record using the computer/machine name e.g

// Use the computer name to help build up a unique job step / lock record
string logfileComputerName = "ARCHIVELOGFILE_V2_" + System.Environment.MachineName.Replace(" ", "_").ToUpper();

2. I check that no Job Step record already exists to say the job has been completed for the day.
3. I also check that a LOCK record doesn't exist to say that it's already being run.
4. After the job has finished I always remove the LOCK record.
5. If successful I add in a Job Step record so future processes skip over this code altogether.

The full code for a computer-specific example is below.

This is when I need to archive the daily log file for that machine.

You can tell what each method does by the comments.


// As a log file will exist at /programdata/myservice/logfile.log on each machine this runs on, we need to archive it at midnight and create a new file.
// As it's computer specific we use the machine name in the Job Step record, as other machines will have log files to archive as well.
string logfileComputerName = "ARCHIVELOGFILE_V2_" + System.Environment.MachineName.Replace(" ", "_").ToUpper();

// if no Job Step record exists to say the job has been completed and no _LOCK record exists to say it is currently running we attempt the job
if (!this.BetfairBOT.CheckJobStep(logfileComputerName) && !this.BetfairBOT.CheckJobStep(logfileComputerName + "_LOCK"))
{
    bool success = false;

    // add a lock record in so other processes calling this method know it's locked
    if (this.LockFile(logfileComputerName + "_LOCK", "LOCK"))
    {
        success = this.ArchiveLogFile();
    }

    // unlock whatever happened as the attempt has finished - remove the LOCK record
    this.LockFile(logfileComputerName + "_LOCK", "UNLOCK");

    // if it was successful we add in our actual Job Step record to say it's complete for this computer
    if (success)
    {
        this.LockFile(logfileComputerName, "LOCK");
    }
}


I also use database locks because just setting a flag in a global class that handles things like logging or archiving isn't going to cut it when new processes creating that class are being spawned all the time.

I can then ensure that when I am trying to archive the log file, any calls to output log messages are disabled and the LogMsg method exits ASAP via the this.Locked property.

Otherwise you will run into lots of I/O errors due to the log file being locked by "another process" as you try to archive it.

public Helper()
{
    // get current locked status from the database so any concurrent systems have the same value
    this.Locked = this.CheckLoggingDisabled();
}

public void LogMsg(string msg, string debugLevel = "HIGH")
{
    // we are prevented from logging at this point in time from this process
    if (this.Locked)
    {
        return;
    }

    bool doLog = false;

    // if the debug level passed in is not at or below the system level don't output
    if (this.DebugLevel == "HIGH") // log everything passed to us
    {
        doLog = true;
    }
    // only log medium and important messages
    else if (this.DebugLevel == "MEDIUM" && (debugLevel == "MEDIUM" || debugLevel == "LOW"))
    {
        doLog = true;
    }
    // only log important messages
    else if (this.DebugLevel == "LOW" && (debugLevel == "LOW"))
    {
        doLog = true;
    }
    else
    {
        doLog = false;
    }

    // if doLog is true then output to our log file
}

I tend to wrap code that might fail due to I/O errors in my DB lock code AND multiple TRY/CATCH statements, with an increasing Thread.Sleep(30000); wait in between each failure.

If the process doesn't work the first time, the DB LOCK record is removed, and after 5 minutes (or however long your timer is set for) it runs again, until you either stop trying or it eventually succeeds.

I found with my old non DLL related service that the Event Log was full of I/O errors at midnight due to failed attempts to transfer the log file. However with this new outer wrapper of DB locks it works first time no matter how many other processes run the DLL.
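
The retry wrapper can be sketched like the following; a simplified, self-contained example rather than my actual code, with RetryHelper and TryWithRetries being hypothetical names (the real code also sits inside the DB lock described above):

```csharp
using System;
using System.IO;
using System.Threading;

public static class RetryHelper
{
    // try an I/O-bound operation a few times, sleeping longer after each failure
    public static bool TryWithRetries(Func<bool> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                return operation();
            }
            catch (IOException ex)
            {
                Console.WriteLine("Attempt " + attempt + " failed: " + ex.Message);

                // increasing wait in between each failure e.g. 30s then 60s
                if (attempt < maxAttempts)
                {
                    Thread.Sleep(30000 * attempt);
                }
            }
        }

        return false;
    }
}
```

A caller would then pass the archive job in as the operation, e.g. RetryHelper.TryWithRetries(() => ArchiveLogFile());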

Levels of Logging

As you can see in the above LogMsg method I not only pass in the message to be logged but a Debug Level parameter that is either HIGH, MEDIUM or LOW.

I also have a system wide setting that says the level of debug I want to store. This is broken down like so:
  • HIGH = Log everything passed to the LogMsg function. As you can see, the default parameter value is HIGH, so if no level is passed the message is treated as HIGH anyway.
  • MEDIUM = Important method calls, Return values and other important information such as when a job starts or finishes.
  • LOW = Very important messages only such as SQL Errors, Exceptions and other issues when the code cannot be run.

Testing Connectivity

Along with my service I have a little Windows Form application that starts with my PC and sits in the system tray. It has a Start and Stop button which enables me to stop and start the service from the application.

It also has a couple of labels that show me information such as my current balance (so I don't have to check Betfair) and whether the API is up and running.

This is done by a timer in the form class that calls a method in the DLL that tests connectivity. It tests whether the Betfair API can be reached as well as if the database server is up and running. It then shows me the status on the form.

Testing API connectivity is done by creating a BetfairAPI class instance which tries logging in with a saved session (I save any session value to a text file so I don't have to keep getting new ones), and ensuring I have an Account and Client object (2 objects needed to run methods on the Betfair API).

This method is also useful if you experience an exception halfway through a method that had been running okay. I have seen this happen on many occasions when I am getting and saving Market or Price data. An exception will suddenly be thrown with an error like:

"The underlying connection was closed", "error occurred on a receive", "error occurred on a send", or even a sudden "Object reference not set to an instance of an object".

I have no idea why these errors suddenly pop up during a process that has been running okay for minutes, but what I do is re-call the method if the exception message is one of a number of errors I want to retry on.

So what I do is:
  1. All the methods I want to retry on such a failure have a parameter called retry, with a default value of FALSE.
  2. Each is wrapped in a Try/Catch; if an exception is caught I pass the name of the method and the exception to a function called HandleError.
  3. If the error is one I want to retry on, I check whether it's database related or API related, and if so I kill existing objects like the Data object or BetfairAPI object, re-set them, then call my TestConnectivity method to ensure everything is set up and working.
  4. I then call a switch statement with my job name and, if found, I set the success of the method call to another try, passing in TRUE for the retry parameter.

So a TestConnectivity function that can handle lost objects and data connections and re-set them up is ideal not just for checking your application is up and running but for handling unexpected errors and re-setting everything so it works again.
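
The four steps above can be sketched as follows; a simplified class fragment where GetPrices, ResetObjects and the exact error strings are illustrative assumptions, not the real implementation:

```csharp
// a job method that can retry itself once on a transient failure
public bool GetPrices(bool retry = false)
{
    bool success = false;

    try
    {
        // ... call the Betfair API and save the price data ...
        success = true;
    }
    catch (Exception ex)
    {
        // only hand off to the error handler on the first attempt so we don't loop forever
        if (!retry)
        {
            success = this.HandleError("GetPrices", ex);
        }
    }

    return success;
}

private bool HandleError(string jobName, Exception ex)
{
    // is this one of the transient errors we retry on?
    if (ex.Message.Contains("The underlying connection was closed") ||
        ex.Message.Contains("Object reference not set"))
    {
        // kill and re-create the Data / BetfairAPI objects, then check everything is up
        this.ResetObjects();

        if (this.TestConnectivity())
        {
            // re-run the failed job once, passing TRUE for the retry parameter
            switch (jobName)
            {
                case "GetPrices":
                    return this.GetPrices(retry: true);
            }
        }
    }

    return false;
}
```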

Obviously your own TestAPI function will be related to the API or code you need to check for but an example function to test if you have connectivity to your database is below.

Any exception is just logged. The error is also stored in a global property called this.LastErrorMessage so that the Windows Service can access it and write it to the event log and show it on my Windows Form (if open).

 
public bool TestDatabaseConnection(string connectionType)
{
    bool success = false;

    if (this.DataAccess == null)
    {
        this.DataAccess = new SqlDataAccess();
    }

    try
    {
        string connectionString = this.HelperLib.GetSetting.GetSettingValue("DEFAULTDATABASE");

        this.DataAccess.ConnectionString = connectionString;

        string sql = "SELECT TOP 1 1 as DBExists FROM sys.sysdatabases";

        DataTable sqlRecordset = this.DataAccess.GetDataTable(sql);

        // if we got a record back then the DB exists and is reachable
        success = (sqlRecordset.Rows.Count > 0);

        sqlRecordset.Dispose();
    }
    catch (Exception ex)
    {
        // store in global property so the Windows Service can access it and write to the event log or show on the form
        this.LastErrorMessage = "SQL Error Testing Connectivity: " + ex.Message;

        // log error to log file
        this.HelperLib.LogMsg(this.LastErrorMessage);
    }

    return success;
}


Handling Configuration Settings

At first when I tried copying my code from my old service to a DLL, I was stuck on the fact that DLLs don't have app.config XML files to hold constants, connection strings etc.

However I read a few articles and it all seemed like overkill to me: converting the app.config file into an XML file and then using convoluted methods to obtain the values, which involved finding out the location of the DLL, building other paths and so on.

So I thought why bother?

If you are storing important information such as mail settings or paths in a config file, why not just make things easy and create a simple GetSetting class with one method containing a Switch statement that returns the value you are looking for?

Put this in your top-most class so the values are always loaded if they don't already exist; you are basically just obtaining hard-coded values, which is what a config file gives you anyway.

For example:


// HelperLib constructor
public HelperLib()
{
    if (this.GetSetting == null)
    {
        this.GetSetting = new GetSetting();

        // get and store values
        if (this.IsEmpty(this.DefaultConnectionString))
        {
            this.DefaultConnectionString = this.GetSetting.GetSettingValue("DEFAULTDATABASE");
        }
    }
}

// GetSetting Class
public class GetSetting
{
    // empty constructor
    public GetSetting()
    {
    }

    // get the right value, ensuring it's upper case in case a mixed-case value is passed in
    public string GetSettingValue(string configValue)
    {
        string settingValue = "";

        if (!String.IsNullOrWhiteSpace(configValue))
        {
            // ensure upper case
            configValue = configValue.ToUpper();

            switch (configValue)
            {
                case "DEFAULTDATABASE":
                    settingValue = "SERVER=BORON;DATABASE=MYDB;uid=myuserlogin;pwd=hu46fh7__dd7;";
                    break;
                case "LOGFILE":
                    settingValue = "Logfile.log";
                    break;
                case "LOGFILEARCHIVE":
                    settingValue = "LogfileArchived";
                    break;
                /* Mail settings */
                case "MAILHOST":
                    settingValue = "generic.smtp.local";
                    break;
                case "MAILPORT":
                    settingValue = "25"; // port to relay on
                    break;
                default:
                    settingValue = "";
                    break;
            }
        }

        return settingValue;
    }
}

So these are just a few of the things I have done to convert my old Windows Service into a DLL that is consumed by a much smaller Windows Service, console applications and Windows Form applications.

It shows how to use a database to handle concurrent access, and how important a TestConnectivity method is for ensuring your systems are all up and working correctly.

Hope this has been helpful to at least someone!

Monday, 8 September 2014

Rebuilding a Stored Procedure From System Tables MS SQL


By Strictly-Software

Quite often I find "corrupted" stored procedures or functions in MS SQL that cannot be opened in the visual editor.

The usual error is "Script failed for StoredProcedure [name of proc] (Microsoft.SqlServer.Smo)"

This can be due to comments in the header of the stored procedure that confuse the IDE or other issues that you may not be aware of.

However if you get this problem you need to rebuild the stored procedure or function ASAP if you want to be able to edit it visually again in the IDE.

The code to do this is pretty simple and uses the sys.syscomments table, which holds the text of user-defined objects. We join onto sys.sysobjects so that we can reference our object by its name.

When you run this with the output set to "Results To Grid" you may get anything from one to several rows returned (syscomments splits long definitions into chunks of up to 4,000 characters), and the data isn't formatted usefully for you to just copy, paste and rebuild.

Therefore always ensure you choose "Results To Text" when you run this code.

Make sure to change the stored procedure name from "usp_sql_my_proc" to the name of the function or stored procedure you need to rebuild!


SELECT com.text
FROM sys.syscomments as com
JOIN sys.sysobjects as sys
 ON com.id = sys.id
WHERE sys.name='usp_sql_my_proc'
ORDER BY colid
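
On SQL Server 2005 and later there is also a quicker alternative: OBJECT_DEFINITION returns the whole definition as a single value rather than in 4,000 character chunks, which can be easier to copy out. Again, swap in the name of your own object:

```sql
-- returns the full text of the procedure / function in one piece
SELECT OBJECT_DEFINITION(OBJECT_ID('usp_sql_my_proc')) AS [ProcText]
```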