
Wednesday, 16 February 2022

Blogger EU Cookie Message Missing Problem & Solution

My EU Cookie Message Disappeared From My Site - How To Get It Back

By Strictly-Software

I had a bit of a weird experience recently. Google informed me that, for some reason, one of my Blogger sites was not showing the EU Cookie Notice that should appear on all Blogger sites when viewed from a European country where consent to use cookies is required from all website users.

It used to show, and my other Blogger sites were still working. In fact, on my own PC, it was still showing the correct message, e.g:

"This site uses cookies from Google to deliver its services and to analyse traffic. Your IP address and user agent are shared with Google, together with performance and security metrics, to ensure quality of service, generate usage statistics and to detect and address abuse."

However, when I viewed the site Google had flagged on another PC, the message was not appearing.

I cleared all the cookies (path, domain, and session) using the web extension called "Web Developer Toolbar", such as this Chrome version. After installing it, a grey cog appears in your toolbar if you pin it there.

It is really helpful for turning password fields into text, if you want to see what you are typing or need to see a browser-stored password in the field, or, as needed in this instance, for deleting all kinds of cookies. So after deleting all the cookies, I refreshed the page, but the EU Cookie Message still didn't show.

Fixing Blogger Cookie Notice Not Showing

If you view the source of a Blogger site that is showing the EU Cookie message, then you should find the following code in your source. Not the generated source, but the standard "View Source" output you get when you right-click on the page and open the context menu.

The following code should be just above the footer where all your widget scripts are loaded. Notice that I put an HTML comment above my version of the code so I could easily tell it apart from Blogger's version in the DOM when viewing the source. Why do this if Blogger's code is not in the source anyway? Wait and see.

<!-- this is my code as bloggers only appear in the source sometimes -->
<script defer='' src='/js/cookienotice.js'></script>
<script>
 document.addEventListener('DOMContentLoaded', function(event) {
      window.cookieChoices && cookieChoices.showCookieConsentBar && cookieChoices.showCookieConsentBar(
          (window.cookieOptions && cookieOptions.msg) || 'This site uses cookies from Google to deliver its services and to analyse traffic. Your IP address and user agent are shared with Google, together with performance and security metrics, to ensure quality of service, generate usage statistics and to detect and address abuse.',
          (window.cookieOptions && cookieOptions.close) || 'Ok',
          (window.cookieOptions && cookieOptions.learn) || 'Learn more',
          (window.cookieOptions && cookieOptions.link) || 'https://www.blogger.com/go/blogspot-cookies');
});
</script> 

So to fix the issue, copy the code out of the page and then go to the Layout tool in Blogger. Add a widget at the bottom if no JavaScript/Text widget already exists and paste the code into it.

Now the odd thing is that as soon as I had saved the widget and then my Blogger site, I went to the website having issues and viewed the source code. When I did, I saw that not only was my version of the code in the HTML, but somehow this had put Blogger's own version back into the HTML as well!

Why this would happen I have no idea. However, it meant that I now had two lots of the same script being loaded, and two lots of the EU Cookie code that shows the DIV and the wording options, appearing in my HTML.

The good thing, though, is that this did not cause a problem for my site. I found that adding that code into a widget at the bottom of my page, above Blogger's magically re-appearing code, did NOT cause the message to appear twice, and also that when I removed my own version of the code, the Blogger version remained.

Also, even though the version of the Blogger code I have put into the HTML uses English sentences, when I use a proxy or VPN to appear to be in a European country such as Germany, the wording appears in German.

I suspect that my code runs first as it's first in the DOM, and then the Google code runs, overwriting my DIV with their DIV and, of course, the correct wording for the country the visitor is in.

So, as I thought everything was working, I removed my own code and saved the site. I then went to it, deleted the path, domain, and session cookies, refreshed the page again, and saw the Blogger cookie code running okay. When viewing the source I could see that my code had gone but the Blogger code was still in the HTML, whereas it hadn't been before.

However......

After a few hours, when I came back to the computer which had not been showing the message, I re-checked by clearing all cookies (path, domain and session) and saw that Blogger's code had disappeared again, and the message was not showing again!

Why this happened I do not know, as I had not re-saved the Blogger site in question during the time away, so I have no idea what caused the Blogger EU code to disappear again.

There really should be an option in Settings to force the cookie compliance code to be inserted, but as there isn't, the answer seems to be to just leave your version of the cookie code in the HTML source, in a widget at the bottom of the layout.

Why this works without causing issues I have no idea, and it sounds like a bodge, which it is, but as I cannot find any real answers to this problem online, or in Google's KB, I had to come up with a solution that complies with Google's request, and this seems to do it.

So the fact that having two lots of the same code in your HTML does NOT cause the message to appear twice is a good thing. This means that even if the original code re-appears you are okay, and if it doesn't, your own code, which is a direct copy of the Blogger code, runs instead.

Also, as your code runs first, if it is causing Blogger's code to re-appear in the HTML then that will run afterwards, ensuring the correct European language is shown in the message.

You can view the JavaScript which is loaded in by Blogger by just appending /js/cookienotice.js to any Blogger site, e.g this one: http://blog.strictly-software.com/js/cookienotice.js. You can then see the functions and HTML they use to show the DIV, and at the top, the IDs and classes they put on the cookie message DIV.

So if you want to check which version of the EU Cookie code is running when both sets of JavaScript exist, you could add a bit of code underneath that checks for the cookie DIV being displayed, and add some CSS targeting cookieChoiceInfo, the ID of the DIV that is shown, changing its background colour to see whether it is your DIV or Blogger's DIV that appears.
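
A quick way to check what is happening is a small bit of JavaScript under your widget code. This is just a rough sketch, and it assumes the notice keeps the cookieChoiceInfo ID that the current cookienotice.js uses:

<script>
document.addEventListener('DOMContentLoaded', function() {
  // give the cookie notice scripts a moment to render the bar
  setTimeout(function() {
    var bar = document.getElementById('cookieChoiceInfo');
    if (bar) {
      console.log('EU cookie bar found: ' + bar.textContent);
    } else {
      console.log('EU cookie bar NOT found - consent may already be stored in a cookie');
    }
  }, 1000);
});
</script>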

For example, you could put this under your JavaScript code to change the background colour of the DIV.

<style>
#cookieChoiceInfo{
	background-color: green !important;
}
</style>

Obviously green is a horrible colour for a background, but it easily stands out. When I did this I saw a green DIV appear with the message displayed in the correct language, despite my EU options having English as the language for all the wording.

This is because our code that loads the script and the cookie options into the page runs first, before any Blogger code that appears lower down in the HTML/DOM. When that Blogger code does run, it overwrites the DIV, putting the wording in the correct European language.

If you right-click on the DIV and choose "inspect" then the Developer Console will appear and you will be able to see that your style to change the background colour is being used on the Cookie message DIV. 

As it's a CSS style block with !important after the rule, even when the Blogger code overwrites the DIV and wording, the background colour of the DIV is still determined by our CSS style block.

So the answer if your EU Cookie Compliance Message disappears is to add your own copy of their code into the site through a widget. 

This shouldn't cause any problems, as any duplicate DIV just overwrites your DIV, and if the Blogger code disappears again then at least your version remains.

I just don't understand two things.

1. Why did the Cookie code disappear in the first place?

2. Why did the Blogger code re-appear when I added my own version of Blogger's EU Cookie message code into the HTML, and why did it disappear again a couple of hours later?

If anyone can answer these questions then please let me know. A search inside Google's AdSense site does not reveal any useful answers at all.

People just suggest adding query strings to your URL to force it to appear, which is no good if your site is linked to from search engines and other sites, or just deleting all the cookies and refreshing the page.

These are two useless suggestions; the only thing that seems to work for me is the solution I came up with above. So if you have the same problem, try it.


By Strictly-Software

Monday, 17 January 2022

Running JavaScript Before Any Other Scripts On A Page

Injecting A Script At The Top Of Your HTML Page


By Strictly-Software

If you are developing an extension for Firefox, Opera or a Chromium-based browser such as Chrome, Brave or Edge, then you might come across the need to inject some code into the top of your HTML page so that it runs before any other code.

When developing extensions for Chromium-based browsers such as Chrome, Brave and Edge, you will most likely do this from your content.js file, which is one of the main files that holds code to be run at certain stages of a page's lifetime.

As the Chrome Knowledge Base says, the values that the "run_at" property can take include:

  • document_idle, the preferred setting, where scripts are guaranteed to run after the DOM is complete and after the window.onload event has loaded all resources, which can include other scripts.
  • document_end, where content.js scripts are injected immediately after the DOM is complete but before subresources like images and frames have loaded, i.e. after the DOM is loaded but before window.onload has finished loading external resources.
  • document_start, which ensures scripts are injected after any CSS files are loaded but before any other DOM is constructed or any other script is run.

The "run_at" property is set in the manifest.json file, and setting it to "document_start" lets your code run before any other code does (a minimal manifest sketch follows below). This is especially useful if you need to change header values before a web page loads, so that values such as the Referer or User-Agent can be modified. However, there may also be a need for you to set up an object or variable that is inserted into the DOM, for code within the HTML page to access.
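
    Here is a minimal sketch of how that might look in a Manifest V3 manifest.json; the extension name, the content.js file name and the match pattern are just placeholders for whatever your extension actually uses:

    {
      "manifest_version": 3,
      "name": "My UA Switcher",
      "version": "1.0",
      "content_scripts": [
        {
          "matches": ["<all_urls>"],
          "js": ["content.js"],
          "run_at": "document_start"
        }
      ]
    }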

    For example, in a User-Agent switcher where you need to overwrite both the Navigator object in JavaScript and the request header, you may want to create an object or variable that holds the original REAL navigator object or its user-agent, so that any page you create yourself, or offer to your users, can show the REAL user-agent, browser, language, and other properties.

    For example, I have my own page that I use to show me the current user-agent and list the navigator properties.

    However, if they have been modified by my own user-agent switcher extension, I also offer up a variable holding the original REAL user-agent so that it can be shown and compared with the spoofed version to see what has changed. I also have a variable that holds the original navigator object in case I want to look at the properties.

    Therefore my HTML page may want to inspect this object, if it exists, with some code on the page.

    // check to see if I can access the original Navigator object and the user agent string
    if(typeof(origNavigator) !== 'undefined' && typeof(origUserAgent) !== 'undefined' && origUserAgent !== null)
    {
    	// get the real Browser name with my Detect Browser function using the original Navigator user-agent
    	let realBrowser = Browser.DetectBrowser(origUserAgent);
    
    	// output on the page in a DIV I have
    	G("#RealBrowser").innerHTML = "<b>Real Browser: " + realBrowser + "</b>";
    }

    This code just uses a generic browser detection function that takes a user-agent and finds the browser name. It even detects Brave by ruling out other browsers; if the result at the end is Chrome, I check for the Brave properties, or a mention of the word in the string, which older versions used to contain but newer versions have removed.

    However, there is hope in the community that they will create a unique user-agent containing the word Brave. At the moment people are having to do object detection, which is the better method anyway, and as Brave tries to hide, there are plenty of property checks and API calls which can be made to find out whether the browser is actually Brave rather than Chrome.

    However, at the moment, I am just using a simple check on the window and navigator objects that, if TRUE, indicates that the browser is actually Brave, NOT Chrome.

    A later article shows a longer function I developed with fallbacks in case the objects no longer exist; there used to be a brave object on the window, e.g window.brave, that has since gone, just as Chrome objects such as window.google and window.googletag have disappeared. However, that article explains all that.

    This is the one-line test you can do. It ensures it is not Firefox with a test for the Mozilla-only object window.mozInnerScreenX, then checks that it is a Chromium browser with a test for window.chrome, that it is also WebKit with a test for window.webkitStorageInfo, and finally tests for navigator.brave and navigator.brave.isBrave to ensure it's Brave, not Chrome, e.g:

    // ensure that a Chrome user-agent is not actually Brave by checking some properties that seem to work to identify the browser at the moment anyway...
    let isBrave = !("mozInnerScreenX" in window) && ("chrome" in window && "webkitStorageInfo" in window && "brave" in navigator && "isBrave" in navigator.brave);


    However, this article is more about injecting a script into the HEAD of your HTML so that code on the page can access any properties within it.

    As my extension offers the original Navigator object and a string holding the original/real user-agent before I overwrite it, I want this code to be the first piece of JavaScript on the page.

    This doesn't have to be limited to extensions; you may have code you want to inject into the HEAD when the DOM loads, before any other code.

    This is a function I wrote that places a string holding your JavaScript into a new SCRIPT block created on the fly, which is then inserted before any other SCRIPT in document.head.

    However, if the page is malformed or has no defined HEAD area, it falls back to just appending the script to the document.documentElement object.

    The 2nd parameter tells the function whether or not to remove the script after inserting it; if you pass false, you will see the injected script code in the DOM when you view the generated source of the page.

    The code looks within the HEAD for another script block and, if one is found, inserts the new script before the first one using insertBefore(). If there is NO script block in the HEAD, the function just appends the script to the HEAD using the appendChild() method.

    An example of the function in action with a simple bit of JavaScript that stores the original navigator object and user-agent is below. You might find multiple uses for such code in your own work.

    // store the JavaScript in a string
    var code = 'var origNavigator = window.navigator; var origUserAgent = origNavigator.userAgent;';
    
    // now call my function that will append the script in the head before any other and then remove it if required. For testing you may want to not remove it so you can view it in the generated DOM.
    appendScript(code,true);
    
    // function to append a script first in the DOM in the HEAD, with a true/false parameter that determines whether to remove it after appending it.
    function appendScript(s,r=true){
    
    	// build script element up
    	var script = document.createElement('script');
    	script.type = 'text/javascript';
    	script.textContent = s;
    	
    	// we want our script to run 1st in case the page contains another script e.g we want our code that stores the orig navigator to run before we overwrite it
    
    	// check page has a head as it might be old badly written HTML
    	if(typeof(document.head) !== 'undefined' && document.head !== null)
    	{	
    		// get a reference to the document.head and also to any first script in the head
    		let head = document.head;
    		let scriptone = document.head.getElementsByTagName('script')[0];
    
    		// if a script exists then insert ours before the 1st one so we don't have code referencing navigator before we can overwrite it
    		if(typeof(scriptone) !== 'undefined' && scriptone !== null){	
    			// add our script before the first script
    			head.insertBefore(script, scriptone);
    		// if no script exists then we insert it at the end of the head
    		}else{
    			// no script so just append to the HEAD object
    			document.head.appendChild(script);
    		}
    	// no HEAD so fall back to appending the code to the document.documentElement
    	}else{
    		// fallback for old HTML just append at end of document and hope no navigator reference is made before this runs
    		document.documentElement.appendChild(script);
    	}
    	// do we remove the script from the DOM
    	if(r){
    		// if so remove the script from the DOM
    		script.remove();
    	}
    }
    



    I find this function very useful both for writing extensions and when I need to inject code on the fly and ensure it runs before any other scripts from an onDOMLoad method.

    Let me know of any uses you find for it.

    Useful Resource: Content Scripts for Chrome Extension Development. 


    By Strictly-Software

    Tuesday, 29 September 2015

    IE Bug with non Protocol Specific URLs


    By Strictly-Software

    I have recently come across a problem that seems to only affect IE and non protocol specific URLs.

    These are becoming more and more common as they prevent warnings about insecure content and ensure that when you click on a link you go to the same protocol as the page you are on.

    This can obviously be an issue if the site you are on is on an SSL but the site you want to go to doesn't have an HTTPS domain or vice-versa. However most big sites have both domains and will handle the transfer by redirecting the user and the posted data from one domain to another.

    An example would be a PayPal button whose form posts to

    action="//www.paypal.com/"

    In the old days, if you had a page on an SSL, e.g https://www.mysite.com, and had a link to an image or script from a non secure domain, e.g http://www.somethirdpartysite.com/images/myimg.jpg, you would get pop ups or warning messages in the browser about "non secure content" and would have to confirm that you wanted to load it.

    Nowadays, whilst you don't see these popups in modern browsers, if you check the console (F12 in most browsers) you will still see JavaScript or network errors if you are trying to load content cross domain and cross protocol.

    S2 Membership Plugin

    I came across this problem when a customer who was trying to join one of my WordPress subscription sites (which uses the great free S2 Membership Plugin) complained that when he clicked a payment button, which I have on many pages, he was being taken to PayPal's homepage rather than the standard payment page that details the type of purchase, the price and the options for payment.

    It was working for him in Chrome and FireFox but not IE.

    I tested this on IE 11 Win7 and Win8 myself and found that this was indeed true.

    After hunting through the network steps in the developer toolbar (F12) and comparing them to Chrome, I found that the problem seemed to be IE doing a 301 redirect from PayPal's HTTP domain to their HTTPS one.

    After analysing the response and request headers, I suspect it is something to do with the non UTF-8 response that PayPal was returning to IE, probably because Internet Explorer wasn't requesting UTF-8 in the first place.

    Debugging The Problem


    For the techies, this is a breakdown of the problem, with network steps from both Chrome and IE and the relevant headers etc.

    First, the PayPal button code, which is encrypted by S2Member on the page. You are given the button content as standard square bracket shortcodes which get converted into HTML on output. Looking at the source of one button on the page in both browsers, I could see the following.

    1. Even though the button outputs the form action as https://www.paypal.com, it seems that one of my plugins OR WordPress itself (I suspect a caching plugin, but I haven't been able to narrow it down) is removing any protocol conflicts by rewriting my links as non protocol specific URLs.

    So, as my site doesn't have an SSL, any HREF, SRC or ACTION that points to an HTTPS URL was having its protocol replaced with //, e.g https://www.paypal.com on my page http://www.mysite.com/join was becoming //www.paypal.com in both the source and generated source.

    2. Examining the HTML of one of the buttons you can see this in any browser. I have cut short the encrypted button code as it's pointless outputting it all.


    <form action="//www.paypal.com/cgi-bin/webscr" method="post">
    <input type="hidden" name="cmd" value="_s-xclick">
    <input name="encrypted" type="hidden" value="-----BEGIN PKCS7-----MIILQQYJKoZIhvcNAQcEoIILMjCCCy4CAQExgg..."
    


    3. Outputting a test HTML page on my local computer and running it in IE 11 WORKED. This was probably because I explicitly set the URL to https://www.paypal.com so no redirects were needed.

    4. Therefore logically the problem was due to the lack of an HTTPS in the URL.

    5. Comparing the network jumps.

    1. Chrome

    Name   - Method - Status - Type                  - Initiator
    webscr - POST   - 307    - x-www-form-urlencoded - Other
    webscr - POST   - 200    - document              - https://www.paypal.com/cgi-bin/webscr

    2. IE

    URL                         - Protocol - Method - Result - Type      - Initiator
    /cgi-bin/webscr             - HTTP     - POST   - 301    -           - click
    /cgi-bin/webscr             - HTTPS    - POST   - 302    - text/html - click
    https://www.paypal.com/home - HTTPS    - POST   - 200    - text/html - click

    Although the column titles are slightly different, you can see they are just different words for the same thing, e.g Status in Chrome and Result in IE both relate to the HTTP status code the response returned.

    As you can see, Chrome also had to do a 307 (the HTTP 1.1 successor to the 302 temporary redirect) from HTTP to HTTPS, however it ended up on the correct page. In IE, the first click posted to the payment page over HTTP, which did a 301 (permanent) redirect to the HTTPS version, and that then did a 302 (temporary) redirect to their home page.

    If you want to know more about these 3 redirect status codes this is a good page to read.

    The question was why couldn't IE take me to the correct payment page?

    Well, when I looked at the actual POST data that was being passed along to PayPal from IE on the first network hop, I could see the following problem.

    cmd=_s-xclick&encrypted=-----BEGIN----MIILQQYJKoZIhvcNAQcEoIILMjCCCy4CAQExgg...

    Notice the Chinese character after the BEGIN where it should say PKCS7?

    In Chrome, however, this data was exactly the same as the form, e.g

    cmd:_s-xclick
    encrypted:-----BEGIN PKCS7-----MIILQQYJKoZIhvcNAQcEoIILMjCCCy4CAQExgg...

    Therefore it looked like the posted data was being misinterpreted by IE for some reason, whereas in Chrome it was not, so I needed to check what character sets were being sent and returned.

    Examining Request and Response Headers

    When looking at the HTTP request headers on the first POST to PayPal in IE, I could see that the Accept-Language header was only asking for en-GB, e.g a basic ASCII character set. Also, there was quite a lack of request headers compared to Chrome. I have just copied the relevant ones that can be compared between browsers.

    IE Request Headers

    Key              Value
    Content-Type:    application/x-www-form-urlencoded
    Accept-Language: en-GB
    Accept-Encoding: gzip, deflate
    Referer:         http://www.mysite.com/join-now/
    User-Agent:      Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
    Request:         POST /cgi-bin/webscr HTTP/1.1
    Accept:          text/html, application/xhtml+xml, */*
    Host:            www.paypal.com

    Chrome Request Headers

    Key              Value
    Accept:          text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    Accept-Encoding: gzip, deflate
    Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
    Cache-Control:   max-age=0
    Connection:      keep-alive
    Content-Length:  4080
    Content-Type:    application/x-www-form-urlencoded


    And the responses for the Content-Type header which I think is key.

    IE

    Content-Type: text/html

    Chrome

    Content-Type: text/html; charset=UTF-8


    So whilst Chrome says it will accept more language sets and gets back a charset of UTF-8, IE says it will accept only en-GB and gets back just text/html.

    I even tried seeing if I could add UTF-8 as a language to accept in IE but there was no option to, so I tried adding Chinese, which obviously uses extended character sets and covers the problematic character. However, this made no difference, even though the Accept-Language header was now:

    Accept-Language: en-GB,zh-Hans;q=0.9,zh-CN;q=0.8,zh-SG;q=0.7,zh-Hant;q=0.6,zh-HK;q=0.4,zh-MO;q=0.3,zh-TW;q=0.2,zh;q=0.1

    Conclusion


    Therefore I came to the conclusion that I could not force IE to change its behaviour, and I doubt any phone calls to IE HQ or even PayPal would solve the issue. So, to allow IE users to still pay on my site, I needed a workaround.

    1. I added a test for IE, and a warning about possible problems, to the technical test page which I make all users check before paying. This went alongside tests to ensure JavaScript and cookies were enabled, as both are needed for any modern JavaScript site.

    2. I added some JavaScript code in my footer that runs on DOM load, loops through all FORM elements and checks the ACTION attribute. Even though, when examining the results in the console, they showed http://www.paypal.com rather than the //www.paypal.com I saw in the source, I added some code to ensure they always said HTTPS.

    The function, if you are interested, is below, and it seems to have fixed the problem in IE. If I view the generated source now I can see all form actions have HTTPS protocols.



    // on DOM load loop through all FORM elements on the page
    jQuery(document).ready(function () {	
    	// get all form elements
    	var o, e = document.getElementsByTagName("FORM");
    	for(var i=0,l=e.length;i<l;i++)
    	{
    		// get the action attribute
    		o = e[i].action;
    
    		// if the current action is blank then skip it
    		if(o && o!="")
    		{
    			// if the start of the action is http://www.paypal.com (as non protocol specific domains show up as http)
    			// then replace the http with https
    			if( /^http:\/\/www\.paypal\.com/.test(o) )
    			{
    				e[i].action = o.replace("http:","https:");
    			}		
    		}
    	}	
    });
    


    So whilst this is just a workaround for the IE bug, it does solve the issue until Internet Explorer sorts itself out. Why they have this problem I have no idea.

    I am outputting all my content as a UTF-8 charset and Chrome is obviously handling it correctly (along with Firefox and Safari).

    So I can only presume it's an IE bug which isn't helped by an unknown (as yet) plugin (or WordPress) changing cross protocol URLs to the now standard //www.mysite.com format.

    Therefore if you come across similar problems with redirects taking you to the wrong place, check your headers, compare browsers and if you spot something strange going on, try a JavaScript workaround to modify the DOM on page load.


    © 2015 Strictly-Software

    Friday, 18 September 2015

    What is the point of client side security

    Is hacking the DOM really hacking?

    By Strictly-Software

    The nature of the modern web browser is that it's a client side tool.

    Web pages stored on web servers are, when viewed in Chrome or Firefox, downloaded file by file (CSS, JavaScript, HTML, images etc) and stored temporarily on your computer whilst your browser puts them together so you can view the webpage.

    This is where your "browser cache" comes from. It is good to have commonly downloaded files, such as the jQuery script or common images from frequently visited pages, in your cache, but when this folder gets too big it can become slow to traverse and load from. This is why a regular clean out is recommended by a lot of computer performance tools.

    So, because of this, putting any kind of security on the client side is pointless, as anyone with a small working knowledge of Internet technology can bypass it. I don't want to link to one site in particular, but a Google advert appeared on my site the other day claiming to protect your whole website from theft, including your HTML source code.

    However, if you have a spare 30 minutes on your hands, have Firebug installed (or any modern browser that allows you to inspect and edit the DOM), and do a search for "code to protect HTML", you would be able to bypass the majority of that site's wonderful security claims with ease.

    Examples of such attempts to use client side code to protect code or content include:

    1. Trying to protect the HTML source code from being viewed or stolen. 

    This will include the original right mouse click event blocker.

    This was used in the old days in the vain hope that people didn't realise that they could just go to Tools > View Source instead of using the context menu which is opened with a right click on your mouse.
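
    For illustration, the old blocker was usually nothing more than a one-liner like the first line below, and the second line, run from the console or a bookmarklet, undoes it:

    // the old context menu blocker sites used to add
    document.oncontextmenu = function () { return false; };

    // run this in the console to remove it again
    document.oncontextmenu = null;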

    The other option was just to save the whole web page from the File menu. 

    However, you can now just view the whole generated source with most developer tools, e.g Firebug, or by hitting F12 in Chrome.

    Some sites will also generate their whole HTML source code with JavaScript in the first place. Not only is this really bad for SEO, but it is easily bypassed.

    A lot of these tools pack, encode and obfuscate the code on the way. The code is then run through a function to evaluate it and write it to the DOM.

    It's such a shame that this can all be viewed without much effort once the page has loaded into the DOM. Just open your browser's developer toolbar and view the generated source and, hey presto, the outputted HTML is there.

    Plus there are many tools that let you run your own scripts on any page. For example, someone at work the other day didn't like the way news sites like the BBC always show large monetary numbers as £10BN, so he added a regular expression to one of these tools to automatically change all occurrences to £10,000,000,000, as he thought the number looked bigger and more correct. A stupid example I know, but it shows that with tools like Fiddler etc you can control the browser output.

    2. Using special classes to prevent users from selecting content

    This is commonly used on music lyric sites to prevent people copying and pasting the lyrics straight off the page by selecting the content and using the copy button.

    It's a shame (for them) that if you can modify the DOM on the fly you can just find the class in question with the inspect tool, blank it out and negate its effect.
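
    As a rough illustration (the .no-copy class name is made up for the example), a snippet like this run from the console will usually negate that sort of selection blocking:

    // find every element with the blocking class and force selection back on
    var blocked = document.getElementsByClassName('no-copy');
    for (var i = 0; i < blocked.length; i++) {
        blocked[i].style.cssText += '; user-select: text !important; -webkit-user-select: text !important;';
        blocked[i].onselectstart = null; // some sites also cancel the selectstart event
    }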

    3. Multimedia sites that show content from TV shows that will remain unnamed but only allow users from the USA to view them. 

    Using a proxy server sometimes works, but for those Flash-loaded videos that don't play through a proxy you can use YSlow to find the base URI that the movie is loaded from and just load that up directly.

    To be honest, I think these companies have got wise to the fact that people will try this, as they now insert location specific adverts into the movies, which they never used to do. However, it's still better than moving to the States!

    4. Sites that pack and obfuscate their Javascript in the hope of preventing users from stealing their code. 

    Obviously minification is good practice for reducing file size, but if you want to unpack some JavaScript then you have a couple of options, and there may be some valid reasons other than just wanting to see the code being run, e.g preventing XSS attacks.

    Option 1 is to use my script unpacker form, which lets you paste the packed code into a textarea, hit a button and then view the unpacked version in another textarea for you to copy out and use. It will also decode any encoded characters, as well as formatting the code and handling code that has been packed multiple times.

    If you don't want to use my wonderful form, and I have no idea why you wouldn't, then Firefox comes to the rescue again. Copy the packed code, open the JavaScript error console and paste the code into the input box at the top, with the following added to the start of it:
    
    //add to the beginning eval=alert;
    eval=alert;eval(function(p,a,c,k,e,r){e=String;if(!''.replace(/^/,String)){while(c--)r[c]=k[c]||c;k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('3(0,1){4(0===1){2("5 6")}7{2("8 9")}',10,10,'myvar1|myvar2|alert|function|if|well|done|else|not|bad'.split('|'),0,{}))
    
    // unpacked returns
    function(myvar1,myvar2){if(myvar1===myvar2){alert("well done")}else{alert("not bad")}
    



    Then hit evaluate and the unpacked code will open in an alert box which you can then copy from.

    What the code is doing is reassigning the eval function to alert, so that when the packed code runs within its eval statement, instead of executing the evaluated code it shows it in an alert message box.

    There are many more techniques which I won't go into, but the question then is why do people do it?

    Well, the main reason is that people spend a lot of time creating websites and they don't want some clever script kiddy or professional site ripper to come along, steal their content and use it without permission.

    People will also include whole sites nowadays within frames on their own sites, or just rip the whole thing, CSS, images, scripts and everything else, with the click of a button. There are too many tools available to count, and a lot of phishing sites will copy a bank's layout but then change the functionality so that it records your account login details.

    I have personally seen two sites now, that I either worked on or know the person who did the work, appear on the net under a different URL with the same design, images and JS code, all the same apart from the wording, which was in Chinese.

    The problem is that every modern browser now has a developer tool set, like Firebug or the Chrome and Internet Explorer developer toolbars. For older browsers there is Opera's Dragonfly and even Firebug Lite, which replicates Firebug functionality for those of you wanting to use it on older browsers like IE 6.

    Therefore with all these built in tools to override client side security techniques it seems pretty pointless trying to put any sort of security into your site on the client side.

    Even if you don't want to be malicious and steal or inject anything, you can still modify the DOM, run your own JavaScript, change the CSS and remove x, y and z.

    All security measures related to user input should be handled on the server to prevent SQL injection and XSS hacks, but that's not to say that duplicating validation checks on the client isn't a good idea.

    For one thing it saves time if you can inform a user that they have inputted something incorrectly before the page is submitted.

    No one likes to fill in a long form submit it and wait whilst the slow network connection and bogged down server takes too long to respond only to show another page that says one of the following:
    • That user name is already in use please choose another one.
    • Your email confirmation does not match.
    • Your password is too short.
    • You did not complete blah or blah.
    Things like this should be done client side if possible, using Ajax for checks that need database look-ups, such as username availability tests. Using JavaScript to test whether the user has JavaScript enabled is a good technique for deciding whether to rely purely on server side validation or to load in functions that allow for client side validation as well.
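
    As a rough sketch of the sort of Ajax username check meant here, using jQuery since the rest of the code on this blog does; the /check-username endpoint and its {available: true/false} JSON response are made up for the example:

    // fire an Ajax availability check when the username field loses focus
    jQuery('#username').on('blur', function () {
        var name = jQuery(this).val();
        if (!name) { return; }
        jQuery.getJSON('/check-username', { username: name }, function (data) {
            if (data.available) {
                jQuery('#username-error').text('');
            } else {
                jQuery('#username-error').text('That user name is already in use please choose another one.');
            }
        });
    });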

    However client side code that is purely there to prevent content from being accessed without consent seems pointless in the age of any modern browser.

    Obviously there is a large percentage of web users out there who wouldn't know the first thing about bypassing client side security code, and the blocking of the right click context menu would seem like black magic to them.

    Unfortunately for the people who still want to protect their client side code, the people that do want to steal the content will have the skills to bypass all your client side cleverness.

    It may impress your boss and seem worth the $50 for about 10 minutes, until someone shows you how to add your own JavaScript to a page to override any functions already there for blocking events and checking for iframe positioning.

    My only question would be: is it really hacking to modify the DOM to access or bypass features meant to keep the content on that page?

    I don't know what other people think about this, but I would say no, it's not.

    The HTML, images, JavaScript and CSS are ALL on my computer at the point I view them in whatever browser I am using. Therefore, unless I am trying to change or inject something into the web or database server to affect future site visitors, or trying to bypass challenge responses, I am not really hacking the site, just modifying the DOM.

    I'd be interested to know what others think about that question.

    By Strictly-Software

    © 2015 Strictly-Software

    Thursday, 22 August 2013

    Handle jQuery requests when you want to reference code that hasn't loaded yet


    As you should be aware, it is best practice to load your JavaScript at the bottom of your HTML for performance, and to stop blocking or slow load times.

    However, sometimes you may want to reference, higher up in the page, an object that has not yet loaded.

    If you are using a CMS or code that you cannot change, then you may not be able to add your event handlers below any scripts that may be needed to run them. This can cause errors such as:

    Uncaught ReferenceError: $ is not defined 
    Uncaught ReferenceError: jQuery is not defined

    If you cannot move your code below where the script is loaded then you can make use of a little PageLoader function that you can pass any functions to and which will hold them until jQuery (or any other object) is loaded before running the functions.

    A simple implementation of this would involve a setTimeout call that constantly polls the check function until your script has loaded.

    For example:

    
    PageLoader = { 
     
     // holds the callback function to run once jQuery has loaded if you are loading jQuery in the footer and your code is above
     jQueryOnLoad : function(){}, 
    
     // call this function with your onload function as the parameter
     CheckJQuery : function(func){
      var f = false;
    
      // has jQuery loaded yet?
      if(window.jQuery){
       f=true;
      }
      // if not, store the function the first time through the loop and then set a timeout to check again
      if(!f){
       // if we have a function store it until jQuery has loaded
       if(typeof(func)=="function"){    
        PageLoader.jQueryOnLoad = func;
       }
       // keep looping until jQuery is in the DOM
       setTimeout(PageLoader.CheckJQuery,200);
      }else{
       // jQuery has loaded so call the function
       PageLoader.jQueryOnLoad.call();    
      }
     }
    }
    


    As you can see the object just holds the function passed to it in memory until jQuery has been loaded in the DOM. This will be apparent because window.jQuery will be true.

    If the object isn't in the DOM yet then it just uses a setTimeout call to poll the function until it has loaded.

    You could increase the length of time between the polls, or even add a maximum limit so that it doesn't poll forever and instead, after, say, 10 loops, logs an error to the console (a sketch of that is below). However, this is just a simple example.
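
    For instance, a minimal standalone variation with a retry limit might look like this; the 200ms delay and the 10 attempt cap are arbitrary values for the example:

    // standalone sketch: poll for jQuery but give up after maxAttempts tries
    function checkJQueryWithLimit(func, attempts, maxAttempts){
        attempts = attempts || 0;
        maxAttempts = maxAttempts || 10;

        if(window.jQuery){
            // jQuery has loaded so run the function passed in
            if(typeof(func) == "function"){ func(); }
        }else if(attempts >= maxAttempts){
            // give up and report to the console rather than polling forever
            if(window.console && console.error){ console.error("jQuery never loaded - giving up"); }
        }else{
            // try again in 200ms with an incremented attempt count
            setTimeout(function(){ checkJQueryWithLimit(func, attempts + 1, maxAttempts); }, 200);
        }
    }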

    You would call the function by passing your jQuery referencing function to the CheckJQuery function like so.

    
    <script>
    PageLoader.CheckJQuery(function(){
     $("#myelement").bind('click', function(e) { 
      // some code
      alert("hello");
     });
    });
    </script>
    


    It's just a simple way to overcome a common problem where you cannot move your code about due to system limitations but require access to an object that will be loaded later on.

    Saturday, 25 June 2011

    Loading Social Media Code Asynchronously

    Preventing Slow Page Loads By Loading Widgets Asynchronously

    I have noticed on a number of sites that use widgets such as the popular AddThis widget that many people have caused problems for themselves by adding the <SCRIPT> tag that loads the widget in every place that they want the widget to display.

    On a news blog with 10 articles this means that the same <SCRIPT> could be referenced 10 times. Now, I know browsers are clever enough to know what they have loaded and to utilise caching, but as every user of Google AdSense knows, having to embed <SCRIPT> tags in the DOM at the place where you want the advert to display, instead of referencing the script once at the bottom of the HTML or loading it in with JavaScript, can cause slow page loads, as the browser will hang when it comes across a script until its content has been loaded.

    I have personally spent ages trying to hack Google AdSense's code about to utilise the same asynchronous loading that they now use for their Analytics code, but to no avail. Their code loads in multiple iframes, and any hacking seems to trigger a flag at their end that probably signifies some kind of fraudulent abuse.

    However, for other kinds of widgets, including the AddThis widget, there is no need to reference the script multiple times, and I am busy updating some of my sites to utilise another method, which can be seen on the following test page >> http://www.strictly-software.com/AddThis_test.htm


    Loading addthis.com social media widgets asynchronously

    I wanted to keep the example as simple as possible, so in that regard, if you use IE it's best to view it in IE 9, as the only cross browser code I have added is a very basic addEvent function and an override for document.getElementsByClassName, which doesn't exist pre IE 9.
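
    As a rough sketch of what such an override might look like (this is an illustration, not the exact code used on the test page), a simple fallback for browsers without document.getElementsByClassName could be:

    // very basic fallback for browsers that lack document.getElementsByClassName (e.g IE 8)
    if (!document.getElementsByClassName) {
        document.getElementsByClassName = function (className) {
            var matches = [];
            var all = document.getElementsByTagName('*');
            var re = new RegExp('(^|\\s)' + className + '(\\s|$)');
            for (var i = 0; i < all.length; i++) {
                if (re.test(all[i].className)) {
                    matches.push(all[i]);
                }
            }
            return matches;
        };
    }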

    Other browsers, i.e Chrome, Firefox, Safari, Opera and any other standards compliant browser that supports the DOM 2 event model, should work without a problem.


    Specifying where the Social Media widgets will appear

    HTML 5 allows custom attributes that validate correctly as long as they are prefixed with data-, therefore I have utilised this much needed feature to specify the URL and the title of the item that is to be bookmarked on the desired social media site.

    You might have a page with multiple items, blog articles or stories, each with their own social media widget, and instead of defaulting to the URL and title of the current document it is best to specify the details of the article the AddThis widget refers to.

    The HTML code for outputting a widget is below:


    <div class="addthis_wrapper" data-url="http://www.strictly-software.com/twitter-translator" data-title="Twitter Translator Tool"></div>


    Notice how the URL and Title are referenced by the

    data-url="http://www.strictly-software.com/twitter-translator" data-title="Twitter Translator Tool"

    attributes. You can read more about using custom HTML5 attributes as they are becoming more and more commonly used.

    Changing the placeholders into Social Media widgets

    Once the placeholder HTML elements are inserted into your DOM where you want the AddThis widget to appear, instead of doing what many WordPress plugins and coders do and adding a reference to the hosted script next to each DIV, you just need to add the following code in the footer of your file.

    You can either wrap the code in an on DOM load event or an on window load event, or, as I have done, just wrap it in a self calling function, which means it will run as soon as the browser gets to it.

    You can view the code in more detail on my test page, but to keep things simple I have just done enough cross browser tweaks to make it run in most browsers, including older IE. There might be some issues with the actual AddThis code that is loaded in from their own site, but I cannot do anything about their dodgy code!

    The JavaScript code to change the DIVs into Social Media Widgets

    (function(){
        // this won't be supported in older browsers such as IE 8 or less
        var els = document.getElementsByClassName('addthis_wrapper');

        if(els && els.length > 0){

            // create a script tag and insert it into the DOM in the HEAD
            var at = document.createElement('script');
            at.type = 'text/javascript';

            // make sure it loads asynchronously so it doesn't block the DOM loading
            at.async = true;
            at.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + 's7.addthis.com/js/250/addthis_widget.js?pub=xa-4a42081245d3f3f5';

            // find the first <SCRIPT> element and add our new one before it
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(at, s);

            // loop through all elements with the class=addthis_wrapper, handling each one inside its own
            // function so that the url and title values remain correct for the event handlers added below
            for(var x = 0; x < els.length; x++){
                (function(el){

                    // get our custom attribute values for the URL to bookmark and the title that describes it,
                    // defaulting to the placeholders that will take the values from the page otherwise.
                    // By using data-title and data-url we are HTML 5 compliant
                    var title = el.getAttribute("data-title") || "[TITLE]";
                    var url = el.getAttribute("data-url") || "[URL]";

                    // create an A tag
                    var a = document.createElement('A');
                    a.setAttribute('href','http://www.addthis.com/bookmark.php');

                    // create an IMG tag
                    var i = document.createElement('IMG');
                    i.setAttribute('src','http://s7.addthis.com/static/btn/lg-share-en.gif');

                    // set up your desired image sizes
                    i.setAttribute('width','125');
                    i.setAttribute('height','16');
                    i.setAttribute('alt','Bookmark and Share');
                    i.style.cssText = 'border:0px;';

                    // append the image to the A tag
                    a.appendChild(i);

                    // append the A tag (and image) to the DIV with the class=addthis_wrapper
                    el.appendChild(a);

                    // using the DOM 2 event model to add events to our element - remember if you want to support IE before version 9 you will need a wrapper addEvent
                    // function that uses attachEvent for old IE and addEventListener for IE 9, Firefox, Opera, Webkit and any other proper browser
                    addEvent(a,"mouseover",function(e){if(!addthis_open(this, '', url, title)){StopEvent(e,a)}});
                    addEvent(a,"mouseout",function(){addthis_close()});
                    addEvent(a,"click",function(e){if(!addthis_sendto()){StopEvent(e,a)}});

                })(els[x]);
            }
        }
    })();


    The code is pretty simple and makes use of modern browsers' support for document.getElementsByClassName to find all elements with the class we identified our social media containers with. This can obviously be replaced with a selector engine such as Sizzle if required.

    First off, the code builds a SCRIPT element and inserts it into the DOM in the HEAD section of the page. The important thing to note here is that, as this code is at the bottom of the page, nothing should block the page from loading, and even if the SCRIPT block was high up in the DOM the code only runs once the DOM has loaded anyway.

    // create a script tag and insert it into the DOM in the HEAD
    var at = document.createElement('script');
    at.type = 'text/javascript';

    // make sure it loads asynchronously so it doesn't block the DOM loading
    at.async = true;
    at.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + 's7.addthis.com/js/250/addthis_widget.js?pub=xa-4a42081245d3f3f5';

    // find the first <SCRIPT> element and add our new one before it
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(at, s);




    The code then loops through each node that matches, creating an A (anchor) tag and an IMG (image) tag with the correct dimensions and attributes for the title and URL. If none are supplied then the system will default to the document.location.href and document.title, which might be fine if it's the only widget on the page, but if not, values should be specified.


    Events are then added to the A (anchor) tag to fire the popup of the AddThis DIV and to close it again. I have used a basic addEvent wrapper function to do this, along with a StopEvent function to prevent event propagation; these are just basic cross browser functions to handle old cruddy browsers that no-one in their right mind should be using any more. As this is just an example I am not too bothered if this code fails in IE 4 or Netscape, as it's just an example of changing what is often plugin-generated code.
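
    For completeness, here is a rough sketch of what basic addEvent and StopEvent wrapper functions might look like; the real ones used on the test page may differ slightly:

    // basic cross browser event helper - addEventListener for proper browsers, attachEvent for old IE
    function addEvent(el, type, func){
        if(el.addEventListener){
            el.addEventListener(type, func, false);
        }else if(el.attachEvent){
            el.attachEvent('on' + type, func);
        }
    }

    // basic cross browser helper to stop an event bubbling and cancel its default action
    // (the element argument matches how it is called above but isn't needed here)
    function StopEvent(e, el){
        e = e || window.event;
        if(e.stopPropagation){ e.stopPropagation(); }else{ e.cancelBubble = true; }
        if(e.preventDefault){ e.preventDefault(); }else{ e.returnValue = false; }
    }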


    You can see an example of the code here >>

    http://www.strictly-software.com/addthis_test.htm

    This methodology is being used more and more by developers, but there are still many plugins available for WordPress and Joomla that insert remote-loading SCRIPTs all throughout the DOM, as well as using document.write to insert remote SCRIPTs. These methods should be avoided if at all possible, especially if you find that your pages hang when loading, and as you can see from the example code it is pretty simple to convert to your favourite framework if required.

    Sunday, 23 January 2011

    Why I hate debugging in Internet Explorer 8

    Debugging Javascript in Internet Explorer

    Now, with IE 8 and its new developer toolbar, debugging JavaScript should have become a lot easier. There is now no need to load up Firebug Lite to get a console, or to use bookmarklets to inspect the DOM or view the generated source, and in theory this is all well and good.

    However, in practice I have found IE 8 to be so unusable that I have literally stopped using it during development unless I really have to.

    When I do find myself having to test some code to ensure it works in IE, I have to say a little prayer to the God of Geekdom before opening it up, because I know that within the next 5 minutes I will have found myself killing the IE process within Task Manager a couple of times at the very least.

    Not only does IE 8 consume a large portion of my CPU cycles, its very ill thought out event model makes debugging any kind of DOM manipulation a slow, painful chore.

    Unlike proper browsers, IE's event model is single threaded, which means that only one event can be processed at any point in time during a page's lifetime. This is why IE has the global window.event object; it holds the current event being processed at any point in time.
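
    This is also why, as a rough illustration, old school cross browser event handlers have to fall back to window.event when no event object is passed in:

    // typical cross browser handler - proper browsers pass the event in, old IE exposes it on window.event
    function onClickHandler(e){
        e = e || window.event;
        var target = e.target || e.srcElement;
        alert("You clicked on " + target.tagName);
    }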

    Many developers over the years have moaned at IE for this and have hoped that with each new release they would fix this odd behaviour. However, every time a new version is rolled out a lot of web developers are bitterly disappointed, because apparently Microsoft feels this quirky event model is a design feature to be enjoyed rather than a bug to be suffered, and they don't seem to have any intention whatsoever of fixing it, or at least making it DOM 2 compliant.

    I don't know why they cannot do what Opera does and just implement both event models. At least that way they could make all the proper web developers happy at the same time as all the sadomasochists who enjoy IE development.

    This event model really comes into its own when you are trying to debug something using the new developer toolbar, as very often you want to pipe debug messages to the console.

    If you are just outputting the odd message whenever a button is clicked or a form loaded then this can be okay, but if you attempt anything that involves fast moving events, such as moving elements around the page or tracking fast incrementing counters, then you will soon suffer the pain of waiting many minutes for the console to catch up with the browser's actions.

    Whereas other browsers such as Chrome or Firefox are perfectly capable of outputting lots of debug messages to the console at the same time as mousemove events are being fired, IE seems either to batch them all up and spit them out when the CPU drops below 50% (which might be never), or occasionally the whole browser will just crash.

    At first I thought it was just me, as my work PC is not the fastest machine in the office, but I have seen this problem on many other people's computers and I have also experienced it at home on my Sony VAIO laptop.

    As a test I have created the following page which can be found here:


    Try it out for yourself in a couple of browsers, including IE 8, and see what you think. I haven't had the chance to test IE 9 yet, so I don't know if the problem is the same with that version, but I would be interested to know if anyone could test this for me.

    The test is simple and just includes a mousemove event which collects the current mouse co-ordinates and pipes them to the console. It does this for a number of iterations, which can be set with the input box.
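
    The test page itself isn't reproduced here, but the sort of code it runs is roughly this, with the hard coded maxLogs value standing in for the number entered in the input box:

    <script>
    // log the mouse co-ordinates to the console for a set number of mousemove events
    var logged = 0, maxLogs = 1000;

    document.onmousemove = function(e){
        e = e || window.event;
        if(logged < maxLogs){
            logged++;
            console.log("x: " + e.clientX + " y: " + e.clientY + " (" + logged + ")");
        }
    };
    </script>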

    I have found that IE will manage when this counter is set to anything less than 100, but putting it at 1000 or above just crashes or freezes my browser as well as killing the CPU.

    Let me know what you think of this test and whether anyone else has major issues with IE and its console logging capabilities.

    Sunday, 18 October 2009

    Using Document onLoad instead of Window onLoad

    Why we should use DOMReady instead of window.onload

    I know this is a topic that has been covered extensively by many developers, but I thought I would give an example of a good reason to use a function that tests for the DOM being ready before running your JavaScript functions, instead of waiting for the window.onload event to fire.

    jQuery users will be used to writing code like this:

    $(document).ready(function() {
        alert("DOM is ready");
    });

    Which will try to fire when the DOM is ready and, if not, when the window is ready. If you write code like this:

    $(window).load(function() {
        alert("Window and all Iframe and Image content is ready");
    });

    Then you are waiting for the window to be ready before firing your functions, which may be a safer option if you want to manipulate image or iframe content, or content that has been loaded externally, but on most occasions you would want to use an onDOMReady function like the one below, taken from code I use on this site. It is pretty similar to the cross browser DOMReady functions that most major frameworks, including jQuery, use internally, however I am still testing for old Opera and WebKit users, whereas jQuery does not do this anymore.

    Also, I have had to take the compressed version and then unpack it, due to not being on my own PC tonight because of problems with my leg, so the code shouldn't be treated as copy and paste code but more like pseudo-code.
    onDOMLoad: function () {
        // only look for modern browsers that haven't spoofed their agent (not a perfect check but then it's their own fault!)
        if (Browser.w3cDOM && !Browser.spoof) {
            // For old WebKit, KHTML and old Opera we use a timer to test for readyState
            if (((Browser.webkit && Browser.webkitversion < 525) || Browser.khtml || (Browser.opera && Browser.version < 9)) && typeof document.readyState != "undefined") {
                PageLoader.DOMTimer = setInterval(function () {
                    if (/loaded|complete/.test(document.readyState)) {
                        PageLoader.RunDOMLoadFunctions();
                    }
                }, 10);
            // For other standards compliant modern browsers apart from IE we can use the standard DOMContentLoaded event
            // making sure to remove the anonymous function straight away.
            } else if (document.addEventListener) {
                AddEvent(document, "DOMContentLoaded", function () {
                    RemoveEvent(document, "DOMContentLoaded", arguments.callee);
                    PageLoader.RunDOMLoadFunctions();
                }, false);
            // For IE (and Opera, which is why we do this last) we try two methods; the first one can sometimes
            // fire very late on in the day.
            } else if (document.attachEvent) {
                AddEvent(document, "onreadystatechange", function () {
                    if (document.readyState === "complete") {
                        RemoveEvent(document, "onreadystatechange", arguments.callee);
                        PageLoader.RunDOMLoadFunctions();
                    }
                }, false);

                // We also use this trick by Diego Perini to continually check for DOM readiness by
                // checking for an error thrown by the doScroll call. Once no error is reported we
                // know the DOM is ready. This doesn't work for Iframes, note the window comparison.
                if (document.documentElement.doScroll && window == window.top) {
                    (function () {
                        if (PageLoader.DOMLoaded) return;
                        try {
                            document.documentElement.doScroll("left");
                        } catch (e) {
                            setTimeout(arguments.callee, 0);
                            return;
                        }
                        PageLoader.RunDOMLoadFunctions();
                    })();
                }
            }

            // Always add a window onload function to run anything not fired already.
            PageLoader.AddWindowLoadEvent(function () {
                PageLoader.RunDOMLoadFunctions();
            });
            return true;
        }
    }


    A good reason for doing this came to light very recently with a system we are working on, where we are updating the folder structure that holds site-related files such as images, banners and logos that are loaded and used on the site. Because we are halfway through the process the folder structure has changed, which means that when a page loads all the images appear broken, as they haven't been moved to the new structure yet. This is obviously on a development server!

    However, on a few pages it has been noticeable that, as the page is loading, you may start filling in form fields and then suddenly, halfway through, your focus is removed from the field you are currently on and taken to another field, usually the first field on the page. The reason for this is that the code that runs the focus() call is fired by a window.onload event rather than an onDOMReady function.

    This doesn't happen everywhere and I haven't fully looked into whether it's mainly an Internet Explorer problem, as IE has well-known issues with firing functions between the loading of the DOM and the window, or an issue where the DOMReady doesn't fire for whatever reason and then falls back to window.onload. However, it's an example of the problem that when using window.onload the browser will wait until all images have been loaded (or failed to load) before firing, and due to the number of broken banner logos and images that means quite a number of failed requests, which is why the delay seems so long.
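    As a rough illustration, and not the actual code from the system in question, the difference comes down to which event the focus() call is hooked to; the "firstField" ID below is made up:

    // Hooked to window.onload: only fires after every image and iframe has loaded
    // (or failed to load), so with lots of broken images the focus arrives very late
    // and steals the cursor from whatever field the user has moved on to.
    window.onload = function () {
        document.getElementById("firstField").focus();
    };

    // Hooked to DOMContentLoaded (in browsers that support it, hence the cross
    // browser function above): fires as soon as the DOM is parsed, before the
    // images are even requested, so the focus is set before the user starts typing.
    if (document.addEventListener) {
        document.addEventListener("DOMContentLoaded", function () {
            document.getElementById("firstField").focus();
        }, false);
    }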

    A quick thought has been to add an extra fallback call in between DOMReady and window.onload which would fire once the BODY has loaded. This could be similar to the function of mine below, which polls the DOM looking for the BODY element to be available before running the desired function. Again, this is taken directly from unpacked code with the comments re-added, so treat it as pseudo code.
    onBodyLoad: function (fn) {
        // Check the function is a function e.g. typeof(fn)=="function"
        if (S.isFunction(fn)) {
            // if we haven't already called this function
            if (!this.Body[fn]) {
                // if the body has already loaded and we can get a reference to it
                if (P.BodyLoaded || S.getBody()) {
                    // Call the function
                    fn.call();

                    // Set some flags so we know we have called the function and also
                    // that the body has loaded in case the other (window/dom) ready functions haven't
                    P.Body[fn] = true;
                    P.BodyLoaded = true;
                } else {
                    // otherwise set a timeout to check again for the body in 50ms, passing the function back in
                    setTimeout(function () {
                        P.onBodyLoad(fn);
                    }, 50);
                }
            }
        }
    }


    Depending on what you put between the closing BODY tag and the closing HTML tag, which shouldn't be much, this might help give the effect of a DOMReady handler if, for whatever reason, the original code doesn't fire.
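    To make the usage concrete, here is a hedged sketch of how you might call it. The PageLoader object and function names come from my snippet above, and the field ID is made up, so again treat it as pseudo code.

    // pseudo code usage of the BODY load fallback described above
    PageLoader.onBodyLoad(function () {
        // safe to touch elements inside the BODY here even if DOMReady never fired,
        // because the poll only runs the function once the BODY element exists
        document.getElementById("firstField").focus();
    });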

    Tuesday, 21 July 2009

    Firebug 1.4.0 and Highlighter.js

    The mysterious case of the disappearing code

    Yesterday I posted an article relating to an issue with a code highlighter script and Firebug 1.4.0. A rough overview is that:

    a) I use the highlight.js code from Software Maniacs to highlight code examples.
    b) This works cross browser and has done for a number of months.
    c) On loading Firefox 3.0.11 I was asked to upgrade Firebug from 1.3.3 to 1.4.0.
    d) After doing so I noticed all my code examples were malformed with the majority of code disappearing from the DOM.
    e) On disabling the Firebug add-on the code would re-appear.
    f) This problem didn't occur on my other PC, which was still running Firebug 1.3.3 with Firefox 3.0.11.

    Other people have contacted me to say they had similar issues, and others who were using Firefox 3.5 did not have this problem, so it seemed to be a problem specific to Firefox 3.0.11 combined with Firebug 1.4.0.

    So tonight I was planning to debug the highlight.js file to see what the crack was. The version of highlight.js I use on my site is compressed and minified, so I uncompressed the file with my online unpacker tool. I thought I would just try some problematic pages on my site with this uncompressed version and, lo and behold, it worked. The code didn't disappear!

    So I have re-compressed the JS file with my own compressor tool and changed the references throughout the site to use http://www.strictly-software.com/scripts/highlight/highlight.sspacked.js instead of the original file http://www.strictly-software.com/scripts/highlight/highlight.pack.js and it all seems to work (at least for me).

    If anyone manages to get to the bottom of this problem then please let me know, but it seems there must be some sort of conflict occurring between these two codebases, and I think it's very strange!

    I have created a copy of yesterday's posting that still uses the original compressed file so that the problem can be viewed. You can view the problem code here.

    Sunday, 14 September 2008

    My growing love for Javascript

    A shaky start to the love affair

    I have no shame in admitting that I am currently in love with JavaScript. It may be that, for the last eon or two, I have been working with clunky old scripting languages such as classic ASP, and whenever I get a chance to do some JavaScript I grab it with open arms as an opportunity to do what feels like proper programming. I will be the first to admit that I never used to see JavaScript in the way I do now; as well as never understanding its full potential, I never even thought of it as a proper object-orientated language, which it most certainly is. When I first swapped over from client/server applications in the early 90's to web development using ASP/COM/SQL Server 6, JavaScript was just a nice little scripting language that could set the focus of inputs, validate input client side and do other little tweaks that didn't seem of much importance in the grand scheme of things. The "proper" coding was done server side, and I would have traded my ciggies by the dozen with anyone to work on a stored procedure or COM component rather than have to fiddle around with a scripting language that I had little time or patience for.

    Coming from a VB background I hated the problems of case sensitivity and having to remember that equality tests involved more than one equals sign, sometimes even three. Trying to debug a script was always a nightmare, having to sit there clicking away all those alert boxes before realising you were in some sort of never-ending loop and having to kill the browser and start again. Yes, I really didn't think much of it, and I am certainly not alone in having felt like that.


    The love that grew from necessity.

    So over the years my views on JavaScript changed from hatred to a mild respect, still outweighed by all the annoyances that come with trying to write a fully functional script that is complex and cross browser at the same time. I still didn't have to work with it that much apart from replicating any server side form validation on the client and some mild DOM manipulation. My annoyance with the language itself had disappeared after I had learnt Java and C#, and the scripts that I did have to knock out were not that complex; however, I still had an attitude that if it worked in the browser I was using, which was always the latest version of Internet Explorer, then the script was fine by me. If it didn't work in Netscape or Safari then I would just ask the office "JavaScript Guru" to have a look, and the code I was given usually seemed to work even if I didn't know what it was doing.

    Then the other year I wanted to implement a WYSIWYG editor for the system I was working on. The system was currently using FCKeditor, and I wanted to implement what seemed like a simple request at the time: a character counter, so that as the user typed, the number of characters used in the HTML source was available for viewing. I remember trying to edit the FCK source and realising what a huge beast it was. The size of its directory was ten times the size of the rest of the site, and I was sure that half the code and functionality was not required for my system's requirements. I had a look at some other widgets, including OpenWYSIWYG and another one that our JavaScript guru had used, and then I decided to write my own, combining the best bits of each, stripping out anything not needed, adding my character counter and making it as flexible as possible. It seemed like a straightforward task on paper, but it was the start of a painstaking development and, more importantly, of a long learning process which, although extremely painful at the time, opened my eyes to the wonders of cross browser coding and all the different caveats and pitfalls that were waiting for me to discover.


    Items of interest discovered along the way.

    Whilst developing this widget some of the seemingly simplest things turned out to be some of the most complex. Who would have thought that just putting a cursor at the end of the content in the IFrame editor would be such a tall order? So as well as learning far too much about browser differences, the history of Netscape and Mozilla, and why User-Agents seem to make little sense, I found out some very important information and came across some specific problems that everyone hits at some stage when developing cross browser script.

    1. How Firefox is indeed a wonderful invention with all those extensions, especially Firebug, which made my debugging life pain-free once again. Not only that, but Firebug Lite brings most of that joy to debugging in IE. No more tired fingers from dismissing alert boxes.

    2. The problems with relative URIs displayed within Iframes in Internet Explorer. Read this article for an explanation. The solution was to write out the Iframe content with document.write (see the sketch after this list).

    3. Different implementations of content editable HTML between browsers. Issues setting design mode on in Mozilla and disappearing event listeners. All good fun for the clueless, I can assure you.

    4. All the fun involved in learning about the event model and the problem of the "this" keyword in IE, as well as the memory leaks in older IE versions and the illogical order in which IE fires events.

    5. Differences between browsers when trying to calculate the size of the viewport and window dimensions for making my editor appear in a floating div.

    6. Trying to make the content outputted by the editor as consistent as possible across browsers and XHTML compliant. IE seems to love capital letters and forgetting to close LI and DT elements for some reason.

    7. Much much more.
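    For point 2, here is a rough sketch of the document.write approach; the markup, the base href and the designMode call are illustrative assumptions rather than the actual editor code.

    // create the editor IFrame and write its document out in one go so that IE
    // does not mangle relative URIs inside the dynamically created frame
    var iframe = document.createElement("iframe");
    document.body.appendChild(iframe);

    var doc = iframe.contentWindow.document;
    doc.open();
    doc.write('<html><head><base href="http://www.example.com/" /></head>' +
              '<body><p>Initial editor content</p></body></html>');
    doc.close();

    // turn on editing once the content is in place
    doc.designMode = "on";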

    So as you can see, if you have covered all those topics in detail yourself, which means you will most certainly have read Dean Edwards' competition blog article from start to finish as well as followed most of the links it leads to, this is a lot of information to take in and understand. However, rather than putting me off JavaScript for life, it has actually made me come to love the bloody thing.


    Conclusion

    So whereas in the 90's I used to hate all those cross browser problems, they are now more of a challenge to be overcome, and I love it when I get a complicated piece of code working in the main four browsers as well as in as many old versions as possible. In fact I may get a little too keen sometimes, and often need my colleagues to tell me that the widget doesn't actually need to work in Netscape Navigator 4 or IE 4.

    I am one of those people who will readily admit that I don't know everything, but I like finding out about those missing chunks of knowledge, and when given the choice of an easy life by implementing someone else's code as-is, I will now often choose the more painful but also more enjoyable option of trying to write my own version. I will have a look at some of the best examples out on the web and try to put them all together, which is usually the best way of learning about all those nice little cross browser intricacies along the way.

    As the saying goes, nothing worthwhile in life comes easily, and this seems to be particularly true of writing cross browser JavaScript code.