Thursday, November 27, 2008

JMX-HTTP + MBeans exploration = STRUTS-enabled CC

So now I found this nifty JMX-HTTP adapter, which enables an HTML view of the MBeans interfaces! Victory! But, you might say, the application doesn't seem to work as it should.
It's Struts-based and makes use of custom tag libraries. It came as a WAR file, so I had to extract it (no problem at all) to the CC server. Now, when I navigate to the site, I get the following error:
Failed to load or instantiate TagExtraInfo class: com.cj.string.StringVariable

Where is the problem, you might ask - good question. The problem is of course in CruiseControl: it comes with its own JMX server. Now, the CC control panel is based on MBeans. We can then assume that CC's JMX server does NOT support Struts.
So what I could try to do is enable Struts on the CC JMX server; I don't know whether that is actually possible, so I will first try to gather more information about the CC webserver.

Monday, November 24, 2008

Information Radiator #4 - The depths of EJBs

Who the hell uses EJBs anyway.
I mean, we have Webservices. They handle things so well. Why oh why use EJBs..Bah.

Anyway, having thrown away the idea of accessing webpages directly from the IR, I will instead fetch them through specific PHP adapter pages. This approach has two main advantages:

a) security is (probably) more manageable
b) Collecting information and then displaying it in my own page and own style is always nice

So there is this page that we want to have, which should display in a very simple way the status of the currently running CruiseControl projects. Leaving aside the fact that we have three different CruiseControl servers here, I am currently researching how to obtain the status of the single projects through the bean interface. Easier said than done: I haven't used beans since university, and even there they only mentioned them vaguely (yes, really!).

Thursday, November 20, 2008

Information Radiator #3 - the judgement!

So after testing the system I have come up with, I discovered three things:
  1. Mozilla and IE show totally different behaviour when fetching from remote pages (i.e. non-localhost pages)
  2. IE has a problem with XML DOM attributes for some strange reason
  3. Images are not displayed correctly (because of relative paths in the page sources)
The next step will then be to first fix the IE XML bug, and then fix the Mozilla remote page fetching. Something tells me the reason for the problem may be that IE's Javascript objects (contrary to Firefox's) may not be persistent..

The XML DOM problem in IE turned out to be a (stupid) incorrectly checked unassigned variable.

As for the remote page fetching, after some investigation I came up with some hints:
a) Cross-domain fetching does not always work - Firefox has it disabled by default and fails the fetch; IE can enable it, but will show a dialog asking for user confirmation when it tries to fetch the page.
b) It doesn't seem to work from Firefox even if I am in the same domain. IE behaves somewhat strangely: if I add the site to the "trusted" sites list, it denies access to the page. Otherwise it prompts for confirmation and - WOW - works.. But refreshing the IE page seems to have some troubles. Getting it to work on IE is not a primary goal, since the radiator will be displayed in an Opera browser anyway, so I'll skip this for now.

There is a way to enable cross-domain fetches in Firefox (apparently Firefox doesn't recognize Windows as being part of a domain if you don't log on with a domain username..). Firefox has a run-time privilege structure for scripts (based on the old Netscape Communicator one). By adding a call to the following function to your Javascript code:

netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead")
the script will be granted the privilege if:

a) the signature of the script is valid
b) codebase principal are enabled

Interesting page for prefs.js customization :
http://www.zachleat.com/web/2007/08/30/cross-domain-xhr-with-firefox/

The privileges that you can enable at runtime are :

  • UniversalBrowserRead - Reading of sensitive browser data. This allows the script to pass the same-origin check when reading from any document.
  • UniversalBrowserWrite - Modification of sensitive browser data. This allows the script to pass the same-origin check when writing to any document.
  • UniversalXPConnect - Unrestricted access to browser APIs using XPConnect.
  • UniversalPreferencesRead - Read preferences using the navigator.preference method.
  • UniversalPreferencesWrite - Set preferences using the navigator.preference method.
  • CapabilityPreferencesAccess - Read/set the preferences which define security policies, including which privileges have been granted and denied to scripts. (You also need UniversalPreferencesRead/Write.)
  • UniversalFileRead - window.open of file:// URLs, and making the browser upload files from the user's hard drive.

Wednesday, November 19, 2008

Information radiator part #2 - Javascript design + AJAX = CRAWLER!

So now that it's clear how to actually realize our classes, we can step back and dig deeper into the design (again).
What I want is to share one single XMLHttp object over multiple requests. That is, we need to configure the object with the correct (request, readystate handler) pair for every request.
So we will need a way to couple the XmlHttp object with a readystate handler, for example:
XMLHttpFactory.prototype.setReadyStateChange = function (onReadyStateChange) {
    this.__xmlObj.onreadystatechange = onReadyStateChange;
}
Another thing to consider is the request itself: different requests require fetching different server-side or static HTML pages (which will generate the result - a basic part of the AJAX technique that is not mentioned as often as it should be, imo).
For sending a request to the server, the XMLHttp object provides a basic "open" method for preparing the request, as well as a specific "send" method for forwarding it to the server. For example:
xmlHttp.open("GET","time.asp",true);
xmlHttp.send(null);
So let's add a specific method "submitRequest" in our Javascript prototype:
XMLHttpFactory.prototype.submitRequest = function (nature, target, onReadyStateChange) {
    this.__xmlObj.onreadystatechange = onReadyStateChange;
    this.__xmlObj.open (nature, target, true);
    this.__xmlObj.send (null);
}
Our "submitRequest" method expects the kind of submission (nature, "POST" or "GET"), the target server page, and the onReadyStateChange event handler. Notice that this makes the setReadyStateChange method we defined earlier unneeded, so I removed it.

Next is a class for managing the sequence configuration file (sequence & timings logic) and the configuration itself.

The configuration is represented in XML format to increase readability and understandability.
The various "display" tags each represent a single page that will be displayed. In this case "test1.php" will be fetched and displayed for 10 seconds, and then it will be "test2.php"'s turn.
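For instance, a minimal configuration could look like this (the "display" tag and its time/source attributes match what the sequence code reads later on; the root element name here is just an assumption):

```xml
<sequence>
    <!-- each display entry: which page to fetch and how long to show it (seconds) -->
    <display source="test1.php" time="10"/>
    <display source="test2.php" time="10"/>
</sequence>
```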

Considering that we want to show webpages from different sources, we will first need to fetch them and then process them. Following this train of thought, we can come up with a straightforward structure:

  • on the bottom, a transport level which takes care of the XMLHttp handling
  • on top of it, a presentation level, which displays pages based on the configuration file we feed it with
Easy.
The transport level is very basic. Its interface includes :
  1. a target URL to fetch
  2. a target element to insert the fetched webpage into
What I've come up with is the following (the code is pretty much self-explanatory, so I won't go into more detail):

//Class prototype for the XMLHttp transport facility.
function XMLHttpTransport() {
    try {
        // Firefox, Opera 8.0+, Safari
        this.__xmlObj = new XMLHttpRequest();
    }
    catch (e) {
        // Internet Explorer
        try {
            this.__xmlObj = new ActiveXObject("Msxml2.XMLHTTP");
        }
        catch (e) {
            try {
                this.__xmlObj = new ActiveXObject("Microsoft.XMLHTTP");
            }
            catch (e) {
                alert("Your browser does not support AJAX!");
                return false;
            }
        }
    }
}

//Variable for XMLHTTP Object
XMLHttpTransport.prototype.__xmlObj;
//Variable for Item Id - attached to xmlObj!
XMLHttpTransport.prototype.__item;


//Request submitter
XMLHttpTransport.prototype.submitRequest = function (nature, target, item) {
    this.__xmlObj.__item = item;
    this.__xmlObj.__target = target;
    this.__xmlObj.onreadystatechange = this.ReadyStateChange;

    try {
        this.__xmlObj.open (nature, target, true);
    }
    catch (e) {
        alert ("XMLHTTP Open didn't work as expected! Exception : " + e.description);
    }

    try {
        this.__xmlObj.send (null);
    }
    catch (e) {
        alert ("XMLHTTP Send didn't work as expected! Exception : " + e.description);
    }
}

//Static event handler for basic display - assigns an element its inner HTML!
//The global xmlHttpTransport is used to reach the xmlhttp object, since the
//handler is invoked in the context of the xmlhttp object, not XMLHttpTransport.
XMLHttpTransport.prototype.ReadyStateChange = function () {
    switch (xmlHttpTransport.__xmlObj.readyState) {
        case 0:
            elText = "";
            break;
        case 1:
            elText = "Request ready";
            break;
        case 2:
            elText = "Request sent";
            break;
        case 3:
            elText = "Processing request..";
            break;
        case 4:
            if (xmlHttpTransport.__xmlObj.status == 200)
                elText = xmlHttpTransport.__xmlObj.responseText;
            else
                elText = "Page Not Found: " + this.__target;
            break;
    }
    document.getElementById(this.__item).innerHTML = elText;
}

While for the presentation level we will have :

//-----------------------------------------------------------------------------------------------------
//Timed execution
function SequenceConfig(configData) {
    //Loads the document
    try {
        //All other browsers
        parser = new DOMParser();
        this._xmlDoc = parser.parseFromString(configData, "text/xml");
    }
    catch (e) {
        try {
            this._xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
            this._xmlDoc.async = "false";
            this._xmlDoc.loadXML(configData);
        }
        catch (e) {
            alert('No support for DOM in your browser?');
        }
    }
    this._currentDisplay = 0;
}

//Configuration properties
SequenceConfig.prototype._xmlDoc;
SequenceConfig.prototype._currentDisplay;

SequenceConfig.prototype.displayNextPage = function (xmlHttpTransport) {
    //Initialize if needed
    if (this._currentDisplay == -1) {
        el = this._xmlDoc.documentElement.firstChild;
        this._currentDisplay = this._get_nextNode (el);
    }
    //Time next display!
    t = setTimeout("config.displayNextPage (xmlHttpTransport)", this._currentDisplay.getAttribute ('time') * 1000);
    //Display it!
    xmlHttpTransport.submitRequest ("GET", this._currentDisplay.getAttribute ('source'), "root");
    //Get next page
    this._currentDisplay = this._get_nextNode (this._currentDisplay.nextSibling);
}

SequenceConfig.prototype._get_nextNode = function (node) {
    x = node;
    if (x == undefined) {
        return -1; //Stop if there are no more nodes
    }
    //Skip anything that is not an element node (nodeType 1)
    while (x.nodeType != 1) {
        x = x.nextSibling;
        if (x == undefined) {
            return -1; //Stop if there are no more nodes
        }
    }
    return x;
}

The configuration file is loaded into a dedicated variable at startup with the help of a small PHP snippet (which loads the content of the configuration file and removes the newlines):


//Configuration data will be included through a server-side include directly to the variable.
var configData = '';
var xmlHttpTransport = "";
var config;
So all that is missing now is a Javascript initiator (a method to start the whole process), which will be executed when the page loads (in the onLoad event of the "body" tag):
function init()
{
    xmlHttpTransport = new XMLHttpTransport();
    config = new SequenceConfig (configData);
    config._currentDisplay = -1;
    config.displayNextPage (xmlHttpTransport);
}
Easy eh? ;)

Tuesday, November 18, 2008

Information radiator - Javascript a Go-go

Well, it looks like we want to improve Scrum in our company.
We need a brand-new information radiator.

An information radiator is actually a whiteboard, or a big piece of paper, or even better a monitor - sometimes even an actual traffic light! Its purpose is to provide status information on the hot projects running in a development team, in a packed, essential and easily understandable form.

You could compare an information radiator to an information cache. CruiseControl's main page is a good example of information radiator.

The single sources from which the information to be displayed is fetched can be totally inhomogeneous. In our case, for example, we want to display our latest burndown graph, the results of the currently running tests, analysis tool results, bug graphs, the current build status, and even the coffee machine status or a toilet-paper-o-meter.

So what we actually want is a kind of information center about the activities of our company, which should display the status of X projects in a chosen format and a specified sequence.

Above all, it should be minimalistic. Totally minimalistic. Showing only the essential, that is. For example, we are not interested in showing any log data in it, or error / warning messages that occurred during the build process; even execution information would be skipped. That kind of detailed information can be fetched from the primary source of information, for example the builder / compiler reports themselves (in our case the CruiseControl project status pages). The information radiator only provides a quick view of what is going on in our company.

Now that the idea is clear - how do we get this beast together?

The rationale of the above is :

We want to present information fetched from inhomogeneous sources in a compact, distilled and easily understandable form on a single, easily accessible medium.

There will thus be a need for

a) homogenization of information (from the different sources)
b) presentation of the homogenized information
c) last but not least : scalability

Suppose our company has 1000 projects going on, with 100000 employees, 34000 test cases and 3000 different test tools. We need a system which can merge the relevant status information from all of the sources included in the display cycle onto one single information radiator.

The first thing that comes to my mind is CORBA, for some strange reason. It is scalable, customizable (thus can homogenize different information sources), and can be supplied with a good presentation logic quite easily. The reason I thought of CORBA is that the behaviour of our information radiator shows many parallels with the behaviour of CORBA applications.

But of course, we don't have CORBA, nor do we plan to use it. Our needs can easily be fulfilled with simple webpages as well. We will "borrow" some of CORBA's paradigm pieces to find an easy solution, for example by separating presentation logic from the information sources through the use of adapters.

I'm a big fan of XP, so since I have enough design to start with, I'll start right away.

First I'm going for the presentation logic (the easy part, and the one that gives the most satisfaction :) ). The idea is to use Ajax to load the pages from a specified location, and Javascript to refresh the page's content. So first we are looking at timed events in Javascript.

Javascript supports timed events through two methods :

setTimeout
clearTimeout
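A quick sketch of how these two work together (the delay values are arbitrary):

```javascript
// setTimeout schedules a function to run once after a delay (in milliseconds)
// and returns a handle identifying the pending timer.
var timerId = setTimeout(function () {
    console.log("refreshing the display...");
}, 10000);

// clearTimeout cancels a pending timer via its handle, so the callback
// never fires.
clearTimeout(timerId);
```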

Since the presentation logic will be based on Javascript, this is a good occasion to dive into Javascript's classes - which I haven't been doing since I was at university.

Javascript is very powerful. But that we know already. Fact is, Javascript doesn't support classes: it is a language for prototype-based object modeling. Everything in Javascript is an object. Everything. Every object in Javascript has a so-called "prototype", which can be roughly compared to the generic class definition of common OOP languages like Delphi, C#, C++, and (yes, sadly) VB. Objects in Javascript are dynamic objects, which means you can add new properties, methods, or anything you like to any object you have defined, at any point of execution. Confusing, ha? But nifty. This actually gives you total freedom over structure control. On the other hand, the definition of a "class" in the classic sense is a bit different in Javascript: as I said already, classes need to be defined differently than in other programming languages, starting from an object's prototype rather than from a single (static) class definition. What this means practically is :

a) you will find no "class" keyword in Javascript or whatsoever
b) The most common way to define a class is to construct it starting from an object's prototype

Point a) is pretty much straightforward. Point b) is the obscure part.
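To make point b) concrete before diving into the real thing, here is a tiny standalone sketch of the pattern (the names are made up for illustration):

```javascript
// The constructor function acts as the "class" definition.
function Counter(start) {
    this._value = start;
}

// Methods are attached to the prototype and shared by all instances.
Counter.prototype.increment = function () {
    this._value += 1;
    return this._value;
};

// Dynamic objects: properties can be added at any point of execution.
var c = new Counter(10);
c.label = "build counter"; // added on the fly, only on this instance

console.log(c.increment()); // 11
```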
But just think: if anything in Javascript is an object, and thus has a prototype, we can construct a class out of everything. The most common way is to use a function prototype. In our case, for example, I want to make use of Ajax for updating the slideshow on the information radiator. I will thus need a class to group this kind of behaviour, and of course I need some code (a function) that initializes the XmlHttp object (depending on the browser, etc.). So suppose our function looks like this:

function XMLHttpFactory()
{
    try {
        // Firefox, Opera 8.0+, Safari
        this.__xmlObj = new XMLHttpRequest();
    }
    catch (e) {
        // Internet Explorer
        try {
            this.__xmlObj = new ActiveXObject("Msxml2.XMLHTTP");
        }
        catch (e) {
            try {
                this.__xmlObj = new ActiveXObject("Microsoft.XMLHTTP");
            }
            catch (e) {
                alert("Your browser does not support AJAX!");
                return false;
            }
        }
    }
}
This function will not only instantiate the XmlHttp object but also be the "backbone" of our XMLHttpFactory class; inside it, "this" references the object being constructed.
In fact, we can add more behaviour to the function's prototype by adding a (private) variable for storing the reference to the XmlHttp object:
//Variable for XMLHTTP Object
XMLHttpFactory.prototype.__xmlObj;

//Getter
XMLHttpFactory.prototype.getXmlObj = function () {
    return this.__xmlObj;
}
In the same way, we want to provide more functionality so that our XMLHttp factory can react in specific ways to the different requests issued to the server.

Tuesday, November 11, 2008

MS VC++6.0 Custom Build anyone?

Have you ever been fiddling around with custom build steps in Microsoft Visual Studio?
Lots of people find them totally useful. Others think they are merely another lock-in for Microsoft customers.

Custom build rules are THE solution for integrating the compilation of legacy sources - assemblers, image generators, scripting engines, legacy tools (for example for MD5 computation) and so on - into one single project (in Microsoft's dev env, that is). They are fast, but not straightforward. Well, they ARE actually straightforward in VS2005, but not in VC++ 6.0.. Don't ask.

Custom build rules are expressed as batch script. Or so it seems. So far, so good, you say - if it weren't for one problem: they use obscure macros in practically all configurations. Well, the names of the macros help you a lot with understanding the meaning of each of them (for example $(InputDir)), but relying on your imagination isn't always the right thing to do.
Echoing them out with a plain batch "echo" doesn't seem to work, as it shows nothing (at least not when compiling the file in the environment itself), and I am too lazy to try it on stdout / stderr.

I got an idea: since they are project-dependent, they are probably stored in the project configuration files. And indeed, if you open the .dsp file with an editor, you will find them defined there - well, most of them.
But where / how are they defined? And when? How can we change them?

A quick google shows no satisfactory result. It's probably too old a topic, or I am just blind and don't see it.

This article from MS shows a complete list of the macros. Took some time to find it ;).

Here is a brief list of them :

Label - Macro - Description
Intermediate - $(IntDir) - Path to the directory specified for intermediate files, relative to the project directory.
Output - $(OutDir) - Path to the directory specified for output files, relative to the project directory.
Target - $(TargetDir) - Fully qualified path to the directory specified for output files.
Input - $(InputDir) - Relative path to the project directory.
Project - $(ProjDir) - Fully qualified path to the project directory.
Workspace - $(WkspDir) - Fully qualified path to the project directory.
Microsoft Developer - $(MSDevDir) - Fully qualified path to the installation directory for Microsoft Visual C++.
Remote Target - $(RemoteDir) - Fully qualified path to the remote output file.
Target Path - $(TargetPath) - Fully qualified name of the project output file.
Target Name - $(TargetName) - Base name of the output file.
Input Path - $(InputPath) - Fully qualified name of the input file.
Input Name - $(InputName) - Base name of the input file.
Workspace Name - $(WkspName) - Name of the project workspace.
Remote Target Path - $(RemoteTargetPath) - Fully qualified name of the remote output file.

Thursday, November 6, 2008

xslt & Cruisecontrol #2 - how to get HTML code straight into the CruiseControl log!

I was just wondering if there is a way to highlight specific text in the CruiseControl test result display panels...
I think adding a simple HTML font / color tag to the test assertion text would do the job, but on the other hand, since it is logged, the plain text would be echoed to syslog as well - which would make the syslog output unreadable, or hardly readable.

So what we need to know is how nosetests / nosexunit processes the test result.
A quick nosetests --help shows us that nosetests features the following logging facilities (see under the -l / --debug switch):
  • nose
  • nose.importer
  • nose.inspector
  • nose.plugins
  • nose.result
  • nose.selector
nose is the root logger; nose.result could be the result-producing logger (we are not sure yet!)

Also, we know that the test result gets echoed to syslog by our homegrown syslog plugin (which I fixed some posts ago). This happens thanks to the implementation of the formatFailure function, which overrides the standard behaviour and formats the output by fetching the cause from the last exception (remember, assertion failures raise exceptions!).

A quick test on our CruiseControl host shows that the tests' text results (which are merged by the CruiseControl agent at the end of the build cycle into the CruiseControl log test pages) are displayed as plain text. That is, HTML tags are not interpreted. How come?

The answer is simple: the text output by nosetests is treated by nosexunit as plain text, meaning the "<" and ">" signs are not interpreted as XML tags but as simple characters. Thus, when they get merged into the CruiseControl presentation panel through the XSLT transformations, those signs are converted into the &lt; and &gt; entities. That is, "<" and ">" are still rendered as plain text.

Specifically, this is the result of a merged nosexunit output (the message is cut down due to size constraints):

<![CDATA[The test button <p style="color:red">'button 4'</p> has not .....;]]>

So it is actually merged as CDATA (obviously). The whole thing can be fixed with a small change in the XSLT transformation docs: check the \cruisecontrol\webapps\cruisecontrol\xslt folder, which contains all of the XSLT transformation schemes.
The one you are looking for is errors.xsl. But first open buildresults.xsl and make sure that the entry which includes errors.xsl is uncommented. You can tell it is if, when navigating to a build's log (via the project's name link), the Errors / Warnings section is displayed on the right part of the screen.

In the errors.xsl file you have to look for the match="message[@priority='warn']" pattern and add the disable-output-escaping="yes" attribute to the value-of instruction.
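The edit would look roughly like this (a sketch only - the actual template in errors.xsl contains more markup than shown here, and the select expression may differ):

```xml
<xsl:template match="message[@priority='warn']">
    <!-- disable-output-escaping tells the XSLT processor to emit "<" and ">"
         literally instead of escaping them to &lt;/&gt;, so any HTML embedded
         in the message is rendered by the browser -->
    <xsl:value-of select="." disable-output-escaping="yes"/>
</xsl:template>
```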