
SparkleShare

Today a release candidate for SparkleShare 0.2 came out. Thanks to @jan for pointing me to this great piece of software a few days ago. It is kind of a Dropbox or Ubuntu One clone, but completely open source. You can use it to store your local data on a public server, your own server, your intranet, or wherever else you trust to put your data. All you need is a git server, ssh, and a key to access it. No matter if that is your local data storage, GitHub or plan.io.
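
If you host the data yourself, preparing the server side can be as simple as creating a bare repository that is reachable via ssh (a sketch; user, host and path are placeholders):

# create a bare git repository on your server for SparkleShare to sync with
ssh user@your.server 'git init --bare ~/sparkleshare/myproject.git'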

Compared to Dropbox there are of course still a lot of downsides, given its early state. There is no sophisticated encryption: only the data transfer is encrypted, via the ssh connection. There is no GUI for easy sharing of data: whoever has access to your git repository can access your data, but no one else. However, a few nice features already work quite well. Among them are the easy GUI-based setup, the automatic syncing, and the nice way to access old versions of a file: right-click on a file, choose a version, and a new file appears with the old date in its file name.

The source code for the Linux and Mac versions is available; binaries and a Windows version are expected at a later point in time. In case you are on Ubuntu or Debian, I recommend the following steps for installation after getting and unpacking the source from hbons’ github:

# install dependencies for the build process
sudo apt-get install gtk-sharp2 mono-runtime mono-devel monodevelop \
libndesk-dbus1.0-cil-dev nant libnotify-cil-dev libgtk2.0-cil-dev \
libwebkit-cil-dev intltool libtool python-nautilus libndesk-dbus-glib1.0-cil-dev

# the /usr prefix enables the nautilus extension
./configure --prefix=/usr

# compile
make

# use checkinstall instead of make install to install it as a clean debian package
# that can be easily uninstalled using your favorite package manager
sudo checkinstall

# start the service
sparkleshare start

# you may have to restart nautilus to enable the plugin
nautilus -q

So go try it out!

JavaScript Testing and Continuous Integration Part II

In my previous post I described how to use the YUI Test Framework to write JavaScript tests for your web application. This part continues with how to run the tests in different browsers and how this can be used with a continuous integration server.

Executing your tests in different browsers

A very popular framework when it comes to browser testing is Selenium. It automates the execution of click streams in different browsers using two major components: the Selenium IDE and the Selenium Remote Control. The IDE is a Firefox plugin for recording the click streams. Originally, the click streams are written as plain HTML tables, but over time Selenium added drivers for the most popular programming languages. This way, you can record a click stream and have it transformed into e.g. Java / JUnit, where you can then add assertions or higher-level programming constructs. A simple Selenium JUnit test may then look like this:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import com.thoughtworks.selenium.SeleneseTestCase;

public class MyTest extends SeleneseTestCase {
	protected Selenium selenium;

	public void setUp() {
		// RC server host and port, browser string, and base URL
		selenium = new DefaultSelenium("192.168.1.111", 4444,
                                       "*firefox", "http://your.web.server/");
		selenium.start();
	}

	public void testSomething() {
		selenium.open("http://your.web.server/path/to/your/tests");
		selenium.windowMaximize();
		selenium.windowFocus();
		// wait until the test page reports the 'unittestsFinished'
		// marker (a simple polling wait with a 120 second timeout)
		selenium.waitForCondition(
				"selenium.browserbot.getCurrentWindow().document"
				+ ".getElementById('unittestsFinished') != null", "120000");
	}

	public void tearDown() {
		selenium.stop(); // close the browser again
	}
}

With the test in place, the Selenium Remote Control comes into play. It is used to open a browser, inject some JavaScript code that triggers all the desired events, and click through your app. In our case we use Selenium to open Firefox, load the HTML page containing our YUI tests, and wait for them to finish (a simple polling wait is sketched in the test above). When our tests are done, we write an invisible <div> with the id ‘unittestsFinished’, which signals Selenium that the browser may be closed.
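
On the page side, writing that marker can look like this (a sketch; it assumes the tests run through Y.Test.Runner, whose COMPLETE_EVENT is described in part I):

// when all YUI tests are done, add an invisible marker element
// that Selenium polls for
Y.Test.Runner.subscribe(Y.Test.Runner.COMPLETE_EVENT, function () {
    var marker = document.createElement("div");
    marker.id = "unittestsFinished";  // Selenium waits for this id
    marker.style.display = "none";    // keep the page unchanged visually
    document.body.appendChild(marker);
});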

Iterate over your browsers

Now that we have Selenium to trigger our JavaScript tests, we still need to iterate over all browsers, and we need to trigger Selenium somehow. Since the continuous integration server at pidoco is a Hudson server, and Hudson can execute an Ant build script, we use Ant to do the job:

<!-- the <for> loop requires the ant-contrib tasks -->
<property name="browsers" value="firefox,safari,iexplore"/>
<for list="${browsers}" param="browser" delimiter=",">
	<sequential>
		<junit haltonfailure="false" dir="antbuild" fork="yes"
                               showoutput="yes">
			<sysproperty key="browser" value="@{browser}"/>
			<formatter type="xml" />
			<test name="tests.selenium.SeleniumYUITestsLauncher"
                              todir="tests"
                              outfile="@{browser}-selenium-result.xml"/>
		</junit>
		<get src="http://your.test.server/path/to/junitxml"
                     dest="tests/TEST-@{browser}-yui-result.xml"/>
		<replace file="tests/TEST-@{browser}-yui-result.xml"
                         token="testsuite name=&quot;"
                         value="testsuite name=&quot;@{browser}."/>
	</sequential>
</for>

Ant iterates over the list of browsers we want the tests to be executed in. It then triggers the Selenium test via JUnit. The test result in *selenium-result.xml is not very interesting, since it contains only the single test that opens the YUI tests. Via the system property, Ant tells Selenium which browser it should open.
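
Inside the launcher (the SeleniumYUITestsLauncher referenced above, whose code is not shown here), the property can be picked up roughly like this (a sketch; prefixing ‘*’ to build Selenium’s browser string is an assumption):

	// read the browser that Ant passed via <sysproperty>,
	// falling back to firefox when the test runs standalone
	String browser = System.getProperty("browser", "firefox");
	selenium = new DefaultSelenium("192.168.1.111", 4444,
	                               "*" + browser, "http://your.web.server/");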

Next, we download the reported YUI test results from our test server to the integration server. You could also use Selenium to read the results from the HTML page directly and save them to a file within the JUnit test. However, we had a test report server from the beginning, so we use it to get the results from the browser to our integration server. Once we have the xml file, the <replace> task prefixes the test suite name with a fake package, so that the integration server can tell us in which browser a test is failing.
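
To illustrate the token replacement: a suite named e.g. "TestCase Name" in the downloaded report

<testsuite name="TestCase Name">

becomes, for the firefox run,

<testsuite name="firefox.TestCase Name">

so the integration server displays the browser as the package of the test suite.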

Continuous Integration

The only thing left for the continuous integration server is to call the Ant target and read all the xml files in the ‘tests’ folder as JUnit test results. This is pretty easy, but it allows you to analyse your JavaScript tests, which were executed automatically in different browsers, with all the power of the continuous integration server of your choice.

JavaScript Testing and Continuous Integration Part I

Adding your tests to a continuous integration system for test automation is quite common in professional software development. However, when it comes to testing JavaScript code, many people lack the experience and best practices to set up a productive infrastructure. We at pidoco have invested quite some time in solving this problem and came up with the setup I describe in this blog post.

The JavaScript Testing Framework

There are countless frameworks out there for testing JavaScript code, and I have to admit it is quite confusing to find the best one for a project. From what I have seen so far, many frameworks look very similar to each other. They differ in details, which makes people prefer one or the other. To us, it was important to have comprehensive documentation of a framework’s capabilities. Therefore, we chose the Yahoo UI (YUI) Test framework. It comes with extensive documentation, just as you know it from the Yahoo UI library. You can turn a JavaScript object into a test case, which makes it look quite similar to JUnit tests:

var testCase = new Y.Test.Case({

    name: "TestCase Name",

    //traditional test names
    testSomething : function () {
        //...
    },

    testSomethingElse : function () {
        //...
    }
});

Within your test functions you have a broad variety of assertions at your disposal. In addition, you can use the wait() and resume() functions to interrupt a test and wait for asynchronous events to happen (see the sketch below). This is very useful if you need to wait for some UI components to be rendered or some server calls to return. You can also trigger DOM events using YUI’s event simulation:

Y.one("body").simulate("click"); // requires the 'node-event-simulate' module
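
And here is the promised wait()/resume() sketch; callServer() stands in for any asynchronous API of your app:

// a test method for the test case above: suspend the test until
// an asynchronous server call returns
testServerCall : function () {
    var test = this;
    callServer(function (response) {
        // resume the test and check the answer
        test.resume(function () {
            Y.Assert.areEqual(200, response.status);
        });
    });
    // pause here; the test fails if resume() is not called within 2 seconds
    this.wait(2000);
}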

Once you have written your first test case, you want to execute it and review the results. YUI Test comes with a nice test result viewer that you can include into your test page. However, the tests should be executed in several browsers, and you may want to review all the results on just one page. Therefore, the framework offers a function to send the results to a URL where a server may collect and show them. In our first attempt we sent the results as a JSON string to the server, which transformed the collected results into a simple table with the tests in rows and the browsers in columns.

// getResults() returns the results object once the run has completed
var results = Y.Test.Runner.getResults();
var reporter = new Y.Test.Reporter("http://www.yourserver.com/path/to/target",
                                   Y.Test.Format.JSON);
reporter.report(results);

Since many continuous integration servers like to read test results from an xml file, YUI offers several formats for your results:

  • Y.Test.Format.XML (default)
  • Y.Test.Format.JSON
  • Y.Test.Format.JUnitXML
  • Y.Test.Format.TAP

Oh, and if you asked yourself ‘where the heck is this Y coming from?’, here is the solution:

var Y = YUI();   // create a YUI instance
Y.use('test');   // attach the Test module
// or, in the more common callback style:
// YUI().use('test', function (Y) { /* your tests */ });

Being able to write a JavaScript unit or integration test, the next question comes up:

How to test your web app

Following test-driven development, a lot of JavaScript has to be written just for the tests. When deploying the web app into the production environment, we don’t want to include the testing code. But how to separate testing code from production code? We found two possible solutions. First, we use a plugin that loads, triggers, and reports the tests, but is only included into the system during development and test. This way, we simply disable the plugin and have a production system without any testing code. However, not every web app has a plugin system that supports this solution. Therefore, we have thought of a second way to test your app: deploy a second app that contains a simple HTML page which loads the testing code plus an iframe containing the web app to be tested.

Please keep in mind the same origin policy for the HTML page containing the iframe and the content of the iframe: only if both are delivered from the same domain may the outer page access the JavaScript code of the iframe content. This enables us to call the production code from the testing code. One pitfall: each window has its own set of built-in constructors, so if either the application code or the test checks for e.g. instances of Array, the check fails if the array was created inside the iframe and checked outside, or vice versa. You can create an array using new window.frames[0].Array() and test for an array with arr instanceof window.frames[0].Array.
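
Concretely (a sketch, assuming the app under test sits in the first iframe of the test page):

// the app under test runs in the first iframe
var appWindow = window.frames[0];

// each window has its own Array constructor, so instanceof must be
// checked against the constructor of the window that created the object
var arr = new appWindow.Array(1, 2, 3);
arr instanceof Array;           // false in the outer page
arr instanceof appWindow.Array; // true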

Using the iframe approach, you can enhance the HTML page so that it loads your app and the test code, automatically runs the tests, and listens to the Y.Test.Runner.COMPLETE_EVENT to upload the test results to your reporting server.
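
Put together, the auto-run part can look like this (a sketch following the YUI reporter pattern; the URL is a placeholder):

// upload the results as JUnit XML as soon as the run completes
Y.Test.Runner.subscribe(Y.Test.Runner.COMPLETE_EVENT, function (data) {
    var reporter = new Y.Test.Reporter("http://www.yourserver.com/path/to/target",
                                       Y.Test.Format.JUnitXML);
    reporter.report(data.results); // data.results holds the finished results
});
Y.Test.Runner.add(testCase);
Y.Test.Runner.run();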

How the tests are triggered in different browsers and how this goes together with continuous integration is the subject of the second part of this blog post.