One of my Cape Clear Studio team colleagues works remotely (very remotely, the other side of the world in fact). I installed Jing a few days ago and used it for the first time in anger today, while working out a problem with a new feature. Jing allows you to instantly share screenshots and desktop movie captures. Haven’t tried the movie sharing yet, but the screenshots work great. It’s free right now, but clearly they (TechSmith) are going to charge for it in the future. I like it, I might well sign up.

We’re very test-driven here at Cape Clear, we develop automated tests for everything we do. We’re not strict about writing our tests first; I for one write my tests with my code, iterating between one and the other in the course of realising a story (we follow Scrum, a lightweight wrapper process on agile/XP). I wouldn’t dream of writing code without some level of automated test coverage, to me it is meaningless – how do I know the feature code works if I don’t have something that proves it works now, and as I refactor the code base through iterations of the product? Writing tests makes me many orders of magnitude more productive as a developer. I still hear the “lack of time and resources to write automated tests” excuse from developers I know in other companies. Sometimes I argue with them, sometimes I just smile benignly: you don’t need more resources or time to write tests, writing tests gives you more resources and time, and of course results in far superior quality software.

But the fly in the ointment for us has been automating the GUI testing of our Eclipse-based tools. We have extensive junit tests for the non-GUI parts of the tools, like our (WTP) facet install delegates, our builders, our models and so on. Eighteen months ago we chose Eclipse TPTP, the GUI recorder and playback toolkit, to automate our GUI tests. Maybe others have had more success with TPTP than we have, but our experience was less than satisfactory. In the end we only achieved a tiny amount of coverage with it, and it is difficult to keep these kinds of tests passing and running continuously across multiple branches of the product. In general GUI recorder/playback tests are very brittle to even minor changes in the user interaction. Several things happened recently that made me realise we could and should drop that approach. We started to push our (PDE) junit tests up into the UI, specifically in relation to testing our GMF-based SOA Assembly Editor.
We wrote tests that did things like clicking on all the tools in the palette and checking that the edit parts and model elements were created correctly. PDE junit tests run in the UI thread. It struck me that we already had 99% of what we needed to automate our GUI tests from junit. What we did not have was:

  • a test framework, test APIs, which read like a GUI test specification
  • the ability to automate the testing of blocking UI elements, namely wizards and dialogs

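To see why the second item is the hard one, here is a toy model of a modal window (nothing Eclipse-specific, all names invented): open() blocks on an event queue exactly the way a modal dialog does, so a junit test that calls it on the same thread never gets control back unless something else supplies the “user” input.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A stand-in for a modal dialog: open() blocks on an event queue,
// much as Window.open() blocks on the SWT event queue.
class ToyModalWindow {
    static final int OK = 0, CANCEL = 1;
    private final BlockingQueue<Integer> events = new LinkedBlockingQueue<>();

    // Blocks until an "event" (here, simply a return code) arrives.
    public int open() throws InterruptedException {
        return events.take();
    }

    // In real life this is the user clicking OK or Cancel.
    public void post(int event) {
        events.offer(event);
    }
}

public class BlockingDemo {
    public static void main(String[] args) throws Exception {
        ToyModalWindow window = new ToyModalWindow();
        // Calling window.open() here directly would hang the test forever.
        // Simulate a "user" on another thread so open() can return:
        Thread user = new Thread(() -> window.post(ToyModalWindow.OK));
        user.start();
        int code = window.open();
        user.join();
        System.out.println(code == ToyModalWindow.OK ? "OK" : "CANCEL");
    }
}
```

In an automated test we want to get rid of the second thread entirely and have the test itself supply the input, which is exactly what the hook described below does.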
The first was easy, I took a couple of our WSDL-to-Java project wizard GUI test cases and prototyped the kind of APIs I wanted, they read just like we write our test specs. Then I implemented the APIs, most of which were very thin (but more test friendly) facades on existing APIs and existing test code. That left me with the wizards and dialogs problem. When you launch an SWT wizard or dialog from a PDE junit test, it blocks because it’s waiting on input from the SWT event queue. The blocking happens in the open() method of org.eclipse.jface.window.Window, from which WizardDialog is derived. In an automated test, we want the input to come from the junit test code, not from the SWT event queue. Fortunately, open() is public. I will resist going off on a tangent here about one of my pet gripes: the excessive marking of methods as private and classes as final etc. – let me decide how I want to specialise your code, you cannot see all ends and mostly I know what I’m doing. Anyway, back on topic, so now we have our own CcWizardDialog (which extends WizardDialog), and the code looks like this:

public int open() {
  if (this.cctest == null) {
    // No test hook installed: behave as a normal interactive dialog.
    return super.open();
  } else {
    return doTestOpen();
  }
}

protected int doTestOpen() {
  Shell shell = getShell();
  if (shell == null || shell.isDisposed()) {
    shell = null;
    // Create the shell and page controls without entering the event loop.
    create();
    shell = getShell();
  }
  // Hand control to the test via the callback (method name illustrative),
  // instead of blocking on the SWT event queue.
  ICcWizard ccWizard = new CcWizardImpl(getWizard(), this);
  cctest.runTest(ccWizard);
  return getReturnCode();
}

The cctest member is an object that implements a simple callback interface, typically it’s the junit test itself. Adding these kinds of test hooks to code is no different from doing test-driven development: we write our code to be testable by code. Remember that we’re not trying to test SWT or core Eclipse platform components, we know they are well covered, stable, mature – basically: we know they work. And of course we do manually test and use our tools too. The ICcWizard interface looks like this:
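Stripped of the Eclipse specifics, the hook idea boils down to something like this (a framework-free sketch, every name here is invented for illustration): when no hook is installed the dialog behaves normally, and when a test installs one, open() hands the test a driver object instead of blocking on user input.

```java
// The callback interface a test implements (illustrative names).
interface TestHook {
    void drive(DialogDriver driver);
}

// What the test is given to drive the dialog with.
interface DialogDriver {
    void setName(String name);
    void finish();
}

class HookableDialog implements DialogDriver {
    private TestHook testHook;      // plays the role of the cctest member
    private String name;
    private int returnCode = 1;     // 1 = cancelled

    void setTestHook(TestHook hook) { this.testHook = hook; }

    public int open() {
        if (testHook == null) {
            // Normal path: block on the event loop (elided in this sketch).
            throw new UnsupportedOperationException("interactive mode");
        }
        // Test path: input comes from the test, not from an event queue.
        testHook.drive(this);
        return returnCode;
    }

    @Override public void setName(String n) { this.name = n; }
    @Override public void finish() { returnCode = 0; }
    String getName() { return name; }
}

public class HookDemo {
    public static void main(String[] args) {
        HookableDialog dialog = new HookableDialog();
        // The junit test would typically be the hook itself.
        dialog.setTestHook(d -> { d.setName("demo"); d.finish(); });
        int code = dialog.open();
        System.out.println(code + " " + dialog.getName());
    }
}
```

Note that open() returns only after the hook has finished driving the dialog, so the call stack unwinds back to the test naturally.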

public interface ICcWizard {
  ICcWizardPage getCurrentPage();
  ICcWizardPage next();
  ICcWizardPage back();
  void finish();
  void cancel();
  IWizard getWizard();
  boolean isNextEnabled();
  boolean isFinishEnabled();
}

And (an abbreviated) ICcWizardPage interface looks like this:

public interface ICcWizardPage {
  void setTextValue(...);
  void selectComboValue(...);
  void setCheckBox(...);
  void setRadioButton(...);
}

Now you can start to see how the junit test reads, just like you’d write a GUI test spec: launch the wizard, set a value in a text box, select a value in a combobox, go to the next page in the wizard, select a radio button, press finish. After which the call stack unwinds back to the junit test, which can then use project APIs to verify the results. I’ve been deliberately vague about ICcWizardPage. How that works is also quite interesting, and very simple. I will detail this and more in another posting. What I really like about the whole approach is that the tests are quick to write and simple to maintain.

The invisible shed

November 1, 2007

Rarely do I read something which really makes me laugh out loud: invisible shed.