
Wednesday, June 30, 2010

Website: User's Manual being converted to online version

Good news! Manon started working on the online documentation. This will avoid shipping a large PDF (more than 100 pages and a few MBytes) in the distribution package. But more importantly, it will allow keeping the documentation updated much more easily and frequently!
The complete User's Manual should be available in 3 weeks.

Release of XStudio 1.3b2

Beta 2 of XStudio 1.3 has been released today!

Saturday, June 19, 2010

XStudio 1.3: New test dependencies graph

In XStudio 1.3, a new test dependencies graph will be released. It uses a new hierarchical layout that is much clearer than the former circular layout:


When cycles exist, they are highlighted in red/orange so that you can fix them. Here again, the new layout does a tremendous job:


Tuesday, June 8, 2010

XStudio 1.3: New manual launcher

One comment that many people made about the 2 manual launchers already available was: "It's nice, but how can I execute some tests, stop, close XStudio and continue the same session the day after?".
Another question was: "What if I want to re-run an individual test that was already executed a long time ago in a campaign session?".


In XStudio 1.3, we would like to introduce a new manual launcher and a new feature to answer these two questions.

1) A third manual launcher
This launcher is aimed at giving the operator as much flexibility as possible so that:
  • he can execute the tests in the order he wants
  • he can re-run the same tests several times
  • it's trivial and fast to run the tests
The goal being: "as simple as executing an Excel/Word test plan and writing down the results manually with a pen".

2 options:

Option 1:
Here is a mock-up of what could be the design of the new launcher:

This solution is pretty elegant and provides a good overview of the current status to the test operator, but it's not exactly as simple as running test cases "on paper". The test operator has to:
  • select a test in the test tree (A)
  • select a test case in the sub-tree (B); the descriptions of the test and test case are then updated
  • click on the Succeeded or Failed buttons
  • use the Previous test case and Next test case buttons
  • optionally submit a bug, link to an already existing bug, or post a comment on the fly
At any time, the test operator can select a specific test (A) and test case (B) to execute it (even if it has already been executed). The clock and progress bar are still triggered only if the test has the timeout attribute (no change from the current launchers).

Option 2:
The launcher would just display tests and test cases in a table.
Columns would be:
  • Id
  • Path
  • Name
  • Priority
  • Result
  • Comment

The Result column of test case rows would contain a combo-box to manually select the result. This is much simpler, but the test operator would have to rely on a printed copy of the test plan. All the results would be updated when the test operator clicks on the Submit button.

Option 3:
Maybe something in the middle?

I'm interested in your views about it...



2) An option to re-execute stopped campaign sessions
This is a delicate point as cheating must be avoided. Indeed, who has never faced the following situation?
A tester runs a very time-consuming test campaign (several days) on a product and gets all tests succeeding... except one. Really unfortunate, isn't it? The day after, the tester receives a new version of the SUT and (because he's running out of time) re-runs only the failed test, overwriting a failure with a success (hence ignoring any potential regression introduced in the new version of the SUT) :( Unfortunately this is very common and should be authorized only in very specific cases.

So, a new right will be introduced: only the users granted this right will have the ability to re-run a stopped campaign session.

In a future version, the ability to select a specific version of the tests to run will be added. This will be part of the flagging system.

Tuesday, May 18, 2010

XStudio 1.3: Accelerated startup

In XStudio 1.3, a new system pre-fetches icons and computes derived icons locally (using the Java 2D API, with the following operations: concatenation, HSB filter, opacity, grey-out, overlay, rescale, etc.). This is about 3 times faster than downloading each icon individually.
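The idea can be sketched as follows (a toy illustration, not XStudio's actual code: the string transforms stand in for the real Java 2D operations). Each base icon is downloaded once, and every derived variant is computed locally and cached:

```python
# Toy sketch: derive icon variants locally from a cached base icon
# instead of downloading every variant over the network.
class IconCache:
    def __init__(self, fetch_base):
        self._fetch_base = fetch_base   # one (slow) network fetch per base icon
        self._cache = {}

    def get(self, name, operations=()):
        key = (name, operations)
        if key not in self._cache:
            # No operations = the base icon itself; otherwise derive from it.
            base = self.get(name) if operations else self._fetch_base(name)
            icon = base
            for op in operations:        # e.g. "grey-out", "rescale", "overlay"
                icon = f"{op}({icon})"   # stand-in for a Java 2D transform
            self._cache[key] = icon
        return self._cache[key]

fetches = []
cache = IconCache(lambda name: (fetches.append(name), f"icon:{name}")[1])

cache.get("test", ("grey-out",))
cache.get("test", ("grey-out", "rescale"))
cache.get("test")
print(len(fetches))  # the base icon was downloaded only once
```

Three different variants were requested, but the network was hit a single time; all derivations were done locally.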

This will drastically reduce start-up time (especially in Java Web Start mode).

XStudio 1.3: Proxy launcher

One big issue when testing distributed environments is how to dispatch script execution to each component.

Let's imagine you have to test a VoIP framework that comprises 3 different machines: a Client, a Gateway and a Server. To test the data flow, you will need to deploy (at least) 3 scripts for each test, one on each of those machines, and run them at the same time. And that's where Test Management systems (even very expensive ones!) generally fail.

XStudio 1.3 includes a new launcher called the proxy launcher and a new kind of agent called the XSubAgent. An XSubAgent is basically a kind of XAgent instance embedding an XML-RPC server and communicating only through this medium.

Here is the flow:
  • You create your test campaign from XStudio and start it
  • The tests in this campaign are associated with the proxy launcher, which can be running on any host
  • The proxy launcher is configured with a list of individual XSubAgents, launchers and configurations
  • The proxy launcher contacts all the XSubAgents individually and forwards them the requests to execute the extracted sub-tests
  • Each XSubAgent executes its specific script and returns its results to the proxy
  • The proxy consolidates the results and logs and returns them to XStudio.

Hence, one single test is split by the proxy launcher into different scripts that are executed simultaneously on different XSubAgents located on the Client, the Gateway and the Server of our VoIP framework.
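The flow can be sketched with Python's standard XML-RPC modules (a minimal stand-in: the run_script method and the SUCCESS/FAILURE verdict are hypothetical, not XStudio's actual protocol, and the real proxy launcher contacts the sub-agents simultaneously rather than in a loop):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def start_sub_agent(role):
    """Minimal stand-in for an XSubAgent: an XML-RPC server exposing one call."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)  # port 0 = any free port

    def run_script(script_name):
        # A real XSubAgent would execute the deployed script; we just report success.
        return {"role": role, "script": script_name, "result": "SUCCESS"}

    server.register_function(run_script)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# One sub-agent per component of the VoIP example: Client, Gateway, Server.
sub_agents = {role: start_sub_agent(role) for role in ("client", "gateway", "server")}

# The proxy launcher forwards one sub-script to each XSubAgent and consolidates.
results = []
for role, server in sub_agents.items():
    host, port = server.server_address
    results.append(ServerProxy(f"http://{host}:{port}/").run_script(f"{role}_flow_check"))

verdict = "SUCCESS" if all(r["result"] == "SUCCESS" for r in results) else "FAILURE"
print("consolidated:", verdict)

for server in sub_agents.values():
    server.shutdown()
```

The one-verdict-from-many-scripts consolidation is the key point: XStudio only ever sees a single test result, even though three machines took part in producing it.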

XStudio 1.3: Tests and test cases versioning



One feature requested for a long time was test and test case versioning. Indeed, it was possible to track the progression and regression of test campaigns over time, but there was not yet any way to manage the versions of the tests/test cases in parallel. Hence, if you were looking at an old campaign session and saw that test A had failed, you were not able to know what test A was at that time, since it may have changed afterward.

Starting in XStudio 1.3, all changes in tests and test cases are tracked and versioned so that you can get their states at any point in time.

The versioned information is:

Tests:
  • Description
  • Priority
  • Canonical path
  • Prerequisites
  • Additional information
Test cases:
  • Procedure tree
  • Description
  • Use description as testplan flag
  • Additional information

Currently, the system associates each revision with a date of submission and a revision number (automatically incremented). Here is how the GUI looks:


In the future, a flagging system will be added so that the user can flag ALL tests at a given time T with a personalized label. It will also be possible to run a campaign session with the tests as they were defined in the past.
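A minimal sketch of the principle (illustrative only, not XStudio's actual storage model): each submission creates a new revision carrying an auto-incremented number, a submission date and a snapshot of the attributes, so any past state can be recovered:

```python
import datetime

class VersionedTest:
    """Toy model: keep every change to a test as an immutable revision."""

    def __init__(self, **attributes):
        self._revisions = []
        self.submit(**attributes)

    def submit(self, **attributes):
        self._revisions.append({
            "number": len(self._revisions) + 1,  # automatically incremented
            "date": datetime.datetime.now(),     # date of submission
            "attributes": dict(attributes),      # snapshot of the versioned fields
        })

    def at_revision(self, number):
        """The test 'as it was' at a given revision."""
        return self._revisions[number - 1]["attributes"]

    def latest(self):
        return self._revisions[-1]["attributes"]

test_a = VersionedTest(description="Check login", priority=1)
test_a.submit(description="Check login with UTF-8 passwords", priority=2)

print(test_a.at_revision(1)["description"])  # Check login
print(test_a.latest()["priority"])           # 2
```

This is exactly what makes old campaign sessions interpretable: the failed "test A" of six months ago can be displayed with its description of six months ago.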

Monday, May 17, 2010

XStudio 1.3: TRAC integrated

In XStudio 1.3, a new third-party bug-tracking database is integrated: the very popular TRAC.

This integration has been done using TRAC's XML-RPC API.

Many other bug-tracking systems provide an XML-RPC interface (e.g. JIRA), so this will greatly ease their integration in the future as well.

Wednesday, April 28, 2010

XStudio 1.3: UTF-8 Support

Starting from XStudio 1.3a3, the UTF-8 character set and encoding are supported. This means that any kind of accented character or symbol can be used (French, German, Chinese, Arabic, Hebrew, etc.).

The only remaining forbidden characters are now: " $ * \ and `

Any textual element on any item in XStudio now supports UTF-8, except usernames and passwords, which still need to use only alphanumeric characters (for security reasons).
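The rule can be expressed as a small sketch (the function names are hypothetical, not XStudio's API; only the character sets come from the post):

```python
import re

# The five characters that remain forbidden in XStudio text fields.
FORBIDDEN = set('"$*\\`')

def is_valid_text(text):
    """Any UTF-8 text is accepted as long as it avoids the forbidden characters."""
    return not (set(text) & FORBIDDEN)

def is_valid_credential(text):
    """Usernames/passwords are stricter: plain ASCII alphanumerics only."""
    return re.fullmatch(r"[A-Za-z0-9]+", text) is not None

print(is_valid_text("Prénom: François / 中文 / עברית"))  # True: accented/UTF-8 text is fine
print(is_valid_text('costs $10'))                        # False: contains the forbidden $
print(is_valid_credential("admin123"))                   # True
```

Note that a plain `str.isalnum()` would not be strict enough for the credential case, since it also accepts accented letters.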
Enjoy,

Saturday, April 24, 2010

XStudio 1.3: New refactored progress details window

It's been a long time since I posted a new thread on this blog (I was pretty busy working on some development for a customer)!

Anyway, I'm glad to be back with a pretty exciting new feature: I completely re-designed the way tests are executed (especially the threading part). This will make it possible, in the future, to execute tests from several agents at the same time. In addition, we should be able to run several instances of the same tests on each of those agents.

Anyway, the first step to get to this point was to redesign the progress details window so that it's much easier to get information on the current state of the execution.

Here is the new design:


As you can see, the information is now much more readable than before, and this new layout will make it easy to integrate several agents and several instances in the future.
So, what has changed exactly?

Tabbing
Each category is now displayed in an independent tab.
In the same way, each agent executing tests is displayed in a separate tab (for now, you can have only 1 agent though).

Stats consolidation
Each tab contains specific information. But you may want to get only "the big picture". With this new design, you can get exactly the information you want:
  • global stats of the campaign session (at the very bottom of the window)
  • stats related to one category of tests (all results of the tests belonging to the same category)
  • stats related to one category and executed from one specific agent
  • stats related to one category, executed from one specific agent and from one specific instance
In the future, there will be several rows in the instances table showing the details of each instance execution. Some additional settings will also be available in the future to synchronize all the instances together which will make XStudio a great tool for stressing SUTs.
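The roll-up logic behind those four levels can be sketched as follows (a toy model with made-up numbers; the idea is simply that results keyed by category, agent and instance can be consolidated at any level):

```python
# Hypothetical result store: stats keyed by (category, agent, instance).
results = {
    ("GUI", "agent-1", 0): {"success": 12, "failure": 1},
    ("GUI", "agent-1", 1): {"success": 11, "failure": 2},
    ("GUI", "agent-2", 0): {"success": 9,  "failure": 0},
    ("API", "agent-1", 0): {"success": 20, "failure": 3},
}

def consolidate(results, category=None, agent=None, instance=None):
    """Sum the stats matching the given filters; no filter = global stats."""
    totals = {"success": 0, "failure": 0}
    for (cat, ag, inst), stats in results.items():
        if category is not None and cat != category:
            continue
        if agent is not None and ag != agent:
            continue
        if instance is not None and inst != instance:
            continue
        for key in totals:
            totals[key] += stats[key]
    return totals

print(consolidate(results))                                   # whole campaign session
print(consolidate(results, category="GUI"))                   # one category
print(consolidate(results, category="GUI", agent="agent-1"))  # one category, one agent
```

Each call corresponds to one of the levels listed above, from the global line at the bottom of the window down to a single instance.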

Adaptive scale
In XStudio 1.2, the graph showed a sliding window of the results, so you could only see the results of the last 5 minutes of execution. Now the graph displays ALL the results, with a scale that constantly adapts. Much more practical (and the zooming function is still available in case your campaign sessions last hours or days and you want to see a specific region of the results).

Offline display of the progress details window
This was something I had wanted to do for a long time, and it will be in XStudio 1.3a1: it is now possible to redraw the progress details information of any campaign session at any time (even if you executed the session two months ago).

Trends are now appropriately colored
- Green = success
- Red = failure
- Blue = relative
- Grey = not executed/skipped

Tuesday, February 16, 2010

XStudio 1.2: GUI for the execution options

One of the latest features I added was the possibility to reorder tests by dependency. One thing was missing though: the logic to apply in case a parent test has failed or has not been executed (even if you did not order your tests by dependencies, you may want to take advantage of this information).

This is done now through this GUI (available in "create a session", "copy a session" and "create a schedule" operations):

Monday, February 8, 2010

XStudio 1.2: Automatic generation of specifications and tests

The right process in the lifecycle of a product is to start by writing requirements. Then, a list of specifications can be refined from these requirements. Finally, the testers write their tests from the specifications.

In XStudio, the process is exactly the same and it can be quite time-consuming to create all these items in the requirements, specifications and tests tree.

Requirements MUST be entered anyway; this is an absolute prerequisite. But then, a new feature will help in creating the specifications and tests repositories. Here is how it works:

1) Select a folder (that is parent of a full tree of sub-folders and requirements) in the requirements tree

2) Click on the "Generate specifications" button

3) Pick a destination folder in the specifications tree


4) Optionally change the default set of options:


5) Submit


The selected tree of requirements is "duplicated" in the specifications tree. Names and descriptions are identical, and each requirement is linked to its peer generated specification. Hence, each requirement has one "default" specification associated with it. The user now just has to edit/modify the specification (name, description, status, etc.) and add some additional ones if necessary.
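The generation step can be sketched like this (illustrative only; the dictionary-based tree and the function name are assumptions, not XStudio's internals). The requirements tree is walked recursively, a peer specification is created with the same name and description, and each leaf requirement is linked to its generated specification:

```python
def generate_specifications(requirement_node, links):
    """requirement_node: {"name": ..., "description": ..., "children": [...]}.
    Returns the duplicated specification tree; appends (req, spec) pairs to links."""
    spec = {
        "name": requirement_node["name"],                # identical name
        "description": requirement_node["description"],  # identical description
        "children": [],
    }
    if not requirement_node["children"]:                 # a leaf = one requirement
        links.append((requirement_node["name"], spec["name"]))
    for child in requirement_node["children"]:
        spec["children"].append(generate_specifications(child, links))
    return spec

requirements = {
    "name": "Security", "description": "Security requirements", "children": [
        {"name": "REQ-1", "description": "Passwords are alphanumeric", "children": []},
        {"name": "REQ-2", "description": "Text fields accept UTF-8", "children": []},
    ],
}

links = []
specs = generate_specifications(requirements, links)
print(specs["children"][0]["name"])  # REQ-1
print(len(links))                    # 2 requirement<->specification links
```

The same walk, applied to the specifications tree, would produce the default tests mentioned below.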

The equivalent feature will be available to generate default tests from the specifications.

Friday, January 15, 2010

XStudio 1.2: Refined reordering of tests within a campaign

One of the most complex but valuable benefits of using a test manager is being flexible enough to execute tests in a specific order.

When you create a campaign, the wizard asks you to select some tests in a tree, hence, there is no concept of order at this point. However, after the campaign is created you can select the "Order" tab and enjoy two new buttons:

1) Reorder using dependencies
If you press this button, tests will be ordered so that a test is always executed after ALL its parents. The algorithm is pretty complex and obviously assumes there is no cycle in the dependencies. In case you left some cyclic dependencies, XStudio will show you exactly what's wrong by displaying all of them:

Then, you can easily correct your dependencies (in the test tree) and reorder your campaign.

Before pressing the "Reorder using dependencies" button, ensure the checkbox "Execute tests with dependencies first" is correctly set. If this checkbox is not selected, tests without any dependencies will be executed after all the tests having dependencies.
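The reordering can be sketched with a classic topological sort (Kahn's algorithm); this is an illustration, not necessarily XStudio's actual algorithm. Tests left unplaced after the sort are exactly those involved in a cycle, which is what would be reported back to the user:

```python
from collections import deque

def reorder_by_dependencies(tests, depends_on):
    """Order tests so each one runs after ALL the tests it depends on.
    Returns (ordered_tests, tests_involved_in_a_cycle)."""
    indegree = {t: 0 for t in tests}
    dependents = {t: [] for t in tests}
    for test, parents in depends_on.items():
        for parent in parents:
            indegree[test] += 1
            dependents[parent].append(test)

    queue = deque(t for t in tests if indegree[t] == 0)  # tests with no dependencies
    ordered = []
    while queue:
        test = queue.popleft()
        ordered.append(test)
        for child in dependents[test]:
            indegree[child] -= 1
            if indegree[child] == 0:      # all parents scheduled: ready to run
                queue.append(child)

    cycling = [t for t in tests if t not in ordered]  # leftovers belong to a cycle
    return ordered, cycling

tests = ["login", "browse", "checkout", "logout"]
depends_on = {"browse": ["login"], "checkout": ["browse"], "logout": ["checkout"]}
order, cycles = reorder_by_dependencies(tests, depends_on)
print(order)   # ['login', 'browse', 'checkout', 'logout']
print(cycles)  # []
```

With a cyclic input such as `{"a": ["b"], "b": ["a"]}`, the ordered list stays empty and both tests come back in the cycle list, ready to be highlighted.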

2) Reorder using priorities
If you press this button, tests will be ordered so that tests with higher priorities will be executed before tests with lower priorities. Note that a "priority" column has been added so that it's visually easier to check the priorities of all the tests in the list.

Manual ordering:
Once the system has reordered your tests using one of the 2 methods, you still have the flexibility to manually reorder some of the tests using the usual manual reordering toolbar. As before, multiple selection is available so that you can move several tests at the same time.

Column width persistence:
A recently added feature: after this table is refreshed, the columns are now left at the exact same position. Previously they were reset on every refresh, which was a pretty annoying restriction.

Sunday, January 10, 2010

XStudio 1.2: More detailed SUT report

The SUT report has been improved to include much more information:

In the folder summary:
- for each SUT, the number of defects found on this SUT
- for each SUT, the number of defects fixed on this SUT
- for each SUT, the number of sessions executed on this SUT

In the SUT Details section:
- the details of the defects found on this SUT
- the details of the defects fixed on this SUT
- some basic information (name, start date and stop date) on the sessions executed on this SUT

Of course, by clicking on the links, the user gets to the details of the selected item (requirement, specification, test or defect).

Saturday, January 9, 2010

XStudio 1.2: Copy a testcase

When some test cases of a test are complex to describe but very similar, it's good to be able to copy a reference test case and modify only what strictly needs to be changed.

This will be possible in XStudio 1.2 through this interface (which also allows modifying the test plan on the fly):