Thursday, December 1, 2011

XStudio 1.6a1: Mixed test campaigns

Mixed campaigns are a long story. This feature had been requested for a long time, but it required so many changes to the code architecture that it was delayed several times.

But now it's done!

In terms of GUI, the changes are minimal though: you will notice only a couple of new radio buttons in the campaign's "Order" panel:


When you create a new test campaign, just select all the tests you want to be part of that campaign. These tests can be of different kinds (e.g. 3 Selenium scripts + 2 JUnit tests + 4 manual tests). You can reorder those tests manually using the order toolbar, or use one of the two automatic ordering buttons (these will automatically reorder your tests by dependencies or by priorities).

Here comes the news: you now have two new options:

1) Execute tests from different categories in parallel
In this case, the tests from each category are gathered and executed in a separate, independent thread. Within a single category/thread, all the tests are executed following the globally defined order, but each category runs on its own and executes only its own set of tests. In this mode, there is NO synchronization between categories/launchers.
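To make the parallel mode concrete, here is a minimal sketch in Python (not XStudio's actual code; the `campaign` list, the `run_test` helper, and the category names are all hypothetical): each category walks the same global order in its own independent thread and picks out only its own tests.

```python
import threading

# Hypothetical campaign: an ordered list of (category, test) pairs.
campaign = [
    ("selenium", "login_page"),
    ("junit", "UserServiceTest"),
    ("selenium", "checkout_page"),
    ("manual", "configure_backend"),
    ("junit", "OrderServiceTest"),
]

results = {}
lock = threading.Lock()

def run_test(category, test):
    # Placeholder for handing the test to that category's launcher.
    with lock:
        results[test] = f"executed by {category} launcher"

def run_category(category):
    # Each category/thread follows the global order but runs only its own tests.
    for cat, test in campaign:
        if cat == category:
            run_test(cat, test)

# One independent thread per category; no synchronization between them.
threads = [threading.Thread(target=run_category, args=(c,))
           for c in {cat for cat, _ in campaign}]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Within one thread the relative order of that category's tests is preserved, but the interleaving between categories is arbitrary, which is exactly why this mode offers no cross-category synchronization.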

2) Execute tests from different categories in sequence, exactly in the order you just set
In this case, the system executes the tests exactly in the order you defined. Only one launcher is working at any given time.

This second option makes it possible to create sequences of tests mixing fully automated tests with manual tests. For instance, you could manually configure a system, then run all the automated tests, and complete the campaign by performing some additional manual cleanup operations. It also allows mixing different types of automated tests in one single sequence.
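The sequential mode can be sketched the same way (again a hypothetical example, not XStudio code): a single loop hands each test to its category's launcher and waits for it to finish before moving on, so a manual setup step, automated suites, and a manual cleanup step can share one ordered sequence.

```python
# Hypothetical mixed campaign: (category, test) pairs in the exact
# order defined in the campaign's "Order" panel.
campaign = [
    ("manual",   "configure_system"),
    ("selenium", "smoke_suite"),
    ("junit",    "ApiRegressionTest"),
    ("manual",   "cleanup_operations"),
]

log = []
for category, test in campaign:
    # Placeholder for dispatching to the category's launcher and
    # blocking until completion; only one launcher works at a time.
    log.append(f"{category}:{test}")

print(log)
```

Because each iteration blocks until the current launcher finishes, the execution log matches the defined order exactly, which is what lets manual and automated steps depend on each other.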

I cannot yet think of everything this brings, but it definitely opens up a lot of opportunities in the way you test!

2 comments:

  1. Current implementation seems to enforce the category-Launcher relation even more than it was - I at least was hoping this will vanish...
    1. Above does not present which test is executed by which launcher, nor does it allow selection of the desired launcher.
    In order not to change the structure too much, I would suggest that category launcher setting will remain as the "Default" value, but user will be able to over-ride it per test (later on in a multi-selection action) - so above abilities are not harmed nor become more complicated, but we gain more flexibility.

    Of course, Session results should also indicate which part was executed with which launcher.

    Integrating with Manual Launchers, while seems like complicating things a bit, are useful in HW production floors "testing".
    Though the solution above does not answer Automation-Assisted (semi-manual) needs of Manual/Exploratory testers, where some automatic functions aid the manual execution, but these are not always in a context of a full test, but rather a sub-part or even just for injecting stimuli or validating some of the expected results.

    2. Another issue, is allowing an abstraction layer of the interfaces or test equipment used - running one test or a part of it through IE and another through Chrome using same or different launcher, and defining/over-riding these as execution parameters at execution stage.

    ReplyDelete
  2. > Current implementation seems to enforce the category-Launcher relation even more than it was
    No, it doesn't. It keeps exactly the same architecture. It just additionally allows tests from different categories to be run in sequence rather than in parallel.

    > I at least was hoping this will vanish...
    I know you don't like the categories, but I think being able to change, in one block, the way a bunch of tests is executed is a pretty useful feature. Some people like it; some others don't.

    > 1. Above does not present which test is executed by which launcher, nor does it allow selection of the desired launcher.
    This is a completely different feature that, again, only very few users are asking for. I do agree it would be useful to see in the list which category/launcher will be used for each test, though; this should be added.

    > In order not to change the structure too much, I would suggest that category launcher setting will remain as the "Default" value, but user will be able to over-ride it per test (later on in a multi-selection action) - so above abilities are not harmed nor become more complicated, but we gain more flexibility.
    Yes, this is probably what we're going to do at some point, but it is a low-priority feature at the moment. I'm sure you have good reasons for it, but people generally don't need it.

    > Of course, Session results should also indicate which part was executed with which launcher.
    If the launcher can be changed individually, yes, that would make sense.


    > Integrating with Manual Launchers, while seems like complicating things a bit, are useful in HW production floors "testing".
    > Though the solution above does not answer Automation-Assisted (semi-manual) needs of Manual/Exploratory testers, where some automatic functions aid the manual execution, but these are not always in a context of a full test, but rather a sub-part or even just for injecting stimuli or validating some of the expected results.
    This does not help with exploratory testing (that feature does not exist yet in XStudio), but it definitely addresses semi-automated testing when the tests are clearly defined before they are performed.


    > 2. Another issue, is allowing an abstraction layer of the interfaces or test equipment used - running one test or a part of it through IE and another through Chrome using same or different launcher, and defining/over-riding these as execution parameters at execution stage.
    The need to pick a specific launcher by configuration/test parametrization sounds very marginal to me, and it would only be possible once we implement the "override launcher" feature mentioned above, which is again low priority at the moment. It's just a matter of prioritization: features like risk-based testing, Oracle support, and new coverage metrics are much more requested than the "override launcher" one.

    Thanks for your feedback

    ReplyDelete