Channel: Official Serenity BDD Automated Acceptance Testing Blog » Tips and Tricks

Serenity 1.1 is out!


A brand new version of Serenity is out, with bug fixes and some very cool new features, including fully-integrated feature coverage reporting and the ability to include both manual and automated tests in your reports!

The Serenity team is proud to announce the release of Serenity version 1.1. (The current exact version is 1.1.1, but as is the custom with the Serenity project, we release a regular stream of minor features and bug fixes, so check the latest release numbers on Bintray or Maven Central.)

This new release has several major new features, along with a number of bug fixes and improvements. Major new features include:

  • Smooth integration between test reporting and requirements reporting (living documentation)
  • You can now flag tests (in JUnit, Cucumber or JBehave) as manual (more on this further down)

Fully-integrated Requirements Reporting

Serenity is an automated testing and reporting library that tries to implement many key concepts from the world of Behavior Driven Development (BDD) such as Living Documentation and Feature Coverage. One of the main principles behind Serenity reporting is that the value of an automated test is directly related to the value of the feature it is testing. Automated tests are most useful when they demonstrate both that a feature works and that it is doing something valuable for the customer.

Serenity distinguishes between two distinct views of your test results. The first, the test reports, presents the results from the point of view of what tests were executed:

Test reports

These reports also give you an overview of the test results, in terms of the number of passing and failing tests:

Test reports

This representation is a classic list of test results that will be familiar to testers.

The second focuses less on what tests were executed, and more on what features were delivered. If the definition of done for the features you want to deliver is accurately described by the acceptance criteria (one of the cornerstones of BDD), and if you automate these acceptance criteria, then you can get a good idea of whether a feature has indeed been delivered from the results of the automated acceptance criteria.

For this to work, Serenity needs to know how your requirements are structured. A flat requirements structure is a poor representation for all but the most simple projects. Well-designed requirement structures help a reader understand what business goal each feature is helping to deliver, or what capability the feature is helping to provide. For this reason, teams often organize features by capability or in some other meaningful functional grouping.

Doing this in Serenity allows you to present a hierarchical view of the requirements, as illustrated here:

Requirements reports

The simplest way to represent a requirements structure in Serenity is to use a hierarchical directory structure, where each top level directory describes a high level capability or functional domain. You might break these directories down further into sub directories representing more detailed functional areas, or just place feature files directly in these directories:

Requirements directory structure using Cucumber
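As a sketch of what such a layout might look like (the capability and feature file names here are purely illustrative, not taken from the showcase project), a Cucumber project could organize its feature files like this:

```text
src/test/resources/features
├── grow_customer_base              <- capability
│   ├── attract_new_customers       <- functional area
│   │   └── search_by_keyword.feature
│   └── sell_online
│       └── purchase_an_article.feature
└── manage_orders
    └── track_deliveries.feature
```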

The same thing works for JUnit, except you use packages instead of directories:

Requirements directory structure using JUnit
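The equivalent JUnit layout (again with hypothetical sub-package names) mirrors the same hierarchy, but as packages under a common root package:

```text
net.thucydides.showcase.junit.features
├── grow_customer_base
│   └── attract_new_customers
│       └── SearchByKeyword.java
└── manage_orders
    └── TrackDeliveries.java
```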

To get the JUnit package structure to work, you need to set the serenity.test.root system property to the top level package containing your requirements:

serenity.test.root=net.thucydides.showcase.junit.features

When you organize your tests this way, Serenity will show you where each test belongs in the requirements hierarchy:

Test result showing parent requirements

These breadcrumbs let you go directly to the corresponding requirements pages, as illustrated here:

Test result showing parent requirements

Manual tests

It is often useful to be able to flag certain tests as manual tests: tests that are not considered cost-efficient to automate, but that you would still like to see in the overall test reports.

You can mark a JUnit test as a manual one simply by using the @Manual annotation. For example, the following Serenity test class has two automated tests, and one flagged as manual:

@RunWith(SerenityRunner.class)
public class SearchByKeyword {

    @Managed
    WebDriver driver;

    @Steps
    BuyerSteps buyer;

    @Test
    public void search_for_articles_by_keyword() {
        buyer.opens_home_page();
        buyer.searches_by_keyword("wool");
        buyer.should_see_results_summary_containing("wool");
    }

    @Test
    public void search_for_articles_by_shop_name() {
        buyer.opens_home_page();
        buyer.searches_for_shop_called("docksmith");
        buyer.should_see_shop_search_result_summary_of("1 shop found for docksmith");
    }

    @Test
    @Manual
    public void should_respect_standard_look_and_feel() {}

}

In Cucumber, just use the @manual tag on a scenario:

@manual
Scenario: Should respect standard look and feel

You can add the usual Given/When/Then steps if you want some instructions about how to test this scenario in the living documentation, or leave it simply as a place-holder for the tester later on.
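For example, a manual scenario with step instructions for the tester might look like this (the steps shown here are hypothetical, written purely to illustrate the idea):

```gherkin
@manual
Scenario: Should respect standard look and feel
  Given I am on the home page
  When I review the page layout
  Then the fonts, colours and logo placement should match the corporate style guide
```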

And in JBehave, use the @manual tag in the Meta: section of a scenario:

Scenario: Display social media links for a product
Meta:
@manual

Given I have searched for 'Docking station' in my region
When I select item 1
Then I should see social media links

In all cases, manual tests appear in the reports as a special kind of “Pending” test, indicated by a dedicated icon:

Manual tests

Manual tests also have their own tag, making it easy to get a view of all of the manual tests in one place.

Bug fixes and enhancements

There are also a few important bug fixes and enhancements, including:

  • Fixed a thread leak that sometimes caused problems on build servers for large projects.
  • Moved the caption for the screenshots to the top of the screen for better readability.
  • Added the deep.step.execution.after.failures system property. This lets you decide whether @Step methods should simply be skipped after a previous step has failed (the default: this is faster, but only top-level steps will be reported), or whether subsequent steps should be executed in “dry-run” mode (which reports on nested steps as well as the top-level ones).
  • Upgraded to Appium 3.1.0.
  • Improved error and exception reporting for RestAssured tests.
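Both of the system properties mentioned in this article can be set in your serenity.properties file. A minimal sketch (the boolean value for the new property is an assumption, shown only to illustrate the syntax):

```properties
# Hypothetical value: run subsequent @Step methods in dry-run mode after a failure,
# so that nested steps still appear in the reports
deep.step.execution.after.failures = true

# Top-level package containing the requirements hierarchy for JUnit tests
serenity.test.root = net.thucydides.showcase.junit.features
```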

Coming up in the not-so-distant future will be deep JIRA integration, including integration with Zephyr. Older versions of Thucydides supported this integration using the old SOAP API for JIRA and a beta version of the Zephyr API. This has now been completely rewritten using the latest REST APIs for both tools.


