Acceptance Tests

The Test Directory Structure

This is the structure of the acceptance directory inside the core repository’s tests directory:

tests
├── acceptance
│   ├── config
│   │   └── behat.yml
│   ├── features
│   │   ├── apiTags (example suite of API tests)
│   │   │   └── feature files (behat gherkin files)
│   │   ├── bootstrap
│   │   │   └── Contexts and traits (php files)
│   │   ├── cliProvisioning (example suite of CLI tests)
│   │   │   └── feature files (behat gherkin files)
│   │   ├── lib
│   │   │   └── Page objects for webUI tests (php files)
│   │   └── webUILogin (example suite of webUI tests)
│   │       └── feature files (behat gherkin files)
│   ├── filesForUpload
│   └── run.sh

Here’s a short description of each component of the directory.

config/

This directory contains behat.yml, which configures the acceptance tests. In this file we can add new suites and define the contexts needed by each suite. Here’s an example configuration; note how the YAML anchor &common_feature_context_params lets the apiCapabilities suite reuse the FeatureContext parameters defined for the apiMain suite:

default:
  autoload:
    '': '%paths.base%/../features/bootstrap'
  suites:
    apiMain:
      paths:
        - '%paths.base%/../features/apiMain'
      contexts:
        - FeatureContext: &common_feature_context_params
            baseUrl:  http://localhost:8080
            adminUsername: admin
            adminPassword: admin
            regularUserPassword: 123456
            ocPath: apps/testing/api/v1/occ
        - AppManagementContext:
        - CalDavContext:
        - CardDavContext:

    apiCapabilities:
      paths:
        - '%paths.base%/../features/apiCapabilities'
      contexts:
        - FeatureContext: *common_feature_context_params
        - CapabilitiesContext:

features/

This directory contains sub-directories for each of the test suites.

features/suiteName

This directory stores Behat’s feature files for the test suite. These contain Behat’s test cases, called scenarios, which are written in the Gherkin language.

features/bootstrap

This folder contains all the Behat contexts. Contexts contain the PHP code required to run Behat’s scenarios. Every suite has to have one or more contexts associated with it. The contexts define the test steps used by the scenarios in the feature files of the test suite.
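
Here’s a minimal sketch of the shape of a context class; the class name and step text are hypothetical, but the real contexts in features/bootstrap follow this pattern:

<?php

use Behat\Behat\Context\Context;

class ExampleContext implements Context {
    /**
     * @When user :user does something using the example API
     *
     * @param string $user
     *
     * @return void
     */
    public function userDoesSomethingUsingTheExampleApi($user) {
        // PHP code that performs the step's action goes here.
    }
}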

filesForUpload/

This folder contains convenience files that the tests can upload to the system-under-test.

run.sh

This script runs the test suites. It is called by the make commands that are used to run acceptance tests.

The Testing App

The testing app provides an API that allows the acceptance tests to set up the environment of the system-under-test, for example, by running occ commands to set system and app config settings. The testing app must be installed and enabled on the system-under-test.
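
As an illustrative sketch only: with the example behat.yml values shown earlier (baseUrl http://localhost:8080, ocPath apps/testing/api/v1/occ, admin credentials admin/admin), the test code runs occ commands by sending requests to the testing app. The OCS URL prefix and the command parameter name below are assumptions and may differ between versions:

# Assumed endpoint layout and parameter name; verify against your version
curl -u admin:admin -X POST \
  "http://localhost:8080/ocs/v2.php/apps/testing/api/v1/occ?format=json" \
  --data-urlencode "command=config:system:get version"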

The testing app also provides skeleton folders that the tests can use as the default set of files for new users.

apps/testing/data/apiSkeleton/

This folder stores the initial files loaded for a new user during API or CLI acceptance tests.

apps/testing/data/webUISkeleton/

This folder stores the initial files loaded for a new user during webUI acceptance tests.

Running Acceptance Tests

Preparing to Run Acceptance Tests

This is a concise guide to running acceptance tests on ownCloud 10. Before you can do so, you need to have a few prerequisites available; these are:

  • ownCloud

  • Composer

  • MySQL

In php.ini on your system, set opcache.revalidate_freq=0 so that changes made to ownCloud’s config.php by test scenarios take effect immediately.
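
For example, add or adjust this line in php.ini:

; apply changes to ownCloud's config.php immediately, instead of waiting
; for the opcache revalidation interval
opcache.revalidate_freq=0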

After cloning core, run make as your webserver’s user in the root directory of the project.
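
Assuming your webserver user is www-data (as in the installation command below), that means:

cd $installation_path
sudo -u www-data make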

Now that the prerequisites are satisfied, and assuming that $installation_path is the location where you cloned the ownCloud/core repository, the following commands will prepare the installation for running the acceptance tests.

# Remove current configuration (if existing)
sudo rm -rf $installation_path/data/*
sudo rm -rf $installation_path/config/*

# Remove existing 'owncloud' database
mysql -u root -h localhost -e "drop database owncloud"
mysql -u root -h localhost -e "drop user oc_admin"
mysql -u root -h localhost -e "drop user oc_admin@localhost"

# Install ownCloud server with the command-line
sudo -u www-data $installation_path/occ maintenance:install \
  --database='mysql' --database-name='owncloud' --database-user='root' \
  --database-pass='mysqlrootpassword' --admin-user='admin' --admin-pass='admin'
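
To confirm that the installation succeeded, you can check the server status; occ status is a standard occ command:

sudo -u www-data $installation_path/occ status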

Types of Acceptance Tests

There are 3 types of acceptance tests: API, CLI and webUI.

  • API tests test the ownCloud public APIs.

  • CLI tests test the occ command-line commands.

  • webUI tests test the browser-based user interface.

webUI tests require an additional environment to be set up. See the UI testing documentation for more information. API and CLI tests are run by using the test-acceptance-api and test-acceptance-cli make commands.

Running Acceptance Tests for a Suite

Run a command like the following:

make test-acceptance-api BEHAT_SUITE=apiTags
make test-acceptance-cli BEHAT_SUITE=cliProvisioning

Running Acceptance Tests for a Feature

Run a command like the following:

make test-acceptance-api BEHAT_FEATURE=tests/acceptance/features/apiTags/createTags.feature
make test-acceptance-cli BEHAT_FEATURE=tests/acceptance/features/cliProvisioning/addUser.feature

Running Acceptance Tests for a Tag

Some test scenarios are tagged. For example, tests that are known to fail and are awaiting fixes are tagged @skip. To run test scenarios with a particular tag:

make test-acceptance-api BEHAT_SUITE=apiTags BEHAT_FILTER_TAGS=@skip
make test-acceptance-cli BEHAT_SUITE=cliProvisioning BEHAT_FILTER_TAGS=@skip
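
Behat tag filters also support negation with a ~ prefix. Assuming BEHAT_FILTER_TAGS is passed straight through to Behat’s --tags option, the following runs every scenario in the suite except those tagged @skip:

make test-acceptance-api BEHAT_SUITE=apiTags BEHAT_FILTER_TAGS='~@skip'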

Displaying the ownCloud Log

It can be useful to see the tail of the ownCloud log when the test run ends. To do that, specify SHOW_OC_LOGS=true:

make test-acceptance-api BEHAT_SUITE=apiTags SHOW_OC_LOGS=true

Optional Environment Variables

To use an alternative user home naming scheme, add OC_TEST_ALT_HOME=1 to the command, as in the following example:

make test-acceptance-api BEHAT_SUITE=apiTags OC_TEST_ALT_HOME=1

To run the tests with encryption enabled, add OC_TEST_ENCRYPTION_ENABLED=1, as in the following example:

make test-acceptance-api BEHAT_SUITE=apiTags OC_TEST_ENCRYPTION_ENABLED=1

How to Write Acceptance Tests

Each acceptance test is a scenario in a feature file in a test suite.

Feature Files

Each feature file describes and tests a particular feature of the software. The feature file starts with the Feature: keyword followed by a sentence describing the feature. This is followed by more detail explaining who uses the feature and why, in the format:

  As a [role]
  I want [feature]
  So that [benefit]

For example:

Feature: upload file using the WebDav API
  As a user
  I want to be able to upload files
  So that I can store and share files between multiple client systems

This detail is free-text and has no effect on the running of automated tests.

The rest of a feature file contains the test scenarios.

Make small feature files for individual features. For example "the Provisioning API" is too big to be a single feature. Split it into the functional things that it allows a client to do. For example:

  • addGroup.feature

  • addUser.feature

  • addToGroup.feature

  • deleteGroup.feature

  • deleteUser.feature

  • disableUser.feature

  • editUser.feature

  • enableUser.feature

  • removeFromGroup.feature

Test Scenarios

A feature file should have no more than about 10 to 20 scenarios that test the feature. If you need more scenarios than that, then perhaps there really are multiple features, and you should make multiple feature files.

Each scenario starts with the Scenario: keyword followed by a description of the scenario. Then the steps to execute for that scenario are listed.

There are 3 types of test steps:

  • Given steps that get the system into the desired state to start the test (e.g. create users and groups, share some files)

  • When steps that perform the action under test (e.g. upload a file to a share)

  • Then steps that verify that the action was successful (e.g. check the HTTP status code, check that other users can access the uploaded file)

A single scenario should test a single action or logical sequence of actions. So the Given, When and Then steps should come in that order.

If there are multiple Given or When steps, then steps after the first start with the keyword And.

If there are multiple Then steps, then steps after the first start with the keyword And or But.

Writing a Given Step

Given steps are written in the present-perfect tense. They specify things that "have been done". For example:

  Scenario: delete files in a sub-folder
    Given user "user0" has been created
    And user "user0" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "user0" has created a folder "/FOLDER/SUBFOLDER"
    And user "user0" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"

Given steps do not mention how the action is done. They mention the actor that performs the step only when that matters. For example, creating a user can only be done by an actor with sufficient admin privilege, so there is no need to mention "the administrator". But creating a file is done in the context of some user, so that user must be mentioned.

The test code is free to achieve the desired system state however it likes. For example, by using an available API, by running a suitable occ command on the system-under-test, or by doing it with the webUI. Typically the test code for Given steps will use an API, because that is usually the most efficient.

Writing a When Step

When steps are written in the simple present tense. They specify the action that is being tested. Continuing the example above:

  Scenario: delete all files in a sub-folder
    Given user "user0" has been created
    And user "user0" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "user0" has created a folder "/FOLDER/SUBFOLDER"
    And user "user0" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"
    When user "user0" deletes everything from folder "/FOLDER/" using the WebDAV API

In ownCloud there are usually 2 or 3 interfaces through which an action can be performed. For example, a user can be created using an occ command, the Provisioning API or the webUI. Files can be managed using the WebDAV API or the webUI. File shares can be managed using the Sharing API or the webUI. So When steps should end with a phrase specifying the interface to be tested, such as:

  • using the occ command

  • using the Sharing API

  • using the Provisioning API

  • using the WebDAV API

  • using the webUI

Writing a Then Step

Then steps describe what should be the case if the When step(s) happened successfully. They should contain the word should somewhere in the step text.

  Scenario: delete all files in a sub-folder
    Given user "user0" has been created
    And user "user0" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "user0" has created a folder "/FOLDER/SUBFOLDER"
    And user "user0" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"
    When user "user0" deletes everything from folder "/FOLDER/" using the WebDAV API
    Then user "user0" should see the following elements
      | /FOLDER/           |
      | /PARENT/           |
      | /PARENT/parent.txt |
      | /textfile0.txt     |
      | /textfile1.txt     |
      | /textfile2.txt     |
      | /textfile3.txt     |
      | /textfile4.txt     |
    But user "user0" should not see the following elements
      | /FOLDER/SUBFOLDER/              |
      | /FOLDER/welcome.txt             |
      | /FOLDER/SUBFOLDER/testfile0.txt |

Note that there are often multiple things that should or should not be the case after the When action. For example, in the above scenario, various files and folders (that are part of the skeleton) should still be there. But other files and folders under FOLDER should have been deleted.

Where it makes the scenario read more easily, use the But as well as And keywords in the Then section.

Then steps should test an appropriate range of evidence that the When action did happen. For example:

  Scenario: admin creates a user
    Given user "brand-new-user" has been deleted
    When the administrator sends a user creation request for user "brand-new-user" password "%alt1%" using the provisioning API
    Then the OCS status code should be "100"
    And the HTTP status code should be "200"
    And user "brand-new-user" should exist
    And user "brand-new-user" should be able to access a skeleton file

In this scenario we check that the OCS and HTTP status codes of the API request are good. But it is possible that the server lies, and returns HTTP status 200 for every request, even if the server did not create the user. So we check that the user exists. However, maybe the user exists according to some API that can query for valid user names or ids, while the account is not really valid and working. So we also check that the user can do something, in this case that they can access one of their skeleton files.

Specifying the Actor

Test steps often need to specify the actor that does the action or check. For example, the user.

The acceptance test code can remember the "current" user with a step like:

    Given as user "user0"
    And the user has uploaded file "abc.txt"
    When the user deletes file "abc.txt"
    ...

Later steps can then simply refer to "the user".

Or you can mention the user in each step:

    Given user "user0" has uploaded file "abc.txt"
    When user "user0" deletes file "abc.txt"
    ...

Either form is acceptable. Longer tests with a single user read well with the first form. Shorter tests, or sharing tests that mix actions of multiple users, read well with the second form.

When the actor is the administrator (a special user with privileges) then use the administrator in the step text. Do not write When user "admin" does something. The user name of the user with administrator privilege on the system-under-test might not be admin. The user name of the administrator needs to be determined at run-time, not hard-coded in the scenario.
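
For example, reusing the user-creation step from the scenario above:

    # Good: the actual admin user name is resolved at run-time
    When the administrator sends a user creation request for user "brand-new-user" password "%alt1%" using the provisioning API

    # Bad: hard-codes the user name of the administrator
    When user "admin" sends a user creation request for user "brand-new-user" password "%alt1%" using the provisioning API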

Referring to Named Entities

When referring to specific named entities on the system, such as a user, group, file, folder or tag, then do not put the word the in front, but do put the name of the entity. For example:

    Given user "user0" has been added to group "grp1"
    And user "user0" has uploaded file "abc.txt" into folder "folder1"
    And user "user0" has added tag "aTag" to file "folder1/abc.txt"
    When user "user0" shares folder "folder1" with user "user1"
    ...

This makes it easier to understand which entity is required in which position of the sentence. For example:

    And "user0" has uploaded "abc.txt" into "folder1"
    ...

would make it less clear that the required entities for this step are a user, a file and a folder.

Scenario Background

If all the scenarios in a feature start with a common set of Given steps, then put them into a Background: section. For example:

  Background:
    Given user "user0" has been created
    And user "user1" has been created
    And user "user0" has uploaded file "abc.txt"

  Scenario: share a file with another user
    When user "user0" shares file "abc.txt" with user "user1" using the sharing API
    Then the HTTP status code should be "200"
    And user "user1" should be able to download file "abc.txt"

  Scenario: share a file with a group
    Given group "grp1" has been created
    And "user1" has been added to group "grp1"
    When user "user0" shares file "abc.txt" with user "user1" using the sharing API
    Then the HTTP status code should be "200"
    And user "user1" should be able to download file "abc.txt"

This reduces some duplication in feature files.

Controlling Running Test Scenarios In Different Environments

A feature or test scenario might only be relevant to run on a system-under-test that has a particular environment. For example, a particular app enabled.

To allow the test runner script to run only the features and scenarios relevant to the system-under-test, the feature file or individual scenarios are tagged. The test runner script can then filter by tags to select the relevant features or scenarios.

For general information on tagging features and scenarios see the Behat tags documentation.

Tagging Features By API, CLI and webUI

Tag every feature with its major acceptance test type (api, cli or webUI), as in the following examples. Doing so allows the tests of a particular major type to be quickly run or skipped.

@api

@api
Feature: add groups
  As an admin
  I want to be able to add groups
  So that I can more easily manage access to resources by groups rather than individual users

@cli

@cli
Feature: add group
  As an admin
  I want to be able to add groups
  So that I can more easily manage access to resources by groups rather than individual users

@webUI

@webUI
Feature: login users
  As a user
  I want to be able to log into my account
  So that I have access to my files

Tagging Scenarios That Require An App

When a feature or scenario requires a core app to be enabled, tag it with the appropriate tag, for example:

@comments-app-required
@federation-app-required
@files_trashbin-app-required
@files_versions-app-required
@notifications-app-required
@provisioning-app-required
@systemtags-app-required

The above apps might be disabled on a system-under-test. Tagging the feature or scenario allows all tests for the app to be quickly run or skipped.

For tests in an app repository, do not tag them with the app name (e.g., files_texteditor-app-required). It is already a given that the app in the repository is required for running the tests!
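
For example, a core feature file that tests system tags through the API would combine the test-type tag with the app tag (the feature text here is illustrative):

@api @systemtags-app-required
Feature: add tags to files
  As a user
  I want to tag my files
  So that I can find them again by their tags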

Tagging Scenarios That Need to Be Skipped

Skip UI Tests On A Particular Browser

Some browsers have difficulty with some automated test actions. To skip scenarios for a browser tag them with the relevant tags:

@skipOnCHROME
@skipOnFIREFOX
@skipOnINTERNETEXPLORER
@skipOnMICROSOFTEDGE

Skip Tests On A Particular Version Of ownCloud

The acceptance test suite is sometimes run against a system-under-test that has an older version of ownCloud. When writing new test scenarios for a new or changed feature, tag them to be skipped on the most recent previous release of ownCloud. Use tag formats like the following to skip on a particular major, minor or patch version.

@skipOnOcV10
@skipOnOcV10.0
@skipOnOcV10.0.10
@skipOnOcV10.1
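
For example, a scenario that covers behavior introduced in ownCloud 10.1 could be tagged so that it is skipped when the system-under-test runs a 10.0.x server (the scenario text is illustrative):

  @skipOnOcV10.0
  Scenario: use a capability introduced in ownCloud 10.1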

Skip Tests In Other Environments

The following tags skip scenarios in particular environments:

@skipOnLDAP

Skip the scenario if the test is running with the LDAP backend. For example, some user provisioning features may not be relevant when LDAP is the backend for authentication.

@skipOnStorage:ceph

Skip the scenario if the test is running with ceph backend storage.

@skipOnStorage:scality

Skip the scenario if the test is running with scality backend storage.

@skipOnEncryption

Skip the scenario if the test is running with encryption enabled.

@skipOnEncryptionType:masterkey

Skip the scenario if the test is running with masterkey encryption enabled.

@skipOnEncryptionType:user-keys

Skip the scenario if the test is running with user-keys encryption enabled.

Tags For Tests To Run In Special Environments

The following tags select tests to run in special environments:

@smokeTest

This scenario has been selected as part of a base set of smoke tests.

@TestAlsoOnExternalUserBackend

This scenario is selected as part of a base set of tests to run when a special user backend is in place (e.g., LDAP).

@local_storage

This scenario requires and tests the local storage feature.

Special Tags for UI Tests

The following tags apply to webUI test scenarios:

@insulated

This tag makes the browser driver restart the browser session between scenarios, which helps isolate browser state. When the browser session is being recorded, there is a separate video for each scenario. Use this tag on all UI scenarios.

@disablePreviews

Generating previews/thumbnails takes time. Use this tag on UI test scenarios that do not need to test thumbnail behavior.
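
For example, a webUI feature would typically carry these tags together with its type tag (reusing the login feature from above):

@webUI @insulated @disablePreviews
Feature: login users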

Writing Scenarios For Bugs

If you are developing a new feature, and the scenarios that you have written do not pass, or existing scenarios are failing, then fix the code so that they pass.

If you are writing scenarios to cover features and scenarios that are not currently covered by acceptance tests then you may find existing bugs.

If the bug is easy to fix, then provide the bugfix and the new acceptance test scenario(s) in the same pull request.

If the bug is not easy to fix, then:

  • create an issue describing the bug.

  • write a scenario that demonstrates the existing wrong behavior.

  • include commented-out steps in the scenario to document the expected correct behavior.

  • write the scenario so that it will fail when the bug is fixed.

  • tag the scenario with the issue number.

  @issue-32385
  Scenario: Change email address
    When the user changes the email address to "new-address@owncloud.com" using the webUI
    # When the issue is fixed, remove the following step and replace with the commented-out step
    Then the email address "new-address@owncloud.com" should not have received an email
    #And the user follows the email change confirmation link received by "new-address@owncloud.com" using the webUI
    And the attributes of user "user1" returned by the API should include
      | email | new-address@owncloud.com |

The above scenario is an example of this. When the bug is fixed, the "should not have received an email" step will fail. CI will fail, and so the developer will notice this scenario and will have to correct it.

How to Add New Test Steps

See the Behat User Guide for information about writing test step code.

In addition to that, follow these guidelines.

Given Steps

The code of a Given step should achieve the desired system state by whatever means is quick to execute. Typically use a public API if available, rather than running an occ command via the testing app or entering data in the webUI.

If there is a simple way to gain confidence that the Given step was successful, then do it. Typically this will check a status code returned in the API response. Doing simple confidence checks in Given steps makes it easier to catch unexpected problems during the Given section of the scenario.

Here’s example code for a Given step:

/**
 * @Given the administrator has changed the password of user :user to :password
 *
 * @param string $user
 * @param string $password
 *
 * @return void
 * @throws \Exception
 */
public function adminHasChangedPasswordOfUserTo(
    $user, $password
) {
    $this->adminChangesPasswordOfUserToUsingTheProvisioningApi(
        $user, $password
    );
    $this->theHTTPStatusCodeShouldBe(
        200,
        "could not change password of user $user"
    );
}

The code calls the method for the When step and then checks the HTTP status code.

When Steps

The code of a When step should perform the action but not check its result. A When step should not ordinarily fail. Often a When step will save the response. It is the responsibility of later Then steps to decide if the scenario passed or failed.

Here’s example code for a When step:

/**
 * @When the administrator changes the password of user :user to :password using the provisioning API
 *
 * @param string $user
 * @param string $password
 *
 * @return void
 * @throws \Exception
 */
public function adminChangesPasswordOfUserToUsingTheProvisioningApi(
    $user, $password
) {
    $this->response = UserHelper::editUser(
        $this->getBaseUrl(),
        $user,
        'password',
        $password,
        $this->getAdminUsername(),
        $this->getAdminPassword()
    );
}

The code saves the response so that later Then steps can examine it.

Then Steps

The code of a Then step should check some result of the When action. Often it will find information in the saved response and assert something.

Here’s example code for a Then step:

/**
 * @Then /^the groups returned by the API should include "([^"]*)"$/
 *
 * @param string $group
 *
 * @return void
 */
public function theGroupsReturnedByTheApiShouldInclude($group) {
    $respondedArray = $this->getArrayOfGroupsResponded($this->response);
    PHPUnit_Framework_Assert::assertContains($group, $respondedArray);
}

However, a Then step may need to do actions of its own to retrieve more information about the state of the system. For example, after changing a user password we could check that the user can still access some file:

/**
 * @Then /^as "([^"]*)" (file|folder|entry) "([^"]*)" should exist$/
 *
 * @param string $user
 * @param string $entry
 * @param string $path
 *
 * @return void
 * @throws \Exception
 */
public function asFileOrFolderShouldExist($user, $entry, $path) {
    $path = $this->substituteInLineCodes($path);
    $this->responseXmlObject = $this->listFolder($user, $path, 0);
    PHPUnit_Framework_Assert::assertTrue(
        $this->isEtagValid(),
        "$entry '$path' expected to exist but not found"
    );
}

In the above example, listFolder makes an API call to access the file, and the step then asserts that the response contains a valid ETag.

References

For more information on Behat, and how to write acceptance tests using it, see the Behat documentation. For background information on Behaviour-Driven Development (BDD), see the resources by Dan North.