Acceptance Tests

The Test Directory Structure

This is the structure of the acceptance directory inside the core repository’s tests directory:

tests
├── acceptance
│   ├── config
│   │   └── behat.yml
│   ├── features
│   │   ├── apiTags (example suite of API tests)
│   │   │   └── feature files (behat gherkin files)
│   │   ├── bootstrap
│   │   │   └── Contexts and traits (php files)
│   │   ├── cliProvisioning (example suite of CLI tests)
│   │   │   └── feature files (behat gherkin files)
│   │   ├── lib
│   │   │   └── Page objects for webUI tests (php files)
│   │   └── webUILogin (example suite of webUI tests)
│   │       └── feature files (behat gherkin files)
│   ├── filesForUpload
│   └── run.sh

Here’s a short description of each component of the directory.

config/

This directory contains behat.yml, which configures the acceptance tests. In this file we can add new suites and define the contexts needed by each suite. Here’s an example configuration:

default:
  autoload:
'': "%paths.base%/../features/bootstrap"
  suites:
    apiMain:
      paths:
- "%paths.base%/../features/apiMain"
      contexts:
        - FeatureContext: &common_feature_context_params
            baseUrl:  http://localhost:8080
            adminUsername: admin
            adminPassword: admin
            regularUserPassword: 123456
            ocPath: apps/testing/api/v1/occ
        - AppManagementContext:
        - CalDavContext:
        - CardDavContext:

    apiCapabilities:
      paths:
- "%paths.base%/../features/apiCapabilities"
      contexts:
        - FeatureContext: *common_feature_context_params
        - CapabilitiesContext:

features/

This directory contains sub-directories for each of the test suites.

features/suiteName

This directory stores Behat’s feature files for the test suite. These contain Behat’s test cases, called scenarios, which use the Gherkin language.

features/bootstrap

This folder contains all the Behat contexts. Contexts contain the PHP code required to run Behat’s scenarios. Every suite has to have one or more contexts associated with it. The contexts define the test steps used by the scenarios in the feature files of the test suite.

filesForUpload/

This folder contains convenience files that the tests can upload to the server.

run.sh

This script runs the test suites. It is called by the make commands that are used to run acceptance tests.

The Testing App

The testing app provides an API that allows the acceptance tests to set up the environment of the system-under-test. For example, running occ commands to set system and app config settings. The testing app must be installed and enabled on the system-under-test.
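
For example, a context can invoke an occ command on the system-under-test by sending an HTTP request to the testing app’s occ endpoint. The sketch below is hedged: it assumes the endpoint sits directly under the base URL at the ocPath from behat.yml and accepts the command as a form parameter named command; check the testing app source for the exact URL prefix and parameter contract.

<?php

use GuzzleHttp\Client;

// Hedged sketch: run an occ command through the testing app's API.
$client = new Client();
$response = $client->post(
  'http://localhost:8080/apps/testing/api/v1/occ', // baseUrl + ocPath (prefix may differ)
  [
    'auth' => ['admin', 'admin'], // admin credentials from behat.yml
    'form_params' => [
      'command' => 'config:system:get version', // parameter name assumed
    ],
  ]
);
echo $response->getBody();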

The testing app also provides skeleton folders that the tests can use as the default set of files for new users.

apps/testing/data/apiSkeleton/

This folder stores the initial files loaded for a new user during API acceptance tests.

apps/testing/data/webUISkeleton/

This folder stores the initial files loaded for a new user during webUI acceptance tests.

Preparing to Run Acceptance Tests

This is a concise guide to running acceptance tests on ownCloud 10.0. Before you can do so, you need to have a few prerequisites available; these are:

  • ownCloud

  • Composer

  • MySQL

In php.ini on your system, set opcache.revalidate_freq=0 so that changes made to ownCloud’s config.php by test scenarios take effect immediately.
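
You can quickly verify the effective value for your webserver’s PHP; for example, with a one-line script served by the same PHP setup:

<?php
// Quick check that the opcache setting is active; expect string(1) "0".
var_dump(ini_get('opcache.revalidate_freq'));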

After cloning core, run make as your webserver’s user in the root directory of the project.

Now that the prerequisites are satisfied, and assuming that $installation_path is the location where you cloned the ownCloud/core repository, the following commands will prepare the installation for running the acceptance tests.

# Remove current configuration (if existing)
sudo rm -rf $installation_path/data/*
sudo rm -rf $installation_path/config/*

# Remove existing 'owncloud' database
mysql -u root -h localhost -e "drop database owncloud"
mysql -u root -h localhost -e "drop user oc_admin"
mysql -u root -h localhost -e "drop user oc_admin@localhost"

# Install ownCloud server with the command-line
sudo -u www-data $installation_path/occ maintenance:install \
  --database='mysql' --database-name='owncloud' --database-user='root' \
  --database-pass='' --admin-user='admin' --admin-pass='admin'

Types of Acceptance Tests

There are three types of acceptance tests: API, CLI and webUI.

  • API tests test the ownCloud public APIs.

  • CLI tests test the occ command-line commands.

  • webUI tests test the browser-based user interface.

webUI tests require an additional environment to be set up. See the UI testing documentation for more information. API and CLI tests are run using the test-acceptance-api and test-acceptance-cli make commands.

Running Acceptance Tests for a Suite

Run a command like the following:

make test-acceptance-api BEHAT_SUITE=apiTags
make test-acceptance-cli BEHAT_SUITE=cliProvisioning

Running Acceptance Tests for a Feature

Run a command like the following:

make test-acceptance-api BEHAT_FEATURE=tests/acceptance/features/apiTags/createTags.feature
make test-acceptance-cli BEHAT_FEATURE=tests/acceptance/features/cliProvisioning/addUser.feature

Running Acceptance Tests for a Tag

Some test scenarios are tagged. For example, tests that are known to fail and are awaiting fixes are tagged @skip. To run test scenarios with a particular tag:

make test-acceptance-api BEHAT_SUITE=apiTags BEHAT_FILTER_TAGS=@skip
make test-acceptance-cli BEHAT_SUITE=cliProvisioning BEHAT_FILTER_TAGS=@skip

Displaying the ownCloud Log

It can be useful to see the tail of the ownCloud log when the test run ends. To do that, add SHOW_OC_LOGS=true:

make test-acceptance-api BEHAT_SUITE=apiTags SHOW_OC_LOGS=true

Optional Environment Variables

If you want to use an alternative home name, add OC_TEST_ALT_HOME=1 to the command, as in the following example:

make test-acceptance-api BEHAT_SUITE=apiTags OC_TEST_ALT_HOME=1

If you want to have encryption enabled, add OC_TEST_ENCRYPTION_ENABLED=1, as in the following example:

make test-acceptance-api BEHAT_SUITE=apiTags OC_TEST_ENCRYPTION_ENABLED=1

How to Write Acceptance Tests

Each acceptance test is a scenario in a feature file in a test suite.

Feature Files

Each feature file describes and tests a particular feature of the software. The feature file starts with the Feature: keyword and a sentence describing the feature. This is followed by more detail explaining who uses the feature and why, in the format:

  As a [role]
  I want [feature]
  So that [benefit]

For example:

Feature: upload file using the WebDav API
  As a user
  I want to be able to upload files
  So that I can store and share files between multiple client systems

This detail is free-text and has no effect on the running of automated tests.

The rest of a feature file contains the test scenarios.

Make small feature files for individual features. For example, "the Provisioning API" is too big to be a single feature. Split it into the functional things that it allows a client to do, such as:

  • addGroup.feature

  • addUser.feature

  • addToGroup.feature

  • deleteGroup.feature

  • deleteUser.feature

  • disableUser.feature

  • editUser.feature

  • enableUser.feature

  • removeFromGroup.feature

Test Scenarios

A feature file should have no more than about 10 to 20 scenarios that test the feature. If you need more scenarios than that, then perhaps there really are multiple features, and you should make multiple feature files.

Each scenario starts with the Scenario: keyword followed by a description of the scenario. Then the steps to execute for that scenario are listed.

There are 3 types of test steps:

  • Given steps that get the system into the desired state to start the test (e.g. create users and groups, share some files)

  • When steps that perform the action under test (e.g. upload a file to a share)

  • Then steps that verify that the action was successful (e.g. check the HTTP status code, check that other users can access the uploaded file)

A single scenario should test a single action or logical sequence of actions. So the Given, When and Then steps should come in that order.

If there are multiple Given or When steps, then steps after the first start with the keyword And.

If there are multiple Then steps, then steps after the first start with the keyword And or But.

Writing a Given Step

Given steps are written in the present-perfect tense. They specify things that "have been done". For example:

  Scenario: delete files in a sub-folder
    Given user "user0" has been created
    And user "user0" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "user0" has created a folder "/FOLDER/SUBFOLDER"
    And user "user0" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"

Given steps do not mention how the action is done. They mention the actor that performs the step only when that matters. For example, creating a user must be done by something with enough admin privilege, so there is no need to mention "the administrator". But creating a file must be done in the context of some user, so the user must be mentioned.

The test code is free to achieve the desired system state however it likes. For example, by using an available API, by running a suitable occ command on the system-under-test, or by doing it with the webUI. Typically the test code for Given steps will use an API, because that is usually the most efficient.
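
As an illustrative sketch (the helper names here are hypothetical, not the actual core helpers), the implementation of a Given step might look like this:

/**
 * @Given user :user has been created
 *
 * @param string $user
 */
public function userHasBeenCreated($user) {
  // Provision the user through an API rather than the webUI,
  // because an API call is the fastest way to reach this state.
  $this->createUserViaProvisioningApi($user); // hypothetical helper

  // Given steps verify their own effect, so that a failure here
  // reads as a setup problem rather than a failure of the
  // behaviour under test.
  PHPUnit\Framework\Assert::assertTrue(
    $this->userExists($user), // hypothetical helper
    "could not create user $user"
  );
}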

Writing a When Step

When steps are written in the simple present tense. They specify the action that is being tested. Continuing the example above:

  Scenario: delete all files in a sub-folder
    Given user "user0" has been created
    And user "user0" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "user0" has created a folder "/FOLDER/SUBFOLDER"
    And user "user0" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"
    When user "user0" deletes everything from folder "/FOLDER/" using the WebDAV API

In ownCloud there are usually two or three interfaces that can perform an action. For example, a user can be created using an occ command, the Provisioning API or the webUI. Files can be managed using the WebDAV API or the webUI. File shares can be managed using the Sharing API or the webUI. So When steps should end with a phrase specifying the interface to be tested, such as:

  • using the occ command

  • using the Sharing API

  • using the Provisioning API

  • using the WebDAV API

  • using the webUI

Writing a Then Step

Then steps describe what should be the case if the When step(s) happened successfully. They should contain the word should somewhere in the step text.

  Scenario: delete all files in a sub-folder
    Given user "user0" has been created
    And user "user0" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "user0" has created a folder "/FOLDER/SUBFOLDER"
    And user "user0" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"
    When user "user0" deletes everything from folder "/FOLDER/" using the WebDAV API
    Then user "user0" should see the following elements
      | /FOLDER/           |
      | /PARENT/           |
      | /PARENT/parent.txt |
      | /textfile0.txt     |
      | /textfile1.txt     |
      | /textfile2.txt     |
      | /textfile3.txt     |
      | /textfile4.txt     |
    But user "user0" should not see the following elements
      | /FOLDER/SUBFOLDER/              |
      | /FOLDER/welcome.txt             |
      | /FOLDER/SUBFOLDER/testfile0.txt |

Note that there are often multiple things that should or should not be the case after the When action. For example, in the above scenario, various files and folders should have been deleted. But other files and folders (that are part of the skeleton) should still be there.

Where it makes the scenario read more easily, use the But as well as And keywords in the Then section.

Then steps should test an appropriate range of evidence that the When action did happen. For example:

  Scenario: admin creates a user
    Given user "brand-new-user" has been deleted
    When the administrator sends a user creation request for user "brand-new-user" password "%alt1%" using the provisioning API
    Then the OCS status code should be "100"
    And the HTTP status code should be "200"
    And user "brand-new-user" should exist
    And user "brand-new-user" should be able to access a skeleton file

In this scenario we check that the OCS and HTTP status codes of the API request are good. But it is possible that the server lies, and returns HTTP status 200 for every request, even if it did not create the user. So we check that the user exists. However, maybe the user exists according to some API that can query for valid user names or IDs, but the user account is not really valid and working. So we also check that the user can do something; in this case, that they can access one of their skeleton files.

Specifying the Actor

Test steps often need to specify the actor that does the action or check. For example, the user.

The acceptance test code can remember the "current" user with a step like:

    Given as user "user0"
    And the user has uploaded file "abc.txt"
    When the user deletes file "abc.txt"
    ...

This lets later steps just mention "the user".

Or you can mention the user in each step:

    Given user "user0" has uploaded file "abc.txt"
    When user "user0" deletes file "abc.txt"
    ...

Either form is acceptable. Longer tests with a single user work well with the first form. Shorter tests, or sharing tests that mix actions of multiple users, work well with the second form.
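
With the first form, the context typically remembers the acting user in a property. A minimal sketch, assuming a hypothetical currentUser property:

/**
 * @Given as user :user
 */
public function asUser($user) {
  // Remember the acting user so that later steps written as
  // "the user ..." can look it up.
  $this->currentUser = $user;
}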

When the actor is the administrator (a special user with admin privileges), use "the administrator" in the step text. Do not write When user "admin" does something. The user name of the user with administrator privilege on the system-under-test might not be admin. The user name of the administrator needs to be determined at run-time, not hard-coded in the scenario.
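
This works because behat.yml passes the admin credentials to the context as parameters (see the example configuration above), matched by name to constructor arguments. A hedged sketch, not the actual core FeatureContext:

<?php

use Behat\Behat\Context\Context;

class FeatureContext implements Context {
  private $adminUsername;

  // behat.yml supplies these values; the remaining parameters
  // (adminPassword, regularUserPassword, ocPath) are omitted here
  // for brevity.
  public function __construct($baseUrl, $adminUsername) {
    $this->adminUsername = $adminUsername;
  }

  /**
   * @When the administrator deletes user :user using the provisioning API
   */
  public function adminDeletesUser($user) {
    // Resolve the admin from configuration at run-time,
    // never from the scenario text.
    $this->deleteUserAsUser($this->adminUsername, $user); // hypothetical helper
  }
}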

Referring to Named Entities

When referring to specific named entities on the system, such as a user, group, file, folder or tag, do not put the word the in front, but do include the name of the entity. For example:

    Given user "user0" has been added to group "grp1"
    And user "user0" has uploaded file "abc.txt" into folder "folder1"
    And user "user0" has added tag "aTag" to file "folder1/abc.txt"
    When user "user0" shares folder "folder1" with user "user1"
    ...

This makes it clear which type of entity is required at each position in the sentence. For example:

    And "user0" has uploaded "abc.txt" into "folder1"
    ...

would make it less clear that the required entities for this step are a user, a file and a folder.

Scenario Background

If all the scenarios in a feature start with a common set of Given steps, then put them into a Background: section. For example:

  Background:
    Given user "user0" has been created
    And user "user1" has been created
    And user "user0" has uploaded file "abc.txt"

  Scenario: share a file with another user
    When user "user0" shares file "abc.txt" with user "user1" using the sharing API
    Then the HTTP status code should be "200"
    And user "user1" should be able to download file "abc.txt"

  Scenario: share a file with a group
    Given group "grp1" has been created
    And "user1" has been added to group "grp1"
    When user "user0" shares file "abc.txt" with user "user1" using the sharing API
    Then the HTTP status code should be "200"
    And user "user1" should be able to download file "abc.txt"

This reduces some duplication in feature files.

How to Add New Test Steps

Test steps are implemented as methods in a context class in tests/acceptance/features/bootstrap, and the context must be listed for the relevant suite in behat.yml (see the example configuration above).

The first thing we need to do is create a new file for the context; we’ll name it ExampleContext.php to match the class it contains. In the file, we’ll add the code snippet below:

<?php

use Behat\Behat\Context\Context;

require __DIR__ . '/../../vendor/autoload.php';

/**
 * Example Context.
 */
class ExampleContext implements Context {
  use Webdav;
}

Here’s example code for a test step:

/**
 * @When Sending a :method to :url with requesttoken
 *
 * @param string $method
 * @param string $url
 */
public function exampleFunction($method, $url) {
  // Send the request here and remember the response so that
  // later Then steps can verify it.
}

References

For more information on Behat, and how to write acceptance tests using it, see the Behat documentation. For background information on Behaviour-Driven Development (BDD), see Dan North’s resources.