Projects

In the Projects view, users can create, remove, and manage their projects. New projects are created by clicking the ‘+’ button and selecting the project type. The available project types may vary depending on your device cloud installation.

NOTE! Appium client-side projects are generated on the fly and detected automatically. These projects cannot be created manually from the user interface.

image0

On the right-hand side of the view, users can create project-specific test runs or share projects with other valid Bitbar user accounts.

In the test run section, users can edit test runs, add tags to them, or delete them. The view also shows the percentage of successful test runs, the success rate of tests, the status of each test run (how many devices have finished it), date and time information, and application-specific information (e.g. the names of the test and application files).

Create a New Test Run

To create a new project together with a new test run, select the “Test Run Creator” option from the left menu. You will need to fill in the basic information described below and give the new project a name in the advanced settings.

For an existing project, create a new test run by clicking the ‘+’ sign at the top right of the Project view. You will need to provide the following information:

  1. Select Target Operating System

    Select whether you plan to run your tests on Android or iOS devices. Based on this selection, only supported frameworks and devices are presented in the following steps.

    select-os

  2. Select a Framework

    The available frameworks depend on the previously selected operating system. Depending on your cloud setup, the names and available frameworks may differ from those in this screenshot. On the public cloud the following frameworks are enabled:

    • iOS: Calabash iOS, Appium iOS Server Side, XCTest, XCUITest, AppCrawler
    • Android: Calabash Android, UI Automator, AppCrawler Android, Android Instrumentation, Appium Android Server Side

    select-framework

  3. Choose Files

    When setting up a new automated test run, you typically need to upload an application file (.apk or .ipa) and your tests as a zipped package. Up to three different files can be uploaded and installed on, or copied to, the device.

    choose-files

  4. Choose Devices

    The test run now has the required configuration but is still missing the devices on which the tests will run. You can pick a device from existing device groups, select individual devices (and create a new group from them), or pick any currently available device for test execution.

    choose-devices

  5. Additional Settings

    In addition to the basic settings described above, there are many additional configuration options for the project and the selected framework.

    1. Project name. When creating a new project and test run using the Test Run Creator menu item, you can define the name of the project. You can of course rename the project later on.

      project-name

    2. Test run name. You can name your test run before starting it. Give the test run a name that describes what you are testing with this run, e.g. a build number, a fixed bug ID, or the time and date, such as “Test run fixes jira-5643”.

      test-run-name

    3. Data file. If your test needs additional data during execution, you can upload a data file here. The uploaded file needs to be a ZIP package; its contents are extracted, preserving the directory structure, and are available on the device’s SD card during the test. A packaging sketch is shown below.

      advanced-data-file
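
      As an illustration, such a data package could be created with the short Python script below. This is a minimal sketch, not a cloud requirement; the directory name testdata and the file name test-data.zip are placeholder assumptions.

        # Package a local test-data directory into a ZIP, preserving the directory
        # structure so the same relative paths are available on the device's SD card.
        # "testdata" and "test-data.zip" are placeholder names.
        import os
        import zipfile

        with zipfile.ZipFile("test-data.zip", "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _, files in os.walk("testdata"):
                for name in files:
                    path = os.path.join(root, name)
                    zf.write(path, os.path.relpath(path, "testdata"))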

    4. Language. Define the device language that should be set before starting the test run. Note that some languages may not be available on all devices.

      advanced-language

    5. Test time-out period. Some tests are slow to execute. The default test time-out for public cloud test runs is 10 minutes, but the duration can be set here to 5, 10, 15, 20 or 30 minutes, so a slow test can keep running without being cut off mid-execution. Private and on-premise customers can customize this setting further.

      advanced-timeout

    6. Scheduling. In the cloud it is possible to set a scheduling rule for the test run. This parameter defines how and when the test is started on the selected devices. The options are:

      • Simultaneously - the test is started on all available devices at the same time. On devices that are not currently available, the test starts as they become available.
      • One device at a time - the test is started sequentially on each of the selected devices.
      • First available device only - the test runs only on the first available device of the selected device group.

      advanced-scheduling

    7. Use test cases from. For Android Instrumentation test runs it is possible to define which test class or package should be executed if you do not want to run the whole test suite. A local equivalent is sketched below.

      advanced-test-cases-from
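
      For reference, the local equivalent uses AndroidJUnitRunner’s package (or class) instrumentation argument, as in the Python sketch below; the package, class and runner names are placeholders, not values required by the cloud.

        # Run only one test package locally with AndroidJUnitRunner; use the
        # "class" argument instead of "package" to run a single test class.
        # All names below are placeholders.
        import subprocess

        subprocess.run([
            "adb", "shell", "am", "instrument", "-w",
            "-e", "package", "com.example.app.tests",
            "com.example.app.test/androidx.test.runner.AndroidJUnitRunner",
        ], check=True)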

    8. Test finish hook. When the test run finishes, the cloud can make a POST call to a specified URL. Note that in addition to this hook URL, you can also use email, Slack or HipChat integrations to get notified of finished test runs. A minimal receiver sketch is shown below.

      advanced-finish-hook
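
      As an illustration, a minimal receiver for such a hook could look like the Python sketch below. The payload format is an assumption; check what your cloud actually sends.

        # Minimal sketch of a finish hook receiver: the cloud POSTs to this URL
        # when a test run finishes. The payload contents are assumptions.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class FinishHookHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                body = self.rfile.read(length).decode("utf-8", errors="replace")
                print("Test run finished, payload:", body)  # e.g. trigger a CI job here
                self.send_response(200)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8000), FinishHookHandler).serve_forever()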

    9. Screenshots configuration. By default, screenshots for Calabash test runs are stored on the device’s SD card at /sdcard/test-screenshots/. If the screenshots should be stored elsewhere, the location can be configured here.

      advanced-screenshots

    10. Test user credentials. For AppCrawler test runs it is possible to provide a user name and password combination to be used during the AppCrawler test run.

      advanced-user-credentials

    11. Tags. Each test run can be given tags (e.g. bug IDs, Jira issue numbers, keywords). Tags are useful when using the Bitbar API to query test runs and filter the ones that are most interesting; a query sketch is shown below.

      advanced-tags
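
      The Python sketch below shows one way such a query could look over the REST API. The endpoint path, response shape and the use of the API key as the HTTP basic auth user name are assumptions; consult your cloud’s API documentation for the exact calls.

        # Sketch: list a project's test runs and keep the ones whose display name
        # contains a tag-like keyword. Endpoint path and response fields are assumptions.
        import requests

        CLOUD_URL = "https://cloud.bitbar.com"   # or your private cloud URL
        API_KEY = "your-api-key"                 # placeholder
        PROJECT_ID = 12345                       # placeholder

        resp = requests.get(
            f"{CLOUD_URL}/api/v2/me/projects/{PROJECT_ID}/runs",
            auth=(API_KEY, ""),
            timeout=30,
        )
        resp.raise_for_status()
        for run in resp.json().get("data", []):
            if "jira-5643" in (run.get("displayName") or ""):
                print(run.get("id"), run.get("displayName"))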

    12. Custom key/value pairs. The public cloud supports a number of shell environment variables that are made available to each test run. These can be used for test case sharding or for selecting execution logic for Calabash runs.

      advanced-key-value

      For Espresso test sharding, the variables numShards and shardIndex are supported out of the box. Use them the same way as you would locally.
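
      For illustration, the local way of passing these sharding arguments to AndroidJUnitRunner is sketched below in Python; the shard count and package names are placeholders.

        # Local equivalent of the numShards / shardIndex key/value pairs: run only
        # shard 0 of 4 with AndroidJUnitRunner. Package names are placeholders.
        import subprocess

        subprocess.run([
            "adb", "shell", "am", "instrument", "-w",
            "-e", "numShards", "4",
            "-e", "shardIndex", "0",
            "com.example.app.test/androidx.test.runner.AndroidJUnitRunner",
        ], check=True)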

      Calabash uses environment variables to control its runtime behavior. Two pre-defined environment variables, CALABASH_TAGS and CALABASH_PROFILE, can be set and used to better orchestrate test execution during the run.

      Xcode-based test suites can be controlled with the XCODE_SKIP_TESTING and XCODE_ONLY_TESTING keys.

      • XCODE_SKIP_TESTING takes the value of the -skip-testing command-line flag. It allows skipping a named test case or class.
      • XCODE_ONLY_TESTING takes the value of the -only-testing command-line flag. It allows running a single test method or all tests from a test class.

      On-premise and private cloud setups can allow users to create their own key/value pairs. For customers with advanced plans it is also possible on the public cloud to create keys for specific tasks to be performed before, during or after a test run. A local equivalent of the Xcode keys is sketched below.
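
      To illustrate what the Xcode keys map to, the local equivalents are xcodebuild’s -skip-testing and -only-testing flags, as in the Python sketch below. The scheme, destination and test identifier are placeholders.

        # Local equivalent of XCODE_ONLY_TESTING: run a single test class with
        # xcodebuild's -only-testing flag (use -skip-testing to exclude tests instead).
        # Scheme, destination and test identifier are placeholders.
        import subprocess

        subprocess.run([
            "xcodebuild", "test",
            "-scheme", "MyApp",
            "-destination", "platform=iOS Simulator,name=iPhone 15",
            "-only-testing:MyAppUITests/LoginTests",
        ], check=True)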

Start the test run by clicking the Start button. You are then redirected to the Test Run view.

Test run / Overview

Users can access the Test Run view either by starting a test run or by clicking any older test run in the Projects view. This view presents test run execution information and execution time, as well as a summary of the test run.

A Bitbar Testing test run always starts with device cleaning (removing all content from the device, cleaning the SD card and rebooting the device), followed by installation and launch of the app and the tests.

The first widget in the Test Run view shows a summary of the device sessions and their success ratio.

Picture. Test run view summary -widget
Picture. Test run view devices summary -widget

Tests success status

  • shows the percentage of successful tests
  • shows the number of passed tests / total number of tests in the test run

Overall device execution status

  • Finished - no errors, everything went fine.
  • Finished with failures - finished, but some test cases failed.
  • Finished with errors - finished, but there were errors in test execution, for example an application or device crash.

In addition to the summary information, you can download the application and test files as well as all log files. From the Summary widget you can also access the screenshot comparison views: compare screenshots by device, or compare screenshots by test steps (for Calabash runs only). Device-specific logs are available under each device run.

The Test run view details widget contains summary information for each device.

Picture. Test run view details -widget

  • At the top right of the test run widget, users can filter the data shown in the widget, for example by passed, failed, excluded, or not executed devices.

  • A more specific device execution error from the test run can be seen by clicking the info icon on the device line.

  • Users can focus on an individual device run by clicking its row.

  • Users can also retry the test run for a single device by clicking the retry button at the end of the device line. Note! Previous test run information for the device will be overwritten!

  • By clicking the checkbox column title, users can also select devices that are not currently visible for retry.

    Picture. Retry.

Screenshot comparison

The Screenshot Comparison view makes it easy to compare the screenshots captured on every device in a test run. The Compare By Test Steps comparison is available for Calabash and JUnit tests, enabling comparison of test steps between devices.

Users can select screenshots from a dropdown menu that is shown automatically for each device in the test run. Screenshots can also be browsed with the arrow buttons in the top right corner of the widget. For full-screen mode, click the ‘Full screen’ button; to download all screenshots, click the ‘Download screenshots’ button.

image9

Test Run Details

After clicking any row in the project’s test runs view, the user is directed to a list of device runs. Clicking any device run gives details about that specific run. The view is divided into a view of all devices, a main control panel with device-specific information, and different result windows. Let’s go through the different views and what you can do with them.

  1. Device Session Browser. Shows the devices used in the test run. Each device shows its current state (e.g. waiting, running, succeeded, warning or failure). Additionally, the device icon shows the test method success rate for that device. Hovering your mouse over the icons gives more information.

    ../../../_images/device-session-browser.png
  2. Control Panel. This panel gives you important data about how the test run succeeded and how the selected device performed with respect to other devices in this run.

    ../../../_images/control-panel-main.png
  3. Tests and Steps. If your test contains test steps or test functions, they are presented here in order of execution. The view shows the order in which tests were executed and whether each function executed successfully or not.

    ../../../_images/control-panel-steps.png
  4. Issues. After the test execution, we run statistics against the devices in the test run. This allows us to highlight in this tab the execution issues we believe are interesting to you.

    ../../../_images/control-panel-issues.png
  5. Output Files. All files generated and captured during the test execution. You can control which files are saved and presented here by configuring your run-tests.sh. Screenshots and video files are visible in the next widget, while log files are also viewable below with search and highlight support. To download any created file, click the downward arrow at the end of its line.

    ../../../_images/control-panel-files.png

Media Widget

The screenshots and test video recording view shows all screenshots captured during the selected test. Screenshots are presented in the order they were captured.

Screenshots can be downloaded by clicking the “Download screenshots” icon in the top right corner of the widget. The video file can also be downloaded by clicking the video’s download icon.

../../../_images/media.png

Performance Widget

The Performance widget provides details of CPU and memory usage during the test run. Users can click any given step/time to get more specific information about resource consumption.

image11

Logs Widget

The log view provides line-by-line information about the test run and can show the different types of log data available from the run, including logcat, Appium and Calabash logs. The logs can be searched with the browser search or the widget’s own search box. For easier debugging, the view can be enlarged to full screen.

image12