Projects

In the Projects view, users can create, remove, and manage their projects. New projects are created by clicking the ‘+’ button and selecting the project type. The available project types are: Android, iOS UI Automation, Android UIAutomator, Calabash Android, and Calabash iOS.

NOTE! Appium projects are generated on the fly and automatically detected by Bitbar Testing. These projects cannot be created manually in this view.

image0

On the right-hand side, users can create project-specific test runs and reports, or share projects with other valid Bitbar Testing user accounts.

In the test run section users can edit, tag, or delete test runs. The view also shows the percentage of successful test runs, the success rate of tests, the status of each test run (how many devices have finished it), date and time information, and application-specific information (e.g. the names of the test and application files).

Create a new test run (multi-page widget)

  1. Upload your application

    Click the Choose File button to locate the APK or IPA file on your local hard disk. To continue with configuration, click Next (at either the bottom or the top of the view).

    image1

  2. Select the test type

    Bitbar Testing provides a feature called App Crawler that automatically crawls through the app and tests its functionality by exercising UI components (e.g. clicking buttons, opening menus, changing views). App Crawler automatically takes screenshots, writes logs, and generally keeps a record of the application’s status.

    When the user selects ‘File’ instead of App Crawler, the view asks the user to locate the test script files for the app.

    When the desired test method has been selected, click Next (either the button at the bottom or the arrow at the top).

    image2

  3. Select devices for the test run

    The view shows all created device groups. The user can also create a new device group by clicking the ‘+’ icon at the top left. By default, the Trial device group is selected.

    NOTE! Device groups can include only Android or iOS devices.

    image3

  4. Test Run Advanced Options

    In the advanced options of a test run, the user can configure a number of options that affect how the test run is executed.

    1. Data file. If your test needs additional data during execution, you can upload a data file here. The uploaded file must be a ZIP package; its contents are extracted, with directory structure preserved, onto the device’s SD card for the duration of the test (a packaging sketch follows the image below).

      advanced-data-file
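
      As a minimal sketch, such a package could be created with Python’s standard zipfile module; the test-data directory and file names below are only examples:

        import zipfile
        from pathlib import Path

        def make_data_package(src_dir: str, out_zip: str) -> None:
            # Zip src_dir recursively, storing paths relative to src_dir
            # so the directory structure is preserved on extraction.
            src = Path(src_dir)
            with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
                for path in sorted(src.rglob("*")):
                    if path.is_file():
                        zf.write(path, arcname=path.relative_to(src))

        # Example: package ./test-data into test-data.zip for upload.
        make_data_package("test-data", "test-data.zip")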

    2. Test run name. It is possible to set a custom name for this test run. For example, if the run is for a newer build with specific fixes, name it accordingly, e.g. “Test run fixes jira-5643”.

      test-run-name

    3. Language. Define the device language that should be set before starting the test run. Note that some languages may not be available on all devices.

      advanced-language

    4. Test time-out period. Some tests are slow to execute. The default test time-out for public test runs is 10 minutes, but the user can set the duration here to 5, 10, 15, 20, or 30 minutes, so a slow test can run to completion without being cut off mid-execution. Private and on-premise customers can customize this setting further.

      advanced-timeout

    5. Scheduling. In the cloud it is possible to set a scheduling rule for the test run. This parameter defines how and when the test is started on the selected devices. Options are:

      • Simultaneously - the test is started on all available devices at the same time; on devices that are not currently available, it starts as they become available.
      • One device at a time - the test is started sequentially on the selected devices.
      • First available device only - the test is run on the first available device of the selected device group.

      advanced-scheduling

    6. Use test cases from. For Android Instrumentation test runs it is possible to define which test class or package should be executed if one does not want to run the whole test suite (e.g. a single package such as com.example.tests, or a single class such as com.example.tests.LoginTest).

      advanced-test-cases-from

    7. Test finish hook. When the test run finishes, it is possible to have a POST call made to a URL specified here (a minimal receiver sketch follows the image below). Note that in addition to this hook URL, you can also use the email, Slack, or HipChat integrations to get notified of finished test runs.

      advanced-finish-hook
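
      As a minimal sketch, a receiver for this hook could be built with Python’s standard library; the port is arbitrary, and the assumption that the payload arrives as a form-encoded POST should be verified against the Bitbar documentation:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs

        class FinishHookHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Read the raw body of the POST made at the end of the run.
                length = int(self.headers.get("Content-Length", 0))
                body = self.rfile.read(length).decode("utf-8")
                # Assumption: form-encoded payload; the field names are not
                # documented here, so inspect `fields` to see what arrives.
                fields = parse_qs(body)
                print("Test run finished:", fields)
                self.send_response(200)
                self.end_headers()

        if __name__ == "__main__":
            # Listen for finish-hook callbacks (example port).
            HTTPServer(("", 8000), FinishHookHandler).serve_forever()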

    8. Screenshots configuration. By default, for Calabash test runs screenshots are stored on the device’s SD card at /sdcard/test-screenshots/. If the screenshots should be stored elsewhere, the location can be configured here.

      advanced-screenshots

    9. Test user credentials. For AppCrawler test runs it is possible to provide a username and password combination to be used during the AppCrawler test run.

      advanced-user-credentials

    10. Tags. It is possible to provide each test run with tags (e.g. bug IDs, Jira issue numbers, keywords). These tags are handy when using the Bitbar API to query test runs and pick out the runs that are most interesting (a query sketch follows the image below).

      advanced-tags
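
      As a hedged sketch, querying runs by tag with Python and the requests library might look as follows; the endpoint path, the filter parameter name, and the project id are assumptions to check against the current Bitbar API reference:

        import requests

        API_KEY = "your-api-key"                  # personal API key from the cloud UI
        BASE = "https://cloud.bitbar.com/api/v2"  # assumed API base URL

        # Assumption: runs can be filtered by tag on the server side; the
        # parameter name "tag" below is hypothetical.
        resp = requests.get(
            f"{BASE}/me/projects/123/runs",  # 123 is a placeholder project id
            params={"tag": "jira-5643"},
            auth=(API_KEY, ""),              # API key as HTTP basic-auth username
        )
        resp.raise_for_status()
        for run in resp.json().get("data", []):
            print(run.get("id"), run.get("displayName"))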

    11. Custom Key/Value pair. Public cloud supports a number of shell environment variables that are made available to each test run. These can be used for test case sharding or for selecting execution logic for Calabash runs.

      advanced-key-value

      For Espresso test sharding, the variables numShards and shardIndex are supported out of the box. Use them the same way as you would locally (see the sketch at the end of this item).

      Calabash reads environment variables to control its runtime behavior. Two pre-defined environment variables, CALABASH_TAGS and CALABASH_PROFILE, can be set here. These can be used to better orchestrate test execution during the run.

      Xcode-based test suites can be controlled with the XCODE_SKIP_TESTING and XCODE_ONLY_TESTING keys.

      • XCODE_SKIP_TESTING takes the value of the -skip-testing command line flag. This allows one to skip a named test case or class.
      • XCODE_ONLY_TESTING takes the value of the -only-testing command line flag. This allows one to run a single test method or all tests from a test class (e.g. MyAppUITests/LoginTests/testValidLogin).

      On-premise and private cloud setups can allow users to create their own key-value pairs. For customers with advanced plans, it is also possible on the public cloud to create keys for specific tasks to be performed before, during, or after a test run.
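
      As a minimal sketch of consuming such variables, a test-runner script could shard its test list as follows; the test names are placeholders:

        import os

        # The section above states that key/value pairs are exposed to the
        # run as shell environment variables; numShards and shardIndex
        # mirror the standard AndroidJUnitRunner sharding arguments.
        num_shards = int(os.environ.get("numShards", "1"))
        shard_index = int(os.environ.get("shardIndex", "0"))

        tests = ["test_login", "test_logout", "test_signup", "test_profile"]
        # Deterministic sharding: shard i runs every num_shards-th test.
        mine = [t for i, t in enumerate(tests) if i % num_shards == shard_index]
        print(f"Shard {shard_index + 1}/{num_shards} runs: {mine}")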

Start the test run by clicking the Start button. You are then redirected to the Test Run view.

Test run / Overview

The user can access the Test Run view either by starting a test run or by clicking any older test run in the Projects view. This view presents test run execution information and execution time, as well as a summary of the test run.

A Bitbar Testing test run always starts with device cleaning (removing all content from the device, cleaning the SD card, and rebooting the device), followed by installation and launch of the app and tests.

The first widget in the Test Run view shows a summary of device sessions and their success ratio.

Picture. Test run view summary widget

Picture. Test run view devices summary widget

Tests success status

  • shows the percentage of successful tests
  • shows the number of passed tests / the total number of tests in the test run

Overall device execution status

  • Finished - no errors; everything went fine.
  • Finished with failures - finished, but some test cases failed.
  • Finished with errors - finished, but there were errors in test execution, for example an application or device crash.

In addition to the summary information, you can download the application and test files as well as all log files. From the Summary widget you can also access the screenshot comparison views and compare screenshots by device or by test step (for Calabash runs only). Device-specific logs are available under each device run.

The Test run view details widget contains summary information for each device.

Picture. Test run view details widget

  • At the top right of the test run widget, the user can filter the data shown in the widget, for example by passed, failed, excluded, or not executed devices.
  • A more specific error for a device execution can be seen by clicking the info icon on the device line.
  • The user can focus on a single device run by clicking its row.
  • The user can also retry the test run for a single device by clicking the retry button at the end of a device line, or by selecting the checkboxes of one or more devices.
  • By clicking the checkbox column title, the user can also select devices that are not currently visible for retry.

Picture. Retry.

NOTE! Previous test run information for the device will be overwritten!

Screenshot comparison

The Screenshot Comparison view makes it easy to compare the captured screenshots from every device in the test run. The Compare By Test Steps comparison is available for Calabash and JUnit tests, enabling comparison of test steps between devices.

Users can select screenshots from a dropdown menu shown for each device in the test run. Screenshots can also be browsed with the arrow buttons in the top-right corner of the widget. For full-screen mode, click the ‘Full screen’ button; to download all screenshots, click the ‘Download screenshots’ button.

image9

Device run details / Test cases

After clicking any row in the Test Run view, the user is directed to a view showing generic information about the tests. By selecting the name of a test (presented either as green/success or red/failure), the test steps are shown. The device run view presents all the main information about the run; this is also where the user can download, for example, the logs and videos of the run. The test steps and errors tabs can be enlarged to full screen, making it easier to debug test steps and errors.

image10

When the run includes several devices, clicking the “Browse all devices” button allows the user to switch the device under inspection.

Device run details / Screenshots

The Screenshots view shows all screenshots captured during the selected test. The number in the top-right corner of each screenshot indicates the step at which the screenshot was taken. For example, a test run may have 22 steps, each of which could include a screenshot. If a step includes multiple screenshots, they are numbered step.index (e.g. 6.1, 6.2).

The user can download all captured screenshots by clicking “Download screenshots” in the top-right corner of the widget.

Device run details / Performance

The Performance view provides details of CPU and memory usage during the test run. The user can click any given step/time to get more specific information about resource consumption.

image11

Device run details / Logs

The Logs view provides line-by-line information about the test run. It can show the different types of log data available from the test run, including logcat, Appium, and Calabash logs. The logs can be searched with the browser’s search or the widget’s own search box. For easier debugging, the view can be enlarged to full screen.

image12

AppCrawler

AppCrawler provides an easy, self-contained option for testing a mobile application with an automated test procedure. Bitbar Testing can run AppCrawler in both Android and iOS environments. AppCrawler tries to mimic human behaviour in order to produce reliable and valuable tests: it navigates through the application under test and interacts with its UI elements.

Usage

There are two possible ways to use AppCrawler in Bitbar Testing.

  • Navigate through Projects and open a project of type Android or iOS. Then proceed to ‘New run’, add the application under test, and in the second step select the AppCrawler option. Once the device group is selected, the test can be started.
  • Alternatively, navigate directly to the AppCrawler button in the main menu, upload the application, select the devices, and execute the test.

In both cases the results of the test execution, including screenshots, are presented in the test run view just as for normal test executions.