The Sweet testing framework provides a tool and methodology for testing applications built with the Sweet Builder.

The Sweet Builder enables business rules to be configured and built into an application. The number and types of business rules can make Sweet configurations complex; therefore, a mechanism has been provided to help ensure that new rules have not broken existing configurations and that the app continues to exhibit the expected behaviour.

The Sweet Automated Testing Framework is a project built on industry-standard software packages that can be tailored as required. It is intended for a variety of scenarios in which a baseline of interactions within a Sweet application is recorded and then compared to the results of the same interactions after some change has been made to the application (but not to the data).

A possible scenario for using the testing framework might be recording a baseline of Sweet interactions before upgrading to the latest Sweet release, then using the test tool to confirm that the output of interactions with the new release of Sweet matches the results of the baseline recording. Another scenario might be testing the results of a rule change, which might involve creating a baseline recording, changing a rule, and then running the test tool to confirm that the new rule does not have an adverse impact.

It is important to note that, at its core, the testing framework simply compares the data generated by Sweet when you make a recording with the data generated by Sweet when you run the test. For example, if you record the creation of a specific shape on a map, the test will attempt to create the exact same shape in the exact same place on the same map, and will then compare the output from that action with the output from the recording. The recording and test output are similar to what might be written to a feature server when a change is made. Any changes made to the data between the time the baseline recording is captured and the time the test tool is executed may cause the test to fail, even if the Sweet application and rules are performing correctly.

With those limitations in mind, it may be advisable to create a copy of the data you would like to use for your tests, disable saving changes when creating recordings against the copy of the data, and run your tests on the unmodified copy of your data after any system or rule changes have been implemented.

Testing process overview

Prepare data and application

  1. Prepare a snapshot of your data. All tests will use the same snapshot.
  2. Configure your Sweet application with the rules and settings for your business domain.
  3. Update the settings of your application to never save to the database. This ensures that the snapshot is not corrupted; the setting should be applied both when recording baselines and when executing tests.

Record tests

  1. Start the test application (with the Recorder Widget installed).
  2. Record tests using the Recorder Widget.
  3. Save each test in a directory.

Play back / run tests

  1. Run the tests using a command line tool.
  2. Inspect results in the command line tool or in the HTML report.

Software requirements

Software | Version | URL
Node.js and npm (usually installed together) | Latest | Link
OpenJDK | Latest | Link
Chrome Browser | Any recent version | Link
Sweet On Premise | Latest | As part of the installation media

If you would like to use Oracle Java you will need to bring your own licence.

Installation guide

Sweet’s Automated Testing Framework is provided as a sample project. This project is a folder with a set directory structure as specified by Intern.js.
To use the sample project:

Install project dependencies

  1. Confirm that Node.js, npm and the Java JRE or JDK are installed.
  2. Unzip the Sweet On Premise Build to a folder on your hard drive.
  3. Navigate to the Sweet Automated Testing Framework folder (testautomation) using a terminal or command line tool.
  4. Type the command npm install. This will install all the software components needed to run the testing framework (administrative or sudo privileges may be required during the install); see the example commands below.
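
For example, on a Windows machine the full sequence might look like the following (the path to the unzipped build is an example only):

    node --version
    npm --version
    java -version
    cd C:\SweetOnPremise\testautomation
    npm install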

Install Sweet command line tools

See the command line guide for further information.

Set up a web server for hosting the HTML report

In IIS, set a virtual folder to your project’s testautomation/tests/report folder.
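
If you prefer to do this from an elevated command prompt, IIS's appcmd tool can create the virtual directory; the site name, virtual path and physical path below are examples only:

    %systemroot%\system32\inetsrv\appcmd.exe add vdir /app.name:"Default Web Site/" /path:/testreport /physicalPath:"C:\SweetOnPremise\testautomation\tests\report"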

Configure Sweet application to host the recording widget

  1. Open the editor to the Sweet app.
  2. Add a new chart panel called Test Recording.
  3. Add an HTML panel with the following attributes:
    • Chart title: Record Test
    • Data – see parameters (the following is an example):
      return {
          "url": "./testsupport/recorder.html",
          "method": "pipe", 
          "display": "launch", 
          "preserve": true, 
          "query": null, 
          "form": { 
              "fieldComparisons": { 
                  "<INSERT LAYER NAME>": { 
                      "dontcompare": [ 
                          "<ATTRIBUTE TO IGNORE DURING TESTS>",
                          "<ATTRIBUTE TO IGNORE DURING TESTS>"
                      ]
                  }, 
                  "<INSERT LAYER NAME>": { 
                      "compare": [ 
                          "<ATTRIBUTE TO EVALUATE DURING TESTS>"
                      ]
                  }, "<INSERT LAYER NAME>": {
                      "dontcompare": [
                          "<ATTRIBUTE TO IGNORE DURING TESTS>", 
                          "<ATTRIBUTE TO IGNORE DURING TESTS>"
                      ]
                  }
              }
          },
          "notify": { 
              "protocol": "postmessage", 
              "receiveVerbs": [], 
              "sendVerbs": [
                  "operation-selected",
                  "operation-completed", 
                  "operation-started", 
                  "selection-changed", 
                  "mode-changed", 
                  "environment", 
                  "launch-init",
                  "extent"
              ]
          }
      };
      
    • Replace the <INSERT LAYER NAME> placeholders with the names of the layers in the web map. If a layer name is not included in the fieldComparisons section of the Arcade script, that layer’s attributes will be ignored when a test is run.
    • To prevent specific attributes of a feature from being evaluated while testing (for example Object ID, date and GUID fields), include a dontcompare list in the fieldComparisons section of the Arcade script and replace the placeholder text with the name of an attribute that should not be evaluated during a test. You can add additional fields to the list.
    • If you would prefer to specify which fields should be evaluated during a test instead of which attributes should be excluded, use the compare attribute and replace the placeholder text with the name of an attribute you would like to evaluate during the testing process. A filled-in example is shown after this list.
    • Click OK in the script editor.
    • Under No data, select If there is no data to show: Hide Chart.
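
For illustration, the form section of the script above might be filled in as follows for a hypothetical web map containing layers named Roads and Signs (the layer and attribute names are examples only; the rest of the script is unchanged). With this configuration, the Roads layer is compared on every attribute except OBJECTID, Created_Date and GlobalID, the Signs layer is compared only on SignType and Status, and any other layer in the map is ignored during the test.

          "form": {
              "fieldComparisons": {
                  "Roads": {
                      "dontcompare": [
                          "OBJECTID",
                          "Created_Date",
                          "GlobalID"
                      ]
                  },
                  "Signs": {
                      "compare": [
                          "SignType",
                          "Status"
                      ]
                  }
              }
          },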

Creating and running tests

Create recordings

  1. Launch the Sweet application that contains your Test Recording feature panel.
  2. Before performing any operations within Sweet, open the Test Recording panel and click the Start Recording button.
  3. Perform the operations that you would like to record. These operations can include creating new features, manipulating features, panning and zooming the map, etc.
  4. When you have completed your operations click the Stop Recording button.
  5. To save the recording, click the Save button. This will cause your recording to download to the default download location on your computer.
  6. Once the recording has downloaded, it can be moved to a location of your choosing on your computer. It is recommended that you rename the recording to something that will help you remember what the recording is meant to test, since the name of the file will be used as a reference in the test results display. The recording file must have a file extension of .json (for example: delete_road_segment.json).
  7. If at any time you would like to discard the current recording and start a new one, simply click the Reset Recorder button.
  8. You can make several recordings; each recording is a test (an example folder of recordings is shown after this list).
  9. You may notice a counter below the Reset Recorder button that says [n] events recorded. This information can be useful to confirm that your actions within Sweet are being recorded. Nearly every interaction in Sweet, including clicking on tools in the toolbar, manipulating data on a map, changing the map’s extent, or undoing and redoing actions, will be recorded and will cause the counter to increment.
  10. It is recommended that you run the test on your baseline recording (as described in the next section) to verify the recording’s integrity before making any configuration changes to the Sweet application.
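
As an illustration, a folder of recordings might look like the following (the folder and file names are examples only). When the tests are run, the recordings in the folder are executed in alphabetical order, so a numeric prefix can be used to control the sequence:

    C:\tests\recordings\
        01_create_road_segment.json
        02_edit_road_attributes.json
        03_delete_road_segment.json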

Run tests

  1. Set up the test by changing the following values in the settings.json file located in the Sweet testing tool’s root folder (an example file is shown after this list):
    • applicationUrl – should point to the URL of the Sweet application you would like to test (e.g. https://YOUR-DOMAIN/index.html). Please note: query string parameters should be removed from this URL.
    • applicationTimeout – in milliseconds, sets how long the testing framework will wait for the application to load into the web browser before it times out. N.B. – this value will only set the timeout for initially loading the Sweet application in a Chrome browser. If you need to change the timeout for individual tests that run once the application has loaded you will need to change the defaultTimeout value in \testautomation\intern.json.
    • testFolder – should point to the directory where the test recordings have been stored. On Windows computers this path must be formatted correctly for JSON, including escaping any backslashes with an additional backslash (for example, C:\\tests\\recordings). Please note: all recordings saved in the selected directory will be tested, so if there are several recordings that should be tested they should all be located within the directory. Each recording will be tested in alphabetical order, with the application restored to its initial state after each recording has been tested.
    • applicationId – the ID of the application that will be used during the test. This can be determined by opening the application and copying the query string value for appid, which might look something like 8483822d74e643e1af4585a64dba2aac.
    • clientId – the ID of the client who will be accessing the application. Again, this can be determined by opening the application and copying the query string value for instanceclientid, which might look something like yDfIbLy3BRaN7pMZ.
    • tokenUsername – the username that is used when logging into the application.
    • tokenPassword – the password that is used when logging into the application.
  2. Prepare credentials for use during the test:
    • The testing framework will use the information in the settings.json file to request an authentication token for the specific application identified by the applicationId. However, before requesting the token it checks the expiration date of any token that may already be stored. If you are switching between applications, be sure to delete the token.json file located at testautomation\tests\functional\token.json so a new token can be generated. Note: if you get a valid token error in the console, you can manually add a token to this file.
    • If the token.json file does not exist or if the token has expired, the testing framework will request a new token and store it in the token.json file.
  3. Close any browser windows that may be accessing the Sweet application you would like to test, otherwise a test might fail due to an element in the application being locked.
  4. Open a command prompt or terminal window and navigate to the Sweet testing tool’s root folder.
  5. Type the command npm test.
  6. The test will now run. The following steps will occur automatically:
    • During the test, a Chrome browser window will be opened and the Sweet application launched.
    • The actions recorded earlier will be automatically performed in the application, though these actions will not be saved.
    • At the end of the test, the browser window will automatically close.
    • Output from the test process will be displayed in the command prompt/terminal window.
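
For reference, a minimal settings.json reflecting the values described in step 1 might look like the following (all values are examples only, and your file may contain additional settings):

    {
        "applicationUrl": "https://YOUR-DOMAIN/index.html",
        "applicationTimeout": 60000,
        "testFolder": "C:\\tests\\recordings",
        "applicationId": "8483822d74e643e1af4585a64dba2aac",
        "clientId": "yDfIbLy3BRaN7pMZ",
        "tokenUsername": "test.user",
        "tokenPassword": "<YOUR PASSWORD>"
    }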

View results

After a set of tests has been run, the results can be reviewed within the terminal/command prompt or via a Sweet testing tool web page.

  1. Viewing results in the terminal/command prompt – details of each tested recording will be displayed in the terminal window as the tests are run. If a specific recording does not produce the expected output, an error message will be displayed in the console and some additional information will be provided. At the conclusion of the testing process, a summary of the tests will be displayed with a message similar to the following: TOTAL: tested 1 platforms, 13 passed, 3 failed, 1 skipped, -1 not run. Depending on the results of the test, not all the above items will be displayed. The results can be interpreted as follows:
    • tested n platforms – this shows how many different browsers were used to run the test. Since all tests are run only in Chrome, this number should always be 1.
    • n passed – this shows how many recordings were run that successfully returned the expected results.
    • n failed – this shows how many recordings were run that did not return the expected results.
    • n skipped – this shows how many recordings were skipped; this typically occurs if an operation times out, either due to the application hanging or due to an unexpected error while executing the recording.
    • n not run – this shows how many tests were not run. If the number displayed is -1 then all tests were run.
  2. Viewing results in the Sweet testing tool web page – a summary of test results will be displayed in the testing tool’s webpage. This summary will display the results of each recording that was included in the test, including the number of tests that passed, failed or were skipped. Clicking on a specific result will provide more information about which specific recordings passed, failed or did not run.
    • If you would like to discard the results from previous tests before running a new test, simply delete the testoutput.json file located in \testautomation\tests\report.