
This method of creating tests is deprecated.

We strongly recommend that you write your tests using test-config.yaml and pluggable modules rather than in Python. If you find yourself limited by the pluggable modules, we'd all be better off if you updated an existing module or created a new one to handle your scenario.

Overview

While the Asterisk Test Suite can execute a test written in any scripting language, Python has become the de facto language of choice. The Asterisk Test Suite contains a number of modules written in Python to help with writing tests; as such, we strongly encourage people to make use of the existing infrastructure - and, of course - add to it as necessary!

The following walkthrough produces a test similar to the tests/skeleton_test, which is included in the Asterisk Test Suite and provides a template for a Python test. You can use that test as a starting point for tests that you write.

Developing a test can be broken down into the following steps:

  1. Define the test layout and Asterisk configuration
  2. Describe the test in test-config.yaml
  3. Write the run-test script
  4. Execute the test

This walkthrough will create a test (sample) that makes Asterisk play back tt-monkeys.

Test Layout and Asterisk Configuration

  1. Create a new folder for the test in the appropriate location. In general, this will be a folder in the /tests directory. You may want to mirror Asterisk's structure by grouping related tests together; e.g., application tests should have folder(s) under the /tests/apps directory. For now, we'll assume that we're creating a test called sample, located in tests/sample.
  2. In the sample folder, create the following:
    • A run-test file, which will contain the python script to execute. The file should have execute permissions, and should not have the ".py" extension. The Test Suite looks for files named run-test and executes them; the fact that we are choosing Python as our language is an implementation decision that the Test Suite does not care about.
    • test-config.yaml, which will contain the test information and its dependency properties
    • A configs directory. The configs directory should contain subfolder(s) for each instance of Asterisk that will be instantiated by the test, named ast#, where # is the 1-based index of the Asterisk instance. For now, create a single folder named ast1.
    • In each ast# subfolder, place the Asterisk config files needed for the test. At a minimum, this will be extensions.conf.

      The asterisk class automatically creates an asterisk.conf file, and installs it along with other basic Asterisk configuration files (see the configs directory). You can override their behavior by providing your own .conf.inc files. Any configuration files not provided in the configs directory are installed from the subfolders for each test.

  3. Edit your extensions.conf to perform some test in Asterisk. For our test, we'll simply check that we can dial into Asterisk and play back a sound file. The completed extensions.conf is shown in the #Sample Test section below.

At the end of this, you should have:

  • A folder in tests named sample
  • An empty file in tests/sample named run-test
  • An empty file in tests/sample named test-config.yaml
  • A subfolder in sample named configs
  • A subfolder in sample/configs named ast1
  • A populated extensions.conf in sample/configs/ast1

Describing the test in test-config.yaml

Each test has a corresponding yaml file that defines information about the test, the dependencies the test has, and other optional configuration information. The fields that should be filled out, at a minimum, are:

  • testinfo:
    • summary: A summary of the test
    • description: A verbose description of exactly what piece of functionality in Asterisk is under test.
  • properties:
    • minversion: The minimum version of Asterisk that this test applies to
    • dependencies:
      • python: Any Python-based dependencies. Often this will be noted twice: once for 'twisted' and once for 'starpy'
      • custom: Custom dependencies, e.g., 'soundcard', 'fax', etc.
      • app: External applications that are needed, e.g., 'pjsua'

See the Test Suite's README.txt for all of the possible fields in a test configuration file.

The test-config.yaml file for our sample test is below.
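
A minimal version along the following lines should work. The summary and description text here is illustrative, and the minversion value will depend on the Asterisk branches your test applies to:

    testinfo:
        summary: 'Test playing back tt-monkeys to a channel'
        description: |
            'Originate a call to the s extension in context default, which
            plays back tt-monkeys and then raises a UserEvent indicating
            that the test passed.'

    properties:
        minversion: '1.8'
        dependencies:
            - python : 'twisted'
            - python : 'starpy'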

While we've created our test description, we haven't yet told the Test Suite of its existence. Upon startup, runtests.py checks tests/tests.yaml for the tests that exist. That file defines the folders that contain tests, where each folder contains another tests.yaml file that further defines tests and folders. In order for the Test Suite to find our sample test, open the tests/tests.yaml file and insert our test:
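
Assuming the usual layout of tests/tests.yaml, the addition is a single entry under the tests key:

    tests:
        # ... existing entries ...
        - test: 'sample'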

Writing run-test

Now we start to actually write the meat of our test. Each test in the Test Suite is spawned as a separate process, and so each test needs an entry point. First, let's import a few libraries and write our main.

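A sketch of that entry point follows. It assumes the Test Suite's lib/python layout, with TestCase living in the asterisk.test_case module; SampleTest itself is defined in the next section:

    #!/usr/bin/env python
    '''Sample test that plays tt-monkeys back over a Local channel.'''

    import sys
    import logging

    sys.path.append("lib/python")

    from twisted.internet import reactor
    from asterisk.test_case import TestCase

    logger = logging.getLogger(__name__)


    def main():
        # SampleTest is defined in the next section
        test = SampleTest()

        # Start the Asterisk instances created by the test, run the
        # reactor, and clean up once the reactor stops
        test.start_asterisk()
        reactor.run()
        test.stop_asterisk()

        if not test.passed:
            return 1
        return 0


    if __name__ == "__main__":
        sys.exit(main() or 0)
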
There are a few things to note from this:

  • We're going to use the twisted reactor for our test. This is useful, as we will typically use asynchronous AMI events to drive the tests.
  • We've added the Test Suite libraries to the Python path and imported the TestCase class. Our test case class, SampleTest, will end up deriving from it.
  • We have a logging object we can use to send statements to the Test Suite log file

Moving on!

Defining the Test Class

We'll need a test class that inherits from TestCase. For now, we'll assume that the basic class provides our start_asterisk and stop_asterisk methods and that we don't need to override them (which is a safe assumption in most cases). We'll fill in some of these methods a bit more later.

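In sketch form (the 30-second reactor_timeout is an arbitrary illustrative value):

    class SampleTest(TestCase):
        def __init__(self):
            super(SampleTest, self).__init__()

            # Stop the reactor after 30 seconds to keep the test from
            # hanging if something goes wrong
            self.reactor_timeout = 30

            # Create a single instance of Asterisk
            self.create_asterisk()

        def run(self):
            super(SampleTest, self).run()

            # Asterisk is running by the time the reactor calls this, so
            # create an AMI connection to it; ami_connect is called when
            # the connection succeeds
            self.create_ami_factory()

        def ami_connect(self, ami):
            logger.info("AMI connection %d established" % ami.id)
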
At the end of this, we have the following:

  • A class that inherits from TestCase. In its constructor, it calls the base class constructor and creates an instance of Asterisk by calling the TestCase.create_asterisk() method. The base class provides us a few attributes that are of particular use:
    • passed - a boolean variable that we can set to True or False
    • ast - a list of asterisk instances, that provide access to a running Asterisk application
    • ami - a list of AMI connections corresponding to each asterisk instance
    • reactor_timeout - the amount of time (in seconds) that the twisted reactor will wait before it stops itself. This is used to prevent tests from hanging.
  • TestCase has a method we can call, create_asterisk(), that, well, creates instances of Asterisk. Yay!
  • TestCase has another method, create_ami_factory(), that creates AMI connections to our previously created instances of Asterisk. We do this after the twisted reactor has started, so that Asterisk has a chance to start up.
  • An entry point for the twisted reactor called run(). This calls the base class's implementation of the method, then spawns an AMI connection. Note that in our main method, we start up the created Asterisk instances prior to starting the twisted reactor - so when run() is called by twisted, Asterisk should already be started and ready for an AMI connection.
  • A method, ami_connect, that is called when an AMI connection succeeds. This same method is used for all AMI connections - so to tell which AMI connection you are receiving, we can check the ami.id property. Each AMI connection corresponds exactly to the instance of Asterisk in the ast list - so ast[ami.id] will reference the Asterisk instance associated with the ami object.

Making the Test do something

So, we have a test that will start up, spawn an instance of Asterisk, and connect to it over AMI. That's interesting, but doesn't really test anything. Based on our extensions.conf, we want to call the s extension in default, hopefully have monkeys possess our channel, and then check that the UserEvent fired off to determine if we passed. If we don't see the UserEvent, we should eventually fail. Let's start off by adding some code to ami_connect.

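Something along these lines, assuming starpy's originate interface; the Echo application is just a convenient thing for the originating leg of the Local channel to run:

    def ami_connect(self, ami):
        logger.info("Originating call to s@default")

        # The Local channel's s@default leg runs our dialplan; the
        # originating leg simply runs Echo
        df = ami.originate(channel="Local/s@default", application="Echo")
        df.addErrback(self.handleOriginateFailure)
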
What we've now instructed the test to do is, upon an AMI connection, originate a call to the s extension in context default, using a Local channel. starpy's originate method returns a deferred object, which lets us assign a callback handler in case of an error. We've used the TestCase class's handleOriginateFailure method for this, which will automagically fail our test for us if the originate fails.

Now we need something to handle the UserEvent when monkeys inevitably enslave our phone system. Let's add that to our ami_connect method as well.

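Extended to register for the event before originating, with a handler that evaluates the result (stop_reactor() and the lowercase event keys are assumptions based on the TestCase class and starpy, respectively):

    def ami_connect(self, ami):
        # Register for the UserEvent raised by the dialplan before we
        # kick off the call
        ami.registerEvent('UserEvent', self.user_event)

        logger.info("Originating call to s@default")
        df = ami.originate(channel="Local/s@default", application="Echo")
        df.addErrback(self.handleOriginateFailure)

    def user_event(self, ami, event):
        # starpy lowercases the keys of the events it delivers
        if event['userevent'] != 'TestResult':
            return

        if event.get('result') == 'pass':
            self.passed = True
            logger.info("Monkeys detected - test passed")
        else:
            logger.error("Received a result of '%s'" % event.get('result'))

        # Stop the reactor; the test will be torn down and the results
        # evaluated
        self.stop_reactor()
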
Now we've registered for the UserEvent that should be raised from the dialplan after monkeys are played back. We make the assumption in the handler that we could have other UserEvents that return failure results - in our case, we don't have failure scenarios, but many tests do. Regardless, once we receive a user event we stop the twisted reactor, which will cause our test to be stopped and the results evaluated.

We should now be ready to run our test.

Running the test

From a console window, browse to the base directory of the Test Suite and type the following:

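For our sample test, and assuming runtests.py's --test option, that is:

    ./runtests.py --test=tests/sample
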
You should see something similar to the following:

We can inspect the log files created by the Test Suite for more information. The Test Suite creates two log files, full.txt and messages.txt: by default, DEBUG and higher are sent to full.txt, while INFO and higher are sent to messages.txt. The following is a snippet from messages.txt - yours should look similar.

Sample Test

extensions.conf

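A dialplan consistent with the walkthrough above (a reconstruction; the UserEvent name and header match those used in run-test):

    [default]

    exten => s,1,Answer()
    exten => s,n,Playback(tt-monkeys)
    exten => s,n,UserEvent(TestResult,result: pass)
    exten => s,n,Hangup()
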
test-config.yaml

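As described in the walkthrough (summary and description text illustrative):

    testinfo:
        summary: 'Test playing back tt-monkeys to a channel'
        description: |
            'Originate a call to the s extension in context default, which
            plays back tt-monkeys and then raises a UserEvent indicating
            that the test passed.'

    properties:
        minversion: '1.8'
        dependencies:
            - python : 'twisted'
            - python : 'starpy'
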
run-test

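The pieces from the walkthrough, assembled into a single script (a sketch, not the verbatim skeleton_test file):

    #!/usr/bin/env python
    '''Sample test that plays tt-monkeys back over a Local channel.'''

    import sys
    import logging

    sys.path.append("lib/python")

    from twisted.internet import reactor
    from asterisk.test_case import TestCase

    logger = logging.getLogger(__name__)


    class SampleTest(TestCase):
        def __init__(self):
            super(SampleTest, self).__init__()

            # Stop the reactor after 30 seconds to keep the test from
            # hanging if something goes wrong
            self.reactor_timeout = 30

            # Create a single instance of Asterisk
            self.create_asterisk()

        def run(self):
            super(SampleTest, self).run()

            # Create an AMI connection; ami_connect is called when the
            # connection succeeds
            self.create_ami_factory()

        def ami_connect(self, ami):
            # Register for the UserEvent raised by the dialplan, then
            # originate the call
            ami.registerEvent('UserEvent', self.user_event)
            df = ami.originate(channel="Local/s@default", application="Echo")
            df.addErrback(self.handleOriginateFailure)

        def user_event(self, ami, event):
            if event['userevent'] != 'TestResult':
                return

            if event.get('result') == 'pass':
                self.passed = True
                logger.info("Monkeys detected - test passed")
            else:
                logger.error("Received a result of '%s'" % event.get('result'))

            self.stop_reactor()


    def main():
        test = SampleTest()
        test.start_asterisk()
        reactor.run()
        test.stop_asterisk()

        if not test.passed:
            return 1
        return 0


    if __name__ == "__main__":
        sys.exit(main() or 0)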