GitHub user duderino opened a pull request:
https://github.com/apache/trafficserver/pull/131
Add initial functional test framework with example test case.
Hi guys,
This is a first stab at a functional test framework. My intention is to
stimulate discussion for the test framework talk at the ATS Summit. I won't
be shattered if this pull request is never merged.
This currently consists of a bit of framework code and an example
functional test that uses the framework.
All code is written in nodejs. That may be the most controversial part.
# prerequisites
* Install nodejs. Example:
<pre>
$ wget http://nodejs.org/dist/v0.10.32/node-v0.10.32.tar.gz
$ tar xvfz node-v0.10.32.tar.gz
$ cd node-v0.10.32
$ ./configure
$ make
$ sudo make install
$ node -v
v0.10.32
$ npm -v
1.4.28
</pre>
* Install the mocha unit test framework. Example:
<pre>
$ sudo npm install mocha -g
</pre>
# tests/framework/atstf.js:
This is the test framework's process manager. The process manager is
responsible for starting and stopping the various processes used by the test
suite.
It is driven by a JSON config file that describes the processes used by the
test case. Individual test cases can also read this config file to get server
ports, hostnames, etc.
## configure it
The process manager currently supports the following process types:
* ats. This can start the traffic_server in the local source repository
or a traffic_server anywhere on the local file system. The traffic_server will
look for its config files in the 'root' directory and put its log files there
too.
Example:
<pre>
{
"servers": {
"ats1": {
"type": "ats",
"root": "ats1",
"interfaces": {
"http": {
"type": "http",
"hostname": "localhost",
"port": 8080
}
}
}
}
}
</pre>
* origin. A simple HTTP/1.1 origin with behavior determined by the test
JSON config file. Its behavior can be configured on a per method, per
abs_path basis. The following behavior can currently be configured:
1. response status code
1. response headers
1. number of chunks to send in the response (only chunked transfer
encoding is currently supported)
1. size of each chunk
1. byte value of each byte in chunk
1. milliseconds to wait before sending the first chunk
1. milliseconds to wait before sending all subsequent chunks
Example:
<pre>
{
"servers": {
"origin1": {
"type": "origin",
"interfaces": {
"http": {
"type": "http",
"hostname": "localhost",
"port": 8082
}
},
"actions": {
"GET": {
"/foo/bar": {
"status_code": 200,
"headers": {
},
"delay_first_chunk_millis": 0,
"chunk_size_bytes": 1024,
"num_chunks": 10,
"delay_between_chunk_millis": 0,
"chunk_byte_value": 99
}
}
}
}
}
}
</pre>
* node. An ad hoc nodejs script. When the configurable origin doesn't
give you the control you need, you can whip out a little nodejs origin. Or
perhaps you could introduce a nodejs proxy that simulates a slow connection?
Example:
<pre>
{
"servers": {
"origin2": {
"type": "node",
"script": "origin.js",
"args": [
"origin8082.json"
],
"interfaces": {
"http": {
"type": "http",
"port": 8082
}
}
}
}
}
</pre>
* other. Any executable that exits when sent a SIGTERM, exiting 0 on
success and non-zero on error.
Example:
<pre>
{
"servers": {
"origin3": {
"type": "other",
"executable": "/usr/local/apache2/bin/httpd",
"args": [
"-d", "/usr/local/apache2"
]
}
}
}
</pre>
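To make the configurable origin's knobs concrete, here is a sketch of how an action entry maps to response bytes (field names as in the config above; Buffer.alloc assumes a modern node, while node 0.10 would use new Buffer() plus fill()):

```javascript
// Sketch: deriving the response body from an origin action entry
// (field names as in the JSON config above).
var action = {
  status_code: 200,
  chunk_size_bytes: 1024,
  num_chunks: 10,
  chunk_byte_value: 99
};

// One chunk: chunk_size_bytes bytes, every byte equal to
// chunk_byte_value. Buffer.alloc is a modern node API; node 0.10
// would use new Buffer() followed by fill().
var chunk = Buffer.alloc(action.chunk_size_bytes, action.chunk_byte_value);

// A client should therefore receive this many body bytes in total.
var expected_total = action.chunk_size_bytes * action.num_chunks;
```

This is exactly the arithmetic the example test case performs when it verifies every byte of every chunk.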
## use it
This is how you can instantiate the process manager:
<pre>
var repo_root = '../..';
var tf = require(repo_root + '/tests/framework/atstf.js').atstf_new({
ats_path: repo_root + '/proxy/traffic_server',
origin_path: repo_root + '/tests/framework/origin.js',
config_path: './config.json',
log_cb: function(thresh, data) {
console.error(thresh + ': ' + data);
}
});
</pre>
While the process manager can be used with any nodejs test framework, the
following examples use the [mocha](http://visionmedia.github.io/mocha/) unit
test framework.
The process manager should be started before any test cases are run, and
stopped after all finish:
<pre>
describe('Example', function() {
before(function(done) {
tf.start(done);
});
after(function(done) {
tf.stop(done);
});
it('test_case', function() {
...
});
});
</pre>
# Example
## tests/example/config.json
In our example we ask the process manager to start three processes for us:
two distinct ATS processes (ats1, ats2) listening on different ports and a
configurable origin (origin1). ats1 is configured to proxy to origin1, but
ats2 is not.
<pre>
{
"servers": {
"ats1": {
"type": "ats",
"root": "ats1",
"interfaces": {
"http": {
"type": "http",
"hostname": "localhost",
"port": 8080
}
}
},
"ats2": {
"type": "ats",
"root": "ats2",
"interfaces": {
"http": {
"type": "http",
"hostname": "localhost",
"port": 8081
}
}
},
"origin1": {
"type": "origin",
"interfaces": {
"http": {
"type": "http",
"hostname": "localhost",
"port": 8082
}
},
"actions": {
"GET": {
"/foo/bar": {
"status_code": 200,
"headers": {
},
"delay_first_chunk_millis": 0,
"chunk_size_bytes": 1024,
"num_chunks": 10,
"delay_between_chunk_millis": 0,
"chunk_byte_value": 99
}
}
}
}
}
}
</pre>
## tests/example/test.js
Our example test case is pretty contrived. It asserts that ats1 correctly
proxies every byte in every chunk from origin1. It also asserts that ats2
returns a 404 for the same request.
Note that everything here is asynchronous. All test cases are passed a
'done' callback and have to invoke it (i.e., done()) to tell the mocha test
framework that the test case has finished. Mocha will mark the test as failed
if done isn't called within 2 seconds, but this timeout can be increased.
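The done-callback/timeout contract can be sketched without mocha as follows (a minimal stand-in for mocha's internals, not its actual implementation):

```javascript
// Minimal sketch of the done-callback/timeout contract (a stand-in
// for mocha's internals, not its actual implementation).
function run_test(test_fn, timeout_ms, report) {
  var finished = false;
  var timer = setTimeout(function() {
    // The test never called done() in time: mark it failed.
    if (!finished) { finished = true; report('fail: timed out'); }
  }, timeout_ms);
  test_fn(function done() {
    // The test finished first: cancel the deadline and mark it passed.
    if (!finished) { finished = true; clearTimeout(timer); report('pass'); }
  });
}

// A test that calls done() well before the 2 second deadline.
run_test(function(done) {
  setTimeout(done, 10); // simulated async work
}, 2000, function(result) { console.log(result); }); // prints "pass"
```

In mocha itself the per-test deadline can be raised with this.timeout(5000) inside the test function (which must be a regular function, so that mocha can bind this).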
<pre>
var log = function(thresh, data) {
// print to stderr to keep distinct from the generated test report
console.error(thresh + ': ' + data);
};
var repo_root = '../..';
var tf = require(repo_root + '/tests/framework/atstf.js').atstf_new({
ats_path: repo_root + '/proxy/traffic_server',
origin_path: repo_root + '/tests/framework/origin.js',
config_path: './config.json',
log_cb: log
});
var assert = require("assert");
var client = require('http').request;
// See http://visionmedia.github.io/mocha for details on this test framework.
describe('Example', function() {
before(function(done) {
tf.start(done);
});
after(function(done) {
tf.stop(done);
});
it('ats1', function(done) {
// Where is ats1?
var host = tf.config.servers.ats1.interfaces.http.hostname;
var port = tf.config.servers.ats1.interfaces.http.port;
// Path to origin1
var path = '/foo/bar';
// What is origin1 configured to send?
var action = tf.config.servers.origin1.actions.GET[path];
var status_code = action.status_code;
var bytes_to_receive = action.chunk_size_bytes * action.num_chunks;
var chunk_byte_value = action.chunk_byte_value;
var req = client(
{
hostname: host,
port: port,
path: path,
method: 'GET'
},
function(res) {
log('DEBUG', 'STATUS: ' + res.statusCode);
log('DEBUG', 'HEADERS: ' + JSON.stringify(res.headers));
var bytes_received = 0;
assert.equal(status_code, res.statusCode);
res.on('data', function(chunk) {
for (var i = 0; i < chunk.length; ++i) {
assert.equal(chunk_byte_value, chunk[i]);
}
bytes_received += chunk.length;
if (bytes_received >= bytes_to_receive) {
// End test (or timeout and fail)
done();
}
});
});
req.on('error', function(e) {
assert.fail(e, null, 'socket error');
});
req.end();
});
it('ats2', function(done) {
var host = tf.config.servers.ats2.interfaces.http.hostname;
var port = tf.config.servers.ats2.interfaces.http.port;
var path = '/foo/bar';
var req = client(
{
hostname: host,
port: port,
path: path,
method: 'GET'
},
function(res) {
log('DEBUG', 'STATUS: ' + res.statusCode);
log('DEBUG', 'HEADERS: ' + JSON.stringify(res.headers));
assert.equal(404, res.statusCode);
done();
});
req.on('error', function(e) {
assert.fail(e, null, 'socket error');
});
req.end();
});
});
</pre>
Note also how the test case accesses the process manager's JSON config file
to learn the hostnames and ports of ats1 and ats2. As long as the test cases
refrain from hardcoding connection details they can be turned into remote test
suites with only config changes.
The main trick is keeping the process manager from starting the ATS
processes. This can be accomplished either by not starting the process
manager at all, or by telling it not to start a specific process with
'spawn': false. Example:
<pre>
{
"servers": {
"ats1": {
"type": "ats",
"root": "ats1",
"spawn": false,
"interfaces": {
"http": {
"type": "http",
"hostname": "ats1.example.com",
"port": 8080
}
}
}
}
}
</pre>
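As long as the test derives its connection details from the config, nothing else changes between a local and a remote run. A minimal sketch (inline config object for illustration; a real test would use tf.config):

```javascript
// Sketch: resolving connection details from the shared config so the
// same test works locally or against a remote host. The config is
// inlined here for illustration; a real test would use tf.config.
var config = {
  servers: {
    ats1: {
      type: 'ats',
      spawn: false,
      interfaces: {
        http: { type: 'http', hostname: 'ats1.example.com', port: 8080 }
      }
    }
  }
};

var iface = config.servers.ats1.interfaces.http;
var base_url = 'http://' + iface.hostname + ':' + iface.port;
```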
## tests/example/GNUmakefile
NB: This trivial makefile should be replaced with its automake equivalent.
<pre>
test:
@mocha --reporter tap test.js > report.tap
clean:
@rm -rf ats1/var ats2/var report.tap
</pre>
The call to mocha happens here. Note the '--reporter' flag. The 'tap'
reporter is compatible with Jenkins, and in my experience it's more robust
than the 'xunit' reporter (which generates JUnit XML reports).
An example tap report:
<pre>
1..2
ok 1 Example ats1
ok 2 Example ats2
# tests 2
# pass 2
# fail 0
</pre>
## tests/example/ats1
This is the root directory for ats1. Each ATS process needs a distinct
root directory.
Checked in under the root directory are the full config files for ats1.
The layout should look familiar to anyone who has configured ATS before.
Example:
<pre>
$ tail ats1/etc/trafficserver/remap.config
map /foo/bar http://localhost:8082/foo/bar
</pre>
The var directory will also be created here. Every run will wipe out this
directory and recreate it so prior test runs won't affect subsequent test runs.
If a test fails it may help to look at the local logs.
Example:
<pre>
$ tail ats1/var/log/trafficserver/diags.log
[Oct 19 16:54:18.852] Server {0x7fc248e86800} NOTE: cache enabled
</pre>
## tests/example/ats2
You get the idea...
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/yahoo/trafficserver blattj_test_framework
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/trafficserver/pull/131.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #131
----
commit c5dc61999c0a050a8bd52ab83faf04e83b6b2c39
Author: Joshua Blatt <[email protected]>
Date: 2014-10-18T19:07:23Z
Add initial functional test framework with example test case.
----