Note
The Tests feature is only available to our Premium and Enterprise users.
Blackfire gathers a lot of data about how code behaves at runtime, and tests let you write assertions on that data. You can write performance tests with Blackfire by asserting on time dimensions (wall-clock time, I/O time, and CPU time), but whenever possible, we recommend writing assertions that do not depend on time: time is not stable from one profile to the next, which makes time-based tests volatile. Instead, find what could make your code slow and write assertions on that, such as limiting the number of SQL queries or the amount of memory used.

Blackfire tests are also a great way to test legacy applications.
To get started, create a .blackfire.yml file in the root directory of an
application:
tests:
    "Pages should be fast enough":
        path: "/.*" # run the assertions for all HTTP requests
        assertions:
            - "main.wall_time < 100ms" # wall clock time is less than 100ms

    "Commands should be fast enough":
        command: ".*" # run the assertions for all CLI commands
        assertions:
            - "main.wall_time < 2s" # wall clock time is less than 2s
.blackfire.yml is a YAML file where tests are defined under the main
tests key.
A test is composed of the following required items:

- a name (e.g. "Pages should be fast enough");
- a path regular expression for HTTP requests, or a command regular expression for CLI commands (the /.* path matches all HTTP URLs and .* matches all CLI commands);
- a set of assertions; here, the test checks that main.wall_time (the time it takes for your application to render the HTTP response) takes less than 100 milliseconds, 100ms.

Here is another example with several assertions limited to the homepage of the application:
tests:
    "Homepage should not hit the DB":
        path: "/" # only apply the assertions for the homepage
        assertions:
            - "metrics.sql.queries.count == 0" # no SQL statements executed
            - "main.peak_memory < 10mb" # memory does not exceed 10mb
            - "metrics.output.network_out < 100kb" # the response size is less than 100kb
When a profile is made on a project that contains a .blackfire.yml file,
Blackfire automatically runs all tests matching the HTTP request path. The
result of the tests is displayed as a green or red icon in the dashboard and the
full report is available on the profile page. The same goes when profiling a
CLI script via blackfire run.
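For instance (a minimal sketch; the URL and script path are placeholders), a profile can be triggered from the command line with the Blackfire CLI:

```shell
# Profile an HTTP endpoint; tests with a matching "path" in .blackfire.yml run automatically
blackfire curl https://example.com/

# Profile a CLI script; tests with a matching "command" regex are evaluated instead
blackfire run php bin/my-script.php
```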
Note that assertions in the report contain the actual metric and variable
values, so you know whether you are close to the target (for instance,
metrics.sql.queries.count 5 == 0: 0 is the target, 5 is the actual number
of SQL statements executed).
Assertions support profile comparison as well to assert the performance evolution of your code:
tests:
    "Pages should not become slower":
        path: "/.*"
        assertions:
            - "percent(main.wall_time) < 10%" # time does not increase by more than 10%
            - "diff(metrics.sql.queries.count) < 2" # less than 2 additional SQL statements
Profile and comparison assertions can be mixed in the same test: Blackfire only evaluates comparison assertions when comparing two profiles, and ignores them otherwise.
Read the assertion reference guide to learn more about the Blackfire assertion syntax.
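As an illustration (a sketch; the /checkout path and thresholds are hypothetical), a single test can carry both kinds of assertions:

```yaml
tests:
    "Checkout should stay fast":
        path: "/checkout"
        assertions:
            - "main.wall_time < 200ms"        # profile assertion: always evaluated
            - "percent(main.wall_time) < 10%" # comparison assertion: only evaluated on profile comparisons
```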
Custom metrics can also be defined under the
metrics section of a .blackfire.yml file:
metrics:
    cache.write: # metric name
        label: "Cache::write() calls"
        matching_calls:
            php:
                - callee: '=Cache::write' # aggregate the costs of all Cache::write calls
... and used in assertions like built-in ones:
tests:
    "Ensure that Cache::write() calls do not consume too much memory":
        path: "/.*"
        assertions:
            - "metrics.cache.write.peak_memory < 10mb"
In the above example, we defined a cache.write metric. Now, let's say that a
method's first argument is a string that influences memory consumption. Instead
of having one node for all matching calls, we want to get separate nodes
depending on the first argument value by using argument capturing, shown here
on a foo.bar metric:
metrics:
    foo.bar: # metric name
        label: "bar() calls"
        matching_calls:
            php:
                - callee:
                    selector: '=Foo::bar' # aggregate the costs of all Foo::bar calls
                    argument: { 1: "^" } # but create separate nodes by the first argument value
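Assuming such a metric, assertions can then reference it like any other (a sketch; the threshold is illustrative):

```yaml
tests:
    "Foo::bar() calls should stay cheap":
        path: "/.*"
        assertions:
            - "metrics.foo.bar.count < 10" # total number of Foo::bar() calls, across all captured nodes
```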