Benchmarking PHP code with PhpBench

This blog post is all about measuring the speed of PHP code through micro-benchmarks.

Why micro-benchmarks

I think of micro-benchmarks as a complement to tests. A well-written test would give a developer a definite pass or fail result, while a well-written micro-benchmark will give the developer an indication of the duration of an operation.

To put it another way, tests can help you find out whether the code is behaving correctly, while micro-benchmarks can help you to track whether your changes are making the code faster or slower.
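
To make the contrast concrete, here is a rough sketch (the class and method names are made up for illustration): a PHPUnit test asserts a specific result and so passes or fails, while a benchmark subject simply exercises the code so that a benchmarking tool can time it.

<?php
// Hypothetical class under test: names here are made up for illustration.
class MySorter
{
    public static function sort(array $values): array
    {
        sort($values);
        return $values;
    }
}

// A test makes an assertion, so it gives a definite pass/fail answer...
class MySorterTest extends \PHPUnit\Framework\TestCase
{
    public function testSort()
    {
        $this->assertSame([1, 2, 3], MySorter::sort([3, 1, 2]));
    }
}

// ...while a benchmark subject simply exercises the code, so that a tool
// like PhpBench can report how long each execution takes.
class MySorterBench
{
    public function benchSort()
    {
        MySorter::sort([3, 1, 2]);
    }
}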

Make sure that your code is doing real processing work before you spend time on this: Algorithms such as sorting, compressing, or parsing are good candidates. My guess is that most of the code in a regular PHP application would not benefit significantly from having a suite of benchmarks, because web apps tend to be I/O bound (eg database, network and disk access).

Introducing PhpBench

I have previously written a standalone test script every time I’ve needed to measure the speed of something in PHP. This works up to a point, but it gets quite hard to maintain as the number of benchmarks increases.

PhpBench is one of the available tools for running micro-benchmarks from PHP, and there are a few reasons why I think it’s worth a try:

  • it’s actively maintained
  • it installs with composer
  • it runs in a similar way to PHPUnit, so the benchmarks are fully specified in PHP code and run with a PHP tool.
  • it implements familiar concepts that you would find in benchmarking tools from other languages (eg. JMH from the Java world).

Installation

Start out with a blank composer project.

$ composer init

Add some auto-loading settings to composer.json so that the Composer autoloader knows to find ExampleApp classes in the src folder.

{
    "name": "mike42/php-benchmark-examples",
    "description": "Example PHP benchmark project",
    "type": "project",
    "license": "MIT",
    "minimum-stability": "dev",
    "require": {},
    "require-dev": {},
    "autoload": {
        "psr-4": {
            "ExampleApp\\": "src"
        }
    }
}

At this point, install the phpbench dev version.

$ composer require --dev phpbench/phpbench:@dev
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 22 installs, 0 updates, 0 removals
  - Installing symfony/process (4.4.x-dev 1a42849): Cloning 1a42849a7f from cache
  - Installing symfony/options-resolver (4.4.x-dev 94cbb72): Cloning 94cbb72bb9 from cache
  - Installing symfony/finder (4.4.x-dev 1b5ec12): Cloning 1b5ec12340 from cache
  - Installing symfony/polyfill-ctype (dev-master 82ebae0): Cloning 82ebae0220 from cache
  - Installing symfony/filesystem (4.4.x-dev 5914824): Cloning 59148241f7 from cache
  - Installing psr/log (dev-master c4421fc): Cloning c4421fcac1 from cache
  - Installing symfony/debug (4.4.x-dev 8278839): Cloning 8278839457 from cache
  - Installing symfony/service-contracts (dev-master 0c81a04): Cloning 0c81a04f68 from cache
  - Installing symfony/polyfill-php73 (dev-master d1fb4ab): Cloning d1fb4abcc0 from cache
  - Installing symfony/polyfill-mbstring (dev-master fe5e94c): Cloning fe5e94c604 from cache
  - Installing symfony/console (4.4.x-dev e2fe100): Cloning e2fe1002fd from cache
  - Installing lstrojny/functional-php (1.9.0): Loading from cache
  - Installing beberlei/assert (v3.x-dev ce139b6): Cloning ce139b6bf8 from cache
  - Installing seld/jsonlint (1.7.1): Loading from cache
  - Installing psr/container (dev-master 014d250): Cloning 014d250dae from cache
  - Installing phpbench/container (1.2): Loading from cache
  - Installing webmozart/assert (1.4.0): Loading from cache
  - Installing webmozart/path-util (dev-master 95a8f7a): Cloning 95a8f7ad15 from cache
  - Installing phpbench/dom (0.2.0): Loading from cache
  - Installing doctrine/lexer (dev-master ee614dd): Cloning ee614dd93a from cache
  - Installing doctrine/annotations (1.7.x-dev 3f35255): Cloning 3f35255290 from cache
  - Installing phpbench/phpbench (dev-master dccc67d): Cloning dccc67dd52 from cache
symfony/service-contracts suggests installing symfony/service-implementation
symfony/console suggests installing symfony/event-dispatcher
symfony/console suggests installing symfony/lock
Writing lock file
Generating autoload files

Writing a benchmark

Add some code to src/ExampleThing.php to measure.

<?php

namespace ExampleApp;

class ExampleThing {
  /**
   * Inefficiently multiply numbers together
   */
  public function multiply(int $x, int $y) : int {
    $ret = 0;
    for($i = 0; $i < $x; $i++) {
      for($j = 0; $j < $y; $j++) {
        $ret++;
      }
    }
    return $ret;
  }
}

Write a benchmark for the code in benchmarks/ExampleThingBenchmark.php

<?php

use ExampleApp\ExampleThing;

/**
 * @BeforeMethods({"init"})
 * @Revs(1000)
 * @Iterations(5)
 */
class ExampleThingBenchmark {

    private static $exampleThing;

    public function init()
    {
        self::$exampleThing = new ExampleThing();
    }

    /**
     * @Subject
     */
    public function doMultiply()
    {
        self::$exampleThing->multiply(100, 100);
    }
}

Before you run anything, you will also need a configuration file. In the benchmark above, @Revs sets how many times the subject is executed within each iteration, and @Iterations sets how many separate samples PhpBench records. Add this minimal configuration to phpbench.json.dist.

{
    "php_disable_ini": true,
    "bootstrap": "vendor/autoload.php",
    "path": "benchmark",
    "php_config": {
        "extension": [ "json.so" ]
    },
    "time_unit": "milliseconds"
}

Running the benchmark

The most basic way to run all benchmarks at once is phpbench run, which looks like this:

$ php vendor/bin/phpbench run
PhpBench @git_tag@. Running benchmarks.
Using configuration file: /home/mike/workspace/blog/phpbench/php-benchmarks/phpbench.json.dist

\ExampleThingBenchmark

    doMultiply..............................I4 [μ Mo]/r: 0.232 0.232 (ms) [μSD μRSD]/r: 0.000ms 0.10%

1 subjects, 5 iterations, 1,000 revs, 0 rejects, 0 failures, 0 warnings
(best [mean mode] worst) = 0.232 [0.232 0.232] 0.233 (ms)
⅀T: 1.162ms μSD/r 0.000ms μRSD/r: 0.100%

As you might expect, there are options to run specific benchmarks, or to present the results in different ways. However, there are some PhpBench-specific things which you will need to navigate to use it successfully.

Firstly, there are two different output settings to consider:

  • report options allow you to decide which data to print in a table.
  • output options allow you to decide how to format it for output.

I also found it useful to add --progress=none to suppress the text displayed by default, since you will otherwise get broken HTML output.

Lastly, be aware that, confusingly, the default report is not active by default. Specify it explicitly to get a table listing each iteration.

$ php vendor/bin/phpbench run --progress=none --report=default
suite: 13415431dc0db41c12decee0fa83c26d1db0f678, date: 2019-05-31, stime: 22:01:34
+-----------------------+------------+-----+------+------+----------+-----------+--------------+----------------+
| benchmark             | subject    | set | revs | iter | mem_peak | time_rev  | comp_z_value | comp_deviation |
+-----------------------+------------+-----+------+------+----------+-----------+--------------+----------------+
| ExampleThingBenchmark | doMultiply | 0   | 1000 | 0    | 915,736b | 236.716μs | +1.68σ       | +1.08%         |
| ExampleThingBenchmark | doMultiply | 0   | 1000 | 1    | 915,736b | 231.999μs | -1.45σ       | -0.93%         |
| ExampleThingBenchmark | doMultiply | 0   | 1000 | 2    | 915,736b | 233.960μs | -0.15σ       | -0.1%          |
| ExampleThingBenchmark | doMultiply | 0   | 1000 | 3    | 915,736b | 233.878μs | -0.2σ        | -0.13%         |
| ExampleThingBenchmark | doMultiply | 0   | 1000 | 4    | 915,736b | 234.361μs | +0.12σ       | +0.08%         |
+-----------------------+------------+-----+------+------+----------+-----------+--------------+----------------+

The type of report that I found most useful is aggregate, since it provides a summary of all iterations for each subject.

$ php vendor/bin/phpbench run --progress=none --report=aggregate
suite: 1341543c02ebee4031d165665c2724c379bf9c98, date: 2019-05-31, stime: 22:01:18
+-----------------------+------------+-----+------+-----+----------+-----------+-----------+-----------+-----------+---------+--------+-------+
| benchmark             | subject    | set | revs | its | mem_peak | best      | mean      | mode      | worst     | stdev   | rstdev | diff  |
+-----------------------+------------+-----+------+-----+----------+-----------+-----------+-----------+-----------+---------+--------+-------+
| ExampleThingBenchmark | doMultiply | 0   | 1000 | 5   | 915,736b | 231.907μs | 232.614μs | 232.089μs | 233.538μs | 0.713μs | 0.31%  | 1.00x |
+-----------------------+------------+-----+------+-----+----------+-----------+-----------+-----------+-----------+---------+--------+-------+

Both of the examples above use console output, which is the default.

HTML reports

To capture phpbench output from a CI environment, you can generate an HTML report as well. Add an output section to phpbench.json.dist.

{
    ...
    "outputs": {
         "html_file": {
             "extends": "html",
             "file": "benchmarks.html",
             "title": "Example benchmark report"
         }
    }
}

This new html_file output can be used with --output=html_file:

$ php vendor/bin/phpbench run \
    --progress=none --report=aggregate \
    --output=console --output=html_file

From Jenkins

This is a Jenkinsfile that I use to run benchmarks. You need to install the “HTML Publisher” and “AnsiColor” Jenkins plugins.

pipeline {
  agent any

  stages {
    stage('Install') {
      steps {
        ansiColor('xterm') {
          sh 'composer install'
        }
      }
    }
    stage('Run benchmarks') {
      steps {
        ansiColor('xterm') {
          sh 'php vendor/bin/phpbench run --progress=none --report aggregate --output=console --output=html_file --ansi'
          publishHTML([
            allowMissing: false,
            alwaysLinkToLastBuild: true,
            keepAll: true,
            reportDir: '',
            reportFiles: 'benchmarks.html',
            reportName:
            'phpbench report',
            reportTitles: ''
          ])       
        }
      }
    }
  }
}

Each build is then benchmarked, and the results are available as a link on the left.

The console output of the build shows the results.

To actually view the HTML report you will need to apply this tweak.

Next steps

I’ve started to add phpbench micro-benchmarks to some PHP projects that I work on, and I’m finding it a lot more useful than standalone test scripts.

The next step would be to integrate micro-benchmarks better into the development process. For example, I can already review code coverage improvements and regressions for a change on GitHub using Coveralls, which posts a comment to each pull request. This saves you from merging changes which don’t meet your project’s standards.

As far as I can tell, you would need to write custom code to produce more expressive phpbench reports along these lines, such as:

  • the history of a benchmark over time, or
  • the most-changed benchmarks against a baseline

phpbench does provide the building-blocks though, since you can set some options to archive the results of each run to a folder.

How to integrate Gitea and Jenkins

Gitea is a web app for hosting Git repositories. It’s open source, and very simple to get running. With some extra setup, it can also trigger Jenkins builds, and display the Jenkins build status of each commit once it has been built.

Because the documentation for the Jenkins plugin is very minimalist, I decided to write about it for future reference.

About this setup

I installed Jenkins and Gitea on the same Debian 9 server on the LAN. They communicate only over HTTP, so they could just as easily be installed separately.

To make the configuration clear, I’ve used jenkins.example.com in URLs which refer to Jenkins, and gitea.example.com for Gitea.

Gitea installation

This command will install and start the linux-amd64 version of Gitea as the user “git”.

useradd git -r --create-home && \
  mkdir /opt/gitea && chown -R git: /opt/gitea && \
  wget -O /opt/gitea/gitea https://dl.gitea.io/gitea/1.7.0/gitea-1.7.0-linux-amd64 && \
  chmod +x /opt/gitea/gitea && \
  sudo -u git bash -c "cd /opt/gitea && ./gitea web"

Shut it down, and configure some paths at /opt/gitea/custom/conf/app.ini. These will depend on your environment.

SSH_DOMAIN       = gitea.example.com
DOMAIN           = gitea.example.com
HTTP_PORT        = 3000
ROOT_URL         = http://gitea.example.com:3000/

Start it back up as a systemd service at this point, by creating /etc/systemd/system/gitea.service with this content:

[Unit]
Description=gitea
After=network.target

[Service]
ExecStart=/opt/gitea/gitea web
WorkingDirectory=/opt/gitea
User=git
Type=simple

[Install]
WantedBy=multi-user.target

Once this is saved, start the service.

systemctl daemon-reload
systemctl start gitea

Optionally, also configure it to start on boot.

systemctl enable gitea

Jenkins installation

I installed Jenkins from the official Debian repo at jenkins-ci.org, and clicked through the initial install.

Add plugin

Open up Manage Jenkins → Manage Plugins. Navigate to Available and check the Gitea plugin.

Next, install the plugin and restart Jenkins.

The configuration for the plugin is located under Manage Jenkins → Configure System.

At this point you will want to tell Jenkins where to find your Gitea server.

I don’t suggest choosing Manage hooks, because it uses the same account to manage hooks across all repos, which would violate the principle of least privilege.

Set up a project in Gitea

In Gitea, create a project, then a repository under that.

Register an account in Gitea for Jenkins to use for this project.

Log out, log back in as yourself, and add Jenkins as a collaborator to the repo, with Write access.

This is the only permission you need for public repositories. If you plan to lock down your Gitea organization later, then you will also need to give this Jenkins account Read access at the organization level.

Set up a project in Jenkins

Add a new Gitea Organization Jenkins job.

Enter the name of the organization, and the account to log in with.

Add the details for the new account, and make sure it’s selected.

The other options don’t need to be changed at this stage.

When you press ‘save’, Jenkins will immediately attempt to find any repositories in the Gitea organization, and kick off any builds. Unless everything is correct, this is unlikely to work the first time, so pay attention to error logs.

These three places will show what’s happening:

  • Scan Gitea Organization Log, which lists repositories in the organization.
  • Scan Multibranch Pipeline Log for each repository, which shows the discovery of branches.
  • Console output for each build, which will show errors if the build status could not be submitted.

Problems which I’ve found here include:

  • The URL of Gitea in the Jenkins configuration must match the URL to Gitea in its own configuration.
  • The Jenkins user account must have permission to list repositories, clone, and update statuses.
  • Empty repositories, and repositories without a ‘Jenkinsfile’ are ignored.

For that last step, here is an empty Jenkinsfile that you can put in your repository to test this integration:

pipeline {
    agent any

    stages {
        stage('Do nothing') {
            steps {
                sh '/bin/true'
            }
        }
    }
}

Once this is sorted out, you can expect to see your repository in Jenkins.

Every branch with a Jenkinsfile will appear.

And each time a commit is mentioned in Gitea, it will display a small icon to indicate the build status.

Set up a web hook in Gitea

At this point, builds need to be manually triggered. To trigger them each time the repository changes, we need to get a notification out to Jenkins.

Under the repository settings, click Webhooks → Add webhook → Gitea.

The correct values to use are:

  • URL: http://[ your jenkins server ]/gitea-webhook/post
  • POST Content Type: application/json

Once you press Add Webhook, the path will appear with a small grey dot, indicating that it hasn’t been run before.

If you click edit, then the Test Delivery button can be used to check that it’s working.

The icon indicates the status. If things aren’t working correctly, then click the delivery UUID to expand the full request information, which should help with debugging.

Final result

With Jenkins and Gitea, you have a simple, self-hosted continuous integration environment.

In Gitea, you can store, update and review your code. Any build and test steps in a Jenkinsfile will be run automatically each time the repository changes.

The detailed output for each build is visible in Jenkins, where you can track build results with a variety of plugins.

Handling I/O errors in PHP

This blog post is all about how to handle errors from the PHP file_get_contents function, and others which work like it.

The file_get_contents function will read the contents of a file into a string. For example:

<?php
$text = file_get_contents("hello.txt");
echo $text;

You can try this out on the command-line like so:

$ echo "hello" > hello.txt
$ php test.php 
hello

This function is widely used, but I’ve observed that error handling around it is often not quite right. I’ve fixed a few bugs involving incorrect I/O error handling recently, so here are my thoughts on how it should be done.

How file_get_contents fails

For legacy reasons, this function does not throw an exception when something goes wrong. Instead, it will both log a warning, and return false.

<?php
$filename = "not-a-real-file.txt";
$text = file_get_contents($filename);
echo $text;

Which looks like this when you run it:

$ php test.php 
PHP Warning:  file_get_contents(not-a-real-file.txt): failed to open stream: No such file or directory in test.php on line 3
PHP Stack trace:
PHP   1. {main}() test.php:0
PHP   2. file_get_contents() test.php:3

Warnings are not very useful on their own, because the code will continue on without the correct data.

Error handling in four steps

If anything goes wrong when you are reading a file, your code should be throwing some type of Exception which describes the problem. This allows developers to put a try {} catch {} around it, and avoids nasty surprises where invalid data is used later.
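
For example, once file reads go through a wrapper that throws, calling code can deal with a failure in one obvious place. This is a minimal sketch; the readTextFile() helper is a hypothetical stand-in for the error handling built up in the steps below.

<?php
// Hypothetical wrapper, standing in for the error handling developed below.
function readTextFile(string $filename): string
{
    $text = @file_get_contents($filename);
    if ($text === false) {
        throw new Exception("File '$filename' was not loaded.");
    }
    return $text;
}

try {
    $settings = readTextFile("settings.ini");
    echo $settings;
} catch (Exception $e) {
    // The caller decides how to report or recover from the failure
    error_log($e->getMessage());
}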

Step 1: Detect that the file was not read

Any call to file_get_contents should be immediately followed by a check for that false return value. This is how you know that there is a problem.

<?php
$filename = "not-a-real-file.txt";
$text = file_get_contents($filename);
if($text === false) {
  throw new Exception("File was not loaded");
}
echo $text;

This now gives both a warning and an uncaught exception:

$ php test.php 
PHP Warning:  file_get_contents(not-a-real-file.txt): failed to open stream: No such file or directory in test.php on line 3
PHP Stack trace:
PHP   1. {main}() test.php:0
PHP   2. file_get_contents() test.php:3
PHP Fatal error:  Uncaught Exception: File was not loaded in test.php:5
Stack trace:
#0 {main}
  thrown in test.php on line 5

Step 2: Suppress the warning

Warnings are usually harmless, but there are several good reasons to suppress them:

  • It ensures that you are not depending on a global error handler (or the absence of one) for correct behaviour.
  • The warning might appear in the middle of the output, depending on php.ini.
  • Warnings can produce a lot of noise in the logs.

Use @ to silence any warnings from a function call.

<?php
$filename = "not-a-real-file.txt";
$text = @file_get_contents($filename);
if($text === false) {
  throw new Exception("File was not loaded");
}
echo $text;

The output is now only the uncaught Exception:

$ php test.php 
PHP Fatal error:  Uncaught Exception: File was not loaded in test.php:5
Stack trace:
#0 {main}
  thrown in test.php on line 5

Step 3: Get the reason for the failure

Unfortunately, we lost the “No such file or directory” message, which is pretty important information that should go in the Exception. It can be retrieved from the old-style error_get_last function.

This function might just return empty data, so you should check that everything is set and non-empty before you try to use it.

<?php
$filename = "not-a-real-file.txt";
error_clear_last();
$text = @file_get_contents($filename);
if($text === false) {
  $e = error_get_last();
  $error = (isset($e) && isset($e['message']) && $e['message'] != "") ?
      $e['message'] : "Check that the file exists and can be read.";
  throw new Exception("File '$filename' was not loaded. $error");
}
echo $text;

This now embeds the failure reason directly in the message.

$ php test.php 
PHP Fatal error:  Uncaught Exception: File 'not-a-real-file.txt' was not loaded. file_get_contents(not-a-real-file.txt): failed to open stream: No such file or directory in test.php:9
Stack trace:
#0 {main}
  thrown in test.php on line 9

Step 4: Add a fallback

The last time I introduced error_clear_last()/error_get_last() into a code-base, I found out that HHVM does not have these functions.

Call to undefined function error_clear_last()

The fix for this is to write some wrapper code, to verify that each function exists.

<?php
$filename = 'not-a-real-file';
clearLastError();
$text = @file_get_contents($filename);
if ($text === false) {
    $error = getLastErrorOrDefault("Check that the file exists and can be read.");
    throw new \Exception("Could not retrieve image data from '$filename'. $error");
}
echo $text;

/**
 * Call error_clear_last() if it exists. This is dependent on which PHP runtime is used.
 */
function clearLastError()
{
    if (function_exists('error_clear_last')) {
        error_clear_last();
    }
}
/**
 * Retrieve the message from error_get_last() if possible. This is very useful for debugging, but it will not
 * always exist or return anything useful.
 */
function getLastErrorOrDefault(string $default)
{
    if (function_exists('error_clear_last')) {
        $e = error_get_last();
        if (isset($e) && isset($e['message']) && $e['message'] != "") {
            return $e['message'];
        }
    }
    return $default;
}

This does the same thing as before, but without breaking other PHP runtimes.

$ php test.php 
PHP Fatal error:  Uncaught Exception: Could not retrieve image data from 'not-a-real-file'. file_get_contents(not-a-real-file): failed to open stream: No such file or directory in test.php:7
Stack trace:
#0 {main}
  thrown in test.php on line 7

Since HHVM is dropping support for PHP, I expect that this last step will soon become unnecessary.

How not to handle errors

Some applications put a series of checks before each I/O operation, and then simply perform the operation with no error checking. An example of this would be:

<?php
$filename = "not-a-real-file.txt";
// Check everything!
if(!file_exists($filename)) {
  throw new Exception("$filename does not exist");
}
if(!is_file($filename)) {
  throw new Exception("$filename is not a file");
}
if(!is_readable($filename)) {
  throw new Exception("$filename cannot be read");
}
// Assume that nothing can possibly go wrong..
$text = @file_get_contents($filename);
echo $text;

You could probably make a reasonable-sounding argument that checks are a good idea, but I consider them to be misguided:

  • If you skip any actual error handling, then your code is going to fail in more surprising ways when you encounter an I/O problem that could not be detected.
  • If you do perform correct error handling as well, then the extra checks add nothing other than more branches to test.

Lastly, beware of false positives. For example, the above snippet will reject HTTP URLs, which are perfectly valid for file_get_contents.
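
To illustrate (assuming allow_url_fopen is enabled in php.ini): the http:// stream wrapper does not support the stat operations behind those checks, so file_exists() reports false for a URL which file_get_contents() can read without any problem.

<?php
$url = "http://example.com/";
var_dump(file_exists($url));   // bool(false): the http wrapper cannot be stat'd
$page = @file_get_contents($url);
var_dump($page !== false);     // bool(true) if the request succeeded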

Conclusion

Most PHP code now uses try/catch/finally blocks to handle problems, but the ecosystem really values backwards compatibility, so existing functions are rarely changed.

The style of error reporting used in these I/O functions is by now a legacy quirk, and should be wrapped to consistently throw a useful Exception.
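
Putting the four steps together, a small wrapper function along these lines is one way to do that. This is a sketch only; the function name and exception class are placeholders for whatever suits your project.

<?php
/**
 * Read a file into a string, throwing an Exception instead of returning false.
 */
function readFileOrThrow(string $filename): string
{
    if (function_exists('error_clear_last')) {
        error_clear_last();
    }
    $text = @file_get_contents($filename);
    if ($text === false) {
        $e = function_exists('error_get_last') ? error_get_last() : null;
        $reason = (isset($e['message']) && $e['message'] !== "") ?
            $e['message'] : "Check that the file exists and can be read.";
        throw new Exception("File '$filename' was not loaded. $reason");
    }
    return $text;
}

echo readFileOrThrow("hello.txt");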

How to export a Maildir email inbox as a CSV file

I’ve often thought it would be useful to have an “Export as CSV” button on my inbox, and recently had an excuse to implement one.

I needed to parse the Subject of each email coming from an automated process, but the inbox was in the Maildir format, and I needed something more useful for data interchange.

So I wrote a utility, mail2csv, to export the Maildir as a CSV file, which many existing tools can work with. It produces one row per email, and one column per email header:

The basic usage is:

$ mail2csv example/
Date,Subject,From
"Wed, 16 May 2018 20:05:16 +0000",An email,Bob <bob@example.com>
"Wed, 16 May 2018 20:07:52 +0000",Also an email,Alice <alice@example.com>

You can select any header with the --headers option, which accepts globs:

$ mail2csv example/ --headers Message-ID Date Subject
Message-ID,Date,Subject
<89125@example.com>,"Wed, 16 May 2018 20:05:16 +0000",An email
<180c6@example.com>,"Wed, 16 May 2018 20:07:52 +0000",Also an email

If you’re not sure which headers your emails have, then you can make a CSV with all the headers, and just read the first line:

$ mail2csv  example/ --headers '*' | head -n1

You can find the Python code for this tool on GitHub here.

Make better CLI progress bars with Unicode block characters

As a programmer, you might add a progress bar so that the user has feedback while they wait for a slow task.

If you are writing a console (CLI) application, then you need to make your progress bars from text. A good command-line progress bar should update in small increments, like this example:

This uses Unicode Block Elements to give the progress bar a higher resolution.

Code      Character
U+2588    █
U+2589    ▉
U+258A    ▊
U+258B    ▋
U+258C    ▌
U+258D    ▍
U+258E    ▎
U+258F    ▏
A lot of applications use plain ASCII in their progress bars. The progress bar in wget, for example, uses ===> characters only, like this:

Progress bars made from ASCII characters like = and # signs are very common, most likely because of the historical portability issues around non-ASCII text. Nowadays, UTF-8 support is ubiquitous, and it’s pointless to adhere to such limitations.

Example: Better progress bars in python

The animation at the top of this blog post is produced by a simple Python script.

import sys
import time

from progress_bar import ProgressBar

"""
Example usage of ProgressBar class
"""
print("Doing work\n")

with ProgressBar(sys.stdout) as progress:
    for i in range(0,800):
        progress.update(i / 800)
        time.sleep(0.05)
print("\nDone.\n");

The script progress_bar.py, written for Python 3, contains a class that allows the progress bar to be created and drawn on different types of terminals.

import abc
import math
import shutil
import sys
import time

from typing import TextIO

"""
Produce progress bar with ANSI code output.
"""
class ProgressBar(object):
    def __init__(self, target: TextIO = sys.stdout):
        self._target = target
        self._text_only = not self._target.isatty()
        self._update_width()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if(exc_type is None):
            # Set to 100% for neatness, if no exception is thrown
            self.update(1.0)
        if not self._text_only:
            # ANSI-output should be rounded off with a newline
            self._target.write('\n')
        self._target.flush()
        pass

    def _update_width(self):
        self._width, _ = shutil.get_terminal_size((80, 20))

    def update(self, progress : float):
        # Update width in case of resize
        self._update_width()
        # Progress bar itself
        if self._width < 12:
            # No label in excessively small terminal
            percent_str = ''
            progress_bar_str = ProgressBar.progress_bar_str(progress, self._width - 2)
        elif self._width < 40:
            # No padding at smaller size
            percent_str = "{:6.2f} %".format(progress * 100)
            progress_bar_str = ProgressBar.progress_bar_str(progress, self._width - 11) + ' '
        else:
            # Standard progress bar with padding and label
            percent_str = "{:6.2f} %".format(progress * 100) + "  "
            progress_bar_str = " " * 5 + ProgressBar.progress_bar_str(progress, self._width - 21)
        # Write output
        if self._text_only:
            self._target.write(progress_bar_str + percent_str + '\n')
            self._target.flush()
        else:
            self._target.write('\033[G' + progress_bar_str + percent_str)
            self._target.flush()

    @staticmethod
    def progress_bar_str(progress : float, width : int):
        # 0 <= progress <= 1
        progress = min(1, max(0, progress))
        whole_width = math.floor(progress * width)
        remainder_width = (progress * width) % 1
        part_width = math.floor(remainder_width * 8)
        part_char = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉"][part_width]
        if (width - whole_width - 1) < 0:
          part_char = ""
        line = "[" + "█" * whole_width + part_char + " " * (width - whole_width - 1) + "]"
        return line

Aside from the use of box drawing characters, this script includes a few other things which a good progress bar should implement:

  • Resize the progress bar when you resize the terminal
  • Simplify the progress bar on very small terminals
  • Don’t print ANSI terminal codes if the script is not connected to a terminal
  • Round off to 100% when use of the progress bar completes without error

After writing this, I discovered that the progress pypi package can also use these box characters, so I haven’t packaged this code up. I haven’t used progress before, but you might like to evaluate it for your own applications.

How to create effective PHP project documentation with Read the Docs

Documentation is one of the ways that software projects can communicate information to their users. Effective documentation is high-quality, meaning that it’s complete, accurate, and up-to-date. At least for open source libraries, it also means that you can find it with a search engine. For many small PHP projects, the reality is very far removed from the ideal.

Read the Docs (readthedocs.io) makes it easy to host an up-to-date copy of your project’s documentation online. There are around 2,000 PHP projects which host their documentation on the site, making PHP the third most popular programming language there.

This post covers the process that is used to automatically publish the documentation for the gfx-php PHP graphics library, which is one of those 2,000 projects. You should consider using this setup as a template if your project is small enough that it does not have its own infrastructure.

Basic concept

Typically, people are using Read the Docs with a tool called Sphinx. If you are writing in Python, it’s also possible to use the autodoc Sphinx plugin to add API documentation, based on docstrings in the code.

PHP programmers are already spoiled for choice if they want to produce HTML documentation from their code, and the available tools have huge PHP user bases.

These will each output their own HTML, which is only useful if you want to self-host the documentation. I wanted a tool that was more like “autodoc for PHP”, so that I can have my API docs magically appear in Sphinx output that is hosted on Read the Docs.

Doxygen is the most useful tool for this purpose, because it has a stable XML output format and good-enough PHP support. I decided to write a tool which takes the Doxygen XML output and generates reStructuredText for Sphinx:

This introduces some extra tools, which can look complex at first. The stack is geared towards running within the Read the Docs build environment, so most developers can treat it as a black box after the initial setup:

This setup is entirely hosted with free cloud services, so you don’t need to run any applications on your own hardware.

Tools to install on local workstation

First, we will set up each of these tools locally, so that we know everything is working before we upload it.

  • Doxygen
  • Sphinx
  • doxyphp2sphinx

Doxygen

Doxygen can read PHP files to extract class names, documentation and method signatures. Linux and Mac users can install it from most package managers (apt-get, dnf or brew) under the name doxygen, while Windows users need to chase down binaries.

In your repo, make a sub-folder called docs/, and create a Doxyfile with some defaults:

mkdir docs/
doxygen -g Doxyfile

You need to edit the configuration to make it suitable for generating XML output for your PHP project. With the version of Doxygen used here (1.8.13), you only need to change a few values to set Doxygen to:

  • Recursively search PHP files in your project’s source folder
  • Generate XML and HTML only
  • Log warnings to a file

For a typical project, these settings are:

PROJECT_NAME           = "Example Project"
INPUT                  = ../src
WARN_LOGFILE           = warnings.log
RECURSIVE              = YES
USE_MDFILE_AS_MAINPAGE = ../README.md
GENERATE_LATEX         = NO
GENERATE_XML           = YES

Once you set these in Doxyfile, you can run Doxygen to generate HTML and XML output.

$ doxygen

Doxygen will pick up most method signatures automatically, and you can add to them via docblocks, which work along the same lines as docstrings in Python. Read Doxygen: Documenting the Code to learn the syntax if you have not used a documentation generator in a curly-bracket language before.
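
For example, a docblock like the one below (using the made-up Widget class from the FooCorp\Example namespace referred to later in this post) is enough for Doxygen to document the method, its parameter and its return value:

<?php

namespace FooCorp\Example;

class Widget
{
    /**
     * Count the features of this widget.
     *
     * @param bool $includeHidden Also count features which are not user-visible.
     * @return int The number of features.
     */
    public function getFeatureCount(bool $includeHidden = false) : int
    {
        return $includeHidden ? 42 : 40;
    }
}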

The Doxygen HTML will never be published, but you might want to read it to see how well Doxygen understands your code.

The XML output is much more useful for our purposes. It contains the same information, and we will read it to generate pages of documentation for Sphinx to render.

Sphinx

Sphinx is the tool that we will use to render the final HTML output. If you haven’t used it before, then see the official Getting Started page.

We are using Sphinx because it can be executed by online services like Read the Docs. It uses the reStructuredText format, which is a whole lot more complex than Markdown, but supports cross-references. I’ll only describe these steps briefly, because there are existing how-to guides on making Sphinx work for manually-written PHP documentation elsewhere on the Internet.

Still in the docs folder with your Doxyfile, create and render an empty Sphinx project.

pip install sphinx
sphinx-quickstart --quiet --project example_project --author example_bob
make html

The generated HTML will initially appear like this:

We need to customize this in a way that adds PHP support. The quickest way is to drop this text into requirements.txt:

Sphinx==1.7.4
sphinx-rtd-theme==0.3.0
sphinxcontrib-phpdomain==0.4.1
doxyphp2sphinx>=1.0.1

Then set these two values in conf.py:

extensions = [
  "sphinxcontrib.phpdomain"
]
html_theme = 'sphinx_rtd_theme'

Add this to the end of conf.py:

# PHP Syntax
from sphinx.highlighting import lexers
from pygments.lexers.web import PhpLexer
lexers["php"] = PhpLexer(startinline=True, linenos=1)
lexers["php-annotations"] = PhpLexer(startinline=True, linenos=1)

# Set domain
primary_domain = "php"

And drop this content into _templates/breadcrumbs.html (explanation):

{%- extends "sphinx_rtd_theme/breadcrumbs.html" %}

{% block breadcrumbs_aside %}
{% endblock %}

Then finally re-install dependencies and re-build:

pip install -r requirements.txt
make html

The HTML output under _build will now appear as:

This setup gives us three things:

  • The documentation looks the same as Read the Docs.
  • We can use PHP snippets and class documentation.
  • There are no ‘Edit’ links, which is important because some of the files will be generated in the next steps.

doxyphp2sphinx

The doxyphp2sphinx tool will generate .rst files from the Doxygen XML files. This was installed from your requirements.txt in the last step, but you can also install it standalone via pip:

pip install doxyphp2sphinx

The only thing you need to specify is the name of the namespace that you are documenting, using :: as a separator.

doxyphp2sphinx FooCorp::Example

This command will read the xml/ subdirectory, and will create api.rst. It will fill the api/ directory with documentation for each class in the \FooCorp\Example namespace.

To verify that this has worked, check your class structure:

$ tree ../src
../src
├── Dooverwhacky.php
└── Widget.php

You should have documentation for each of these:

$ tree xml/ -P 'class*'
xml/
├── classFooCorp_1_1Example_1_1Dooverwhacky.xml
└── classFooCorp_1_1Example_1_1Widget.xml

And if you have the correct namespace name, you will have .rst files for each as well:

$ tree api
api
├── dooverwhacky.rst
└── widget.rst

Now, add a reference to api.rst somewhere in index.rst:

.. toctree::
   :maxdepth: 2
   :caption: API Documentation

   Classes <api.rst>

And re-compile:

make html

You can now navigate to your classes in the HTML documentation.

The quality of the generated class documentation can be improved by adding docstrings. An example of the generated documentation for a method is:

Writing documentation

As you add pages to your documentation, you can include PHP snippets and reference the usage of your classes.

.. code-block:: php

   <?php
   echo "Hello World"

Lorem ipsum dolor sit :class:`Dooverwhacky`, foo bar baz :meth:`Widget::getFeatureCount`.

This will create syntax highlighting for your examples and inline links to the generated API docs.

Beyond this, you will need to learn some reStructuredText. I found this reference to be useful.

Local docs build

A full build has these dependencies:

apt-get install doxygen make python-pip
pip install -r docs/requirements.txt

And these steps:

cd docs/
doxygen 
doxyphp2sphinx FooCorp::Example
make html

Cloud tools

Next, we will take this local build, and run it on a cloud setup instead, using these services.

  • GitHub
  • Read the docs

GitHub

I will assume that you know how to use Git and GitHub, and that you are creating code that is intended for re-use.

Add these files to your .gitignore:

docs/_build/
docs/warnings.log
docs/xml/
docs/html/

Upload the remainder of your repository to GitHub. The gfx-php project is an example of a working project with all of the correct files included.

To have the initial two build steps execute on Read the Docs, add this to the end of docs/conf.py. Don’t forget to update the namespace to match the command you were running locally.

# Regenerate API docs via doxygen + doxyphp2sphinx
import subprocess, os
read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'
if read_the_docs_build:
    subprocess.call(['doxygen', 'Doxyfile'])
    subprocess.call(['doxyphp2sphinx', 'FooCorp::Example'])

Read the Docs

After all this setup, Read the Docs should be able to build the project. Create the project on Read the Docs by using the Import from GitHub option. There are only two settings which need to be set:

The requirements file location must be docs/requirements.txt:

And the programming language should be PHP.

After this, you can go ahead and run a build.

As a last step, you will want to ensure that you have a Webhook set up on GitHub to trigger the builds automatically in future.

Conclusion

It is emerging as a best practice for small libraries to host their documentation with Read the Docs, but this is not yet common in the PHP world. Larger PHP projects tend to self-host documentation on the project website, but smaller projects will often have no hosted API documentation.

Once you write your docs, publishing them should be easy! Hopefully this provides a good example of what’s possible.

Acknowledgements

Credit where credit is due: The Breathe project fills this niche for C++ developers using Doxygen, and has been around for some time. Breathe is in the early stages of adding PHP support, but is not yet ready at the time of writing.

Quick statistics about documentation hosted on Read the Docs

Many open source projects host their online documentation with Read The Docs. Since I’m currently developing a tool for Read The Docs users, I recently took a look at the types of projects that are hosted there.

The data in this post is derived from the list of projects available in the public Read The Docs API, and has its limitations. In particular, spam, inactive, or abandoned projects are not filtered.

Totals

This data is current as of May 20, 2018. At that time, there were 90,129 total projects on readthedocs.io.

Starting out, I had assumed that the majority of projects had all four of these attributes:

  • Hosted in Git
  • Programmed in Python
  • Documented in English
  • Documentation generated with Sphinx

As it turned out, this particular combination is only used on 35.8% (32,218) of projects, so let’s take a look at how each of these vary.

Programming languages

The two main conclusions I drew by looking at the programming languages are:

  • Python is the largest developer community that is using the Read the Docs platform
  • A lot of projects are hosting documentation that is not tagged with a programming language
# % Projects Programming language
1 39.92% 35978 Not code (“just words”)
2 39.27% 35391 Python
3 9.27% 8354 (No language listed)
4 2.83% 2551 Javascript
5 2.29% 2060 PHP
6 1.15% 1040 C++
7 1.13% 1018 Java
8 0.85% 769 Other
9 0.77% 690 C#
10 0.71% 640 C
11 0.42% 380 CSS
12 0.34% 304 Go
13 0.31% 283 Ruby
14 0.31% 277 Julia
15 0.11% 96 Perl
16 0.06% 55 Objective C
17 0.06% 54 R
18 0.04% 36 TypeScript
19 0.03% 31 Scala
20 0.03% 30 Haskell
21 0.03% 29 Lua
22 0.02% 22 Swift
23 0.02% 15 CoffeeScript
24 0.02% 14 Visual Basic
25 0.01% 12 Groovy

Documentation languages

You might have guessed that English dominates in software documentation, but here is the data:

# % Projects Language
1 93.1% 83888 English (en)
2 2.4% 2178 Indonesian (id)
3 1.4% 1287 Chinese (zh, zh_TW, zh_CN)
4 0.5% 456 Russian (ru)
5 0.5% 425 Spanish (es)
6 0.5% 412 French (fr)
7 0.3% 254 Portuguese (pt)
8 0.3% 233 German (de)
9 0.2% 209 Italian (it)
10 0.2% 208 Japanese (jp)
0.6% 579 All other languages

In total, documentation on Read the Docs has been published in 54 languages. The “All other languages” item here represents 44 other languages.

Documentation build tool

The majority are using the Sphinx documentation generator. The table here counts sphinx, sphinx_htmldir and sphinx_singlehtml together.

# % Projects Tool
1 92.5% 83364 Sphinx
2 6.9% 6229 MkDocs
3 0.6% 536 Auto-select

Version control

Git is the version control system used for the vast majority of projects.

# % Projects Version control
1 96.5% 86988 Git
2 2.1% 1879 Mercurial
3 0.9% 816 SVN
4 0.5% 446 Bazaar

First impressions of the Rust programming language

I needed to write a small command-line utility recently, and thought that it would be a good chance to finally try out the Rust programming language.

I was hoping to learn the basics and implement this program in a few hours, but I found that it wasn’t trivial to just pick up Rust and start writing programs immediately. The compiler is quite strict about how you use variables, so progress is slow if you don’t really know what you are doing. Still, Rust allows you to produce programs with some valuable properties, so I’m planning to set aside some time to learn it properly.

In the meantime, these are a few of the first impressions that I had of Rust, as somebody completely new to this particular software ecosystem.

Initial installation is easy

I needed to install the rustc compiler and cargo dependency manager. I found that the packages in Debian testing were up-to-date, so I installed them directly from apt.

sudo apt-get install rustc cargo

From there, it took all of one minute to create a file called hello.rs, drop in the “Hello world” program, and see it run:

fn main() {
  println!("Hello");
}

The rustc compiler has a simple-enough invocation:

$ rustc hello.rs

And the resulting program ran as expected.

$ ./hello 
Hello

There were no surprises here. It seemed to work just like C so far.

Output files are large

The output binary file was much larger than I expected, at 246K.

$ ls -Ahl hello
-rwxr-xr-x 1 mike mike 246K Apr 13 21:55 hello

For comparison, I wrote out a similar program in C and it was 8K.

#include <stdio.h>

int main() {
  printf("Hello\n");
  return 0;
}
$ gcc hello.c -o hello
$ ./hello 
Hello
$ ls -Ahl hello
-rwxr-xr-x 1 mike mike 8.3K Apr 13 21:57 hello

The FAQ explained a few specific reasons why this was the case.

Compile is slow

Compilation took around a quarter of a second. This does not seem like much, but it is certainly slower than I expected for such trivial code.

$ time rustc hello.rs
real    0m0.298s

Compared with the C program:

$ time gcc hello.c -o hello
real    0m0.044s

Again, the FAQ points out that zero-cost abstractions incur a compile-time penalty.

Since compilation is a notoriously time-consuming activity with existing languages, I’m hoping that the compile times for Rust are acceptable for small projects.

Legendary compile errors

I’ve read about rustc being very strict, but printing good errors. So, I deleted a ! character and tried to re-compile.

fn main() {
  println("Hello");
}
$ rustc hello.rs 
error[E0423]: expected function, found macro `println`
 --> hello.rs:2:3
  |
2 |   println("Hello");
  |   ^^^^^^^ did you mean `println!(...)`?

error: aborting due to previous error

This error message suggested the correct code to use. This is better than what you get from most compilers, so I’m pretty happy with that.

Editor support is good

Next, I needed a better Rust editor. Eclipse support for Rust was present, but had a patchy maintenance status.

So I tried adding the Rust plugin for IntelliJ IDEA community edition, which is still a fully open source stack. This was point-and-click. The basic steps follow.

Open IntelliJ and click Configure:

Then Plugins:

Search for “rust”, and click Search in Repositories:

Then click Rust → Install, and wait:

Once it installs, restart the IDE.

The New Project dialog now shows Rust as an option.

I needed the rust-src package to fill out these options:

$ sudo apt-get install rust-src
$ apt-file list rust-src
/usr/src/rustc-1.24.1/src/ ..

In the IntelliJ Rust environment, most editor features worked, but the debugger was not supported.

So, IDE support is mostly there, but it’s not as easy to start hacking as something like Java.

Dependency management

I noticed that Cargo produces .lock files which contain checksums and exact versions of dependencies.

As a composer and yarn user, this was familiar.

“Clippy”

I couldn’t use the clippy linter to check my code, because it does not work on the stable Rust releases. The ‘nightly’ Rust releases are not available in the Debian archive.

So, for installing rustc, I should have used rustup, not apt-get. Lesson learned!

Documentation

I followed the “Guessing game tutorial” from the official Rust book in the IDE, which taught basic I/O, variable scope and control structures in Rust.

The go-to source reference for Rust is ‘The Rust Book’, which is also collaboratively built and freely licensed.

There is also an immense amount of high-quality discussion taking place between Rust users, which I found to be surprisingly accessible as a novice.

Having never used the language before, I didn’t have to ask any questions to get these basics working, which suggests that the available resources are pretty good.

Optimization: How I made my PHP code run 100 times faster

I’ve had a PHP wikitext parser as a dependency of some of my projects since 2012. It has always been a minor performance bottleneck, and I recently decided to do something about it.

I prepared an update to the code over the course of a month, and achieved a speedup of 142 times over the original code.

Before: 20.65 seconds, After: 0.145 seconds

A lot of the information that I could find online about PHP performance was very outdated, so I decided to write a bit about what I’ve learned. This post walks through the process I used, and the things which were slowing down my code.

This is a long read — I’ve included examples which show slow code, and what I replaced them with. If you’re a serious PHP programmer, then read on!

Lesson 1: Know when to optimize

Conventional wisdom seems to dictate that making your code faster is a waste of developer time.

I think it makes you a better programmer to occasionally optimize something in a language that you normally work with. Having a well-calibrated intuition about how your code will run is part of becoming proficient in a language, and you will tend to create fewer performance problems if you’ve got that intuition.

But you do need to be selective about it. This code has survived in the wild for over five years, and I think I will still be using it in another five. This code is also a good candidate because it does not access external resources, so there is only one component to examine.

Lesson 2: Write tests

In the spirit of MakeItWorkMakeItRightMakeItFast, I started by checking my test suite so that I could refactor the code with confidence.

In my case, I haven’t got good unit tests, but I have input files that I can feed through the parser to compare with known-good HTML output, which serves the same purpose:

php example.php > out.txt
diff good.txt out.txt

I ran this after every change to the code, so that I could be sure that the changes were not affecting the output.

Lesson 3: Profile your code & Question your assumptions

Code profiling allows you to see how each part of your program is contributing to its overall run-time. This helps you to target your optimization efforts.

The two main debuggers for PHP are Zend and Xdebug, which can both profile your code. I have xdebug installed, which is the free debugger, and I use the Eclipse IDE, which is the free IDE. Unfortunately, the built-in profiling tool in Eclipse seems to only support the Zend debugger, so I have to profile my scripts on the command-line.

The best sources of information for this are:

On Debian or Ubuntu, xdebug is installed via apt-get:

sudo apt-get install php-cli php-xdebug

On Fedora, the package is called php-pecl-xdebug, and is installed as:

sudo dnf install php-pecl-xdebug

Next, I executed a slow-running example script with profiling enabled:

php -dxdebug.profiler_enable=1 -dxdebug.profiler_output_dir=. example.php

This produces a profile file, which you can inspect with any valgrind-compatible tool. I used kcachegrind:

sudo apt-get install kcachegrind

And for fedora:

sudo dnf install kcachegrind

You can locate and open the profile on the command-line like so:

$ ls
$ kcachegrind cachegrind.out.13385

Before profiling, I had guessed that the best way to speed up the code would be to reduce the amount of string concatenation. I have lots of tight loops which append characters one-at-a-time:

$buffer .= "$c"

Xdebug showed me that my guess was wrong, and I would have wasted a lot of time if I tried to remove string concatenation.

kcachegrind screen capture

Instead, it was clear that I was

  • Calculating the same thing hundreds of times over.
  • Doing it inefficiently.

Lesson 4: Avoid multibyte string functions

I had used functions from the mbstring extension (mb_strlen, mb_substr) to replace strlen and substr throughout my code. This is the simplest way to add UTF-8 support when iterating strings, is commonly suggested, and is a bad idea.

What people do

If you have an ASCII string in PHP and want to iterate over each byte, the idiomatic way to do it is with a for loop which indexes into the string, something like this:

<?php
// Make a test string
$testString = str_repeat('a', 60000);
// Loop through test string
$len = strlen($testString);
for($i = 0; $i < $len; $i++) {
  $c = substr($testString, $i, 1);
  // Do work on $c
  // ...
}

I’ve used substr here so that I can show that it has the same usage as mb_substr, which generally operates on UTF-8 characters. The idiomatic PHP for iterating over a multi-byte string one character at a time would be:

<?php
// Make a test string
$testString = str_repeat('a', 60000);
// Loop through test string
$len = mb_strlen($testString);
for($i = 0; $i < $len; $i++) {
  $c = mb_substr($testString, $i, 1);
  // Do work on $c
  // ...
}

Since mb_substr needs to parse UTF-8 from the start of the string each time it is called, the second snippet runs in quadratic time, while the snippet that calls substr in a loop runs in linear time.

With a few kilobytes of input, this makes mb_substr unacceptably slow.

substr: 0.03 seconds, mb_substr: 4.23 seconds

Averaging over 10 runs, the mb_substr snippet takes 4.23 seconds, while the snippet using substr takes 0.03 seconds.
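
For reference, timings like these can be reproduced with a simple harness along the following lines. This is only a sketch (the original measurements may have been taken differently), and each snippet is best run in its own process for fairer numbers.

<?php
// Minimal timing harness: average the run-time of a snippet over several runs.
$runs = 10;
$total = 0.0;
for ($run = 0; $run < $runs; $run++) {
    $start = microtime(true);
    // --- snippet under test ---
    $testString = str_repeat('a', 60000);
    $len = strlen($testString);
    for ($i = 0; $i < $len; $i++) {
        $c = substr($testString, $i, 1);
    }
    // --- end snippet ---
    $total += microtime(true) - $start;
}
printf("Average over %d runs: %.4f seconds\n", $runs, $total / $runs);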

What people should do

Split your strings into bytes or characters before you iterate, and write methods which operate on arrays rather than strings.

You can use str_split to iterate over bytes:

<?php
// Make a test string
$testString = str_repeat('a', 60000);
// Loop through test string
$testArray = str_split($testString);
$len = count($testArray);
for($i = 0; $i < $len; $i++) {
  $c = $testArray[$i];
  // Do work on $c
  // ...
}

And for unicode strings, use preg_split. I learned about this trick from StackOverflow, but it might not be the fastest way. Please leave a comment if you have an alternative!

<?php
// Make a test string
$testString = str_repeat('a', 60000);
// Loop through test string
$testArray = preg_split('//u', $testString, -1, PREG_SPLIT_NO_EMPTY);
$len = count($testArray);
for($i = 0; $i < $len; $i++) {
  $c = $testArray[$i];
  // Do work on $c
  // ...
}

By converting the string to an array, you can pay the penalty of decoding the UTF-8 up-front. This is a few milliseconds at the start of the script, rather than a few milliseconds each time you need to read a character.

str_split: 0.0097s, preg_split: 0.0160s

After discovering this faster alternative to mb_substr, I systematically removed every mb_substr and mb_strlen from the code I was working on.

Lesson 5: Optimize for the most common case

Around 50% of the remaining runtime was spent in a method which expanded templates.

To parse wikitext, you first need to expand templates, which involves detecting tags like {{ template-name | foo=bar }} and <noinclude></noinclude>.

My 40 kilobyte test file had fewer than 100 instances of {, <, | and =, so I added a short-circuit to the code to skip most of the processing, for most of the characters.

<?php
self::$preprocessorChars = [
    '<' => true,
    '=' => true,
    '|' => true,
    '{' => true
];

// ...
for ($i = 0; $i < $len; $i++) {
    $c = $textChars[$i];
    if (!isset(self::$preprocessorChars[$c])) {
        /* Fast exit for characters that do not start a tag. */
        $parsed .= $c;
        continue;
    }
   // ... Slower processing 
}

The slower processing is now avoided 99.75% of the time.

Checking for the presence of a key in a map is very fast. To illustrate, here are two examples which each branch on {, <, | and =.

This one uses a map to check each character:

<?php
// Make a test string
$testString = str_repeat('a', 600000);
$chars = [
    '<' => true,
    '=' => true,
    '|' => true,
    '{' => true
];
// Loop through test string
$testArray = preg_split('//u', $testString, -1, PREG_SPLIT_NO_EMPTY);
$len = count($testArray);
$parsed = "";
for($i = 0; $i < $len; $i++) {
  $c = $testArray[$i];
  if(!isset($chars[$c])) {
    $parsed .= $c;
    continue;
  }
  // Never executed
}

While one uses no map, and has four !== checks instead:

<?php
// Make a test string
$testString = str_repeat('a', 600000);
// Loop through test string
$testArray = preg_split('//u', $testString, -1, PREG_SPLIT_NO_EMPTY);
$len = count($testArray);
$parsed = "";
for($i = 0; $i < $len; $i++) {
  $c = $testArray[$i];
  if($c !== "<" && $c !== "=" && $c !== "|" && $c !== "{") {
    $parsed .= $c;
    continue;
  }
  // Never executed
}

Even though the run time of each script includes the generation of a 600kB test string, the difference is still visible:

0.29 seconds with map, 0.37 seconds without map

Averaging over 10 runs, the code took 0.29 seconds when using a map, while it took 0.37 seconds to run the example which used four !== statements.

I was a little surprised by this result, but I’ll let the data speak for itself rather than try to explain why this is the case.

Lesson 6: Share data between functions

The next item to appear in the profiler was the copious use of array_slice.
My code uses recursive descent, and it was constantly slicing up the input to pass information around. The array slicing had replaced earlier string slicing, which was even slower.

I refactored the code to pass around the entire string with indices rather than actually cutting it up.

As a contrived example, these scripts each use a (very unnecessary) recursive-descent parser to take words from the dictionary and transform them like this:

example --> (example!)

The first example slices up the input array at each recursion step:

<?php
function handleWord(string $word) {
  return "($word!)\n";
}

/**
 * Parse a word up to the next newline.
 */
function parseWord(array $textChars) {
  $parsed = "";
  $len = count($textChars);
  for($i = 0; $i < $len; $i++) {
    $c = $textChars[$i];
    if($c === "\n") {
      // Word is finished because we hit a newline
      $start = $i + 1; // Past the newline
      $remainderChars = array_slice($textChars, $start, $len - $start);
      return array('parsed' => handleWord($parsed), 'remainderChars' => $remainderChars);
    }
    $parsed .= $c;
  }
  // Word is finished because we hit the end of the input
  return array('parsed' => handleWord($parsed), 'remainderChars' => []);
}

/**
 * Accept newline-delimited dictionary
 */
function parseDictionary(array $textChars) {
  $parsed = "";
  $len = count($textChars);
  for($i = 0; $i < $len; $i++) {
    $c = $textChars[$i];
    if($c === "\n") {
      // Not a word...
      continue;
    }
    // This is part of a word
    $start = $i;
    $remainderChars = array_slice($textChars, $start, $len - $start);
    $result = parseWord($remainderChars);
    $textChars = $result['remainderChars'];
    $len = count($textChars);
    $i = -1;
    $parsed .= $result['parsed'];
  }
  return array('parsed' => $parsed, 'remainderChars' => []);
}

// Load file, split into characters, parse, print result
$testString = file_get_contents("words");
$testArray = preg_split('//u', $testString, -1, PREG_SPLIT_NO_EMPTY);
$ret = parseDictionary($testArray);
file_put_contents("words2", $ret['parsed']);

While the second one always takes an index into the input array:

<?php
function handleWord(string $word) {
  return "($word!)\n";
}

/**
 * Parse a word up to the next newline.
 */
function parseWord(array $textChars, int $idxFrom = 0) {
  $parsed = "";
  $len = count($textChars);
  for($i = $idxFrom; $i < $len; $i++) {
    $c = $textChars[$i];
    if($c === "\n") {
      // Word is finished because we hit a newline
      $start = $i + 1; // Past the newline
      return array('parsed' => handleWord($parsed), 'remainderIdx' => $start);
    }
    $parsed .= $c;
  }
  // Word is finished because we hit the end of the input
  return array('parsed' => handleWord($parsed), 'remainderIdx' => $i);
}

/**
 * Accept newline-delimited dictionary
 */
function parseDictionary(array $textChars, int $idxFrom = 0) {
  $parsed = "";
  $len = count($textChars);
  for($i = $idxFrom; $i < $len; $i++) {
    $c = $textChars[$i];
    if($c === "\n") {
      // Not a word...
      continue;
    }
    // This is part of a word
    $start = $i;
    $result = parseWord($textChars, $start);
    $i = $result['remainderIdx'] - 1;
    $parsed .= $result['parsed'];
  }
  return array('parsed' => $parsed, 'remainderIdx' => $i);
}

// Load file, split into characters, parse, print result
$testString = file_get_contents("words");
$testArray = preg_split('//u', $testString, -1, PREG_SPLIT_NO_EMPTY);
$ret = parseDictionary($testArray);
file_put_contents("words2", $ret['parsed']);

The run-time difference between these examples is again very pronounced:

3.04s with slicing, 0.0302s with no slicing

Averaging over 10 runs, the code snippet which extracts sub-arrays took 3.04 seconds, while the code that passes around the entire array ran in 0.0302 seconds.

It’s amazing that such an obvious inefficiency in my code had been hiding behind larger problems before.

Lesson 7: Use scalar type hinting

Scalar type hinting looks like this:

function foo(int $bar, string $baz) {
    // ...
}

This is the secret PHP performance feature that they don’t tell you about. It does not actually change the speed of your code, but does ensure that it can’t be run on the slower PHP releases before 7.0.

PHP 7.0 was released in 2015, and it has been widely benchmarked at around twice the speed of PHP 5.6 for many workloads.

I think it’s reasonable to have a dependency on a supported version of your platform, and performance improvements like this are a good reason to update your minimum supported version of PHP.

By breaking compatibility with scalar type hints, you ensure that your software does not appear to “work” in a degraded state.

Simplifying the compatibility landscape will also make performance a more tractable problem.
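
If you manage the project with Composer, you can also state the minimum PHP version explicitly as a platform requirement, so that an unsupported interpreter is rejected at install time rather than failing at runtime. A minimal sketch of the relevant composer.json entry:

{
    "require": {
        "php": ">=7.0"
    }
}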

Lesson 8: Ignore advice that isn’t backed up by (relevant) data

While I was updating this code, I found a lot of outdated claims about PHP performance which did not hold true for the versions that I support.

To call out a few myths that still seem to be alive:

  • Style of quotes impacts performance.
  • Use of by-val is slower than by-ref for variable passing.
  • String concatenation is bad for performance.

I attempted to implement each of these, and wasted a lot of time. Thankfully, I was measuring the run-time and using version control, so it was easy for me to identify and discard changes which had a negligible or negative performance impact.

If somebody makes a claim about something being fast or slow in PHP, it is best to assume that it doesn't apply to you unless you see some example code with timings on a recent PHP version.
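
If you do want to check a claim yourself, a throwaway timing script is usually enough. Here is a rough sketch (plain microtime() timing rather than a PhpBench benchmark) that tests the first myth on the list above, comparing single-quoted and double-quoted string literals:

<?php
// Rough sketch: compare two variants with wall-clock timing
// Not as rigorous as PhpBench, but enough to sanity-check a claim
$iterations = 1000000;

$start = microtime(true);
for($i = 0; $i < $iterations; $i++) {
  $a = 'literal string'; // single quotes
}
$singleQuoted = microtime(true) - $start;

$start = microtime(true);
for($i = 0; $i < $iterations; $i++) {
  $a = "literal string"; // double quotes
}
$doubleQuoted = microtime(true) - $start;

printf("Single quotes: %.4fs, double quotes: %.4fs\n", $singleQuoted, $doubleQuoted);

On any recent PHP version the two figures should be practically identical, which is the point: the quoting style is resolved when the script is compiled, and makes no measurable difference at runtime.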

Conclusion

If you’ve read this far, then I hope you’ve seen that modern PHP is not an intrinsically slow language. A significant speed-up is probably achievable on any real-world code-base with such endemic performance issues.

Before: 20.65 seconds, After: 0.145 seconds

To repeat the graphic from the introduction: the test file could be parsed in 20.65 seconds on the old code, and 0.145 seconds on the improved code (averaging over 10 runs, as before).

At this point I declared my efforts “good enough” and moved on. The job is done: although another pass could speed it up further, this code is no longer slow enough to justify the effort.

Since I’ve closed the book on PHP 5, I would be interested in knowing whether there is a faster way to parse UTF-8 with the new IntlChar functions, but I’ll have to save that for the next project.

Now that you’ve seen some very inefficient ways to write PHP, I also hope that you will be able to avoid introducing similar problems in your own projects!

Using a receipt printer with the Amazon Echo

Today, I’m going to share this write-up by fellow developer Chris, who used the escpos-php thermal printing library as part of a setup which printed shopping lists via voice commands, using the Alexa Voice Service API to send the lists back to a Raspberry Pi.

Naturally, the easiest solution […] is to print it in thermal paper… Now, combine this with a voice interface, such as ALEXA, and you made yourself a voice controlled list printer.

I found out about this one through a blog comment, and it’s a recommended read for anybody who is interested in how all of these parts integrate.

When I first uploaded this printing library four years ago, the Amazon Echo did not exist yet, and I was solving a very specific problem. It's interesting to see new applications still appearing for a technology as old as thermal printing, and I'm certainly glad to see my software popping up in cool projects like this.