Profiling and Logging in Query Monitor

Query Monitor 3.0 quietly introduced a feature that allows developers to profile the running time and memory usage of their code. Query Monitor 3.1 adds a PSR-3 compatible logger that allows developers to log debugging messages to Query Monitor.

Let’s take a look at profiling and logging in detail.

Profiling

Basic profiling can be performed and displayed in the Timings panel in Query Monitor using actions in your code:

// Start the 'foo' timer:
do_action( 'qm/start', 'foo' );

// Run some code
my_potentially_slow_function();

// Stop the 'foo' timer:
do_action( 'qm/stop', 'foo' );

The time taken and approximate memory usage between the qm/start and qm/stop actions for the given timer name will be recorded and shown in the Timings panel. Timers can be nested, although be aware that nesting reduces the accuracy of the memory usage calculations.
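A nested timer might look like this (the function names here are placeholders for your own code):

// Start the outer timer:
do_action( 'qm/start', 'outer' );

// Start a timer nested inside it:
do_action( 'qm/start', 'inner' );
my_inner_function();
do_action( 'qm/stop', 'inner' );

my_outer_function();

// Stop the outer timer:
do_action( 'qm/stop', 'outer' );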

Timers can also make use of laps with the qm/lap action:

// Start the 'bar' timer:
do_action( 'qm/start', 'bar' );

// Iterate over some data:
foreach ( range( 1, 10 ) as $i ) {
    my_potentially_slow_function( $i );
    do_action( 'qm/lap', 'bar' );
}

// Stop the 'bar' timer:
do_action( 'qm/stop', 'bar' );

Here’s what the Timings panel looks like:

Query Monitor's Timing Panel

Note that the times and memory usage displayed in the Timings panel should be treated as approximations, because they are recorded at the PHP level and can be skewed by your environment and by other code. If you require highly accurate timings, you’ll need to use a low-level profiling tool such as XHProf.

Logging

Debugging messages can be sent to the Logs panel in Query Monitor using actions in your code:

do_action( 'qm/debug', 'This happened!' );

You can use any of the following actions, which correspond to the PSR-3 and syslog log levels:

  • qm/emergency
  • qm/alert
  • qm/critical
  • qm/error
  • qm/warning
  • qm/notice
  • qm/info
  • qm/debug

A log level of warning or higher will trigger a notification in Query Monitor’s admin toolbar.
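For example, the first message below is recorded in the Logs panel only, while the second also triggers the toolbar notification (the message text is illustrative):

// Recorded in the Logs panel only:
do_action( 'qm/info', 'Remote API request started' );

// Recorded in the Logs panel and triggers an admin toolbar notification:
do_action( 'qm/warning', 'Remote API request timed out' );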

Here’s what the Logs panel looks like:

Query Monitor's Logging Panel

Contextual interpolation can be used via the curly brace syntax:

do_action( 'qm/warning', 'Unexpected value of {foo} encountered', [
    'foo' => $foo,
] );

A WP_Error or Exception object can be passed directly into the logger:

if ( is_wp_error( $response ) ) {
    do_action( 'qm/error', $response );
}

try {
    // your code
} catch ( Exception $e ) {
    do_action( 'qm/error', $e );
}

Finally, the static logging methods on the QM class can be used instead of calling do_action():

QM::error( 'Everything is broken' );
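There is one static method per log level, mirroring the action names. This sketch assumes the methods follow the PSR-3 signature and accept an optional context array, as the actions do:

// Static equivalents of the corresponding actions:
QM::warning( 'Unexpected value of {foo} encountered', [ 'foo' => $foo ] );
QM::debug( 'This happened!' );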

I hope you find these two new features useful. As always, feature requests and bug reports can be submitted on Query Monitor’s GitHub repo.

11 thoughts on “Profiling and Logging in Query Monitor”

  1. Curly braces are great for simple variables but not so much for objects, even with a print_r.
    Currently I trigger two actions: the message first, then the object. This way I get a nicely printed version of the context.
    What would be the best way to pass an array or object as context for qm/debug?
    Thanks.

  2. It would be an awesome feature for Query Monitor to be able to enable WP_DEBUG and logging from an admin page and see the errors there. Sometimes when there’s a fatal PHP error on a front-end page, the admin bar doesn’t show at all, so the debug bar isn’t accessible either to see what is causing the error. An admin page that lists the debug log entries would make this easy to circumvent.

  3. The profiling feature is really useful – thanks!

    Is there a way to log `qm/start` profiling data (time and memory use) to a file or database? I want to use this to profile background tasks (independent of a web request); where can I find this data?

    • There’s no way to do this in QM by default but you could hook into those actions yourself and log them. For example:

      add_action( 'qm/start', function( $name ) {
          error_log( "Started: {$name}" );
      } );

      • Firstly great tool – really impressive.

        The reason for this comment is that I did some custom development complementary to Query Monitor, along the lines of what Rua was asking. Some details are below; I would be happy to share more if there is interest.

        I needed to be able to log the performance of my site as I was seeing periodic hang-ups. Query Monitor showed that there were slow-running queries, but I needed to be able to track the performance of queries over time. Since Query Monitor didn’t offer this, I did some research into $wpdb->queries and ended up creating a custom table in the MySQL database supporting the WP instance (call it wp_ext_performance). On the key page on my site I added a PHP script at the end that wrote an entry to the performance table for each query that ran long (I chose 0.1s). The fields I tracked are:
        query_seq – unique sequence number
        page_start_tm – time that the PHP script was called
        query_txt – actual SQL query
        query_run_tm – elapsed run time

        I was expecting to find that certain queries were consistently running long, but instead I determined that performance of the site was dropping every five minutes (00, 05, 10, 15, …, 50, 55), with 93% of the slow-running queries occurring in those 12 minutes of each hour and the remaining 48 minutes seeing only 7% of the slow queries. This strongly suggests that a batch process (probably running on the back end), kicked off every five minutes, was chewing up resources but not showing up in cPanel monitoring tools.

        Query Monitor is great where there are consistent issues, but having that additional tracking could assist in other use cases. I would be happy to share any info/code if it would be helpful for future Query Monitor versions.

        • Persistent logging is definitely something I want to work on, and it’s one of the most requested features for Query Monitor. Before that, though, I want to switch the front end over to client-side rendering with React: so it can handle substantially more data without choking the browser, so the data sources can be swapped out to allow historical data to be viewed, and so the components can be reused elsewhere, such as in a dashboard. Only once that’s ready can I add a persistent logging feature that’s really usable and reliable.

          Regarding the slowdown spikes: it could be that a scheduled event is using a lot of resources, but I’d be surprised if it was having such a large impact. Maybe a periodic backup is locking the tables? Or maybe a cron or WP-Cron event is churning the data and invalidating the indexes. I’m not sure; I would ask your web host about it!

  4. Hello!
    First, thank you for making this amazing tool!

    I have a question.
    Logging is not working inside a handler added with add_action.
    It works outside of it, though.
    Do you know why?
    I tried both do_action and the static method.

    Thank you

