To use the MySQL Performance Schema, these configuration considerations apply:
The Performance Schema must be configured into MySQL Server at build time to make it available. Performance Schema support is included in binary MySQL distributions. If you are building from source, you must ensure that it is configured into the build as described in Section 3.1, “Performance Schema Build Configuration”.
The Performance Schema must be enabled at server startup for event collection to occur. Specific Performance Schema features can be enabled at server startup or at runtime to control which types of event collection occur. See Section 3.2, “Performance Schema Startup Configuration”, Section 3.3, “Performance Schema Runtime Configuration”, and Section 3.3.2, “Performance Schema Event Filtering”.
For the Performance Schema to be available, it must be configured into the MySQL server at build time. Binary MySQL distributions provided by Oracle Corporation are configured to support the Performance Schema. If you use a binary MySQL distribution from another provider, check with the provider whether the distribution has been appropriately configured.
If you build MySQL from a source distribution, enable the
Performance Schema by running CMake with the
WITH_PERFSCHEMA_STORAGE_ENGINE
option enabled:
shell> cmake . -DWITH_PERFSCHEMA_STORAGE_ENGINE=1
Configuring MySQL with the
-DWITHOUT_PERFSCHEMA_STORAGE_ENGINE=1
option prevents inclusion of the Performance Schema, so if you
want it included, do not use this option. See
MySQL Source-Configuration Options.
If you install MySQL over a previous installation that was
configured without the Performance Schema (or with an older
version of the Performance Schema that may not have all the
current tables), run mysql_upgrade after
starting the server to ensure that the
performance_schema database exists with all
current tables. Then restart the server. One indication that you
need to do this is the presence of messages such as the
following in the error log:
[ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
...
To verify whether a server was built with Performance Schema
support, check its help output. If the Performance Schema is
available, the output will mention several variables with names
that begin with performance_schema:
shell> mysqld --verbose --help
...
--performance_schema
Enable the performance schema.
--performance_schema_events_waits_history_long_size=#
Number of rows in events_waits_history_long.
...
You can also connect to the server and look for a line that
names the PERFORMANCE_SCHEMA
storage engine in the output from SHOW
ENGINES:
mysql> SHOW ENGINES\G
...
Engine: PERFORMANCE_SCHEMA
Support: YES
Comment: Performance Schema
Transactions: NO
XA: NO
Savepoints: NO
...
If the Performance Schema was not configured into the server at
build time, no row for
PERFORMANCE_SCHEMA will appear in
the output from SHOW ENGINES. You
might see performance_schema listed in the
output from SHOW DATABASES, but
it will have no tables and you will not be able to use it.
A line for PERFORMANCE_SCHEMA in
the SHOW ENGINES output means
that the Performance Schema is available, not that it is
enabled. To enable it, you must do so at server startup, as
described in the next section.
Assuming that the Performance Schema is available, it is enabled
by default as of MySQL 5.6.6. Before 5.6.6, it is disabled by
default. To enable or disable it explicitly, start the server
with the performance_schema
variable set to an appropriate value. For example, use these
lines in your my.cnf file:
[mysqld]
performance_schema=ON
If the server is unable to allocate any internal buffer during
Performance Schema initialization, the Performance Schema
disables itself and sets
performance_schema to
OFF, and the server runs without
instrumentation.
As of MySQL 5.6.4, the Performance Schema permits instrument and
consumer configuration at server startup, which previously was
possible only at runtime using
UPDATE statements for the
setup_instruments and
setup_consumers tables. This change
was made because configuration at runtime is too late to disable
instruments that have already been initialized during server
startup. For example, the
wait/synch/mutex/sql/LOCK_open mutex is
initialized once during server startup, so attempts to disable
the corresponding instrument at runtime have no effect.
To control an instrument at server startup, use an option of this form:
--performance-schema-instrument='instrument_name=value'
Here, instrument_name is an
instrument name such as
wait/synch/mutex/sql/LOCK_open, and
value is one of these values:
OFF, FALSE, or
0: Disable the instrument
ON, TRUE, or
1: Enable and time the instrument
COUNTED: Enable and count (rather than
time) the instrument
Each
--performance-schema-instrument
option can specify only one instrument name, but multiple
instances of the option can be given to configure multiple
instruments. In addition, patterns are permitted in instrument
names to configure instruments that match the pattern. To
configure all condition synchronization instruments as enabled
and counted, use this option:
--performance-schema-instrument='wait/synch/cond/%=COUNTED'
To disable all instruments, use this option:
--performance-schema-instrument='%=OFF'
Longer instrument name strings take precedence over shorter pattern names, regardless of order. For information about specifying patterns to select instruments, see Section 3.3.4, “Naming Instruments or Consumers for Filtering Operations”.
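The longest-match rule can be sketched as follows. This is an illustrative model of how the options are resolved, not server code, and the pattern set shown is made up:

```python
# Illustrative model of --performance-schema-instrument resolution:
# when several patterns match an instrument name, the longest (most
# specific) pattern wins, regardless of the order the options appeared.
from fnmatch import fnmatchcase

def resolve(instrument, rules):
    """rules maps pattern -> value ('ON', 'OFF', or 'COUNTED').
    '%' is the Performance Schema wildcard; map it to fnmatch's '*'."""
    best = None
    for pattern, value in rules.items():
        if fnmatchcase(instrument, pattern.replace('%', '*')):
            if best is None or len(pattern) > len(best[0]):
                best = (pattern, value)
    # No option matched: the instrument keeps its compiled-in default.
    return best[1] if best else None

rules = {
    '%': 'OFF',                                # disable everything ...
    'wait/synch/cond/%': 'COUNTED',            # ... except condition waits
    'wait/synch/mutex/sql/LOCK_open': 'ON',    # ... and this one mutex
}
print(resolve('wait/io/file/sql/binlog', rules))          # OFF
print(resolve('wait/synch/cond/sql/COND_server', rules))  # COUNTED
print(resolve('wait/synch/mutex/sql/LOCK_open', rules))   # ON
```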
An unrecognized instrument name is ignored. It is possible that a plugin installed later may create the instrument, at which time the name is recognized and configured.
To control a consumer at server startup, use an option of this form:
--performance-schema-consumer-consumer_name=value
Here, consumer_name is a consumer
name such as events_waits_history, and
value is one of these values:
OFF, FALSE, or
0: Do not collect events for the consumer
ON, TRUE, or
1: Collect events for the consumer
For example, to enable the
events_waits_history consumer, use this
option:
--performance-schema-consumer-events-waits-history=ON
The permitted consumer names can be found by examining the
setup_consumers table. Patterns are
not permitted. Consumer names in the
setup_consumers table use
underscores, but for consumers set at startup, dashes and
underscores within the name are equivalent.
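That dash/underscore equivalence amounts to a simple normalization; as an illustrative sketch (not server code):

```python
# Dashes and underscores are interchangeable in consumer names given at
# startup; both spellings select the same setup_consumers row.
def normalize_consumer_name(option_name):
    return option_name.replace('-', '_')

print(normalize_consumer_name('events-waits-history'))  # events_waits_history
print(normalize_consumer_name('events_waits_history'))  # events_waits_history
```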
The Performance Schema includes several system variables that provide configuration information:
mysql> SHOW VARIABLES LIKE 'perf%';
+--------------------------------------------------------+---------+
| Variable_name                                          | Value   |
+--------------------------------------------------------+---------+
| performance_schema                                     | ON      |
| performance_schema_accounts_size                       | 100     |
| performance_schema_digests_size                        | 200     |
| performance_schema_events_stages_history_long_size     | 10000   |
| performance_schema_events_stages_history_size          | 10      |
| performance_schema_events_statements_history_long_size | 10000   |
| performance_schema_events_statements_history_size      | 10      |
| performance_schema_events_waits_history_long_size      | 10000   |
| performance_schema_events_waits_history_size           | 10      |
| performance_schema_hosts_size                          | 100     |
| performance_schema_max_cond_classes                    | 80      |
| performance_schema_max_cond_instances                  | 1000    |
...
The performance_schema variable
is ON or OFF to indicate
whether the Performance Schema is enabled or disabled. The other
variables indicate table sizes (number of rows) or memory
allocation values.
With the Performance Schema enabled, the number of Performance Schema instances affects the server memory footprint, perhaps to a large extent. It may be necessary to tune the values of Performance Schema system variables to find the number of instances that balances insufficient instrumentation against excessive memory consumption.
To change the value of Performance Schema system variables, set
them at server startup. For example, put the following lines in
a my.cnf file to change the sizes of the
history tables for wait events:
[mysqld]
performance_schema
performance_schema_events_waits_history_size=20
performance_schema_events_waits_history_long_size=15000
As of MySQL 5.6.6, the Performance Schema automatically sizes the values of several of its parameters at server startup if they are not set explicitly. For example, it sizes the parameters that control the sizes of the events waits tables this way. To see which parameters are autosized under this policy, use mysqld --verbose --help and look for those with a default value of −1, or see Chapter 10, Performance Schema System Variables.
For each autosized parameter that is not set at server startup (or is set to −1), the Performance Schema determines how to set its value based on the values of the following system variables, which are considered as “hints” about how you have configured your MySQL server:
max_connections
open_files_limit
table_definition_cache
table_open_cache
To override autosizing for a given parameter, set it to a value other than −1 at startup. In this case, the Performance Schema assigns it the specified value.
At runtime, SHOW VARIABLES
displays the actual values that autosized parameters were set
to.
If the Performance Schema is disabled, its autosized parameters
remain set to −1 and SHOW
VARIABLES displays −1.
Performance Schema setup tables contain information about monitoring configuration:
mysql> SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
    -> WHERE TABLE_SCHEMA = 'performance_schema'
    -> AND TABLE_NAME LIKE 'setup%';
+-------------------+
| TABLE_NAME        |
+-------------------+
| setup_actors      |
| setup_consumers   |
| setup_instruments |
| setup_objects     |
| setup_timers      |
+-------------------+
You can examine the contents of these tables to obtain
information about Performance Schema monitoring characteristics.
If you have the UPDATE privilege,
you can change Performance Schema operation by modifying setup
tables to affect how monitoring occurs. For additional details
about these tables, see
Section 8.2, “Performance Schema Setup Tables”.
To see which event timers are selected, query the
setup_timers table:
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | CYCLE       |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
The NAME value indicates the type of
instrument to which the timer applies, and
TIMER_NAME indicates which timer applies to
those instruments. The timer applies to instruments whose
names begin with a component matching the
NAME value.
To change the timer for a type of instrument, update the
TIMER_NAME value of the corresponding row. For example, to
use the NANOSECOND timer for
the wait timer:
mysql> UPDATE setup_timers SET TIMER_NAME = 'NANOSECOND'
    -> WHERE NAME = 'wait';
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | NANOSECOND  |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
For discussion of timers, see Section 3.3.1, “Performance Schema Event Timing”.
The setup_instruments and
setup_consumers tables list the
instruments for which events can be collected and the types of
consumers for which event information actually is collected,
respectively. Other setup tables enable further modification of
the monitoring configuration.
Section 3.3.2, “Performance Schema Event Filtering”, discusses how
you can modify these tables to affect event collection.
If there are Performance Schema configuration changes that must
be made at runtime using SQL statements and you would like these
changes to take effect each time the server starts, put the
statements in a file and start the server with the
--init-file=file_name
option. This strategy can also be useful if you have multiple
monitoring configurations, each tailored to produce a different
kind of monitoring, such as casual server health monitoring,
incident investigation, application behavior troubleshooting,
and so forth. Put the statements for each monitoring
configuration into their own file and specify the appropriate
file as the --init-file argument
when you start the server.
Events are collected by means of instrumentation added to the server source code. Instruments time events, which is how the Performance Schema provides an idea of how long events take. It is also possible to configure instruments not to collect timing information. This section discusses the available timers and their characteristics, and how timing values are represented in events.
Two Performance Schema tables provide timer information:
performance_timers lists
the available timers and their characteristics.
setup_timers indicates
which timers are used for which instruments.
Each timer row in setup_timers
must refer to one of the timers listed in
performance_timers.
Timers vary in precision and amount of overhead. To see what
timers are available and their characteristics, check the
performance_timers table:
mysql> SELECT * FROM performance_timers;
+-------------+-----------------+------------------+----------------+
| TIMER_NAME  | TIMER_FREQUENCY | TIMER_RESOLUTION | TIMER_OVERHEAD |
+-------------+-----------------+------------------+----------------+
| CYCLE       | 2389029850      | 1                | 72             |
| NANOSECOND  | 1000000000      | 1                | 112            |
| MICROSECOND | 1000000         | 1                | 136            |
| MILLISECOND | 1036            | 1                | 168            |
| TICK        | 105             | 1                | 2416           |
+-------------+-----------------+------------------+----------------+
The columns have these meanings:
The TIMER_NAME column shows the names
of the available timers. CYCLE refers
to the timer that is based on the CPU (processor) cycle
counter. The timers in
setup_timers that you can
use are those that do not have NULL
in the other columns. If the values associated with a
given timer name are NULL, that timer
is not supported on your platform.
TIMER_FREQUENCY indicates the number
of timer units per second. For a cycle timer, the
frequency is generally related to the CPU speed. The
value shown was obtained on a system with a 2.4GHz
processor. The other timers are based on fixed fractions
of seconds. For TICK, the frequency
may vary by platform (for example, some use 100
ticks/second, others 1000 ticks/second).
TIMER_RESOLUTION indicates the number
of timer units by which timer values increase at a time.
If a timer has a resolution of 10, its value increases
by 10 each time.
TIMER_OVERHEAD is the minimal number
of cycles of overhead to obtain one timing with the
given timer. The overhead per event is twice the value
displayed because the timer is invoked at the beginning
and end of the event.
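As a worked example (a sketch using the sample values from the table above, which vary by platform), the per-event timing cost can be estimated from TIMER_OVERHEAD:

```python
# Estimate timing cost per event from the sample performance_timers
# values shown above. TIMER_OVERHEAD is measured in cycles for a single
# timer read; each timed event reads the timer twice (start and end).
CYCLE_HZ = 2389029850        # sample CYCLE TIMER_FREQUENCY (a 2.4GHz CPU)

def overhead_ns_per_event(timer_overhead_cycles):
    cycles = 2 * timer_overhead_cycles       # one read at start, one at end
    return cycles / CYCLE_HZ * 1e9           # cycles -> nanoseconds

for name, overhead in [('CYCLE', 72), ('NANOSECOND', 112), ('MICROSECOND', 136)]:
    print(f'{name:<11} ~{overhead_ns_per_event(overhead):.0f} ns per timed event')
```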
To see which timers are in effect or to change timers,
access the setup_timers table:
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | CYCLE       |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
mysql> UPDATE setup_timers SET TIMER_NAME = 'MICROSECOND'
    -> WHERE NAME = 'idle';
mysql> SELECT * FROM setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | CYCLE       |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
By default, the Performance Schema uses the best timer available for each instrument type, but you can select a different one.
To time wait events, the most important criterion is to
reduce overhead, at the possible expense of the timer
accuracy, so using the CYCLE timer is the
best.
The time a statement (or stage) takes to execute is in
general orders of magnitude larger than the time it takes to
execute a single wait. To time statements, the most
important criterion is to have an accurate measure, which is
not affected by changes in processor frequency, so using a
timer which is not based on cycles is the best. The default
timer for statements is NANOSECOND. The
extra “overhead” compared to the
CYCLE timer is not significant, because
the overhead caused by calling a timer twice (once when the
statement starts, once when it ends) is orders of magnitude
less compared to the CPU time used to execute the statement
itself. Using the CYCLE timer has no
benefit here, only drawbacks.
The precision offered by the cycle counter depends on
processor speed. If the processor runs at 1 GHz (one billion
cycles/second) or higher, the cycle counter delivers
sub-nanosecond precision. Using the cycle counter is much
cheaper than getting the actual time of day. For example,
the standard gettimeofday() function can
take hundreds of cycles, which is an unacceptable overhead
for data gathering that may occur thousands or millions of
times per second.
Cycle counters also have disadvantages:
End users expect to see timings in wall-clock units, such as fractions of a second. Converting from cycles to fractions of seconds can be expensive. For this reason, the conversion is a quick and fairly rough multiplication operation.
Processor cycle rate might change, such as when a laptop goes into power-saving mode or when a CPU slows down to reduce heat generation. If a processor's cycle rate fluctuates, conversion from cycles to real-time units is subject to error.
Cycle counters might be unreliable or unavailable
depending on the processor or the operating system. For
example, on Pentiums, the instruction is
RDTSC (an assembly-language rather
than a C instruction) and it is theoretically possible
for the operating system to prevent user-mode programs
from using it.
Some processor details related to out-of-order execution or multiprocessor synchronization might cause the counter to seem fast or slow by up to 1000 cycles.
MySQL works with cycle counters on x386 (Windows, OS X, Linux, Solaris, and other Unix flavors), PowerPC, and IA-64.
Rows in Performance Schema tables that store current events
and historical events have three columns to represent timing
information: TIMER_START and
TIMER_END indicate when an event started
and finished, and TIMER_WAIT indicates
event duration.
The setup_instruments table has
an ENABLED column to indicate the
instruments for which to collect events. The table also has
a TIMED column to indicate which
instruments are timed. If an instrument is not enabled, it
produces no events. If an enabled instrument is not timed,
events produced by the instrument have
NULL for the
TIMER_START,
TIMER_END, and
TIMER_WAIT timer values. This in turn
causes those values to be ignored when calculating the sum,
minimum, maximum, and average time values in summary tables.
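The way NULL timer values drop out of summary aggregates can be sketched like this (an illustration with made-up TIMER_WAIT values, mirroring how SQL aggregate functions skip NULL):

```python
# Made-up TIMER_WAIT values in picoseconds; None represents the SQL NULL
# produced by an enabled-but-untimed instrument.
waits = [1200, None, 800, None, 1000]

timed = [w for w in waits if w is not None]   # aggregates skip NULL
summary = {
    'COUNT_STAR': len(waits),                 # every event is still counted
    'SUM_TIMER_WAIT': sum(timed),
    'MIN_TIMER_WAIT': min(timed),
    'MAX_TIMER_WAIT': max(timed),
    'AVG_TIMER_WAIT': sum(timed) // len(timed),
}
print(summary)
```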
Internally, times within events are stored in units given by the timer in effect when event timing begins. For display when events are retrieved from Performance Schema tables, times are shown in picoseconds (trillionths of a second) to normalize them to a standard unit, regardless of which timer is selected.
Modifications to the
setup_timers table affect
monitoring immediately. Events already in progress may use
the original timer for the begin time and the new timer for
the end time. To avoid unpredictable results after you make
timer changes, use TRUNCATE
TABLE to reset Performance Schema statistics.
The timer baseline (“time zero”) occurs at
Performance Schema initialization during server startup.
TIMER_START and
TIMER_END values in events represent
picoseconds since the baseline.
TIMER_WAIT values are durations in
picoseconds.
Picosecond values in events are approximate. Their accuracy
is subject to the usual forms of error associated with
conversion from one unit to another. If the
CYCLE timer is used and the processor
rate varies, there might be drift. For these reasons, it is
not reasonable to look at the TIMER_START
value for an event as an accurate measure of time elapsed
since server startup. On the other hand, it is reasonable to
use TIMER_START or
TIMER_WAIT values in ORDER
BY clauses to order events by start time or
duration.
The choice of picoseconds in events rather than a value such
as microseconds has a performance basis. One implementation
goal was to show results in a uniform time unit, regardless
of the timer. In an ideal world this time unit would look
like a wall-clock unit and be reasonably precise; in other
words, microseconds. But to convert cycles or nanoseconds to
microseconds, it would be necessary to perform a division
for every instrumentation. Division is expensive on many
platforms. Multiplication is not expensive, so that is what
is used. Therefore, the time unit is an integer multiple of
the highest possible TIMER_FREQUENCY
value, using a multiplier large enough to ensure that there
is no major precision loss. The result is that the time unit
is “picoseconds.” This precision is spurious,
but the decision enables overhead to be minimized.
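The multiply-only conversion can be sketched as follows. This is a simplified illustration using the sample CYCLE frequency from earlier; the integer floor division here loses a little precision that the server's actual scaling avoids:

```python
# Convert raw CYCLE readings to picoseconds with one multiplication per
# event, mirroring the rationale above: the division is done once at
# startup to produce a fixed multiplier, never on the per-event hot path.
TIMER_FREQUENCY = 2389029850          # sample CYCLE ticks per second
PICOS_PER_SECOND = 10**12

# Computed once at startup: picoseconds per timer unit (~418 ps/cycle).
multiplier = PICOS_PER_SECOND // TIMER_FREQUENCY

def to_picoseconds(raw_timer_value):
    return raw_timer_value * multiplier   # cheap: a single multiply

print(to_picoseconds(144))                # 144 cycles expressed in picoseconds
```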
Before MySQL 5.6.26, while a wait, stage, or statement event
is executing, the respective current-event tables display
the event with TIMER_START populated, but
with TIMER_END and
TIMER_WAIT set to
NULL:
events_waits_current
events_stages_current
events_statements_current
As of MySQL 5.6.26, current-event timing provides more information. To make it possible to determine how long a not-yet-completed event has been running, the timer columns are set as follows:
TIMER_START is populated (unchanged
from previous behavior)
TIMER_END is populated with the
current timer value
TIMER_WAIT is populated with the time
elapsed so far (TIMER_END −
TIMER_START)
Events that have not yet completed have an
END_EVENT_ID value of
NULL. To assess time elapsed so far for
an event, use the TIMER_WAIT column.
Therefore, to identify events that have not yet completed
and have taken longer than N
picoseconds thus far, monitoring applications can use this
expression in queries:
WHERE END_EVENT_ID IS NULL AND TIMER_WAIT > N
Event identification as just described assumes that the
corresponding instruments have ENABLED
and TIMED set to YES
and that the relevant consumers are enabled.
Events are processed in a producer/consumer fashion:
Instrumented code is the source for events and produces
events to be collected. The
setup_instruments table lists
the instruments for which events can be collected, whether
they are enabled, and (for enabled instruments) whether to
collect timing information:
mysql> SELECT * FROM setup_instruments;
+------------------------------------------------------------+---------+-------+
| NAME                                                       | ENABLED | TIMED |
+------------------------------------------------------------+---------+-------+
...
| wait/synch/mutex/sql/LOCK_global_read_lock                 | YES     | YES   |
| wait/synch/mutex/sql/LOCK_global_system_variables          | YES     | YES   |
| wait/synch/mutex/sql/LOCK_lock_db                          | YES     | YES   |
| wait/synch/mutex/sql/LOCK_manager                          | YES     | YES   |
...
The setup_instruments table
provides the most basic form of control over event
production. To further refine event production based on
the type of object or thread being monitored, other tables
may be used as described in
Section 3.3.3, “Event Pre-Filtering”.
Performance Schema tables are the destinations for events
and consume events. The
setup_consumers table lists
the types of consumers to which event information can be
sent and whether they are enabled:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME                           | ENABLED |
+--------------------------------+---------+
| events_stages_current          | NO      |
| events_stages_history          | NO      |
| events_stages_history_long     | NO      |
| events_statements_current      | YES     |
| events_statements_history      | NO      |
| events_statements_history_long | NO      |
| events_waits_current           | NO      |
| events_waits_history           | NO      |
| events_waits_history_long      | NO      |
| global_instrumentation         | YES     |
| thread_instrumentation         | YES     |
| statements_digest              | YES     |
+--------------------------------+---------+
Filtering can be done at different stages of performance monitoring:
Pre-filtering. This is done by modifying Performance Schema configuration so that only certain types of events are collected from producers, and collected events update only certain consumers. To do this, enable or disable instruments or consumers. Pre-filtering is done by the Performance Schema and has a global effect that applies to all users.
Reasons to use pre-filtering:
To reduce overhead. Performance Schema overhead should be minimal even with all instruments enabled, but perhaps you want to reduce it further. Or you do not care about timing events and want to disable the timing code to eliminate timing overhead.
To avoid filling the current-events or history tables with events in which you have no interest. Pre-filtering leaves more “room” in these tables for instances of rows for enabled instrument types. If you enable only file instruments with pre-filtering, no rows are collected for nonfile instruments. With post-filtering, nonfile events are collected, leaving fewer rows for file events.
To avoid maintaining some kinds of event tables. If you disable a consumer, the server does not spend time maintaining destinations for that consumer. For example, if you do not care about event histories, you can disable the history table consumers to improve performance.
Post-filtering. This
involves the use of WHERE clauses in
queries that select information from Performance Schema
tables, to specify which of the available events you want
to see. Post-filtering is performed on a per-user basis
because individual users select which of the available
events are of interest.
Reasons to use post-filtering:
To avoid making decisions for individual users about which event information is of interest.
To use the Performance Schema to investigate a performance issue when the restrictions to impose using pre-filtering are not known in advance.
The following sections provide more detail about pre-filtering and provide guidelines for naming instruments or consumers in filtering operations. For information about writing queries to retrieve information (post-filtering), see Chapter 4, Performance Schema Queries.
Pre-filtering is done by the Performance Schema and has a global effect that applies to all users. Pre-filtering can be applied to either the producer or consumer stage of event processing:
To configure pre-filtering at the producer stage, several tables can be used:
setup_instruments
indicates which instruments are available. An
instrument disabled in this table produces no events
regardless of the contents of the other
production-related setup tables. An instrument enabled
in this table is permitted to produce events, subject
to the contents of the other tables.
setup_objects controls
whether the Performance Schema monitors particular
table objects.
threads indicates whether
monitoring is enabled for each server thread.
setup_actors determines
the initial monitoring state for new foreground
threads.
To configure pre-filtering at the consumer stage, modify
the setup_consumers table.
This determines the destinations to which events are sent.
setup_consumers also
implicitly affects event production. If a given event will
not be sent to any destination (that is, will not be
consumed), the Performance Schema does not produce it.
Modifications to any of these tables affect monitoring immediately, with some exceptions:
Modifications to some instruments in the
setup_instruments table are
effective only at server startup; changing them at runtime
has no effect. This affects primarily mutexes, conditions,
and rwlocks in the server, although there may be other
instruments for which this is true.
Modifications to the
setup_actors table affect
only foreground threads created subsequent to the
modification, not existing threads.
When you change the monitoring configuration, the Performance
Schema does not flush the history tables. Events already
collected remain in the current-events and history tables
until displaced by newer events. If you disable instruments,
you might need to wait a while before events for them are
displaced by newer events of interest. Alternatively, use
TRUNCATE TABLE to empty the
history tables.
After making instrumentation changes, you might want to
truncate the summary tables to clear aggregate information for
previously collected events. Except for
events_statements_summary_by_digest,
the effect of TRUNCATE TABLE
for summary tables is to reset the summary columns to 0 or
NULL, not to remove rows.
The following sections describe how to use specific tables to control Performance Schema pre-filtering.
The setup_instruments table
lists the available instruments:
mysql> SELECT * FROM setup_instruments;
+------------------------------------------------------------+---------+-------+
| NAME                                                       | ENABLED | TIMED |
+------------------------------------------------------------+---------+-------+
...
| wait/synch/mutex/sql/LOCK_global_read_lock                 | YES     | YES   |
| wait/synch/mutex/sql/LOCK_global_system_variables          | YES     | YES   |
| wait/synch/mutex/sql/LOCK_lock_db                          | YES     | YES   |
| wait/synch/mutex/sql/LOCK_manager                          | YES     | YES   |
...
| wait/synch/rwlock/sql/LOCK_grant                           | YES     | YES   |
| wait/synch/rwlock/sql/LOGGER::LOCK_logger                  | YES     | YES   |
| wait/synch/rwlock/sql/LOCK_sys_init_connect                | YES     | YES   |
| wait/synch/rwlock/sql/LOCK_sys_init_slave                  | YES     | YES   |
...
| wait/io/file/sql/binlog                                    | YES     | YES   |
| wait/io/file/sql/binlog_index                              | YES     | YES   |
| wait/io/file/sql/casetest                                  | YES     | YES   |
| wait/io/file/sql/dbopt                                     | YES     | YES   |
...
To control whether an instrument is enabled, set its
ENABLED column to YES
or NO. To configure whether to collect
timing information for an enabled instrument, set its
TIMED value to YES or
NO. Setting the TIMED
column affects Performance Schema table contents as
described in Section 3.3.1, “Performance Schema Event Timing”.
Modifications to most
setup_instruments rows affect
monitoring immediately. For some instruments, modifications
are effective only at server startup; changing them at
runtime has no effect. This affects primarily mutexes,
conditions, and rwlocks in the server, although there may be
other instruments for which this is true.
The setup_instruments table
provides the most basic form of control over event
production. To further refine event production based on the
type of object or thread being monitored, other tables may
be used as described in
Section 3.3.3, “Event Pre-Filtering”.
The following examples demonstrate possible operations on
the setup_instruments table.
These changes, like other pre-filtering operations, affect
all users. Some of these queries use the
LIKE operator with a pattern to
match instrument names. For additional information about
specifying patterns to select instruments, see
Section 3.3.4, “Naming Instruments or Consumers for Filtering Operations”.
Disable all instruments:
mysql> UPDATE setup_instruments SET ENABLED = 'NO';
Now no events will be collected.
Disable all file instruments, adding them to the current set of disabled instruments:
mysql> UPDATE setup_instruments SET ENABLED = 'NO'
    -> WHERE NAME LIKE 'wait/io/file/%';
Disable only file instruments, enable all other instruments:
mysql> UPDATE setup_instruments
    -> SET ENABLED = IF(NAME LIKE 'wait/io/file/%', 'NO', 'YES');
Enable only the instruments in the
mysys library, disabling all others:
mysql> UPDATE setup_instruments
    -> SET ENABLED = CASE WHEN NAME LIKE '%/mysys/%' THEN 'YES' ELSE 'NO' END;
Disable a specific instrument:
mysql> UPDATE setup_instruments SET ENABLED = 'NO'
    -> WHERE NAME = 'wait/synch/mutex/mysys/TMPDIR_mutex';
To toggle the state of an instrument,
“flip” its ENABLED
value:
mysql> UPDATE setup_instruments
    -> SET ENABLED = IF(ENABLED = 'YES', 'NO', 'YES')
    -> WHERE NAME = 'wait/synch/mutex/mysys/TMPDIR_mutex';
Disable timing for all events:
mysql> UPDATE setup_instruments SET TIMED = 'NO';
The setup_objects table
controls whether the Performance Schema monitors particular
table objects. The initial
setup_objects contents look
like this:
mysql> SELECT * FROM setup_objects;
+-------------+--------------------+-------------+---------+-------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | ENABLED | TIMED |
+-------------+--------------------+-------------+---------+-------+
| TABLE | mysql | % | NO | NO |
| TABLE | performance_schema | % | NO | NO |
| TABLE | information_schema | % | NO | NO |
| TABLE | % | % | YES | YES |
+-------------+--------------------+-------------+---------+-------+
Modifications to the
setup_objects table affect
object monitoring immediately.
The OBJECT_TYPE column indicates the type
of object to which a row applies. TABLE
filtering affects table I/O events
(wait/io/table/sql/handler instrument)
and table lock events
(wait/lock/table/sql/handler instrument).
The OBJECT_SCHEMA and
OBJECT_NAME columns should contain a
literal schema or table name, or '%' to
match any name.
The ENABLED column indicates whether
matching objects are monitored, and TIMED
indicates whether to collect timing information. Setting the
TIMED column affects Performance Schema
table contents as described in
Section 3.3.1, “Performance Schema Event Timing”.
The effect of the default object configuration is to
instrument all tables except those in the
mysql,
INFORMATION_SCHEMA, and
performance_schema databases. (Tables in
the INFORMATION_SCHEMA database are not
instrumented regardless of the contents of
setup_objects; the row for
information_schema.% simply makes this
default explicit.)
When the Performance Schema checks for a match in
setup_objects, it tries to find
more specific matches first. For rows that match a given
OBJECT_TYPE, the Performance Schema
checks rows in this order:
Rows with OBJECT_SCHEMA='literal' and
OBJECT_NAME='literal'.
Rows with OBJECT_SCHEMA='literal' and
OBJECT_NAME='%'.
Rows with OBJECT_SCHEMA='%' and
OBJECT_NAME='%'.
For example, with a table db1.t1, the
Performance Schema looks in TABLE rows
for a match for 'db1' and
't1', then for 'db1'
and '%', then for '%'
and '%'. The order in which matching
occurs matters because different matching
setup_objects rows can have
different ENABLED and
TIMED values.
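The lookup order can be sketched as follows. This is an illustrative Python sketch, not server code; the function name and row representation are hypothetical:

```python
# Illustrative sketch of setup_objects row matching for TABLE objects.
# Candidate (schema, name) pairs are tried from most to least specific;
# the first row found wins. Not server code; names are hypothetical.
def match_object(rows, schema, table):
    """rows: list of (OBJECT_SCHEMA, OBJECT_NAME, ENABLED, TIMED)."""
    for s, n in ((schema, table), (schema, '%'), ('%', '%')):
        for row in rows:
            if (row[0], row[1]) == (s, n):
                return row  # most specific match found
    return None

# Default setup_objects contents (TABLE rows only):
rows = [
    ('mysql',              '%', 'NO',  'NO'),
    ('performance_schema', '%', 'NO',  'NO'),
    ('information_schema', '%', 'NO',  'NO'),
    ('%',                  '%', 'YES', 'YES'),
]
print(match_object(rows, 'mysql', 'user'))  # ('mysql', '%', 'NO', 'NO')
print(match_object(rows, 'db1', 't1'))      # ('%', '%', 'YES', 'YES')
```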
For table-related events, the Performance Schema combines
the contents of setup_objects
with setup_instruments to
determine whether to enable instruments and whether to time
enabled instruments:
For tables that match a row in
setup_objects, table
instruments produce events only if
ENABLED is YES in
both setup_instruments and
setup_objects.
The TIMED values in the two tables
are combined, so that timing information is collected
only when both values are YES.
Suppose that setup_objects
contains the following TABLE rows that
apply to db1, db2, and
db3:
+-------------+---------------+-------------+---------+-------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | ENABLED | TIMED |
+-------------+---------------+-------------+---------+-------+
| TABLE       | db1           | t1          | YES     | YES   |
| TABLE       | db1           | t2          | NO      | NO    |
| TABLE       | db2           | %           | YES     | YES   |
| TABLE       | db3           | %           | NO      | NO    |
| TABLE       | %             | %           | YES     | YES   |
+-------------+---------------+-------------+---------+-------+
If a table-related instrument in
setup_instruments has an
ENABLED value of NO,
events for the object are not monitored. If the
ENABLED value is YES,
event monitoring occurs according to the
ENABLED value in the relevant
setup_objects row:
db1.t1 events are monitored
db1.t2 events are not monitored
db2.t3 events are monitored
db3.t4 events are not monitored
db4.t5 events are monitored
Similar logic applies for combining the
TIMED columns from the
setup_instruments and
setup_objects tables to
determine whether to collect event timing information.
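Combining the two tables amounts to a logical AND on their ENABLED values (and likewise on TIMED). The following Python sketch, assuming the example rows above and an enabled table instrument, reproduces the listed outcomes; it is illustrative only:

```python
# Sketch: ENABLED must be YES in BOTH setup_instruments and the matching
# setup_objects row for table events to be produced. Illustrative names.
OBJECT_ROWS = [                    # (schema, name, enabled, timed)
    ('db1', 't1', 'YES', 'YES'),
    ('db1', 't2', 'NO',  'NO'),
    ('db2', '%',  'YES', 'YES'),
    ('db3', '%',  'NO',  'NO'),
    ('%',   '%',  'YES', 'YES'),
]

def object_row(schema, table):
    # Most specific match first, as described earlier.
    for s, n in ((schema, table), (schema, '%'), ('%', '%')):
        for row in OBJECT_ROWS:
            if (row[0], row[1]) == (s, n):
                return row
    return None

def monitored(instrument_enabled, schema, table):
    row = object_row(schema, table)
    return instrument_enabled == 'YES' and row[2] == 'YES'

for name in ('db1.t1', 'db1.t2', 'db2.t3', 'db3.t4', 'db4.t5'):
    schema, table = name.split('.')
    print(name, monitored('YES', schema, table))
# db1.t1 True, db1.t2 False, db2.t3 True, db3.t4 False, db4.t5 True
```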
If a persistent table and a temporary table have the same
name, matching against
setup_objects rows occurs the
same way for both. It is not possible to enable monitoring
for one table but not the other. However, each table is
instrumented separately.
The ENABLED column was added in MySQL
5.6.3. For earlier versions that have no
ENABLED column,
setup_objects is used only to
enable monitoring for objects that match some row in the
table. There is no way to explicitly disable instrumentation
using the table.
The threads table contains a
row for each server thread. Each row contains information
about a thread and indicates whether monitoring is enabled
for it. For the Performance Schema to monitor a thread,
these things must be true:
The thread_instrumentation consumer
in the setup_consumers
table must be YES.
The threads.INSTRUMENTED column must
be YES.
Monitoring occurs only for those thread events produced
from instruments that are enabled in the
setup_instruments table.
The INSTRUMENTED column in the
threads table indicates the
monitoring state for each thread. For foreground threads
(resulting from client connections), the initial
INSTRUMENTED value is determined by
whether the user account associated with the thread matches
any row in the setup_actors
table.
For background threads, there is no associated user.
INSTRUMENTED is YES by
default and setup_actors is not
consulted.
The initial setup_actors
contents look like this:
mysql> SELECT * FROM setup_actors;
+------+------+------+
| HOST | USER | ROLE |
+------+------+------+
| % | % | % |
+------+------+------+
The HOST and USER
columns should contain a literal host or user name, or
'%' to match any name.
The Performance Schema uses the HOST and
USER columns to match each new foreground
thread. (ROLE is unused.) The
INSTRUMENTED value for the thread becomes
YES if any row matches,
NO otherwise. This enables instrumenting
to be applied selectively per host, user, or combination of
host and user.
By default, monitoring is enabled for all new foreground
threads because the
setup_actors table initially
contains a row with '%' for both
HOST and USER. To
perform more limited matching such as to enable monitoring
only for some foreground threads, you must delete this row
because it matches any connection.
Suppose that you modify
setup_actors as follows:
TRUNCATE TABLE setup_actors;
Now setup_actors is empty and there are
no rows that could match incoming connections. Consequently,
the Performance Schema sets the
INSTRUMENTED column to
NO for all new foreground threads.
Suppose that you further modify
setup_actors:
INSERT INTO setup_actors (HOST,USER,ROLE) VALUES('localhost','joe','%');
INSERT INTO setup_actors (HOST,USER,ROLE) VALUES('%','sam','%');
Now the Performance Schema determines how to set the
INSTRUMENTED value for new connection
threads as follows:
If joe connects from the local host,
the connection matches the first inserted row.
If joe connects from any other host,
there is no match.
If sam connects from any host, the
connection matches the second inserted row.
For any other connection, there is no match.
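The matching rules can be expressed compactly. Here is an illustrative Python sketch (the remote host name in the example is an assumption):

```python
# Sketch: deriving a new foreground thread's INSTRUMENTED value from
# setup_actors. '%' in HOST or USER matches any value. Not server code.
ACTORS = [('localhost', 'joe'), ('%', 'sam')]   # (HOST, USER) rows

def instrumented(host, user):
    match = any(h in ('%', host) and u in ('%', user) for h, u in ACTORS)
    return 'YES' if match else 'NO'

print(instrumented('localhost', 'joe'))           # YES (first row)
print(instrumented('remote.example.com', 'joe'))  # NO  (no match)
print(instrumented('remote.example.com', 'sam'))  # YES (second row)
print(instrumented('localhost', 'max'))           # NO  (no match)
```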
Modifications to the
setup_actors table affect only
foreground threads created subsequent to the modification,
not existing threads. To affect existing threads, modify the
INSTRUMENTED column of
threads table rows.
The setup_consumers table lists
the available consumer types and which are enabled:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| events_stages_current | NO |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | NO |
| events_waits_current | NO |
| events_waits_history | NO |
| events_waits_history_long | NO |
| global_instrumentation | YES |
| thread_instrumentation | YES |
| statements_digest | YES |
+--------------------------------+---------+
Modify the setup_consumers
table to affect pre-filtering at the consumer stage and
determine the destinations to which events are sent. To
enable or disable a consumer, set its
ENABLED value to YES
or NO.
Modifications to the
setup_consumers table affect
monitoring immediately.
If you disable a consumer, the server does not spend time maintaining destinations for that consumer. For example, if you do not care about historical event information, disable the history consumers:
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'NO' WHERE NAME LIKE '%history%';
The consumer settings in the
setup_consumers table form a
hierarchy from higher levels to lower. The following
principles apply:
Destinations associated with a consumer receive no events unless the Performance Schema checks the consumer and the consumer is enabled.
A consumer is checked only if all consumers it depends on (if any) are enabled.
If a consumer is not checked, or is checked but is disabled, other consumers that depend on it are not checked.
Dependent consumers may have their own dependent consumers.
If an event would not be sent to any destination, the Performance Schema does not produce it.
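These principles can be illustrated with a small Python sketch of the dependency check. The consumer names come from the table above; the dependency map and function are illustrative, not server internals:

```python
# Sketch: a consumer's setting takes effect only if every consumer it
# depends on is itself enabled; otherwise its setting is never checked.
DEPENDS_ON = {
    'global_instrumentation': None,
    'thread_instrumentation': 'global_instrumentation',
    'events_waits_current':   'thread_instrumentation',
    'events_waits_history':   'events_waits_current',
    'events_waits_history_long': 'events_waits_current',
}

def is_active(consumer, enabled):
    parent = DEPENDS_ON[consumer]
    if parent is not None and not is_active(parent, enabled):
        return False   # dependency chain broken: setting is ignored
    return enabled[consumer]

enabled = {
    'global_instrumentation': True,
    'thread_instrumentation': True,
    'events_waits_current':   False,
    'events_waits_history':   True,   # YES, but its parent is NO
    'events_waits_history_long': False,
}
print(is_active('events_waits_current', enabled))  # False
print(is_active('events_waits_history', enabled))  # False: never checked
```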
The following lists describe the available consumer values. For discussion of several representative consumer configurations and their effect on instrumentation, see Section 3.3.3.5, “Example Consumer Configurations”.
Global and Thread Consumers
global_instrumentation is the highest
level consumer. If
global_instrumentation is
NO, it disables global
instrumentation. All other settings are lower level and
are not checked; it does not matter what they are set
to. No global or per thread information is maintained
and no individual events are collected in the
current-events or event-history tables. If
global_instrumentation is
YES, the Performance Schema maintains
information for global states and also checks the
thread_instrumentation consumer.
thread_instrumentation is checked
only if global_instrumentation is
YES. Otherwise, if
thread_instrumentation is
NO, it disables thread-specific
instrumentation and all lower-level settings are
ignored. No information is maintained per thread and no
individual events are collected in the current-events or
event-history tables. If
thread_instrumentation is
YES, the Performance Schema maintains
thread-specific information and also checks
events_xxx_current consumers.
Wait Event Consumers
These consumers require both
global_instrumentation and
thread_instrumentation to be
YES or they are not checked. If checked,
they act as follows:
events_waits_current, if
NO, disables collection of individual
wait events in the
events_waits_current table.
If YES, it enables wait event
collection and the Performance Schema checks the
events_waits_history and
events_waits_history_long consumers.
events_waits_history is not checked
if events_waits_current is
NO. Otherwise, an
events_waits_history value of
NO or YES disables
or enables collection of wait events in the
events_waits_history table.
events_waits_history_long is not
checked if events_waits_current is
NO. Otherwise, an
events_waits_history_long value of
NO or YES disables
or enables collection of wait events in the
events_waits_history_long
table.
Stage Event Consumers
These consumers require both
global_instrumentation and
thread_instrumentation to be
YES or they are not checked. If checked,
they act as follows:
events_stages_current, if
NO, disables collection of individual
stage events in the
events_stages_current
table. If YES, it enables stage event
collection and the Performance Schema checks the
events_stages_history and
events_stages_history_long consumers.
events_stages_history is not checked
if events_stages_current is
NO. Otherwise, an
events_stages_history value of
NO or YES disables
or enables collection of stage events in the
events_stages_history
table.
events_stages_history_long is not
checked if events_stages_current is
NO. Otherwise, an
events_stages_history_long value of
NO or YES disables
or enables collection of stage events in the
events_stages_history_long
table.
Statement Event Consumers
These consumers require both
global_instrumentation and
thread_instrumentation to be
YES or they are not checked. If checked,
they act as follows:
events_statements_current, if
NO, disables collection of individual
statement events in the
events_statements_current
table. If YES, it enables statement
event collection and the Performance Schema checks the
events_statements_history and
events_statements_history_long
consumers.
events_statements_history is not
checked if events_statements_current
is NO. Otherwise, an
events_statements_history value of
NO or YES disables
or enables collection of statement events in the
events_statements_history
table.
events_statements_history_long is not
checked if events_statements_current
is NO. Otherwise, an
events_statements_history_long value
of NO or YES
disables or enables collection of statement events in
the
events_statements_history_long
table.
Statement Digest Consumer
This consumer requires
global_instrumentation to be
YES or it is not checked. There is no
dependency on the statement event consumers, so you can
obtain statistics per digest without having to collect
statistics in
events_statements_current,
which is advantageous in terms of overhead. Conversely, you
can get detailed statements in
events_statements_current
without digests (the DIGEST and
DIGEST_TEXT columns will be
NULL).
The consumer settings in the
setup_consumers table form a
hierarchy from higher levels to lower. The following
discussion describes how consumers work, showing specific
configurations and their effects as consumer settings are
enabled progressively from high to low. The consumer values
shown are representative. The general principles described
here apply to other consumer values that may be available.
The configuration descriptions occur in order of increasing functionality and overhead. If you do not need the information provided by enabling lower-level settings, disable them and the Performance Schema will execute less code on your behalf and you will have less information to sift through.
The setup_consumers table
contains the following hierarchy of values:
global_instrumentation
thread_instrumentation
events_waits_current
events_waits_history
events_waits_history_long
events_stages_current
events_stages_history
events_stages_history_long
events_statements_current
events_statements_history
events_statements_history_long
statements_digest
In the consumer hierarchy, the consumers for waits, stages, and statements are all at the same level. This differs from the event nesting hierarchy, for which wait events nest within stage events, which nest within statement events.
If a given consumer setting is NO, the
Performance Schema disables the instrumentation associated
with the consumer and ignores all lower-level settings. If a
given setting is YES, the Performance
Schema enables the instrumentation associated with it and
checks the settings at the next lowest level. For a
description of the rules for each consumer, see
Section 3.3.3.4, “Pre-Filtering by Consumer”.
For example, if global_instrumentation is
enabled, thread_instrumentation is
checked. If thread_instrumentation is
enabled, the
events_xxx_current
consumers are checked. If, of these,
events_waits_current is enabled,
events_waits_history and
events_waits_history_long are checked.
Each of the following configuration descriptions indicates which setup elements the Performance Schema checks and which output tables it maintains (that is, for which tables it collects information).
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | NO |
...
+---------------------------+---------+
In this configuration, nothing is instrumented.
Setup elements checked:
Table setup_consumers,
consumer global_instrumentation
Output tables maintained:
None
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+---------------------------+---------+
| NAME | ENABLED |
+---------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | NO |
...
+---------------------------+---------+
In this configuration, instrumentation is maintained only for global states. Per-thread instrumentation is disabled.
Additional setup elements checked, relative to the preceding configuration:
Table setup_consumers,
consumer thread_instrumentation
Table setup_instruments
Table setup_objects
Table setup_timers
Additional output tables maintained, relative to the preceding configuration:
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | NO |
...
| events_stages_current | NO |
...
| events_statements_current | YES |
...
+--------------------------------+---------+
In this configuration, instrumentation is maintained globally and per thread. No individual events are collected in the current-events or event-history tables.
Additional setup elements checked, relative to the preceding configuration:
Table setup_consumers,
consumers
events_xxx_current,
where xxx is
waits, stages,
statements
Table setup_actors
Column threads.instrumented
Additional output tables maintained, relative to the preceding configuration:
events_xxx_summary_by_yyy_by_event_name,
where xxx is
waits, stages,
statements; and
yyy is
thread, user,
host, account
Server configuration state:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | NO |
| events_waits_history_long | NO |
| events_stages_current | YES |
| events_stages_history | NO |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | NO |
...
+--------------------------------+---------+
In this configuration, instrumentation is maintained globally and per thread. Individual events are collected in the current-events table, but not in the event-history tables.
Additional setup elements checked, relative to the preceding configuration:
Consumers
events_xxx_history,
where xxx is
waits, stages,
statements
Consumers
events_xxx_history_long,
where xxx is
waits, stages,
statements
Additional output tables maintained, relative to the preceding configuration:
events_xxx_current,
where xxx is
waits, stages,
statements
The preceding configuration collects no event history
because the
events_xxx_history and
events_xxx_history_long
consumers are disabled. Those consumers can be enabled
separately or together to collect event history per thread,
globally, or both.
This configuration collects event history per thread, but not globally:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | YES |
| events_waits_history_long | NO |
| events_stages_current | YES |
| events_stages_history | YES |
| events_stages_history_long | NO |
| events_statements_current | YES |
| events_statements_history | YES |
| events_statements_history_long | NO |
...
+--------------------------------+---------+
Event-history tables maintained for this configuration:
events_xxx_history,
where xxx is
waits, stages,
statements
This configuration collects event history globally, but not per thread:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | NO |
| events_waits_history_long | YES |
| events_stages_current | YES |
| events_stages_history | NO |
| events_stages_history_long | YES |
| events_statements_current | YES |
| events_statements_history | NO |
| events_statements_history_long | YES |
...
+--------------------------------+---------+
Event-history tables maintained for this configuration:
events_xxx_history_long,
where xxx is
waits, stages,
statements
This configuration collects event history per thread and globally:
mysql> SELECT * FROM setup_consumers;
+--------------------------------+---------+
| NAME | ENABLED |
+--------------------------------+---------+
| global_instrumentation | YES |
| thread_instrumentation | YES |
| events_waits_current | YES |
| events_waits_history | YES |
| events_waits_history_long | YES |
| events_stages_current | YES |
| events_stages_history | YES |
| events_stages_history_long | YES |
| events_statements_current | YES |
| events_statements_history | YES |
| events_statements_history_long | YES |
...
+--------------------------------+---------+
Event-history tables maintained for this configuration:
events_xxx_history,
where xxx is
waits, stages,
statements
events_xxx_history_long,
where xxx is
waits, stages,
statements
Names given for filtering operations can be as specific or general as required. To indicate a single instrument or consumer, specify its name in full:
mysql> UPDATE setup_instruments
    -> SET ENABLED = 'NO'
    -> WHERE NAME = 'wait/synch/mutex/myisammrg/MYRG_INFO::mutex';
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'NO' WHERE NAME = 'events_waits_current';
To specify a group of instruments or consumers, use a pattern that matches the group members:
mysql> UPDATE setup_instruments
    -> SET ENABLED = 'NO'
    -> WHERE NAME LIKE 'wait/synch/mutex/%';
mysql> UPDATE setup_consumers
    -> SET ENABLED = 'NO' WHERE NAME LIKE '%history%';
If you use a pattern, it should be chosen so that it matches all the items of interest and no others. For example, to select all file I/O instruments, it is better to use a pattern that includes the entire instrument name prefix:
... WHERE NAME LIKE 'wait/io/file/%';
A pattern of '%/file/%' will match other
instruments that have a component of
'/file/' anywhere in the name. Even less
suitable is the pattern '%file%' because it
will match instruments with 'file' anywhere
in the name, such as
wait/synch/mutex/sql/LOCK_des_key_file.
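To see why, a quick Python sketch can emulate LIKE matching over instrument names ('%' behaves like '.*' in a regular expression; the sample names are taken from this chapter):

```python
# Sketch: emulating SQL LIKE to preview what a pattern matches.
# '%' is translated to the regex '.*'; names are sample instruments.
import re

def like(pattern, name):
    regex = '.*'.join(re.escape(part) for part in pattern.split('%'))
    return re.fullmatch(regex, name) is not None

names = [
    'wait/io/file/myisam/kfile',
    'wait/synch/mutex/sql/LOCK_des_key_file',
]
print([n for n in names if like('wait/io/file/%', n)])  # only the first
print([n for n in names if like('%file%', n)])          # both: too broad
```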
To check which instrument or consumer names a pattern matches, perform a simple test:
mysql> SELECT NAME FROM setup_instruments WHERE NAME LIKE 'pattern';
mysql> SELECT NAME FROM setup_consumers WHERE NAME LIKE 'pattern';
For information about the types of names that are supported, see Chapter 5, Performance Schema Instrument Naming Conventions.
It is always possible to determine what instruments the
Performance Schema includes by checking the
setup_instruments table. For
example, to see what file-related events are instrumented for
the InnoDB storage engine, use this query:
mysql> SELECT * FROM setup_instruments WHERE NAME LIKE 'wait/io/file/innodb/%';
+--------------------------------------+---------+-------+
| NAME | ENABLED | TIMED |
+--------------------------------------+---------+-------+
| wait/io/file/innodb/innodb_data_file | YES | YES |
| wait/io/file/innodb/innodb_log_file | YES | YES |
| wait/io/file/innodb/innodb_temp_file | YES | YES |
+--------------------------------------+---------+-------+
An exhaustive description of precisely what is instrumented is not given in this documentation, for several reasons:
What is instrumented is the server code. Changes to this code occur often, which also affects the set of instruments.
It is not practical to list all the instruments because there are hundreds of them.
As described earlier, it is possible to find out by
querying the
setup_instruments table. This
information is always up to date for your version of
MySQL, and also includes instrumentation for instrumented
plugins you might have installed that are not part of the
core server, and can be used by automated tools.