This chapter discusses starting and stopping the MySQL Cluster Manager agent and client, and setting up, backing up, and restoring MySQL Clusters using the MySQL Cluster Manager.
mcmd is the MySQL Cluster Manager agent program; invoking this executable starts the MySQL Cluster Manager Agent, to which you can connect using the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”, and Chapter 4, MySQL Cluster Manager Client Commands, for more information).
You can modify the behavior of the agent in a number of different ways by specifying one or more of the options discussed in this section. Most of these options can be specified either on the command line or in the agent configuration file (normally etc/mcmd.ini). (Some exceptions include the --defaults-file and --bootstrap options, which, if used, must be specified on the command line, and which are mutually exclusive with one another.) For example, you can set the agent's cluster logging level to warning instead of the default message in either one of the following two ways:
Include --log-level=warning on the
command line when invoking mcmd.
When specifying an agent configuration option on the command
line, the name of the option is prefixed with two leading dash
characters (--).
Include the following line in the agent configuration file:
log-level=warning
You can change the logging level at runtime using the
mcm client change
log-level command.
When used in the configuration file, the name of the option
should not be prefixed with any other characters. Each option
must be specified on a separate line. You can comment out all of
a given line by inserting a leading hash character
(#), like this:
#log-level=warning
You can also comment out part of a line in this way; any text
following the # character is ignored, to the
end of the current line.
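Putting these rules together, a minimal agent configuration file fragment might look like the following; the values shown are illustrative only, not recommendations:

```ini
# Sample mcmd.ini fragment (illustrative values)
log-level=warning     # log only warnings and more severe events
log-file=mcmd.log     # relative to the installation directory
#manager-port=1862    # commented out; the default port is used
```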
The following table contains a summary of agent options that are read on startup by mcmd. More detailed information about each of these options, such as allowed range of values, can be found in the list following the table.
Table 3.1 MySQL Cluster Manager Agent (mcmd) Option Summary
| Format | Description |
|---|---|
| --agent-uuid | Set the agent's UUID; needed only when running multiple agent processes on the same host. |
| --basedir | Directory to use as prefix for relative paths in the configuration |
| --bootstrap | Bootstrap a default cluster on startup. |
| --daemon | Run in daemon mode. |
| --defaults-file | Configuration file to use |
| --event-threads | Number of event handler threads to use. |
| --help | Show application options. |
| --help-all | Show all options (application options and manager module options). |
| --help-manager | Show manager module options. |
| --keepalive | Try to restart mcmd in the event of a crash. |
| --log-backtrace-on-crash | Attempt to load debugger in case of a crash. |
| --log-file | Name of the file to write the log to. |
| --log-level | Set the cluster logging level. |
| --log-use-syslog | Log to syslog. |
| --manager-directory | Directory used for manager data storage. |
| --manager-password | Password used for the manager account. |
| --manager-port | Port for client to use when connecting to manager. |
| --manager-username | User account name to run the manager under. |
| --max-open-files | Maximum number of open files (ulimit -n). |
| --pid-file | Specify PID file (used if running as daemon) |
| --plugin-dir | Directory in which to look for plugins |
| --plugins | Comma-separated list of plugins to load; must include "manager". |
| --verbose-shutdown | Always log the exit code when shutting down. |
| --version | Show the manager version. |
| --xcom-port | Specify the XCOM port. |
mcmd Option Descriptions

The following list contains descriptions of each startup option available for use with mcmd, including allowed and default values. Options noted as boolean need only be specified in order to take effect; you should not try to set a value for these.
--agent-uuid

| Property | Value |
|---|---|
| Command-Line Format | --agent-uuid=uuid |
| Type | string |
| Default | [set internally] |
Set a UUID for this agent. Normally this value is set automatically, and needs to be specified only when running more than one mcmd process on the same host.
--basedir

| Property | Value |
|---|---|
| Command-Line Format | --basedir=dir_name |
| Type | directory name |
| Default | . |
Directory with path to use as prefix for relative paths in the configuration.
--bootstrap

| Property | Value |
|---|---|
| Command-Line Format | --bootstrap |
| Type | boolean |
| Default | true |
Start the agent with default configuration values, create a
default one-machine cluster named mycluster,
and start it. This option works only if no clusters have yet
been created. This option is mutually exclusive with the
--defaults-file option.
Currently, any data stored in the default cluster
mycluster is not preserved between cluster
restarts.
--daemon

| Property | Value |
|---|---|
| Command-Line Format | --daemon |
| Type | boolean |
| Default | true |
Run mcmd as a daemon.
--defaults-file

| Property | Value |
|---|---|
| Command-Line Format | --defaults-file=file_name |
| Type | file name |
| Default | etc/mcmd.ini |
Set the file from which to read configuration options. The
default is etc/mcmd.ini. See
Section 2.4, “MySQL Cluster Manager Configuration File”, for more
information.
--event-threads

| Property | Value |
|---|---|
| Command-Line Format | --event-threads=# |
| Type | numeric |
| Default | 1 |
| Min Value | 1 |
| Max Value | [system dependent] |
Number of event handler threads to use. The default is 1, which is sufficient for most normal operations.
--help, -?
| Property | Value |
|---|---|
| Command-Line Format | --help |
| Type | boolean |
| Default | true |
mcmd help output is divided into
Application and
Manager sections. When used with
mcmd, --help causes the
Application options to be shown, as shown
here:
shell> mcmd --help
Usage:
mcmd [OPTION...] - MySQL Cluster Manager
Help Options:
-?, --help Show help options
--help-all Show all help options
--help-manager Show options for the manager-module
Application Options:
-V, --version Show version
--defaults-file=<file> configuration file
--verbose-shutdown Always log the exit code when shutting down
--daemon Start in daemon-mode
--basedir=<absolute path> Base directory to prepend to relative paths in the config
--pid-file=<file> PID file in case we are started as daemon
--plugin-dir=<path> Path to the plugins
--plugins=<name> Plugins to load
--log-level=<string> Log all messages of level ... or higher
--log-file=<file> Log all messages in a file
--log-use-syslog Log all messages to syslog
--log-backtrace-on-crash Try to invoke debugger on crash
--keepalive Try to restart mcmd if it crashed
--max-open-files Maximum number of open files (ulimit -n)
--event-threads Number of event-handling threads (default: 1)
--help-all

| Property | Value |
|---|---|
| Command-Line Format | --help-all |
| Type | boolean |
| Default | true |
mcmd help output is divided into
Application and
Manager sections. When used with
--help-all, mcmd displays
both the Application and the
Manager options, like this:
shell> mcmd --help-all
Usage:
mcmd [OPTION...] - MySQL Cluster Manager
Help Options:
-?, --help Show help options
--help-all Show all help options
--help-manager Show options for the manager-module
manager-module
--manager-port=<clientport> Port to manage the cluster (default: 1862)
--xcom-port=<xcomport> Xcom port (default: 18620)
--manager-username=<username> Username to manage the cluster (default: mcmd)
--manager-password=<password> Password for the manager user-account (default: super)
--bootstrap Bootstrap a default cluster on initial startup
--manager-directory=<directory> Path to managers config information
Application Options:
-V, --version Show version
--defaults-file=<file> configuration file
--verbose-shutdown Always log the exit code when shutting down
--daemon Start in daemon-mode
--basedir=<absolute path> Base directory to prepend to relative paths in the config
--pid-file=<file> PID file in case we are started as daemon
--plugin-dir=<path> Path to the plugins
--plugins=<name> Plugins to load
--log-level=<string> Log all messages of level ... or higher
--log-file=<file> Log all messages in a file
--log-use-syslog Log all messages to syslog
--log-backtrace-on-crash Try to invoke debugger on crash
--keepalive Try to restart mcmd if it crashed
--max-open-files Maximum number of open files (ulimit -n)
--event-threads Number of event-handling threads (default: 1)
--help-manager

| Property | Value |
|---|---|
| Command-Line Format | --help-manager |
| Type | boolean |
| Default | true |
mcmd help output is divided into
Application and
Manager sections. When used with
--help-manager, mcmd
displays the Manager options, like this:
shell> mcmd --help-manager
Usage:
mcmd [OPTION...] - MySQL Cluster Manager
manager-module
--manager-port=<clientport> Port to manage the cluster (default: 1862)
--xcom-port=<xcomport> Xcom port (default: 18620)
--manager-username=<username> Username to manage the cluster (default: mcmd)
--manager-password=<password> Password for the manager user-account (default: super)
--bootstrap Bootstrap a default cluster on initial startup
--manager-directory=<directory> Path to managers config information
--keepalive

| Property | Value |
|---|---|
| Command-Line Format | --keepalive |
| Type | boolean |
| Default | true |
Use this option to cause mcmd to attempt to restart in the event of a crash.
--log-backtrace-on-crash

| Property | Value |
|---|---|
| Command-Line Format | --log-backtrace-on-crash |
| Type | boolean |
| Default | true |
Attempt to load the debugger in the event of a crash. Not normally used in production.
--log-file

| Property | Value |
|---|---|
| Command-Line Format | --log-file=file |
| Type | file name |
| Default | mcmd.log |
Set the name of the file to write the log to. The default is
mcmd.log in the installation directory. On
Linux and other Unix-like platforms, you can use a relative
path; this is in relation to the MySQL Cluster Manager installation directory,
and not to the bin or
etc subdirectory. On Windows, you must use
an absolute path, and it cannot contain any spaces; in addition,
you must replace any backslash (\) characters
in the path with forward slashes (/).
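For example, on Windows the following line in the agent configuration file would be acceptable (the path shown is hypothetical); note the absolute path, the absence of spaces, and the use of forward slashes:

```ini
log-file=C:/mcm/logs/mcmd.log
```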
--log-level

| Property | Value |
|---|---|
| Command-Line Format | --log-level=level |
| Type | enumeration |
| Default | message |
| Valid Values | debug, critical, error, info, message, warning |
Sets the cluster log event severity level; see
NDB Cluster Logging Management Commands, for
definitions of the levels, which are the same as these except
that ALERT is mapped to
critical and the Unix syslog
LOG_NOTICE level is used (and mapped to
message). For additional information, see
Event Reports Generated in NDB Cluster.
Possible values for this option are (any one of)
debug, critical,
error, info,
message, and warning.
message is the default.
You should be aware that the debug,
message, and info levels
can result in rapid growth of the agent log, so for normal
operations, you may prefer to set this to
warning or error.
You can also change the cluster logging level at runtime using
the change log-level command in
the mcm client. The option applies its
setting to all hosts running on all sites, whereas
change log-level is more
flexible; its effects can be constrained to a specific
management site, or to one or more hosts within that site.
--log-use-syslog

| Property | Value |
|---|---|
| Command-Line Format | --log-use-syslog |
| Type | boolean |
| Default | true |
Write logging output to syslog.
--manager-directory

| Property | Value |
|---|---|
| Command-Line Format | --manager-directory=dir |
| Type | directory name |
| Default | /opt/mcm_data |
Set the location of the agent repository, which contains
collections of MySQL Cluster Manager data files and MySQL Cluster configuration
and data files. The value must be a valid absolute path. On
Linux, if the directory does not exist, it is created; on
Windows, the directory must be created if it does not exist.
Additionally, on Windows, the path may not contain any spaces or backslash (\) characters; backslashes must be replaced with forward slashes (/).
The default location is /opt/mcm_data. If you
change the default, you should use a standard location external
to the MySQL Cluster Manager installation directory, such as
/var/opt/mcm on Linux.
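Following that recommendation, the corresponding line in the agent configuration file might look like this (the path shown is an example only):

```ini
manager-directory=/var/opt/mcm
```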
In addition to the MySQL Cluster Manager data files, the
manager-directory also contains a
rep directory in which MySQL Cluster data
files for each MySQL Cluster under MySQL Cluster Manager control are kept.
Normally, there is no need to interact with these directories
beyond specifying the location of the
manager-directory in the agent configuration
file (mcmd.ini).
However, in the event that an agent reaches an inconsistent
state, it is possible to delete the contents of the
rep directory, in which case the agent
attempts to recover its repository from another agent.
In such cases, you must also delete the
repchksum file and the
high_water_mark file from the
manager-directory. Otherwise, the
agent reads these files and raises errors due to the now-empty
rep directory.
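The recovery steps just described can be sketched as shell commands. This is a hedged outline only: MCM_DATA is a placeholder standing in for your actual manager-directory, and you should stop the agent before running anything like this.

```shell
# Hedged sketch of forcing repository recovery from another agent.
# MCM_DATA is a placeholder for your manager-directory.
MCM_DATA=${MCM_DATA:-/opt/mcm_data}

rm -rf "$MCM_DATA/rep"/*            # empty the repository; it is rebuilt from a peer agent
rm -f  "$MCM_DATA/repchksum"        # checksum file must also be removed
rm -f  "$MCM_DATA/high_water_mark"  # as must the high-water-mark file
```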
--manager-password

| Property | Value |
|---|---|
| Command-Line Format | --manager-password=password |
| Type | string |
| Default | super |
Set a password to be used for the manager agent user account.
The default is super.
Using this option together with
manager-username causes the
creation of a MySQL user account, having the username and
password specified using these two options. This
account is created with all privileges on the MySQL server
including the granting of privileges. In other words,
it is created as if you had executed
GRANT ALL PRIVILEGES ON
*.* ... WITH GRANT OPTION in the
mysql client.
--manager-port

| Property | Value |
|---|---|
| Command-Line Format | --manager-port=port |
| Type | numeric |
| Default | 1862 |
Specify the port used by MySQL Cluster Manager client connections. Any valid TCP/IP port number can be used. Normally, there is no need to change it from the default value (1862).
Previously, this option could optionally take a host name in addition to the port number, but in MySQL Cluster Manager 1.1.1 and later the host name is no longer accepted.
--manager-username

| Property | Value |
|---|---|
| Command-Line Format | --manager-username=name |
| Type | string |
| Default | mcmd |
Set a user name for the MySQL account to be used by the MySQL Cluster Manager
agent. The default is mcmd.
When used together with
manager-password, this option also
causes the creation of a new MySQL user account, having the user
name and password specified using these two options.
This account is created with all privileges on the
MySQL server including the granting of privileges. In
other words, it is created as if you had executed
GRANT ALL PRIVILEGES ON
*.* ... WITH GRANT OPTION in the
mysql client. The existing MySQL
root account is not altered in such cases,
and the default test database is preserved.
--max-open-files

| Property | Value |
|---|---|
| Command-Line Format | --max-open-files=# |
| Type | numeric |
| Default | 1 |
| Min Value | 1 |
| Max Value | [system dependent] |
Set the maximum number of open files (as with ulimit -n).
--pid-file

| Property | Value |
|---|---|
| Command-Line Format | --pid-file=file_name |
| Type | file name |
| Default | mcmd.pid |
Set the name and path to a process ID
(.pid) file. Not normally used or needed.
This option is not supported on Windows systems.
--plugin-dir

| Property | Value |
|---|---|
| Command-Line Format | --plugin-dir=dir_name |
| Type | directory name |
| Default | lib/mcmd |
Set the directory to search for plugins. The default is
lib/mcmd, in the MySQL Cluster Manager installation
directory; normally there is no need to change this.
--plugins

| Property | Value |
|---|---|
| Command-Line Format | --plugins=list |
| Type | string |
| Default | manager |
Specify a list of plugins to be loaded on startup. To enable
MySQL Cluster Manager, this list must include manager (the
default value). Please be aware that we currently do not test
MySQL Cluster Manager with any values for plugins other than
manager. Therefore, we recommend using the
default value in a production setting.
--verbose-shutdown

| Property | Value |
|---|---|
| Command-Line Format | --verbose-shutdown |
| Type | boolean |
| Default | true |
Force mcmd to log the exit code whenever
shutting down, regardless of the reason.
--version, -V
| Property | Value |
|---|---|
| Command-Line Format | --version |
| Type | boolean |
| Default | true |
Display version information and exit. Output may vary according to the MySQL Cluster Manager software version, operating platform, and versions of libraries used on your system, but should closely resemble what is shown here, with the first line of output containing the MySQL Cluster Manager release number:
shell> mcmd -V
MySQL Cluster Manager 1.3.6 (64bit)
chassis: mysql-proxy 0.8.3
glib2: 2.16.6
libevent: 1.4.13-stable
-- modules
manager: 1.3.6
--xcom-port

| Property | Value |
|---|---|
| Command-Line Format | --xcom-port=port |
| Type | numeric |
| Default | 18620 |
Specify the XCOM port. The default is 18620.
Before you can start using MySQL Cluster Manager to create and manage a MySQL Cluster, the MySQL Cluster Manager agent must be started on each computer that is intended to host one or more nodes in the MySQL Cluster to be managed.
The MySQL Cluster Manager agent employs a MySQL user account for administrative access to mysqld processes. It is possible, but not a requirement, to change the default user name, the default password used for this account, or both. For more information, see Section 2.3.3, “Setting the MySQL Cluster Manager Agent User Name and Password”.
To start the MySQL Cluster Manager agent on a given host running a Linux or
similar operating system, you should run
mcmd, found in the bin
directory within the manager installation directory on that
host. Typical options used with mcmd are
shown here:
mcmd [--defaults-file | --bootstrap] [--log-file] [--log-level]
See Section 3.1, “mcmd, the MySQL Cluster Manager Agent”, for information about additional options that can be used when invoking mcmd from the command line, or in a configuration file.
mcmd normally runs in the foreground. If you
wish, you can use your platform's usual mechanism for
backgrounding a process. On a Linux system, you can do this by
appending an ampersand character (&),
like this (not including any options that might be required):
shell> ./bin/mcmd &
By default, the agent assumes that the agent configuration file
is etc/mcmd.ini, in the MySQL Cluster Manager installation
directory. You can tell the agent to use a different
configuration file by passing the path to this file to the
--defaults-file option, as shown
here:
shell> ./bin/mcmd --defaults-file=/home/mcm/mcm-agent.conf
The --bootstrap option causes the
agent to start with default configuration values, create a
default one-machine cluster named mycluster,
and start it. This option works only if no cluster has yet
been created, and is mutually exclusive with the
--defaults-file option. Currently,
any data stored in the default cluster
mycluster is not preserved between cluster
restarts; this is a known issue which we may address in a future
release of MySQL Cluster Manager.
The use of the --bootstrap option
with mcmd is shown here on a system having
the host name torsk, where MySQL Cluster Manager has been
installed to /home/jon/mcm:
shell> ./mcmd --bootstrap
MySQL Cluster Manager 1.3.6 started
Connect to MySQL Cluster Manager by running "/home/jon/mcm/bin/mcm" -a torsk:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
ndb_mgmd torsk:1186
ndbd torsk
ndbd torsk
mysqld torsk:3306
mysqld torsk:3307
ndbapi *
Connect to the database by running "/home/jon/mcm/cluster/bin/mysql" -h torsk -P 3306 -u root
You can then connect to the agent using the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”), and to either of the MySQL Servers running on ports 3306 and 3307 using mysql or another MySQL client application.
The --log-file option allows you to
override the default location for the agent log file (normally
mcmd.log, in the MySQL Cluster Manager installation
directory).
You can use the --log-level option to override the log-level set in the agent configuration file.
See Section 3.1, “mcmd, the MySQL Cluster Manager Agent”, for more information about options that can be used with mcmd.
The MySQL Cluster Manager agent must be started on each host in the MySQL Cluster to be managed.
To stop one or more instances of the MySQL Cluster Manager agent, use the
stop agents command in the
MySQL Cluster Manager client. If the client is unavailable, you can stop each
agent process using the system's standard method for doing
so, such as ^C or kill.
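On Linux, a daemonized agent can be stopped using its PID file; the following is a hedged sketch, in which the PID-file location (the default mcmd.pid, in the installation directory) is an assumption that you should adjust to match your own --pid-file setting:

```shell
# Hedged sketch: stop a daemonized mcmd via its PID file.
# PID_FILE is assumed to point at the file named by --pid-file.
PID_FILE=${PID_FILE:-mcmd.pid}
if [ -f "$PID_FILE" ]; then
    kill "$(cat "$PID_FILE")"   # send SIGTERM to the agent process
fi
```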
You can also set the agent up as a daemon or service on Linux
and other Unix-like systems. (See
Section 2.3.1, “Installing MySQL Cluster Manager on Unix Platforms”.) If you also want failed data node processes from a running MySQL Cluster to be restarted when the agent fails and restarts, you must make sure that StopOnError is set to 0 on each data node (and not to 1, the default).
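Using the mcm client, that attribute can be set for all data nodes at once; this is a sketch only, assuming a cluster named mycluster:

```
mcm> set StopOnError:ndbd=0 mycluster;
```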
To start the MySQL Cluster Manager agent manually on a Windows host, you should
invoke mcmd.exe, found in the
bin directory under the manager
installation directory on that host. By default, the agent uses
etc/mcmd.ini in the MySQL Cluster Manager installation directory as its
configuration file; this can be overridden by passing the
desired file's location as the value of the
--defaults-file option.
Typical options for mcmd are shown here:
mcmd[.exe] [--defaults-file | --bootstrap] [--log-file] [--log-level]
For information about additional options that can be used with mcmd on the command line or in an option file, see Section 3.1, “mcmd, the MySQL Cluster Manager Agent”.
By default, the agent assumes that the agent configuration file
is etc/mcmd.ini, in the MySQL Cluster Manager installation
directory. You can tell the agent to use a different
configuration file by passing the path to this file to the
--defaults-file option, as shown
here:
C:\Program Files (x86)\MySQL\MySQL Cluster Manager 1.3.6\bin>
mcmd --defaults-file="C:\Program Files (x86)\MySQL\MySQL Cluster
Manager 1.3.6\etc\mcmd.ini"
The --bootstrap option causes the
agent to start with default configuration values, create a
default one-machine cluster named mycluster,
and start it. The use of this option with
mcmd is shown here on a system having the
host name torsk, where MySQL Cluster Manager has been
installed to the default location:
C:\Program Files (x86)\MySQL\MySQL Cluster Manager 1.3.6\bin>mcmd --bootstrap
MySQL Cluster Manager 1.3.6 started
Connect to MySQL Cluster Manager by running "C:\Program Files (x86)\MySQL\MySQL
Cluster Manager 1.3.6\bin\mcm" -a TORSK:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
ndb_mgmd TORSK:1186
ndbd TORSK
ndbd TORSK
mysqld TORSK:3306
mysqld TORSK:3307
ndbapi *
Connect to the database by running "C:\Program Files (x86)\MySQL\MySQL Cluster
Manager 1.3.6\cluster\bin\mysql" -h TORSK -P 3306 -u root
You can then connect to the agent using the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”), and to either of the MySQL Servers running on ports 3306 and 3307 using mysql or another MySQL client application.
When starting the MySQL Cluster Manager agent for the first time, you may see one or more Windows Security Alert dialogs, such as the one shown here:

You should grant permission to connect to private networks for any of the programs mcmd.exe, ndb_mgmd.exe, ndbd.exe, ndbmtd.exe, or mysqld.exe. To do so, check the Private Networks... box and then click the button. It is generally not necessary to grant MySQL Cluster Manager or MySQL Cluster access to public networks such as the Internet.
The --defaults-file and
--bootstrap options are mutually
exclusive.
The --log-file option allows you to
override the default location for the agent log file (normally
mcmd.log, in the MySQL Cluster Manager installation
directory).
You can use the --log-level option to override the log-level set in the agent configuration file.
See Section 3.1, “mcmd, the MySQL Cluster Manager Agent”, for more information about options that can be used with mcmd.
The MySQL Cluster Manager agent must be started on each host in the MySQL Cluster to be managed.
It is possible to install MySQL Cluster Manager as a Windows service, so that it is started automatically each time Windows starts. See Section 2.3.2.1, “Installing the MySQL Cluster Manager Agent as a Windows Service”.
To stop one or more instances of the MySQL Cluster Manager agent, use the
stop agents command in the
MySQL Cluster Manager client. You can also stop an agent process using the
Windows Task Manager. In addition, if you have installed MySQL Cluster Manager
as a Windows service, you can stop (and start) the agent using
the Windows Service Manager, CTRL-C, or the
appropriate NET STOP (or NET
START) command. See
Starting and stopping the MySQL Cluster Manager agent Windows service,
for more information about each of these options.
This section covers starting the MySQL Cluster Manager client and connecting to the MySQL Cluster Manager agent.
MySQL Cluster Manager 1.3.6 includes a command-line client
mcm, located in the installation
bin directory. mcm can be
invoked with any one of the options shown in the following table:
| Long form | Short form | Description |
|---|---|---|
| --help | -? | Display mcm client options. |
| --version | -V | Shows MySQL Cluster Manager agent/client version. |
| — | -W | Shows MySQL Cluster Manager agent/client version, together with the version of mysql used by mcm. |
| --address | -a | Host and optional port to use when connecting to mcmd, in host[:port] format; default is 127.0.0.1:1862. |
| --mysql-help | -I | Show help for the mysql client (see following). |
The client-server protocol used by MySQL Cluster Manager is platform-independent. You can connect to any MySQL Cluster Manager agent with an mcm client on any platform where it is available. This means, for example, that you can use an mcm client on Microsoft Windows to connect to a MySQL Cluster Manager agent that is running on a Linux host.
mcm actually acts as a wrapper for the mysql client that is included with the bundled MySQL Cluster distribution. Invoking mcm with no options specified is equivalent to the following:
shell> mysql -umcmd -psuper -h 127.0.0.1 -P 1862 --prompt="mcm>"
(These -u and -p options and
values are hard-coded and cannot be changed.) This means that you
can use the mysql client to run MySQL Cluster Manager client
sessions on platforms where mcm itself (or even
mcmd) is not available. For more information,
see Connecting to the agent using the mysql client.
If you experience problems starting an MySQL Cluster Manager client session because the client fails to connect, see Can't connect to [local] MySQL server, for some reasons why this might occur, as well as suggestions for some possible solutions.
To end a client session, use the exit or
quit command (short form:
\q). Neither of these commands requires a
separator or terminator character.
For more information, see Chapter 4, MySQL Cluster Manager Client Commands.
Connecting to the agent with the mcm client. You can connect to the MySQL Cluster Manager agent by invoking mcm (or, on Windows, mcm.exe). You may also need to specify a hostname, port number, or both, using the following command-line options:
--host=hostname or
-h[ ]hostname
This option takes the name or IP address of the host to connect to. The default is localhost (which may not be recognized on all platforms when starting an mcm client session, even if it works for starting mysql client sessions).
You should keep in mind that the mcm client does not perform host name resolution; any name resolution information comes from the operating system on the host where the client is run. For this reason, it is usually best to use a numeric IP address rather than a hostname for this option.
--port=portnumber
or -P[ ]portnumber
This option specifies the TCP/IP port for the client to use.
This must be the same port that is used by the MySQL Cluster Manager agent. As mentioned elsewhere, if no agent port is specified in the MySQL Cluster Manager agent configuration file (mcmd.ini), the default number of the port used by the MySQL Cluster Manager agent is 1862, which is also used by default by mcm.
mcm accepts additional mysql
client options, some of which may possibly be of use for MySQL Cluster Manager
client sessions. For example, the
--pager option might prove helpful
when the output of get contains
too many rows to fit in a single screen. The
--prompt option can be used to
provide a distinctive prompt to help avoid confusion between
multiple client sessions. However, options not shown in the
current manual have not been extensively tested with
mcm and so cannot be guaranteed to work
correctly (or even at all). See
mysql Options, for a complete listing
and descriptions of all mysql client options.
Like the mysql client, mcm
also supports \G as a statement terminator
which causes the output to be formatted vertically. This can be
helpful when using a terminal whose width is restricted to some
number of (typically 80) characters. See
Chapter 4, MySQL Cluster Manager Client Commands, for examples.
Connecting to the agent using the mysql client. As mentioned previously, mcm actually serves as a wrapper for the mysql client. In fact, a mysql client from any recent MySQL distribution (MySQL 5.1 or later) should work without any issues for connecting to mcmd. In addition, since the client-server protocol used by MySQL Cluster Manager is platform-independent, you can use a mysql client on any platform supported by MySQL. (This means, for example, that you can use a mysql client on Microsoft Windows to connect to a MySQL Cluster Manager agent that is running on a Linux host.) Connecting to the MySQL Cluster Manager agent using the mysql client is accomplished by invoking mysql and specifying a hostname, port number, username and password, using the following command-line options:
--host=hostname
or -h[ ]hostname
This option takes the name or IP address of the host to
connect to. The default is localhost. Like
the mcm client, the
mysql client does not perform host name
resolution, and relies on the host operating system for this
task. For this reason, it is usually best to use a numeric IP
address rather than a hostname for this option.
--port=portnumber
or -P[ ]portnumber
This option specifies the TCP/IP port for the client to use. This must be the same port that is used by the MySQL Cluster Manager agent. Although the default number of the port used by the MySQL Cluster Manager agent is 1862 (which is also used by default by mcm), this default value is not known to the mysql client, which uses port 3306 (the default port for the MySQL server) if this option is not specified when mysql is invoked.
Thus, you must use the
--port or -P
option to connect to the MySQL Cluster Manager agent using the
mysql client, even if the agent
process is using the MySQL Cluster Manager default port, and even
if the agent process is running on the same host as the
mysql client. Unless the correct agent port
number is supplied to it on startup, mysql
is unable to connect to the agent.
--user=username
or -u[ ]username
Specifies the username for the user trying to connect. Currently, the only user permitted to connect is “mcmd”; this is hard-coded into the agent software and cannot be altered by any user. By default, the mysql client tries to use the name of the current system user on Unix systems and “ODBC” on Windows, so you must supply this option and the username “mcmd” when trying to access the MySQL Cluster Manager agent with the mysql client; otherwise, mysql cannot connect to the agent.
--password[=password]
or -p[password]
Specifies the password for the user trying to connect. If you
use the short option form (-p), you
must not leave a space between this
option and the password. If you omit the
password value following the
--password or -p option on
the command line, the mysql client prompts
you for one.
Specifying a password on the command line should be considered insecure. It is preferable that you either omit the password when invoking the client, then supply it when prompted, or put the password in a startup script or configuration file.
Currently, the password is hard-coded as “super”,
and cannot be changed or overridden by MySQL Cluster Manager users. Therefore,
if you do not include the
--password or
-p option when invoking
mysql, it cannot connect to the agent.
In addition, you can use the
--prompt option to set the
mysql client's prompt. This is
recommended, since allowing the default prompt
(mysql>) to be used could lead to confusion
between a MySQL Cluster Manager client session and a MySQL client session.
Thus, you can connect to a MySQL Cluster Manager agent by invoking the mysql client on the same machine from the system shell in a manner similar to what is shown here.
shell> mysql -h127.0.0.1 -P1862 -umcmd -p --prompt='mcm> '
For convenience, on systems where mcm itself is
not available, you might even want to put this invocation in a
startup script. On a Linux or similar system, this script might be
named mcm-client.sh, with contents similar to
what is shown here:
#!/bin/sh
/usr/local/mysql/bin/mysql -h127.0.0.1 -P1862 -umcmd -p --prompt='mcm> '
In this case, you could then start up a MySQL Cluster Manager client session using something like this in the system shell:
shell> ./mcm-client
On Windows, you can create a batch file with a name such as
mcm-client.bat containing something like
this:
C:\mysql\bin\mysql.exe -umcmd -psuper -h localhost -P 1862 --prompt="mcm> "
(Adjust the path to the mysql.exe client executable as necessary to match its location on your system.)
If you save this file to a convenient location such as the Windows desktop, you can start a MySQL Cluster Manager client session merely by double-clicking the corresponding file icon on the desktop (or in Windows Explorer); the client session opens in a new cmd.exe (DOS) window.
This section provides basic information about setting up a new MySQL Cluster with MySQL Cluster Manager. It also supplies guidance on migration of an existing MySQL Cluster to MySQL Cluster Manager.
For more information about obtaining and installing the MySQL Cluster Manager agent and client software, see Chapter 2, MySQL Cluster Manager Installation, Configuration, Cluster Setup.
See Chapter 4, MySQL Cluster Manager Client Commands, for detailed information on the MySQL Cluster Manager client commands shown in this chapter.
In this section, we discuss the procedure for using MySQL Cluster Manager to create and start a new MySQL Cluster. We assume that you have already obtained the MySQL Cluster Manager and MySQL Cluster software, and that you are already familiar with installing MySQL Cluster Manager (see Chapter 2, MySQL Cluster Manager Installation, Configuration, Cluster Setup).
MySQL Cluster Manager 1.3.0 and later also supports importing existing, standalone MySQL Clusters; for more information, see Section 3.5, “Importing MySQL Clusters into MySQL Cluster Manager”.
We also assume that you have identified the hosts on which you plan to run the cluster and have decided on the types and distributions of the different types of nodes among these hosts, as well as basic configuration requirements based on these factors and the hardware characteristics of the host machines.
You can create and start a MySQL Cluster on a single host for
testing or similar purposes, simply by invoking
mcmd with the --bootstrap
option. See Section 3.2, “Starting and Stopping the MySQL Cluster Manager Agent”.
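For example, assuming that mcmd is in the executable path, such a test cluster can be bootstrapped with a single invocation:

```
shell> mcmd --bootstrap
```

The agent then defines and starts a default single-host cluster on the local machine automatically.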
Creating a new cluster consists of the following tasks:
MySQL Cluster Manager agent installation and startup. Install the MySQL Cluster Manager software distribution, make any necessary edits of the agent configuration files, and start the agent processes as explained in Chapter 2, MySQL Cluster Manager Installation, Configuration, Cluster Setup. Agent processes must be running on all cluster hosts before you can create a cluster. This means that you need to place a complete copy of the MySQL Cluster Manager software distribution on every host. The MySQL Cluster Manager software does not have to be in a specific location, or even the same location on all hosts, but it must be present; you cannot manage any cluster processes hosted on a computer where mcmd is not also running.
MySQL Cluster Manager client session startup. Start the MySQL Cluster Manager client and connect to the MySQL Cluster Manager agent. You can connect to an agent process running on any of the cluster hosts, using the mcm client on any computer that can establish a network connection to the desired host. See Section 3.3, “Starting the MySQL Cluster Manager Client”, for details.
On systems where mcm is not available, you can use the mysql client for this purpose. See Connecting to the agent using the mysql client.
MySQL Cluster software deployment.
The simplest and easiest way to do this is to copy the
complete MySQL Cluster distribution to the same location
on every host in the cluster. (If you have installed MySQL Cluster Manager
1.3.6 on each host, the MySQL Cluster NDB 7.2.4
distribution is already included, in
mcm_installation_dir/cluster.)
If you do not use the same location on every host, be sure
to note it for each host. Do not yet start any MySQL
Cluster processes or edit any configuration files; when
creating a new cluster, MySQL Cluster Manager takes care of these tasks
automatically.
On Windows hosts, you should not install as services any of the MySQL Cluster node process programs, including ndb_mgmd.exe, ndbd.exe, ndbmtd.exe, and mysqld.exe. MySQL Cluster Manager manages MySQL Cluster processes independently of the Windows Service Manager and does not interact with the Service Manager or any Windows services when doing so.
You can actually perform this step at any time up to the
point where the software package is registered (using
add package). However,
we recommend that you have all required
software—including the MySQL Cluster
software—in place before executing any MySQL Cluster Manager client
commands.
Management site definition.
Using the create site
command in the MySQL Cluster Manager client, define a MySQL Cluster Manager management
site—that is, the set of hosts to be managed. This
command provides a name for the site, and must reference
all hosts in the cluster.
Section 4.2.6, “The create site Command”, provides syntax and
other information about this command. To verify that the
site was created correctly, use the MySQL Cluster Manager client commands
list sites and
list hosts.
MySQL Cluster software package registration.
In this step, you provide the location of the MySQL
Cluster software on all hosts in the cluster using one or
more add package
commands. To verify that the package was created
correctly, use the list
packages and list
processes commands.
Cluster definition.
Execute a create cluster
command to define the set of MySQL Cluster nodes
(processes) and hosts on which each cluster process runs,
making up the MySQL Cluster. This command also uses the
name of the package registered in the previous step so
that MySQL Cluster Manager knows the location of the binary running each
cluster process. You can use the
list clusters and
list processes commands
to determine whether the cluster has been defined as
desired.
If you wish to use SQL node connection pooling, see Setup for mysqld connection pooling before creating the cluster.
Initial configuration.
Perform any configuration of the cluster that is required
or desired prior to starting it. You can set values for
MySQL Cluster Manager configuration attributes (MySQL Cluster parameters
and MySQL Server options) using the MySQL Cluster Manager client
set command. You do not
need to edit any configuration files directly—in
fact, you should not do so. Keep in
mind that certain attributes are read-only, and that some
others cannot be reset after the cluster has been started
for the first time. You can use the
get command to verify
that attributes have been set to the correct values.
Cluster startup.
Once you have completed the previous steps, including
necessary or desired initial configuration, you are ready
to start the cluster. The start
cluster command starts all cluster processes in
the correct order. You can verify that the cluster has
started and is running normally after this command has
completed, using the MySQL Cluster Manager client command
show status. At this
point, the cluster is ready for use by MySQL Cluster
applications.
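The tasks just listed can be sketched as a single mcm client session for a hypothetical two-host cluster; the site, package, and cluster names, the host names, and the installation path shown here are illustrative assumptions:

```
mcm> create site --hosts=host1,host2 mysite;
mcm> add package --basedir=/usr/local/mysql mypackage;
mcm> create cluster --package=mypackage \
          --processhosts=ndb_mgmd@host1,ndbd@host1,ndbd@host2,mysqld@host1,mysqld@host2 mycluster;
mcm> set DataMemory:ndbd=500M mycluster;
mcm> start cluster mycluster;
mcm> show status mycluster;
```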
It is possible to bring a “wild” MySQL Cluster—that is, a cluster not created using MySQL Cluster Manager—under the control of MySQL Cluster Manager. The following sections provide an outline of the procedure required to import such a cluster into MySQL Cluster Manager, followed by a more detailed example.
The import process generally consists of the steps listed here:
Create and configure in MySQL Cluster Manager a “target” cluster whose configuration matches that of the “wild” cluster.
Prepare the “wild” cluster for migration.
Verify PID files for cluster processes.
Perform a test run, and then execute the import
cluster command.
This expanded listing breaks down each of the tasks just mentioned into smaller steps; an example with more detail is also provided following the listing.
Create and configure “target” cluster under MySQL Cluster Manager control
Install MySQL Cluster Manager and start mcmd on all hosts; see Section 2.3, “MySQL Cluster Manager Installation”, for more information.
Create a MySQL Cluster Manager site encompassing these hosts, using the
create site command.
Add a MySQL Cluster Manager package referencing the MySQL Cluster
binaries, using the add
package command. Use this command's
--basedir option to point to the
correct location.
Create the target cluster using the
create
cluster command, including the same processes
and hosts used by the wild cluster. Use the command's
--import option to specify that the
cluster is a target for import.
If the wild cluster adheres to the recommendation for
node ID assignments given in the description for the
create
cluster command (that is, having node ID 1 to
48 assigned to data nodes, and 49 and above assigned to
other node types), you need not specify the node IDs for
the processes in the
create
cluster command.
Also, this step may be split into a
create
cluster command followed by one or more
add process commands
(see an example of such splitting in the description for
the
add
process command).
MySQL Cluster Manager 1.3.1 and later: Use
import config to copy
the wild cluster's configuration data into the
target cluster. Use this command's
--dryrun option (short form:
-y) to perform a test run that merely
logs the configuration information that the command
copies when it is executed without the option.
If any ndb_mgmd or
mysqld processes in the wild cluster
are running on ports other than the default, you must
perform set commands to
assign the correct port numbers for these in the target
cluster. When all such processes are running on the
correct ports, you can execute import
config (without the --dryrun
option) to copy the wild cluster's configuration
data. Following this step, you should check the log as
well as the configuration of the target cluster to
ensure that all configuration attribute values were
copied correctly and with the correct scope. Correct any
inconsistencies with the wild cluster's
configuration using the appropriate
set commands.
MySQL Cluster Manager 1.3.0: Since
import config is not
supported prior to the MySQL Cluster Manager 1.3.1 release, you must
copy the wild cluster's configuration information
to the target cluster manually, issuing
set commands in the
mcm client that duplicate the wild
cluster's configuration in the target cluster, as
discussed in the paragraphs immediately following.
MySQL Cluster global configuration data is stored in a
file on the management node host which is usually (but
not always) named config.ini. This
global configuration file uses INI format which makes it
simple to read or parse. For more information about this
file, see NDB Cluster Configuration Files and
NDB Cluster Configuration: Basic Example.
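Because config.ini uses INI format, it is easy to inspect with standard text tools. The following sketch lists every setting together with the section it belongs to; the sample file content is an illustrative assumption modeled on the example later in this chapter:

```shell
# Write a small sample config.ini (illustrative content only).
cat > /tmp/sample-config.ini <<'EOF'
[ndbd default]
DataMemory= 16G
NoOfReplicas= 2
[ndbd]
NodeId=5
HostName=beta
EOF

# Print "section key=value" for every setting in the file.
out=$(awk -F= '/^\[/ { sec=$0; next } NF==2 { gsub(/ /, ""); print sec " " $1 "=" $2 }' \
      /tmp/sample-config.ini)
echo "$out"
```
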
In addition, each mysqld process (SQL
node) has its own configuration data in the form of
system variables which are specific to that
mysqld, and many of which can be
changed at runtime. You can check their values using the
SQL SHOW VARIABLES
statement, and execute appropriate
set commands for values
differing from their defaults.
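The comparison just described can be sketched as follows: given the current variable values for a mysqld (as reported by SHOW VARIABLES) and its default values, emit one mcm set command per difference. The variable names and values, the node ID 100, and the cluster name newcluster are illustrative assumptions:

```shell
# Current values (e.g. captured from SHOW VARIABLES) and defaults; both
# files use name=value lines. Content here is illustrative only.
cat > /tmp/current.txt <<'EOF'
datadir=/var/lib/mysql
port=3306
sql_mode=STRICT_TRANS_TABLES
EOF
cat > /tmp/defaults.txt <<'EOF'
datadir=/usr/local/mysql/data
port=3306
sql_mode=
EOF

sort /tmp/current.txt  > /tmp/current.sorted
sort /tmp/defaults.txt > /tmp/defaults.sorted

# Lines unique to the current values are the non-default settings to copy;
# rewrite each one as an mcm set command.
cmds=$(comm -23 /tmp/current.sorted /tmp/defaults.sorted \
       | sed 's/^\([^=]*\)=\(.*\)$/set \1:mysqld:100=\2 newcluster;/')
echo "$cmds"
```
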
Prepare the “wild” cluster for migration
Create a MySQL user named mcmd on
each SQL node, and grant root privileges to this user.
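A sketch of the statements involved, to be executed on each SQL node; the password shown (“super”, matching the hard-coded client password mentioned earlier in this chapter) and the host part of the account name are assumptions to verify for your version:

```
mysql> CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;
```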
Kill each data node angel process using your system's facility for doing so. Do not kill any non-angel data node daemons.
Kill and restart each management node process. When
restarting ndb_mgmd, be sure to do so
with the configuration cache disabled. Since the
configuration cache is enabled by default, you must
start the management node with
--config-cache=false to
deactivate it.
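For example, on the management host used in the detailed example later in this chapter, the restart might look similar to this; the configuration file path is taken from that example and may differ on your system:

```
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini --config-cache=false
```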
Any cluster processes that are under the control of the
system's boot-time process management facility,
such as /etc/init.d on Linux
systems or the Services Manager on Windows platforms,
should be removed from its control.
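For instance, on a Debian-style Linux system that starts the processes through /etc/init.d scripts, removal might be performed as shown here; the service name is an assumption, and the -f option forces removal of the boot-time links while the init script itself still exists:

```
shell> sudo update-rc.d -f ndb_mgmd remove
```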
It is highly recommended that you take a complete backup of the “wild” cluster before proceeding any further, using the ndb_mgm client. For more information, see Using The NDB Cluster Management Client to Create a Backup.
Verify cluster process PID files.
Verify that each process in the “wild” cluster has a valid PID file.
If a given process does not have a valid PID file, you must create one for it.
See Section 3.5.2.3, “Verify All Cluster Process PID Files”, for a more detailed explanation and examples.
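Creating a missing PID file amounts to writing the daemon's process ID into a file in the node's data directory. A minimal sketch follows; the file-name pattern ndb_nodeid.pid and the data-directory location are assumptions, so check Section 3.5.2.3 for the names your installation actually expects:

```shell
# Write a daemon's process ID into a PID file for a cluster node.
write_pid_file() {
    datadir="$1"; nodeid="$2"; pid="$3"
    echo "$pid" > "$datadir/ndb_${nodeid}.pid"
}

# On a real host the PID would come from ps or pgrep output for the node's
# (non-angel) daemon process; a temporary directory and a dummy PID are
# used here purely for illustration.
tmpdir=$(mktemp -d)
write_pid_file "$tmpdir" 5 12345
cat "$tmpdir/ndb_5.pid"
```
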
Test and perform migration of “wild” cluster.
Perform a test run of the proposed migration using
import cluster with the
--dryrun option, which causes MySQL Cluster Manager to
check for errors, but not actually migrate any processes
or data.
Correct any errors found using
--dryrun. Repeat the dry run from the
previous step to ensure that no errors were missed.
When the dry run no longer reports any errors, you can
perform the migration using import
cluster, but without the
--dryrun option.
As discussed previously (see
Section 3.5.1, “Importing a Cluster Into MySQL Cluster Manager: Basic Procedure”), importing
a standalone or “wild” cluster that was created
without the use of MySQL Cluster Manager into the manager requires the
completion of four major tasks: creating a cluster in MySQL Cluster Manager and
updating its configuration so that it matches that of the
“wild” cluster; preparing the “wild”
cluster for MySQL Cluster Manager control; verifying all PID files for cluster
processes; and performing a dry run followed by the actual import
using the import cluster
command. The example provided over the next few sections shows
all steps required to import a small, standalone MySQL Cluster
into MySQL Cluster Manager.
Sample cluster used in example. The “wild” cluster used in this example consists of four nodes—one management node, one SQL node, and two data nodes running ndbd. Each of these nodes resides on one of four hosts, all of which are running a recent server release of a typical Linux distribution. The host name for each of these hosts is shown in the following table:
| Node type (executable) | Host name |
|---|---|
| Management node (ndb_mgmd) | alpha |
| Data node (ndbd) | beta |
| Data node (ndbd) | gamma |
| SQL node (mysqld) | delta |
We assume that these hosts are on a dedicated network or subnet,
and that each of them is running only the MySQL Cluster binaries
and applications providing required system and network services.
We assume on each host that the MySQL Cluster software has been
installed from a release binary archive (see
Installing an NDB Cluster Binary Release on Linux). We also
assume that the management node is using
/var/lib/mysql-cluster/config.ini as the
cluster's global configuration file, which is shown here:
[ndbd default]
DataMemory= 16G
IndexMemory= 12G
NoOfReplicas= 2

[ndb_mgmd]
HostName=alpha
NodeId=50

[ndbd]
NodeId=5
HostName=beta
DataDir=/var/lib/mysql-cluster

[ndbd]
NodeId=6
HostName=gamma
DataDir=/var/lib/mysql-cluster

[mysqld]
NodeId=100
HostName=delta

[ndbapi]
NodeId=101
The objective for this example is to bring this cluster, including all of its processes and data, under MySQL Cluster Manager control. This configuration also provides for a “free” SQL node or NDB API application not bound to any particular host; we account for this in the example.
The first task when preparing to import a standalone MySQL Cluster into MySQL Cluster Manager is to create a “target” cluster. Once this is done, we modify the target's configuration until it matches that of the “wild” cluster that we want to import. At a later point in the example, we also show how to test the configuration in a dry run before attempting to perform the actual import.
To create and then configure the target cluster, follow the steps listed here:
Install MySQL Cluster Manager and start mcmd on all
hosts; we assume that you have installed MySQL Cluster Manager to the
recommended location, in this case the directory
/opt/mcm-1.3.6. (See
Section 2.3, “MySQL Cluster Manager Installation”, for more
information.) Once you have done this, you can start the
mcm client (see
Section 3.3, “Starting the MySQL Cluster Manager Client”) on any one of
these hosts to perform the next few steps.
Create a MySQL Cluster Manager site encompassing all four of these hosts,
using the create site
command, as shown here:
mcm> create site --hosts=alpha,beta,gamma,delta newsite;
+---------------------------+
| Command result |
+---------------------------+
| Site created successfully |
+---------------------------+
1 row in set (0.15 sec)
We have named this site newsite. You
should be able to see it listed in the output of the
list sites command,
similar to what is shown here:
mcm> list sites;
+---------+------+-------+------------------------+
| Site | Port | Local | Hosts |
+---------+------+-------+------------------------+
| newsite | 1862 | Local | alpha,beta,gamma,delta |
+---------+------+-------+------------------------+
1 row in set (0.01 sec)
Add a MySQL Cluster Manager package referencing the MySQL Cluster
binaries, using the add
package command; this command's
--basedir
option can be used to point to the correct location. The
command shown here creates such a package, named
newpackage:
mcm> add package --basedir=/usr/local/mysql newpackage;
+----------------------------+
| Command result |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.70 sec)
You do not need to include the bin
directory containing the MySQL Cluster executables in the
--basedir path. Since the executables are
in /usr/local/mysql/bin, it is
sufficient to specify
/usr/local/mysql; MySQL Cluster Manager automatically
checks for the binaries in a bin
directory within the one specified by
--basedir.
Create the target cluster including at least some of the
same processes and hosts used by the standalone cluster.
Do not include any processes or hosts that are
not part of this cluster. In order to prevent
potentially disruptive process or cluster operations from
interfering by accident with the import process, it is
strongly recommended that you create the cluster for
import, using the
--import
option for the create
cluster command.
You must also take care to preserve the correct node ID
(as listed in the config.ini file
shown previously) for each node. In MySQL Cluster Manager 1.3.1 and later,
using the --import option allows you to
specify node IDs under 49 for nodes other than data nodes,
which is otherwise prevented when using create
cluster (the restriction has been lifted since
MySQL Cluster Manager 1.3.4).
The following command creates the cluster
newcluster for import, and includes the
management and data nodes, but not the SQL or
“free” API node (which we add in the next
step):
mcm> create cluster --import --package=newpackage \
--processhosts=ndb_mgmd:50@alpha,ndbd:1@beta,ndbd:2@gamma \
newcluster;
+------------------------------+
| Command result |
+------------------------------+
| Cluster created successfully |
+------------------------------+
1 row in set (0.96 sec)
You can verify that the cluster was created correctly by
checking the output of show
status with the
--process
(-r) option, like this:
mcm> show status -r newcluster;
+--------+----------+-------+--------+-----------+------------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+-------+--------+-----------+------------+
| 50 | ndb_mgmd | alpha | import | | newpackage |
| 5 | ndbd | beta | import | n/a | newpackage |
| 6 | ndbd | gamma | import | n/a | newpackage |
+--------+----------+-------+--------+-----------+------------+
3 rows in set (0.01 sec)
If necessary, add any remaining processes and hosts from
the “wild” cluster not included in the
previous step using one or more add
process commands. We have not yet accounted for
2 of the nodes from the wild cluster: the SQL node with
node ID 100, on host delta; and the API
node which has node ID 101, and is not bound to any
specific host. You can use the following command to add
both of these processes to newcluster:
mcm> add process --processhosts=mysqld:100@delta,ndbapi:101@* newcluster;
+----------------------------+
| Command result |
+----------------------------+
| Process added successfully |
+----------------------------+
1 row in set (0.41 sec)
Once again checking the output from show
status -r, we see that the
mysqld and ndbapi
processes were added as expected:
mcm> show status -r newcluster;
+--------+----------+-------+--------+-----------+------------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+-------+--------+-----------+------------+
| 50 | ndb_mgmd | alpha | import | | newpackage |
| 5 | ndbd | beta | import | n/a | newpackage |
| 6 | ndbd | gamma | import | n/a | newpackage |
| 100 | mysqld | delta | import | | newpackage |
| 101 | ndbapi | * | import | | |
+--------+----------+-------+--------+-----------+------------+
5 rows in set (0.08 sec)
You can also see that, since newcluster
was created using the create
cluster command's
--import
option, the status of all processes in this
cluster—including those we just added—is
import. This means we cannot yet start
newcluster or any of its processes, as
shown here:
mcm> start process 50 newcluster;
ERROR 5317 (00MGR): Unable to perform operation on cluster created for import
mcm> start cluster newcluster;
ERROR 5317 (00MGR): Unable to perform operation on cluster created for import
The import status and its effects on
newcluster and its cluster processes
persist until we have completed importing another cluster
into newcluster.
Following the execution of the add
process command shown previously, the target
newcluster cluster now has the same
processes, with the same node IDs, and on the same hosts
as the original standalone cluster. We are ready to
proceed to the next step.
Now it is necessary to duplicate the wild cluster's
configuration attributes in the target cluster. In MySQL Cluster Manager
1.3.1 and later, you can handle most of these using the
import config command, as
shown here:
mcm> import config --dryrun newcluster;
+---------------------------------------------------------------------------+
| Command result |
+---------------------------------------------------------------------------+
| Import checks passed. Please check log for settings that will be applied. |
+---------------------------------------------------------------------------+
1 row in set (5.36 sec)
Before executing this command it is necessary to set any
non-default ports for ndb_mgmd and
mysqld processes using the
set command in the
mcm client.
As indicated by the output from
import
config --dryrun, you can check the agent log file
(mcmd.log), which by default is created in the
MySQL Cluster Manager installation directory, to see the configuration
attributes and values that the command copies to
newcluster when run without the
--dryrun option.
If you open this file in a text editor, you can locate a
series of set commands
that would accomplish this task, similar to what is shown
here (emphasized text):
2014-03-14 16:05:11.896: (message) [T0x1ad12a0 CMGR ]: Got new message mgr_import_configvalues {84880f7a 35 0}
2014-03-14 16:05:11.896: (message) [T0x1ad12a0 CMGR ]: Got new message mgr_import_configvalues {84880f7a 36 0}
2014-03-14 16:05:11.896: (message) [T0x1ad12a0 CMGR ]: Got new message mgr_import_configvalues {84880f7a 37 0}
2014-03-14 16:05:13.698: (message) [T0x7f4fb80171a0 RECFG]: All utility process have finished
2014-03-14 16:05:13.698: (message) [T0x7f4fb80171a0 RECFG]: Process started : /usr/local/mysql/bin/mysqld --no-defaults --help --verbose
2014-03-14 16:05:13.698: (message) [T0x7f4fb80171a0 RECFG]: Spawning mysqld --nodefaults --help --verbose asynchronously
2014-03-14 16:05:13.904: (message) [T0x7f4fb80171a0 RECFG]: Successfully pulled default configuration from mysqld 100
2014-03-14 16:05:13.905: (warning) [T0x7f4fb80171a0 RECFG]: Failed to remove evsource!
2014-03-14 16:05:15.719: (message) [T0x7f4fb80171a0 RECFG]: All utility process have finished
2014-03-14 16:05:15.725: (message) [T0x7f4fb80171a0 RECFG]: Applying mysqld configuration to cluster...
2014-03-14 16:05:16.186: (message) [T0x1ad12a0 CMGR ]: Got new message mgr_import_configvalues {84880f7a 38 0}
2014-03-14 16:05:16.187: (message) [T0x1ad12a0 CMGR ]: Got new message x_trans {84880f7a 39 0}
2014-03-14 16:05:16.286: (message) [T0x1ad12a0 CMGR ]: Got new message x_trans {84880f7a 40 0}
2014-03-14 16:05:16.286: (message) [T0x7f4fb80171a0 RECFG]: The following will be applied to the current cluster config:
set DataDir:ndb_mgmd:50="" newcluster
set IndexMemory:ndbd:5=1073741824 newcluster
set DataMemory:ndbd:5=1073741824 newcluster
set DataDir:ndbd:5=/usr/local/mysql/mysql-cluster/data newcluster
set ThreadConfig:ndbd:5="" newcluster
set IndexMemory:ndbd:6=1073741824 newcluster
set DataMemory:ndbd:6=1073741824 newcluster
set DataDir:ndbd:6=/usr/local/mysql/mysql-cluster/data newcluster
set ThreadConfig:ndbd:6="" newcluster
set basedir:mysqld:100=/usr/local/mysql newcluster
set character_sets_dir:mysqld:100=/usr/local/mysql/share/charsets newcluster
set datadir:mysqld:100=/usr/local/mysql/data newcluster
set general_log_file:mysqld:100=/usr/local/mysql/data/delta.log newcluster
set lc_messages_dir:mysqld:100=/usr/local/mysql/share newcluster
set log_error:mysqld:100=/usr/local/mysql/data/delta.err newcluster
set ndb_connectstring:mysqld:100=alpha newcluster
set ndb_mgmd_host:mysqld:100=alpha newcluster
set optimizer_trace:mysqld:100=enabled=off,one_line=off newcluster
set pid_file:mysqld:100=/usr/local/mysql/data/delta.pid newcluster
set plugin_dir:mysqld:100=/usr/local/mysql/lib/plugin newcluster
set report_port:mysqld:100=3306 newcluster
set slow_query_log_file:mysqld:100=/usr/local/mysql/data/delta-slow.log newcluster
set sql_mode:mysqld:100=STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION newcluster
Assuming that the dry run was successful, you should now be able to import the wild cluster's configuration into newcluster, with the command and a result similar to what is shown here:
mcm> import config newcluster;
+------------------------------------------------------------------------------------------------------------------+
| Command result |
+------------------------------------------------------------------------------------------------------------------+
| Configuration imported successfully. Please manually verify plugin options, abstraction level and default values |
+------------------------------------------------------------------------------------------------------------------+
You should check the log from the dry run and the
resulting configuration of newcluster
carefully against the configuration of the wild cluster.
If you find any inconsistencies, you must correct these in
newcluster using the appropriate
set commands afterwards.
Manual configuration import (MySQL Cluster Manager 1.3.0).
In MySQL Cluster Manager 1.3.0, which does not support the
import config command, it
is necessary to copy the wild cluster's configuration
manually, using set
commands in the mcm client (once you have
obtained the values of any attributes that differ from their
defaults). The remainder of this section applies primarily
to MySQL Cluster Manager 1.3.0 and the process described here is generally
not needed in MySQL Cluster Manager 1.3.1 and later.
MySQL Cluster global configuration data is stored in a file
that is usually (but not always) named
config.ini. Its location on a management
node host is arbitrary (there is no default location for it),
but if this is not already known, you can determine it by
checking—for example, on a typical Linux
system—the output of ps
for the --config-file option
value that the management node was started with, shown with
emphasized text in the output:
shell> ps ax | grep ndb_mgmd
18851 ? Ssl 0:00 ./ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini
18922 pts/4 S+ 0:00 grep --color=auto ndb_mgmd
This file uses INI format to store global
configuration information, and is thus easy to read, or to
parse with a script. We start the setup of the target
cluster's configuration by checking each section of this
file in turn. The first section is repeated here:
[ndbd default]
DataMemory= 16G
IndexMemory= 12G
NoOfReplicas= 2
The [ndbd default] heading indicates that
all attributes defined in this section apply to all cluster
data nodes. We can set all three attributes listed in this
section of the file for all data nodes in
newcluster, using the
set command shown here:
mcm> set DataMemory:ndbd=16G,IndexMemory:ndbd=12G,NoOfReplicas:ndbd=2 newcluster;
+-----------------------------------+
| Command result |
+-----------------------------------+
| Cluster reconfigured successfully |
+-----------------------------------+
1 row in set (0.36 sec)
You can verify that the desired changes have taken effect
using the get command, as
shown here:
mcm> get DataMemory:ndbd,IndexMemory:ndbd,NoOfReplicas:ndbd newcluster;
+--------------+-------+----------+---------+----------+---------+---------+---------+
| Name | Value | Process1 | NodeId1 | Process2 | NodeId2 | Level | Comment |
+--------------+-------+----------+---------+----------+---------+---------+---------+
| DataMemory | 16G | ndbd | 5 | | | Process | |
| IndexMemory | 12G | ndbd | 5 | | | Process | |
| NoOfReplicas | 2 | ndbd | 5 | | | Process | |
| DataMemory | 16G | ndbd | 6 | | | Process | |
| IndexMemory | 12G | ndbd | 6 | | | Process | |
| NoOfReplicas | 2 | ndbd | 6 | | | Process | |
+--------------+-------+----------+---------+----------+---------+---------+---------+
6 rows in set (0.07 sec)
The next section in the file is shown here:
[ndb_mgmd]
HostName=alpha
NodeId=50
This section of the file applies to the management node. We
set its NodeId and
HostName attributes
previously, when we created newcluster. No
further changes are required at this time.
The next two sections of the config.ini
file, shown here, contain configuration values specific to
each of the data nodes:
[ndbd]
NodeId=5
HostName=beta
DataDir=/var/lib/mysql-cluster

[ndbd]
NodeId=6
HostName=gamma
DataDir=/var/lib/mysql-cluster
As was the case for the management node, we already provided
the correct node IDs and host names for the data nodes when we
created newcluster, so only the
DataDir attribute
remains to be set. We can accomplish this by executing the
following command in the mcm client:
mcm> set DataDir:ndbd:5=/var/lib/mysql-cluster,DataDir:ndbd:6=/var/lib/mysql-cluster \
newcluster;
+-----------------------------------+
| Command result |
+-----------------------------------+
| Cluster reconfigured successfully |
+-----------------------------------+
1 row in set (0.42 sec)
You may have noticed that we could have set the
DataDir attribute on
the process level using the shorter and simpler command
set DataDir:ndbd=/var/lib/mysql-cluster
newcluster, but since this attribute was defined
individually for each data node in the original configuration,
we match this scope in the new configuration by setting this
attribute for each ndbd instance instead.
Once again, we check the result using the
mcm client
get command, as shown here:
mcm> get DataDir:ndbd newcluster;
+---------+------------------------+----------+---------+----------+---------+-------+---------+
| Name | Value | Process1 | NodeId1 | Process2 | NodeId2 | Level | Comment |
+---------+------------------------+----------+---------+----------+---------+-------+---------+
| DataDir | /var/lib/mysql-cluster | ndbd | 5 | | | | |
| DataDir | /var/lib/mysql-cluster | ndbd | 6 | | | | |
+---------+------------------------+----------+---------+----------+---------+-------+---------+
2 rows in set (0.01 sec)
Configuration attributes for the SQL node are contained in the next section of the file, shown here:
[mysqld]
NodeId=100
HostName=delta
The NodeId and
HostName attributes
were already set when we added the mysqld
process to newcluster, so no additional
set commands are required at
this point. Keep in mind that there may be
additional local configuration values for this
mysqld that must be accounted for in the
configuration we are creating for
newcluster; we discuss how to
determine these values on the SQL node later in this section.
The remaining section of the file, shown here, defines attributes for a “free” API node that is not required to connect from any particular host:
[ndbapi]
NodeId=101
We have already set the NodeId and there is
no need for a HostName for a free process.
There are no other attributes that need to be set for this
node.
For more information about the MySQL
config.ini global configuration file, see
NDB Cluster Configuration Files, and
NDB Cluster Configuration: Basic Example.
As mentioned earlier in this section, each
mysqld process (SQL node) may have, in
addition to any attributes set in
config.ini, its own configuration data in
the form of system variables which are specific to that
mysqld. Such options can be set in two ways:
in the server's my.cnf or
my.ini option file, or on the command line
when mysqld is started.
Because the initial values of many options can be changed at
runtime, it is recommended that, rather than attempting to
read the my.cnf or
my.ini file, you check the values of
all system variables on each SQL node “live” in
the mysql client by examining the output of
the SHOW VARIABLES statement,
and execute set commands
setting each of these values where it differs from the default
for that variable on that SQL node.
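The relevant lines of the SHOW VARIABLES output can be turned into mcm set commands mechanically. The following is a rough sketch only: it assumes tab-separated output such as that produced by mysql -N -B -e "SHOW VARIABLES", uses the SQL node ID (100) and cluster name from this example, and the two sample variables are placeholders for whichever values actually differ from their defaults on your SQL node.

```shell
# Hedged sketch: convert "variable<TAB>value" lines (as produced by
# mysql -N -B -e "SHOW VARIABLES") into mcm set commands for SQL node 100
# of newcluster. The two variables below are placeholders.
printf 'max_connections\t200\nsort_buffer_size\t1048576\n' |
awk -F'\t' '{ printf "set %s:mysqld:100=%s newcluster;\n", $1, $2 }'
```

The generated lines can then be saved to a script file and executed by the mcm client, as described next.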
The mcm client can execute a script file
containing client commands. The contents of such a script,
named my-commands.mcm, which contains all
commands we executed to create and configure
newcluster, are shown here:
create cluster --import --package=newpackage --processhosts=ndb_mgmd:50@alpha,ndbd:5@beta,ndbd:6@gamma newcluster;
add process --processhosts=mysqld:100@delta,ndbapi:101@* newcluster;
set DataMemory:ndbd=16G,IndexMemory:ndbd=12G,NoOfReplicas:ndbd=2 newcluster;
set DataDir:ndbd:5=/var/lib/mysql-cluster,DataDir:ndbd:6=/var/lib/mysql-cluster newcluster;
You can run such a script by invoking the client from the command line with a redirection operator, like this:
shell> mcm < my-commands.mcm
The name of the script file is completely arbitrary. It must
contain valid mcm client commands or
comments only. (A comment is delimited by a
# character, and extends from the point in
the line where this is found to the end of the line.) Any
valid mcm client command can be used in
such a file. mcm must be able to read the
file, but the file need not be executable, or readable by any
other users.
The next step in the import process is to prepare the
“wild” cluster for migration. This requires
creating an mcmd user account with root
privileges on all hosts in the cluster; killing any data node
angel processes that may be running; restarting all management
nodes without configuration caching; removing cluster
processes from control by any system service management
facility. More detailed information about performing these
tasks is provided in the remainder of this section.
Before proceeding with any migration, it is strongly recommended
that you take a backup using the ndb_mgm
client's START BACKUP command.
MySQL Cluster Manager acts through a MySQL user named
mcmd on each SQL node. It is therefore
necessary to create this user and grant root privileges to
it. To do this, log in to the SQL node running on host
delta and execute in the
mysql client the SQL statements shown
here:
CREATE USER 'mcmd'@'delta' IDENTIFIED BY 'super';
GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'delta' WITH GRANT OPTION;
Keep in mind that, if the “wild” cluster has
more than one SQL node, you must create the
mcmd user on every one of these nodes.
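When there are several SQL nodes, the required statements can be generated once per host. The sketch below is only an illustration under the assumption that the host list is known in advance; this example cluster has a single SQL node host, delta. Feed the output to the mysql client on each corresponding node.

```shell
# Hedged sketch: emit the CREATE USER/GRANT statements for each SQL node
# host. The host list and password are assumptions taken from this example;
# extend the list as appropriate for your cluster.
for host in delta; do
  printf "CREATE USER 'mcmd'@'%s' IDENTIFIED BY 'super';\n" "$host"
  printf "GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'%s' WITH GRANT OPTION;\n" "$host"
done
```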
Kill each data node angel process using the system's
facility for doing so. Do not kill any non-angel data node
daemons. On a Linux system, you can identify angel
processes by matching their process IDs against the parent
process IDs of the remaining ndbd processes in the
output of ps executed on
host beta of the example cluster, as
shown here, with the relevant process IDs shown in
emphasized text:
shell> ps -ef | grep ndbd
jon 2023 1 0 18:46 ? 00:00:00 ./ndbd -c alpha
jon 2024 2023 1 18:46 ? 00:00:00 ./ndbd -c alpha
jon 2124 1819 0 18:46 pts/2 00:00:00 grep --color=auto ndbd
Use the kill command to terminate the process with the indicated process ID, like this:
shell> kill -9 2023
Verify that the angel process has been killed, and that only one of the two original ndbd processes remain, by issuing ps again, as shown here:
shell> ps -ef | grep ndbd
jon 2024 1 1 18:46 ? 00:00:01 ./ndbd -c alpha
jon 2150 1819 0 18:47 pts/2 00:00:00 grep --color=auto ndbd
Now repeat this process from a login shell on host
gamma, as shown here:
shell> ps -ef | grep ndbd
jon 2066 1 0 18:46 ? 00:00:00 ./ndbd -c alpha
jon 2067 2066 1 18:46 ? 00:00:00 ./ndbd -c alpha
jon 3712 1704 0 18:46 pts/2 00:00:00 grep --color=auto ndbd
shell> kill -9 2066
shell> ps -ef | grep ndbd
jon 2067 1 1 18:46 ? 00:00:01 ./ndbd -c alpha
jon 2150 1819 0 18:47 pts/2 00:00:00 grep --color=auto ndbd
The wild cluster's data nodes are now ready for migration.
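On hosts running many data node processes, picking out the angel PIDs by eye is error-prone. As a minimal sketch, they can be computed from ps output instead: an angel's PID is one that also appears as the parent PID of another ndbd process. The here-document below mirrors the ps listing shown above; on a live host, replace it with the output of ps -eo pid,ppid,comm.

```shell
# Hedged sketch: given `ps -eo pid,ppid,comm` output, print the PIDs of
# ndbd angel processes (an angel's PID appears as the PPID of another
# ndbd process). Sample input stands in for live ps output.
awk '$3 == "ndbd" { is_ndbd[$1] = 1; parent[$2] = 1 }
     END { for (p in is_ndbd) if (p in parent) print p }' <<'EOF'
 2023     1 ndbd
 2024  2023 ndbd
EOF
```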
Kill and restart each management node process. When
restarting ndb_mgmd, its configuration
cache must be disabled; since this is enabled by default,
you must start the management server with
--config-cache=false, in
addition to any other options that it was previously
started with.
Do not use 0 or
OFF for the value of the
--config-cache option when restarting
ndb_mgmd in this step. Using either
of these values instead of false at
this time causes the migration of the management node
process to fail at later point in the importation
process.
On Linux, we can once again use
ps to obtain the
information we need to accomplish this, this time in a
shell on host alpha:
shell> ps -ef | grep ndb_mgmd
jon 16005 1 1 18:46 ? 00:00:09 ./ndb_mgmd -f /etc/mysql-cluster/config.ini
jon 16401 1819 0 18:58 pts/2 00:00:00 grep --color=auto ndb_mgmd
The process ID is 16005, and the management node was
started with the -f option (the short
form for --config-file).
First, terminate the management node process using
kill, as shown here, with
the process ID obtained from
ps previously:
shell> kill -9 16005
Verify that the management node process was killed, like this:
shell> ps -ef | grep ndb_mgmd
jon 16532 1819 0 19:03 pts/2 00:00:00 grep --color=auto ndb_mgmd
Now restart the management node with the same options that it was started with previously, adding --config-cache=false to disable the configuration cache. Change to the directory where ndb_mgmd is located, and restart it, like this:
shell> ./ndb_mgmd -f /etc/mysql-cluster/config.ini --config-cache=false
MySQL Cluster Management Server mysql-5.6.24-ndb-7.4.6
2013-12-06 19:16:08 [MgmtSrvr] INFO -- Skipping check of config directory since
config cache is disabled.
Verify that the process is running as expected, using ps:
shell> ps -ef | grep ndb_mgmd
jon 17066 1 1 19:16 ? 00:00:01 ./ndb_mgmd -f
/etc/mysql-cluster/config.ini --config-cache=false
jon 17311 1819 0 19:17 pts/2 00:00:00 grep --color=auto ndb_mgmd
The management node is now ready for migration.
While our example cluster has only a single management node, it is possible for a MySQL Cluster to have more than one. In such cases, you must stop and restart each management node process as just described in this step.
Any cluster processes that are under the control of a
system boot process management facility, such as
/etc/init.d on Linux systems or the
Services Manager on Windows platforms, should be removed
from this facility's control. Consult your system
operating documentation for information about how to do
this. Be sure not to stop any running cluster processes in
the course of doing so.
It is highly recommended that you take a complete backup
of the “wild” cluster before proceeding any
further, using the ndb_mgm
client's START BACKUP command:
ndb_mgm> START BACKUP
Waiting for completed, this may take several minutes
Node 5: Backup 1 started from node 1
Node 5: Backup 1 started from node 1 completed
StartGCP: 1338 StopGCP: 20134
#Records: 205044 #LogRecords: 10112
Data: 492807474 bytes Log: 317805 bytes
It may require some time for the backup to complete,
depending on the size of the cluster's data and logs.
For START BACKUP command options and
additional information, see
Using The NDB Cluster Management Client to Create a Backup.
You must verify that each process in the “wild” cluster has a valid PID file. For purposes of this discussion, a valid PID file has the following characteristics:
The file name is
ndb_node_id.pid,
where node_id is the node
ID used for this process.
The file is located in the data directory used by this process.
The first line of the file contains the process ID, and only the process ID.
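These checks can also be scripted. The following is a minimal sketch under the assumptions stated in its comments; check_pidfile is a hypothetical helper name, not part of MySQL Cluster Manager.

```shell
# Hedged sketch: validate a PID file against the rules above. check_pidfile
# is a hypothetical helper; it succeeds only when the file's first line is
# a single numeric token naming a process that is currently running (and
# visible to the invoking user).
check_pidfile() {
  pid=$(head -n 1 "$1" 2>/dev/null)
  case "$pid" in
    ''|*[!0-9]*) return 1 ;;   # missing file, empty, or non-numeric first line
  esac
  kill -0 "$pid" 2>/dev/null   # does a process with this PID exist?
}

# Example: check the data node PID file used on host beta
check_pidfile /var/lib/mysql-cluster/ndb_5.pid && echo valid || echo invalid
```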
To check the PID file for the management node process, log
into the system shell on host alpha,
and change to the management node's data directory.
If no data directory was specified, the PID file should have
been created in the directory where ndb_mgmd
was started; change to this directory instead. Then check to
see whether the PID file is present using your
system's tools for doing this. On Linux, you can use
the command shown here:
shell> ls ndb_1*
ndb_1_cluster.log ndb_1_out.log ndb_1.pid
Check the content of the matching
.pid file using a pager or text
editor. We use more for
this purpose here:
shell> more ndb_1.pid
17066
The number shown should match the ndb_mgmd process ID. We can check this on Linux as before, using ps:
shell> ps -ef | grep ndb_mgmd
jon 17066 1 1 19:16 ? 00:00:01 ./ndb_mgmd -f /etc/mysql-cluster/config.ini --config-cache=false
jon 17942 1819 0 19:17 pts/2 00:00:00 grep --color=auto ndb_mgmd
The management node PID file satisfies the requirements
listed at the beginning of this section. Next, we check
the PID files for the data nodes, on hosts
beta and gamma. Log
into a system shell on beta, then
obtain the process ID of the ndbd
process on this host, as shown here:
shell> ps -ef | grep ndbd
jon 2024 1 1 18:46 ? 00:00:01 ./ndbd -c alpha
jon 2150 1819 0 18:47 pts/2 00:00:00 grep --color=auto ndbd
We observed earlier (see
Section 3.5.2.1, “Creating and Configuring the Target Cluster”)
that this node's node ID is 5 and that its
DataDir is
/var/lib/mysql-cluster. Check in this
directory for the presence of a file named
ndb_5.pid:
shell> ls /var/lib/mysql-cluster/ndb_5.pid
ndb_5.pid
Now check the content of this file and make certain that it contains the process ID 2024 on the first line and no other content, like this:
shell> more /var/lib/mysql-cluster/ndb_5.pid
2024
Similarly, we locate and check the content of the PID file
for the remaining data node (node ID 6, data directory
/var/lib/mysql-cluster/) on host
gamma:
shell> ps -ef | grep ndbd
jon 2067 1 1 18:46 ? 00:00:01 ./ndbd -c alpha
jon 2150 1819 0 18:47 pts/2 00:00:00 grep --color=auto ndbd
shell> ls /var/lib/mysql-cluster/ndb_6.pid
ndb_6.pid
shell> more /var/lib/mysql-cluster/ndb_6.pid
2067
The PID file for this data node also meets our
requirements, so we are now ready to proceed to the
mysqld binary running on host
delta. We handle the PID file for this
process in the next step.
If a given process does not have a valid PID file, you
must create one for it, or, in some cases, modify the
existing one. This is most likely to be a concern when
checking PID files for mysqld
processes, due to the fact that the MySQL Server is
customarily started using the startup script
mysqld_safe, which can start the
mysqld binary with any number of
default options, including the
--pid-file option. We see
that this is the case when we check on host
delta for the running
mysqld process there (emphasized text):
shell> ps -ef | grep mysqld
jon 8782 8520 0 10:30 pts/3 00:00:00 /bin/sh ./mysqld_safe --ndbcluster --ndb-connectstring=alpha
jon 8893 8782 1 10:30 pts/3 00:00:00 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --plugin-dir=/usr/local/mysql/lib/plugin --ndbcluster --ndb-connectstring=alpha --log-error=/usr/local/mysql/data/delta.err --pid-file=/usr/local/mysql/data/delta.pid
jon 8947 8520 0 10:30 pts/3 00:00:00 grep --color=auto mysqld
shell> more /usr/local/mysql/data/delta.pid
8893
The PID file for the SQL node is in an acceptable location (the data directory) and has the correct content, but has the wrong name.
You can create a correct PID file in either of two
locations: in the process's data directory, or in the
directory
mcm_dir/clusters/cluster_name/pid/
on the same host as the
process, where mcm_dir is the
MySQL Cluster Manager installation directory, and
cluster_name is the name of the
cluster. In this case, since the existing PID file is
otherwise correct, it is probably easiest just to copy it
to a correctly named file in the same directory,
incorporating the node ID (100), like this:
shell> cp /usr/local/mysql/data/delta.pid /usr/local/mysql/data/ndb_100.pid
Another alternative is to create and write a completely new PID file to the proper location in the MySQL Cluster Manager installation directory, as shown here:
shell> echo '8893' > /opt/mcm-1.3.6/clusters/newcluster/pid/ndb_100.pid
shell> more /opt/mcm-1.3.6/clusters/newcluster/pid/ndb_100.pid
8893
ndbapi processes running under MySQL Cluster Manager do
not require PID files, so we have completed this step of
the import, and we should be ready for a test or
“dry run” of the migration. We perform this
test in the next step.
Testing and performing the migration of a standalone MySQL Cluster into MySQL Cluster Manager consists of the following steps:
Perform a test run of the proposed import using
import cluster with the
--dryrun option. When this option is
used, MySQL Cluster Manager checks for mismatched configuration
attributes, missing or invalid processes or hosts, missing
or invalid PID files, and other errors, and warns of any
it finds, but does not actually perform any migration of
processes or data.
mcm> import cluster --dryrun newcluster;
ERROR 5302 (00MGR): No access for user mcmd to mysqld 100 in cluster newcluster.
Please verify user access and grants adhere to documented requirements.
A step earlier was not completed correctly: although we
created an mcmd account on the SQL node, it
does not permit the agent to connect from the local host, as
is needed on all SQL nodes in the “wild” cluster
to bring them under control of MySQL Cluster Manager. In this case, there
is only one SQL node, running on delta.
Log into this SQL node as the MySQL
root user, and create the
mcmd account in the
mysql client, as shown here:
shell> ./mysql -uroot -p
Enter password: ************
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.24-ndb-7.4.6 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON *.*
    -> TO 'mcmd'@'localhost' IDENTIFIED BY 'super'
    -> WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW GRANTS FOR 'mcmd'@'localhost'\G
*************************** 1. row ***************************
Grants for mcmd@localhost: GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost'
IDENTIFIED BY PASSWORD '*F85A86E6F55A370C1A115F696A9AD71A7869DB81'
WITH GRANT OPTION
1 row in set (0.00 sec)
Having corrected this issue on the SQL node, we repeat the dry run in the mcm client:
mcm> import cluster --dryrun newcluster;
ERROR 5310 (00MGR): Process ndb_mgmd 50 reported 6 processes, while 5 processes
are configured for cluster newcluster
This error means that there are one or more cluster
processes not accounted for in the configuration of the
target cluster. Checking the contents of the file
/etc/mysql-cluster/config.ini on host
alpha, we see that we overlooked a
section in it earlier. This section is shown here:
[mysqld]
NodeId=102
To address this discrepancy, we need to add another
“free” ndbapi process to
newcluster, which we can do by executing the following
add process command in
the mcm client:
mcm> add process -R ndbapi:102@* newcluster;
+----------------------------+
| Command result |
+----------------------------+
| Process added successfully |
+----------------------------+
1 row in set (0.38 sec)
You can verify this by checking the output of
the show status
-r
command, as shown here:
mcm> show status -r newcluster;
+--------+----------+-------+--------+-----------+------------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+-------+--------+-----------+------------+
| 50 | ndb_mgmd | alpha | import | | newpackage |
| 5 | ndbd | beta | import | n/a | newpackage |
| 6 | ndbd | gamma | import | n/a | newpackage |
| 100 | mysqld | delta | import | | newpackage |
| 101 | ndbapi | * | import | | |
| 102 | ndbapi | * | import | | |
+--------+----------+-------+--------+-----------+------------+
6 rows in set (0.11 sec)
Now we can run another test, using
import cluster with the
--dryrun option as we did previously:
mcm> import cluster --dryrun newcluster;
Continue to correct any errors or other discrepancies
found using --dryrun, repeating the dry
run shown in the previous step to ensure that no errors
were missed. The following list contains some common
errors you may encounter, and their likely causes:
MySQL Cluster Manager requires a specific MySQL user and privileges to
manage SQL nodes. If the mcmd MySQL
user account is not set up properly, you may see
No access for user...,
Incorrect grants for user...,
or possibly other errors. See
Section 3.5.2.2, “Preparing the Standalone Cluster for Migration”.
As described previously, each cluster process (other
than a process whose type is
ndbapi) being brought under MySQL Cluster Manager
control must have a valid PID file. Missing, misnamed,
or invalid PID files can produce errors such as
PID file does not exist for
process..., PID ... is not
running ..., and PID ... is
type .... See
Section 3.5.2.3, “Verify All Cluster Process PID Files”.
Process version mismatches can also produce seemingly random errors whose cause can sometimes prove difficult to track down. Ensure that all nodes are supplied with the correct MySQL Cluster software, and that every node runs the same release and version of that software.
Each data node angel process in the standalone cluster
must be stopped prior to import. A running angel
process can cause errors such as Angel
process pid exists
... or Process
pid is an angel process for
.... See
Section 3.5.2.2, “Preparing the Standalone Cluster for Migration”.
The number of processes, their types, and the hosts
where they reside in the standalone cluster must be
reflected accurately when creating the target site,
package, and cluster for import. Otherwise, errors
such as Process
id reported
# processes
..., Process
id ... does not match
configured process ..., Process
id not configured
..., and Process
id does not match configured
process .... See
Section 3.5.2.1, “Creating and Configuring the Target Cluster”.
Other factors that can cause specific errors include processes in the wrong state, processes that were started with unsupported command-line options or without required options, and processes having the wrong process ID, or using the wrong node ID.
When import cluster
--dryrun no longer warns of any errors,
you can perform the import with the import
cluster command, this time omitting the
--dryrun option.
This section describes how to use the
NDB native backup and restore
functionality implemented in MySQL Cluster Manager to perform a number of common
tasks.
This section provides information about basic requirements for performing backup and restore operations using MySQL Cluster Manager.
Requirements for MySQL Cluster backup. Basic requirements for performing MySQL Cluster backups using MySQL Cluster Manager are minimal. At least one data node in each node group must be running, and there must be sufficient disk space on the node file systems. Partial backups are not supported.
Requirements for MySQL Cluster restore. Restoring a MySQL Cluster using MySQL Cluster Manager is subject to the following conditions:
A complete restore requires that all data nodes are up and running, and that all files belonging to a given backup are available.
A partial restore is possible, but must be specified as
such. This can be accomplished using the
restore cluster client
command with its --skip-nodeid option.
In the event that data nodes have been added to the cluster
since the backup was taken, only those data nodes for which
backup files exist are restored. In such cases data is not
automatically distributed to the new nodes, and, following
the restore, you must redistribute the data manually by
issuing an
ALTER
ONLINE TABLE ... REORGANIZE PARTITION statement in
the mysql client for each
NDB table in the cluster. See
Adding NDB Cluster Data Nodes Online: Basic procedure, for
more information.
This section describes backing up and restoring a MySQL Cluster,
with examples of complete and partial restore operations. Note
that the backup cluster and restore
cluster commands work with
NDB tables only; tables using other
MySQL storage engines (such as
InnoDB or
MyISAM) are ignored.
For purposes of example, we use a MySQL Cluster named
mycluster whose processes and status can be
seen here:
mcm> show status -r mycluster;
+--------+----------+----------+---------+-----------+-----------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+----------+---------+-----------+-----------+
| 49 | ndb_mgmd | tonfisk | running | | mypackage |
| 1 | ndbd | tonfisk | running | 0 | mypackage |
| 2 | ndbd | tonfisk | running | 0 | mypackage |
| 50 | mysqld | tonfisk | running | | mypackage |
| 51 | mysqld | tonfisk | running | | mypackage |
| 52 | ndbapi | *tonfisk | added | | |
| 53 | ndbapi | *tonfisk | added | | |
+--------+----------+----------+---------+-----------+-----------+
7 rows in set (0.08 sec)
You can see whether there are any existing backups of
mycluster using the
list backups command, as shown
here:
mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host | Timestamp | Comment |
+----------+--------+---------+---------------------+---------+
| 1 | 1 | tonfisk | 2012-12-04 12:03:52 | |
| 1 | 2 | tonfisk | 2012-12-04 12:03:52 | |
| 2 | 1 | tonfisk | 2012-12-04 12:04:15 | |
| 2 | 2 | tonfisk | 2012-12-04 12:04:15 | |
| 3 | 1 | tonfisk | 2012-12-04 12:17:41 | |
| 3 | 2 | tonfisk | 2012-12-04 12:17:41 | |
+----------+--------+---------+---------------------+---------+
6 rows in set (0.12 sec)
Simple backup.
To create a backup, use the backup
cluster command with the name of the cluster as an
argument, similar to what is shown here:
mcm> backup cluster mycluster;
+-------------------------------+
| Command result |
+-------------------------------+
| Backup completed successfully |
+-------------------------------+
1 row in set (3.31 sec)
backup cluster requires only the name of the
cluster to be backed up as an argument; for information about
additional options supported by this command, see
Section 4.7.2, “The backup cluster Command”. To verify that a new
backup of mycluster was created with a unique
ID, check the output of list
backups, as shown here (where the rows corresponding
to the new backup files are indicated with emphasized text):
mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host | Timestamp | Comment |
+----------+--------+---------+---------------------+---------+
| 1 | 1 | tonfisk | 2012-12-04 12:03:52 | |
| 1 | 2 | tonfisk | 2012-12-04 12:03:52 | |
| 2 | 1 | tonfisk | 2012-12-04 12:04:15 | |
| 2 | 2 | tonfisk | 2012-12-04 12:04:15 | |
| 3 | 1 | tonfisk | 2012-12-04 12:17:41 | |
| 3 | 2 | tonfisk | 2012-12-04 12:17:41 | |
| 4 | 1 | tonfisk | 2012-12-12 14:24:35 | |
| 4 | 2 | tonfisk | 2012-12-12 14:24:35 | |
+----------+--------+---------+---------------------+---------+
8 rows in set (0.04 sec)
If you attempt to create a backup of a MySQL Cluster in which
each node group does not have at least one data node running,
backup cluster fails with the
error Backup cannot be performed as processes are
stopped in cluster
cluster_name.
Simple complete restore. To perform a complete restore of a MySQL Cluster from a backup with a given ID, follow the steps listed here:
Identify the backup to be used.
In this example, we use the backup having the ID 4, which was
created for mycluster previously in this
section.
Wipe the MySQL Cluster data.
The simplest way to do this is to stop and then perform an
initial start of the cluster as shown here, using
mycluster:
mcm> stop cluster mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Cluster stopped successfully |
+------------------------------+
1 row in set (15.24 sec)

mcm> start cluster --initial mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Cluster started successfully |
+------------------------------+
1 row in set (34.47 sec)
Restore the backup.
This is done using the restore
cluster command, which requires the backup ID and
the name of the cluster as arguments. Thus, you can restore
backup 4 to mycluster as shown here:
mcm> restore cluster --backupid=4 mycluster;
+--------------------------------+
| Command result |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (16.78 sec)
Partial restore—missing images.
It is possible using MySQL Cluster Manager to perform a partial restore of a
MySQL Cluster—that is, to restore from a backup in which
backup images from one or more data nodes are not available.
This is required if we wish to restore
mycluster to backup number 6, since an
image for this backup is available only for node 1, as can be
seen in the output of list
backups in the mcm client
(emphasized text):
mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host | Timestamp | Comment |
+----------+--------+---------+---------------------+---------+
| 1 | 1 | tonfisk | 2012-12-04 12:03:52 | |
| 1 | 2 | tonfisk | 2012-12-04 12:03:52 | |
| 2 | 1 | tonfisk | 2012-12-04 12:04:15 | |
| 2 | 2 | tonfisk | 2012-12-04 12:04:15 | |
| 3 | 1 | tonfisk | 2012-12-04 12:17:41 | |
| 3 | 2 | tonfisk | 2012-12-04 12:17:41 | |
| 4 | 1 | tonfisk | 2012-12-12 14:24:35 | |
| 4 | 2 | tonfisk | 2012-12-12 14:24:35 | |
| 5 | 1 | tonfisk | 2012-12-12 14:31:31 | |
| 5 | 2 | tonfisk | 2012-12-12 14:31:31 | |
| 6 | 1 | tonfisk | 2012-12-12 14:32:09 | |
+----------+--------+---------+---------------------+---------+
11 rows in set (0.08 sec)
To perform a restore of only those nodes for which we have
images (in this case, node 1 only), we can use the
--skip-nodeid option when executing a
restore cluster command. This
option causes one or more nodes to be skipped when performing
the restore. Assuming that mycluster has been
cleared of data (as described earlier in this section), we can
perform a restore that skips node 2 as shown here:
mcm> restore cluster --backupid=6 --skip-nodeid=2 mycluster;
+--------------------------------+
| Command result |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (17.06 sec)
Because we excluded node 2 from the restore process, no data has
been distributed to it. To cause MySQL Cluster data to be
distributed to any such excluded or skipped nodes following a
partial restore, it is necessary to redistribute the data
manually by executing an
ALTER
ONLINE TABLE ... REORGANIZE PARTITION statement in the
mysql client for each
NDB table in the cluster. To obtain
a list of NDB tables from the
mysql client, you can use multiple
SHOW TABLES statements or a query
such as this one:
SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME)
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE='ndbcluster';
You can generate the necessary SQL statements using a more elaborate version of the query just shown, such as the one employed here:
mysql> SELECT
    ->   CONCAT('ALTER ONLINE TABLE `', TABLE_SCHEMA,
    ->   '`.`', TABLE_NAME, '` REORGANIZE PARTITION;')
    ->   AS Statement
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE ENGINE='ndbcluster';
+--------------------------------------------------------------------------+
| Statement                                                                |
+--------------------------------------------------------------------------+
| ALTER ONLINE TABLE `mysql`.`ndb_apply_status` REORGANIZE PARTITION;      |
| ALTER ONLINE TABLE `mysql`.`ndb_index_stat_head` REORGANIZE PARTITION;   |
| ALTER ONLINE TABLE `mysql`.`ndb_index_stat_sample` REORGANIZE PARTITION; |
| ALTER ONLINE TABLE `db1`.`n1` REORGANIZE PARTITION;                      |
| ALTER ONLINE TABLE `db1`.`n2` REORGANIZE PARTITION;                      |
| ALTER ONLINE TABLE `db1`.`n3` REORGANIZE PARTITION;                      |
| ALTER ONLINE TABLE `test`.`n1` REORGANIZE PARTITION;                     |
| ALTER ONLINE TABLE `test`.`n2` REORGANIZE PARTITION;                     |
| ALTER ONLINE TABLE `test`.`n3` REORGANIZE PARTITION;                     |
| ALTER ONLINE TABLE `test`.`n4` REORGANIZE PARTITION;                     |
+--------------------------------------------------------------------------+
10 rows in set (0.09 sec)
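If you prefer to prepare the statements in a script outside the server, the same transformation can be done in the shell. This is a sketch under the assumption of tab-separated "schema, table" input, such as the output of mysql -N -B -e "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE='ndbcluster'"; the two sample rows below merely stand in for that output.

```shell
# Hedged sketch: build the REORGANIZE PARTITION statements from a
# "schema<TAB>table" list. The sample rows are placeholders for the
# actual list of NDB tables in your cluster.
printf 'db1\tn1\ntest\tn2\n' |
awk -F'\t' '{ printf "ALTER ONLINE TABLE `%s`.`%s` REORGANIZE PARTITION;\n", $1, $2 }'
```

The resulting statements can then be executed in the mysql client on one of the cluster's SQL nodes.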
Partial restore—data nodes added.
A partial restore can also be performed when new data nodes
have been added to a MySQL Cluster following a backup. In this
case, you can exclude the new nodes using
--skip-nodeid when executing the
restore cluster command.
Consider the MySQL Cluster named mycluster
as shown in the output of the following
show status command:
mcm> show status -r mycluster;
+--------+----------+----------+---------+-----------+-----------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+----------+---------+-----------+-----------+
| 49 | ndb_mgmd | tonfisk | stopped | | mypackage |
| 1 | ndbd | tonfisk | stopped | 0 | mypackage |
| 2 | ndbd | tonfisk | stopped | 0 | mypackage |
| 50 | mysqld | tonfisk | stopped | | mypackage |
| 51 | mysqld | tonfisk | stopped | | mypackage |
| 52 | ndbapi | *tonfisk | added | | |
| 53 | ndbapi | *tonfisk | added | | |
+--------+----------+----------+---------+-----------+-----------+
7 rows in set (0.03 sec)
The output of list backups
shows us the available backup images for this cluster:
mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host | Timestamp | Comment |
+----------+--------+---------+---------------------+---------+
| 1 | 1 | tonfisk | 2012-12-04 12:03:52 | |
| 1 | 2 | tonfisk | 2012-12-04 12:03:52 | |
| 2 | 1 | tonfisk | 2012-12-04 12:04:15 | |
| 2 | 2 | tonfisk | 2012-12-04 12:04:15 | |
| 3 | 1 | tonfisk | 2012-12-04 12:17:41 | |
| 3 | 2 | tonfisk | 2012-12-04 12:17:41 | |
| 4 | 1 | tonfisk | 2012-12-12 14:24:35 | |
| 4 | 2 | tonfisk | 2012-12-12 14:24:35 | |
+----------+--------+---------+---------------------+---------+
8 rows in set (0.06 sec)
Now suppose that, at a later point in time, 2 data nodes have
been added to mycluster using an
add process command. The
show status output for
mycluster now looks like this:
mcm> show status -r mycluster;
+--------+----------+----------+---------+-----------+-----------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+----------+---------+-----------+-----------+
| 49 | ndb_mgmd | tonfisk | running | | mypackage |
| 1 | ndbd | tonfisk | running | 0 | mypackage |
| 2 | ndbd | tonfisk | running | 0 | mypackage |
| 50 | mysqld | tonfisk | running | | mypackage |
| 51 | mysqld | tonfisk | running | | mypackage |
| 52 | ndbapi | *tonfisk | added | | |
| 53 | ndbapi | *tonfisk | added | | |
| 3 | ndbd | tonfisk | running | 1 | mypackage |
| 4 | ndbd | tonfisk | running | 1 | mypackage |
+--------+----------+----------+---------+-----------+-----------+
9 rows in set (0.01 sec)
Since nodes 3 and 4 were not included in the backup, we need to
exclude them when performing the restore. You can cause
restore cluster to skip
multiple data nodes by specifying a comma-separated list of node
IDs with the --skip-nodeid option. Assume that
we have just cleared mycluster of MySQL
Cluster data using the mcm client commands
stop cluster and
start cluster
--initial as described previously in this
section; then we can restore mycluster (now
having 4 data nodes numbered 1, 2, 3, and 4) from backup number
4 (made when mycluster had only 2 data nodes
numbered 1 and 2) as shown here:
mcm> restore cluster --backupid=4 --skip-nodeid=3,4 mycluster;
+--------------------------------+
| Command result |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (17.61 sec)
No data is distributed to the skipped (new) nodes; you must
force nodes 3 and 4 to be included in a redistribution of the
data using
ALTER
ONLINE TABLE ... REORGANIZE PARTITION as described
previously in this section.
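Continuing this example, the redistribution is performed by connecting to one of the cluster's SQL nodes and issuing such a statement for each NDB table; for instance, for the table test.n1 shown earlier in this section:

mysql> ALTER ONLINE TABLE `test`.`n1` REORGANIZE PARTITION;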
This section explains how to back up configuration data for
mcmd agents and how to restore the backed-up
agent data. Used together with the backup
cluster command, the backup
agents command allows you to back up and restore a
complete cluster-plus-manager setup.
If no host names are given with the backup
agents command, backups are created for all agents of
the site:
mcm> backup agents mysite;
+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Agent backup created successfully |
+-----------------------------------+
1 row in set (0.07 sec)
To back up one or more specific agents, specify them with the
--hosts option:
mcm> backup agents --hosts=tonfisk mysite;
+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Agent backup created successfully |
+-----------------------------------+
1 row in set (0.07 sec)
If no site name is given, only the agent that the mcm client is connected to is backed up.
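For example, assuming the client is connected to the agent on tonfisk, that agent alone can be backed up by omitting the site name:

mcm> backup agents;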
The backup for each agent includes the following contents from the
agent repository (mcm_data folder):
The rep subfolder
The metadata files high_water_mark and
repchksum
The repository is locked while the backup is in progress, to
avoid creating an inconsistent backup. The backup for each agent
is created in a subfolder named
rep_backup/timestamp
under the agent's mcm_data folder, with
timestamp reflecting the time the
backup began. If you want the backup to be stored at another
location, create a soft link from mcm_data/rep_backup
to your desired storage location.
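For example, assuming a hypothetical storage location /backups/mcm on the same host (the path is an illustration only), the soft link could be set up like this before the first backup is taken:

mysql@tonfisk$ mkdir -p /backups/mcm
mysql@tonfisk$ ln -s /backups/mcm mcm_data/rep_backup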
To restore the backup for an agent:
Wipe the contents of the agent's
mcm_data/rep folder
Delete the metadata files
high_water_mark and
repchksum from the
mcm_data folder
Copy the contents of the
mcm_data/rep_backup/timestamp/rep
folder back into the mcm_data/rep
folder
Copy the metadata files high_water_mark
and repchksum from the
mcm_data/rep_backup/timestamp
folder back into the mcm_data folder
Restart the agent
The steps are illustrated below:
mysql@tonfisk$ cd mcm_data
mysql@tonfisk$ cp mcm_data/rep_backup/timestamp/rep/* ./rep/
mysql@tonfisk$ cp mcm_data/rep_backup/timestamp/high_water_mark ./
mysql@tonfisk$ cp mcm_data/rep_backup/timestamp/repchksum ./
mysql@tonfisk$ mcm1.3.6/bin/mcmd
The backup may be manually restored on a single agent, or on more than one. If the backup is restored for only one agent on, say, host A, host A contacts the other agents of the site and makes them recover their repositories from host A, using the usual mechanism for agent recovery. If the agents on all hosts are restored and restarted manually, the result is similar to a normal restart of all agents after they have been stopped at slightly different points in time.
If configuration changes have been made to the cluster since the
restored backup was created, the same changes must be made again
after the agent restores have been completed, to ensure that the
agents' configurations match those of the actual running cluster.
For example: suppose that, some time after a backup was taken, a set
MaxNoOfTables:ndbmtd=500 mycluster command was issued,
and soon afterward the agent repository became corrupted;
after the agent backup has been restored, the same
set command has to be rerun in order to update
the mcmd agents' configurations. While the
command does not actually change anything on the cluster
itself, after it has been run, a rolling restart of the cluster
processes using the restart
cluster command is still required.
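To continue that example, after the restored agents have been restarted, the configuration change is reapplied and the rolling restart performed like this:

mcm> set MaxNoOfTables:ndbmtd=500 mycluster;
mcm> restart cluster mycluster;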
This section provides sample steps for setting up MySQL Cluster replication with a single replication channel using the MySQL Cluster Manager.
Before trying the following steps, it is recommended that you first read NDB Cluster Replication to familiarize yourself with the concepts, requirements, operations, and limitations of MySQL Cluster replication.
Create and start a master cluster:
mcm> create site --hosts=tonfisk msite;
mcm> add package --basedir=/usr/local/cluster-mgt/cluster-7.3.2 7.3.2;
mcm> create cluster -P 7.3.2 -R \
ndb_mgmd@tonfisk,ndbmtd@tonfisk,ndbmtd@tonfisk,mysqld@tonfisk,mysqld@tonfisk,ndbapi@*,ndbapi@* \
master;
mcm> set portnumber:ndb_mgmd=4000 master;
mcm> set port:mysqld:51=3307 master;
mcm> set port:mysqld:50=3306 master;
mcm> set server_id:mysqld:50=100 master;
mcm> set log_bin:mysqld:50=binlog master;
mcm> set binlog_format:mysqld:50=ROW master;
mcm> set ndb_connectstring:mysqld:50=tonfisk:4000 master;
mcm> start cluster master;
Create and start a slave cluster (we begin with creating a new site called “ssite” just for the slave cluster; you can also skip that and put the master and slave cluster hosts under the same site instead):
mcm> create site --hosts=flundra ssite;
mcm> add package --basedir=/usr/local/cluster-mgt/cluster-7.3.2 7.3.2;
mcm> create cluster -P 7.3.2 -R \
ndb_mgmd@flundra,ndbmtd@flundra,ndbmtd@flundra,mysqld@flundra,mysqld@flundra,ndbapi@*,ndbapi@* \
slave;
mcm> set portnumber:ndb_mgmd=4000 slave;
mcm> set port:mysqld:50=3306 slave;
mcm> set port:mysqld:51=3307 slave;
mcm> set server_id:mysqld:50=101 slave;
mcm> set ndb_connectstring:mysqld:50=flundra:4000 slave;
mcm> set slave_skip_errors:mysqld=all slave;
mcm> start cluster slave;
Create a slave account (with the user name
“myslave” and password “mypw”) on
the master cluster with the appropriate privilege by logging
into the master cluster client
(mysql) and
issuing the following statement:
mysqlM> GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'flundra'
     ->   IDENTIFIED BY 'mypw';
Log into the slave cluster client
(mysql) and
issue the following statement:
mysqlS> CHANGE MASTER TO
     ->   MASTER_HOST='tonfisk',
     ->   MASTER_PORT=3306,
     ->   MASTER_USER='myslave',
     ->   MASTER_PASSWORD='mypw';
Start replication by issuing the following statement with the slave cluster client:
mysqlS>START SLAVE;
The above example assumes that the master and slave clusters are created at about the same time, with no data on either before replication starts. If the master cluster has already been operating and has data on it when the slave cluster is created, then after step 3 above, follow these steps to transfer the data from the master cluster to the slave cluster and prepare the slave cluster for replication:
Back up your master cluster using the
backup cluster command of
MySQL Cluster Manager:
mcm> backup cluster master;
Only NDB tables are backed up
by the command; tables using other MySQL storage engines
are ignored.
Look up the backup ID of the backup you just made by listing all backups for the master cluster:
mcm> list backups master;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host | Timestamp | Comment |
+----------+--------+---------+---------------------+---------+
| 1 | 1 | tonfisk | 2014-10-17 20:03:23 | |
| 1 | 2 | tonfisk | 2014-10-17 20:03:23 | |
| 2 | 1 | tonfisk | 2014-10-17 20:09:00 | |
| 2 | 2 | tonfisk | 2014-10-17 20:09:00 | |
+----------+--------+---------+---------------------+---------+
From the output, you can see that the latest backup you created has the backup ID “2”, and backup data exists for node “1” and “2”.
Using the backup ID and the related node IDs, identify the
backup files just created under
/mcm_data/clusters/cluster_name/node_id/data/BACKUP/BACKUP-backup_id/
in the master cluster's installation directory (in this
case, the files under
/mcm_data/clusters/master/1/data/BACKUP/BACKUP-2
and
/mcm_data/clusters/master/2/data/BACKUP/BACKUP-2),
and copy them over to the equivalent places for the slave
cluster (in this case,
/mcm_data/clusters/slave/1/data/BACKUP/BACKUP-2
and
/mcm_data/clusters/slave/2/data/BACKUP/BACKUP-2
under the slave cluster's installation directory). After the
copying is finished, use the following command to check that
the backup is now available for the slave cluster:
mcm> list backups slave;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host | Timestamp | Comment |
+----------+--------+---------+---------------------+---------+
| 2 | 1 | flundra | 2014-10-17 21:19:00 | |
| 2 | 2 | flundra | 2014-10-17 21:19:00 | |
+----------+--------+---------+---------------------+---------+
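The copying in the step above might, for example, be done with scp, assuming the two hosts can reach each other over SSH and that both installations use the paths shown (adjust the paths to your actual installation directories):

mysql@tonfisk$ scp /mcm_data/clusters/master/1/data/BACKUP/BACKUP-2/* \
    flundra:/mcm_data/clusters/slave/1/data/BACKUP/BACKUP-2/
mysql@tonfisk$ scp /mcm_data/clusters/master/2/data/BACKUP/BACKUP-2/* \
    flundra:/mcm_data/clusters/slave/2/data/BACKUP/BACKUP-2/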
Restore the backed up data to the slave cluster (note that
you need an unused ndbapi slot for the
restore cluster command to
work):
mcm> restore cluster --backupid=2 slave;
On the master cluster client, use the following command to identify the correct binary log file and position for replication to start:
mysqlM> SHOW MASTER STATUS\G
*************************** 1. row ***************************
File: binlog.000017
Position: 2857
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
On the slave cluster client, provide to the slave cluster
the information of the master cluster, including the binary
log file name (with the MASTER_LOG_FILE
option) and position (with the
MASTER_LOG_POS option) you just
discovered in step 5 above:
mysqlS> CHANGE MASTER TO
     ->   MASTER_HOST='tonfisk',
     ->   MASTER_PORT=3306,
     ->   MASTER_USER='myslave',
     ->   MASTER_PASSWORD='mypw',
     ->   MASTER_LOG_FILE='binlog.000017',
     ->   MASTER_LOG_POS=2857;
Start replication by issuing the following statement with the slave cluster client:
mysqlS>START SLAVE;
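Once replication has been started, you can verify that it is running correctly from the slave cluster client; in the output, the Slave_IO_Running and Slave_SQL_Running fields should both show Yes:

mysqlS> SHOW SLAVE STATUS\G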
As an alternative to these steps, you can also follow the steps described in NDB Cluster Backups With NDB Cluster Replication to copy the data from the master to the slave and to specify the binary log file and position for replication to start.