Chapter 3 Using MySQL Cluster Manager

Table of Contents

3.1 mcmd, the MySQL Cluster Manager Agent
3.2 Starting and Stopping the MySQL Cluster Manager Agent
3.2.1 Starting and Stopping the Agent on Linux
3.2.2 Starting and Stopping the MySQL Cluster Manager Agent on Windows
3.3 Starting the MySQL Cluster Manager Client
3.4 Setting Up MySQL Clusters with MySQL Cluster Manager
3.4.1 Creating a MySQL Cluster with MySQL Cluster Manager
3.5 Importing MySQL Clusters into MySQL Cluster Manager
3.5.1 Importing a Cluster Into MySQL Cluster Manager: Basic Procedure
3.5.2 Importing a Cluster Into MySQL Cluster Manager: Example
3.6 MySQL Cluster Backup and Restore Using MySQL Cluster Manager
3.6.1 Requirements for Backup and Restore
3.6.2 Basic MySQL Cluster Backup and Restore Using MySQL Cluster Manager
3.7 Backing Up and Restoring MySQL Cluster Manager Agents
3.8 Restoring a MySQL Cluster Manager Agent with Data from Other Agents
3.9 Setting Up MySQL Cluster Replication with MySQL Cluster Manager

This chapter discusses starting and stopping the MySQL Cluster Manager agent and client, and setting up, backing up, and restoring MySQL Clusters using the MySQL Cluster Manager.

3.1 mcmd, the MySQL Cluster Manager Agent

mcmd is the MySQL Cluster Manager agent program; invoking this executable starts the MySQL Cluster Manager Agent, to which you can connect using the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”, and Chapter 4, MySQL Cluster Manager Client Commands, for more information).

You can modify the behavior of the agent in a number of different ways by specifying one or more of the options discussed in this section. Most of these options can be specified either on the command line or in the agent configuration file (normally etc/mcmd.ini). (Some exceptions include the --defaults-file and --bootstrap options, which, if used, must be specified on the command line, and which are mutually exclusive with one another.) For example, you can set the agent's cluster logging level to warning instead of the default message in either of the following two ways:

  • Include --log-level=warning on the command line when invoking mcmd.

    Note

    When specifying an agent configuration option on the command line, the name of the option is prefixed with two leading dash characters (--).

  • Include the following line in the agent configuration file:

    log-level=warning
    
    Note

    You can change the logging level at runtime using the mcm client change log-level command.

    When used in the configuration file, the name of the option should not be prefixed with any other characters. Each option must be specified on a separate line. You can comment out all of a given line by inserting a leading hash character (#), like this:

    #log-level=warning
    

    You can also comment out part of a line in this way; any text from the # character to the end of the current line is ignored.
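    Taken together, these conventions yield configuration files like the following minimal sketch. The option values are illustrative only, and the [mcmd] section header is assumed here to follow the layout described in Section 2.4, “MySQL Cluster Manager Configuration File”:

    ```
    # etc/mcmd.ini -- illustrative agent configuration
    [mcmd]
    log-level=warning
    log-file=mcmd.log
    manager-port=1862
    #manager-directory=/var/opt/mcm    # commented out; the built-in default applies
    ```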

The following table contains a summary of agent options that are read on startup by mcmd. More detailed information about each of these options, such as allowed range of values, can be found in the list following the table.

Table 3.1 MySQL Cluster Manager Agent (mcmd) Option Summary

Format                      Description
--agent-uuid                Set the agent's UUID; needed only when running multiple agent processes on the same host.
--basedir                   Directory to use as prefix for relative paths in the configuration.
--bootstrap                 Bootstrap a default cluster on startup.
--daemon                    Run in daemon mode.
--defaults-file             Configuration file to use.
--event-threads             Number of event handler threads to use.
--help                      Show application options.
--help-all                  Show all options (application options and manager module options).
--help-manager              Show manager module options.
--keepalive                 Try to restart mcmd in the event of a crash.
--log-backtrace-on-crash    Attempt to load debugger in case of a crash.
--log-file                  Name of the file to write the log to.
--log-level                 Set the cluster logging level.
--log-use-syslog            Log to syslog.
--manager-directory         Directory used for manager data storage.
--manager-password          Password used for the manager account.
--manager-port              Port for client to use when connecting to manager.
--manager-username          User account name to run the manager under.
--max-open-files            Maximum number of open files (ulimit -n).
--pid-file                  Specify PID file (used if running as daemon).
--plugin-dir                Directory in which to look for plugins.
--plugins                   Comma-separated list of plugins to load; must include "manager".
--verbose-shutdown          Always log the exit code when shutting down.
--version                   Show the manager version.
--xcom-port                 Specify the XCOM port.

MySQL Cluster Manager Agent (mcmd) Option Descriptions

The following list contains descriptions of each startup option available for use with mcmd, including allowed and default values. Options noted as boolean need only be specified in order to take effect; you should not try to set a value for these.

  • --agent-uuid=uuid

    Command-Line Format: --agent-uuid=uuid
    Type: string
    Default: [set internally]

    Set a UUID for this agent. Normally this value is set automatically, and needs to be specified only when running more than one mcmd process on the same host.

  • --basedir=dir_name

    Command-Line Format: --basedir=dir_name
    Type: directory name
    Default: .

    Directory with path to use as prefix for relative paths in the configuration.

  • --bootstrap

    Command-Line Format: --bootstrap
    Type: boolean
    Default: true

    Start the agent with default configuration values, create a default one-machine cluster named mycluster, and start it. This option works only if no clusters have yet been created. This option is mutually exclusive with the --defaults-file option.

    Currently, any data stored in the default cluster mycluster is not preserved between cluster restarts.

  • --daemon

    Command-Line Format: --daemon
    Type: boolean
    Default: true

    Run mcmd as a daemon.

  • --defaults-file=filename

    Command-Line Format: --defaults-file=file_name
    Type: file name
    Default: etc/mcmd.ini

    Set the file from which to read configuration options. The default is etc/mcmd.ini. See Section 2.4, “MySQL Cluster Manager Configuration File”, for more information.

  • --event-threads=#

    Command-Line Format: --event-threads=#
    Type: numeric
    Default: 1
    Min Value: 1
    Max Value: [system dependent]

    Number of event handler threads to use. The default is 1, which is sufficient for most normal operations.

  • --help, -?

    Command-Line Format: --help
    Type: boolean
    Default: true

    mcmd help output is divided into Application and Manager sections. When invoked with --help, mcmd displays the Application options, as shown here:

    shell> mcmd --help
    Usage:
      mcmd [OPTION...] - MySQL Cluster Manager
    
    Help Options:
      -?, --help                          Show help options
      --help-all                          Show all help options
      --help-manager                      Show options for the manager-module
    
    Application Options:
      -V, --version                       Show version
      --defaults-file=<file>              configuration file
      --verbose-shutdown                  Always log the exit code when shutting down
      --daemon                            Start in daemon-mode
      --basedir=<absolute path>           Base directory to prepend to relative paths in the config
      --pid-file=<file>                   PID file in case we are started as daemon
      --plugin-dir=<path>                 Path to the plugins
      --plugins=<name>                    Plugins to load
      --log-level=<string>                Log all messages of level ... or higher
      --log-file=<file>                   Log all messages in a file
      --log-use-syslog                    Log all messages to syslog
      --log-backtrace-on-crash            Try to invoke debugger on crash
      --keepalive                         Try to restart mcmd if it crashed
      --max-open-files                    Maximum number of open files (ulimit -n)
      --event-threads                     Number of event-handling threads (default: 1)
    
  • --help-all

    Command-Line Format: --help-all
    Type: boolean
    Default: true

    mcmd help output is divided into Application and Manager sections. When used with --help-all, mcmd displays both the Application and the Manager options, like this:

    > mcmd --help-all
    Usage:
      mcmd [OPTION...] - MySQL Cluster Manager
    
    Help Options:
      -?, --help                          Show help options
      --help-all                          Show all help options
      --help-manager                      Show options for the manager-module
    
    manager-module
      --manager-port=<clientport>         Port to manage the cluster (default: 1862)
      --xcom-port=<xcomport>              Xcom port (default: 18620)
      --manager-username=<username>       Username to manage the cluster (default: mcmd)
      --manager-password=<password>       Password for the manager user-account (default: super)
      --bootstrap                         Bootstrap a default cluster on initial startup
      --manager-directory=<directory>     Path to managers config information
    
    Application Options:
      -V, --version                       Show version
      --defaults-file=<file>              configuration file
      --verbose-shutdown                  Always log the exit code when shutting down
      --daemon                            Start in daemon-mode
      --basedir=<absolute path>           Base directory to prepend to relative paths in the config
      --pid-file=<file>                   PID file in case we are started as daemon
      --plugin-dir=<path>                 Path to the plugins
      --plugins=<name>                    Plugins to load
      --log-level=<string>                Log all messages of level ... or higher
      --log-file=<file>                   Log all messages in a file
      --log-use-syslog                    Log all messages to syslog
      --log-backtrace-on-crash            Try to invoke debugger on crash
      --keepalive                         Try to restart mcmd if it crashed
      --max-open-files                    Maximum number of open files (ulimit -n)
      --event-threads                     Number of event-handling threads (default: 1)
    
  • --help-manager

    Command-Line Format: --help-manager
    Type: boolean
    Default: true

    mcmd help output is divided into Application and Manager sections. When used with --help-manager, mcmd displays the Manager options, like this:

    shell> mcmd --help-manager
    Usage:
      mcmd [OPTION...] - MySQL Cluster Manager
    
    manager-module
      --manager-port=<clientport>         Port to manage the cluster (default: 1862)
      --xcom-port=<xcomport>              Xcom port (default: 18620)
      --manager-username=<username>       Username to manage the cluster (default: mcmd)
      --manager-password=<password>       Password for the manager user-account (default: super)
      --bootstrap                         Bootstrap a default cluster on initial startup
      --manager-directory=<directory>     Path to managers config information
    
  • --keepalive

    Command-Line Format: --keepalive
    Type: boolean
    Default: true

    Use this option to cause mcmd to attempt to restart in the event of a crash.

  • --log-backtrace-on-crash

    Command-Line Format: --log-backtrace-on-crash
    Type: boolean
    Default: true

    Attempt to load the debugger in the event of a crash. Not normally used in production.

  • --log-file=filename

    Command-Line Format: --log-file=file
    Type: file name
    Default: mcmd.log

    Set the name of the file to write the log to. The default is mcmd.log in the installation directory. On Linux and other Unix-like platforms, you can use a relative path; this is in relation to the MySQL Cluster Manager installation directory, and not to the bin or etc subdirectory. On Windows, you must use an absolute path, and it cannot contain any spaces; in addition, you must replace any backslash (\) characters in the path with forward slashes (/).

  • --log-level=level

    Command-Line Format: --log-level=level
    Type: enumeration
    Default: message
    Valid Values: message, debug, critical, error, info, warning

    Sets the cluster log event severity level; see NDB Cluster Logging Management Commands, for definitions of the levels, which are the same as these except that ALERT is mapped to critical and the Unix syslog LOG_NOTICE level is used (and mapped to message). For additional information, see Event Reports Generated in NDB Cluster.

    Possible values for this option are (any one of) debug, critical, error, info, message, and warning. message is the default.

    You should be aware that the debug, message, and info levels can result in rapid growth of the agent log, so for normal operations, you may prefer to set this to warning or error.

    You can also change the cluster logging level at runtime using the change log-level command in the mcm client. The option applies its setting to all hosts running on all sites, whereas change log-level is more flexible; its effects can be constrained to a specific management site, or to one or more hosts within that site.

  • --log-use-syslog

    Command-Line Format: --log-use-syslog
    Type: boolean
    Default: true

    Write logging output to syslog.

  • --manager-directory=dir_name

    Command-Line Format: --manager-directory=dir
    Type: directory name
    Default: /opt/mcm_data

    Set the location of the agent repository, which contains collections of MySQL Cluster Manager data files and MySQL Cluster configuration and data files. The value must be a valid absolute path. On Linux, if the directory does not exist, it is created; on Windows, you must create the directory if it does not exist. Additionally, on Windows, the path may not contain any spaces or backslash (\) characters; backslashes must be replaced with forward slashes (/).

    The default location is /opt/mcm_data. If you change the default, you should use a standard location external to the MySQL Cluster Manager installation directory, such as /var/opt/mcm on Linux.

    In addition to the MySQL Cluster Manager data files, the manager-directory also contains a rep directory in which MySQL Cluster data files for each MySQL Cluster under MySQL Cluster Manager control are kept. Normally, there is no need to interact with these directories beyond specifying the location of the manager-directory in the agent configuration file (mcmd.ini).

    However, in the event that an agent reaches an inconsistent state, it is possible to delete the contents of the rep directory, in which case the agent attempts to recover its repository from another agent. In such cases, you must also delete the repchksum file and the high_water_mark file from the manager-directory. Otherwise, the agent reads these files and raises errors due to the now-empty rep directory.
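    The recovery steps just described can be sketched in shell form. Here a scratch directory created with mktemp stands in for the real manager-directory (default /opt/mcm_data), and the file under rep is a hypothetical placeholder:

    ```shell
    # Scratch directory standing in for the agent's manager-directory;
    # the contents created here are hypothetical placeholders.
    MCM_DATA=$(mktemp -d)
    mkdir -p "$MCM_DATA/rep"
    touch "$MCM_DATA/rep/repository_file" \
          "$MCM_DATA/repchksum" "$MCM_DATA/high_water_mark"

    # 1. Empty the rep directory; on restart the agent then attempts to
    #    recover its repository from another agent.
    rm -rf "$MCM_DATA/rep"/*

    # 2. Also remove the checksum and high-water-mark files; otherwise the
    #    agent reads them and raises errors against the now-empty rep directory.
    rm -f "$MCM_DATA/repchksum" "$MCM_DATA/high_water_mark"
    ```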

  • --manager-password=password

    Command-Line Format: --manager-password=password
    Type: string
    Default: super

    Set a password to be used for the manager agent user account. The default is super.

    Using this option together with manager-username causes the creation of a MySQL user account, having the username and password specified using these two options. This account is created with all privileges on the MySQL server including the granting of privileges. In other words, it is created as if you had executed GRANT ALL PRIVILEGES ON *.* ... WITH GRANT OPTION in the mysql client.

  • --manager-port=#

    Command-Line Format: --manager-port=port
    Type: numeric
    Default: 1862

    Specify the port used by MySQL Cluster Manager client connections. Any valid TCP/IP port number can be used. Normally, there is no need to change it from the default value (1862).

    Previously, this option could optionally take a host name in addition to the port number, but in MySQL Cluster Manager 1.1.1 and later the host name is no longer accepted.

  • --manager-username=user_name

    Command-Line Format: --manager-username=name
    Type: string
    Default: mcmd

    Set a user name for the MySQL account to be used by the MySQL Cluster Manager agent. The default is mcmd.

    When used together with manager-password, this option also causes the creation of a new MySQL user account, having the user name and password specified using these two options. This account is created with all privileges on the MySQL server including the granting of privileges. In other words, it is created as if you had executed GRANT ALL PRIVILEGES ON *.* ... WITH GRANT OPTION in the mysql client. The existing MySQL root account is not altered in such cases, and the default test database is preserved.

  • --max-open-files=#

    Command-Line Format: --max-open-files=#
    Type: numeric
    Default: 1
    Min Value: 1
    Max Value: [system dependent]

    Set the maximum number of open files (as with ulimit -n).
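    Since this option mirrors ulimit -n, checking the shell's current soft and hard limits gives a reasonable bound before choosing a value. A quick check (the variable names are illustrative):

    ```shell
    # Soft and hard per-process open-file limits; --max-open-files is
    # effectively capped by the hard limit shown here.
    SOFT_LIMIT=$(ulimit -Sn)
    HARD_LIMIT=$(ulimit -Hn)
    echo "soft=$SOFT_LIMIT hard=$HARD_LIMIT"
    ```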

  • --pid-file=file

    Command-Line Format: --pid-file=file_name
    Type: file name
    Default: mcmd.pid

    Set the name and path to a process ID (.pid) file. Not normally used or needed. This option is not supported on Windows systems.

  • --plugin-dir

    Command-Line Format: --plugin-dir=dir_name
    Type: directory name
    Default: lib/mcmd

    Set the directory to search for plugins. The default is lib/mcmd, in the MySQL Cluster Manager installation directory; normally there is no need to change this.

  • --plugins

    Command-Line Format: --plugins=list
    Type: string
    Default: manager

    Specify a list of plugins to be loaded on startup. To enable MySQL Cluster Manager, this list must include manager (the default value). Please be aware that we currently do not test MySQL Cluster Manager with any values for plugins other than manager. Therefore, we recommend using the default value in a production setting.

  • --verbose-shutdown

    Command-Line Format: --verbose-shutdown
    Type: boolean
    Default: true

    Force mcmd to log the exit code whenever shutting down, regardless of the reason.

  • --version, -V

    Command-Line Format: --version
    Type: boolean
    Default: true

    Display version information and exit. Output may vary according to the MySQL Cluster Manager software version, operating platform, and versions of libraries used on your system, but should closely resemble what is shown here, with the first line of output containing the MySQL Cluster Manager release number:

    shell> mcmd -V
    MySQL Cluster Manager 1.4.1 (64bit)
      chassis: mysql-proxy 0.8.3
      glib2: 2.16.6
      libevent: 1.4.13-stable
    -- modules
      manager: 1.4.1
    
  • --xcom-port

    Command-Line Format: --xcom-port=port
    Type: numeric
    Default: 18620

    Allows you to specify the XCOM port. The default is 18620.

3.2 Starting and Stopping the MySQL Cluster Manager Agent

Before you can start using MySQL Cluster Manager to create and manage a MySQL Cluster, the MySQL Cluster Manager agent must be started on each computer that is intended to host one or more nodes in the MySQL Cluster to be managed.

The MySQL Cluster Manager agent employs a MySQL user account for administrative access to mysqld processes. It is possible, but not a requirement, to change the default user name, the default password used for this account, or both. For more information, see Section 2.3.3, “Setting the MySQL Cluster Manager Agent User Name and Password”.

3.2.1 Starting and Stopping the Agent on Linux

To start the MySQL Cluster Manager agent on a given host running a Linux or similar operating system, you should run mcmd, found in the bin directory within the manager installation directory on that host. Typical options used with mcmd are shown here:

mcmd [--defaults-file | --bootstrap] [--log-file] [--log-level]

See Section 3.1, “mcmd, the MySQL Cluster Manager Agent”, for information about additional options that can be used when invoking mcmd from the command line, or in a configuration file.

mcmd normally runs in the foreground. If you wish, you can use your platform's usual mechanism for backgrounding a process. On a Linux system, you can do this by appending an ampersand character (&), like this (not including any options that might be required):

shell> ./bin/mcmd &

By default, the agent assumes that the agent configuration file is etc/mcmd.ini, in the MySQL Cluster Manager installation directory. You can tell the agent to use a different configuration file by passing the path to this file to the --defaults-file option, as shown here:

shell> ./bin/mcmd --defaults-file=/home/mcm/mcm-agent.conf

The --bootstrap option causes the agent to start with default configuration values, create a default one-machine cluster named mycluster, and start it. This option works only if no cluster has yet been created, and is mutually exclusive with the --defaults-file option. Currently, any data stored in the default cluster mycluster is not preserved between cluster restarts; this is a known issue which we may address in a future release of MySQL Cluster Manager.

The use of the --bootstrap option with mcmd is shown here on a system having the host name torsk, where MySQL Cluster Manager has been installed to /home/jon/mcm:

shell> ./mcmd --bootstrap
MySQL Cluster Manager 1.4.1 started
Connect to MySQL Cluster Manager by running "/home/jon/mcm/bin/mcm" -a torsk:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
        ndb_mgmd        torsk:1186
        ndbd            torsk
        ndbd            torsk
        mysqld          torsk:3306
        mysqld          torsk:3307
        ndbapi          *
Connect to the database by running "/home/jon/mcm/cluster/bin/mysql" -h torsk -P 3306 -u root

You can then connect to the agent using the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”), and to either of the MySQL Servers running on ports 3306 and 3307 using mysql or another MySQL client application.

The --log-file option allows you to override the default location for the agent log file (normally mcmd.log, in the MySQL Cluster Manager installation directory).

You can use the --log-level option to override the log-level set in the agent configuration file.

See Section 3.1, “mcmd, the MySQL Cluster Manager Agent”, for more information about options that can be used with mcmd.

The MySQL Cluster Manager agent must be started on each host in the MySQL Cluster to be managed.

To stop one or more instances of the MySQL Cluster Manager agent, use the stop agents command in the MySQL Cluster Manager client. If the client is unavailable, you can stop each agent process using the system's standard method for doing so, such as ^C or kill.
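For a daemonized agent, kill can be pointed at the process ID recorded in the file named by --pid-file (default mcmd.pid). A minimal sketch, in which a background sleep stands in for mcmd and a temporary file stands in for the real PID file:

```shell
# A background sleep stands in for a daemonized mcmd here; with a real
# agent, use the file named by --pid-file (default mcmd.pid) instead.
PIDFILE=$(mktemp)
sleep 300 &
echo $! > "$PIDFILE"

# Send SIGTERM to the recorded PID, then clean up the PID file.
kill "$(cat "$PIDFILE")"
rm -f "$PIDFILE"
```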

You can also set the agent up as a daemon or service on Linux and other Unix-like systems. (See Section 2.3.1, “Installing MySQL Cluster Manager on Unix Platforms”.) If you also want failed data node processes from a running MySQL Cluster to be restarted when the agent fails and restarts in such cases, you must make sure that StopOnError is set to 0 on each data node (and not to 1, the default).

3.2.2 Starting and Stopping the MySQL Cluster Manager Agent on Windows

To start the MySQL Cluster Manager agent manually on a Windows host, you should invoke mcmd.exe, found in the bin directory under the manager installation directory on that host. By default, the agent uses etc/mcmd.ini in the MySQL Cluster Manager installation directory as its configuration file; this can be overridden by passing the desired file's location as the value of the --defaults-file option.

Typical options for mcmd are shown here:

mcmd[.exe] [--defaults-file | --bootstrap] [--log-file] [--log-level]

For information about additional options that can be used with mcmd on the command line or in an option file, see Section 3.1, “mcmd, the MySQL Cluster Manager Agent”.

By default, the agent assumes that the agent configuration file is etc/mcmd.ini, in the MySQL Cluster Manager installation directory. You can tell the agent to use a different configuration file by passing the path to this file to the --defaults-file option, as shown here:

C:\Program Files (x86)\MySQL\MySQL Cluster Manager 1.4.1\bin>
  mcmd --defaults-file="C:\Program Files (x86)\MySQL\MySQL Cluster
  Manager 1.4.1\etc\mcmd.ini"

The --bootstrap option causes the agent to start with default configuration values, create a default one-machine cluster named mycluster, and start it. The use of this option with mcmd is shown here on a system having the host name torsk, where MySQL Cluster Manager has been installed to the default location:

C:\Program Files (x86)\MySQL\MySQL Cluster Manager 1.4.1\bin>mcmd --bootstrap
MySQL Cluster Manager 1.4.1 started
Connect to MySQL Cluster Manager by running "C:\Program Files (x86)\MySQL\MySQL
Cluster Manager 1.4.1\bin\mcm" -a TORSK:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
        ndb_mgmd        TORSK:1186
        ndbd            TORSK
        ndbd            TORSK
        mysqld          TORSK:3306
        mysqld          TORSK:3307
        ndbapi          *
Connect to the database by running "C:\Program Files (x86)\MySQL\MySQL Cluster
Manager 1.4.1\cluster\bin\mysql" -h TORSK -P 3306 -u root

You can then connect to the agent using the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”), and to either of the MySQL Servers running on ports 3306 and 3307 using mysql or another MySQL client application.

When starting the MySQL Cluster Manager agent for the first time, you may see one or more Windows Security Alert dialogs, such as the one shown here:

Security Warning dialog, Windows Firewall

You should grant permission to connect to private networks for any of the programs mcmd.exe, ndb_mgmd.exe, ndbd.exe, ndbmtd.exe, or mysqld.exe. To do so, check the Private Networks... box and then click the Allow access button. It is generally not necessary to grant MySQL Cluster Manager or MySQL Cluster access to public networks such as the Internet.

Note

The --defaults-file and --bootstrap options are mutually exclusive.

The --log-file option allows you to override the default location for the agent log file (normally mcmd.log, in the MySQL Cluster Manager installation directory).

You can use the --log-level option to override the log-level set in the agent configuration file.

See Section 3.1, “mcmd, the MySQL Cluster Manager Agent”, for more information about options that can be used with mcmd.

The MySQL Cluster Manager agent must be started on each host in the MySQL Cluster to be managed.

It is possible to install MySQL Cluster Manager as a Windows service, so that it is started automatically each time Windows starts. See Section 2.3.2.1, “Installing the MySQL Cluster Manager Agent as a Windows Service”.

To stop one or more instances of the MySQL Cluster Manager agent, use the stop agents command in the MySQL Cluster Manager client. You can also stop an agent process using the Windows Task Manager. In addition, if you have installed MySQL Cluster Manager as a Windows service, you can stop (and start) the agent using the Windows Service Manager, CTRL-C, or the appropriate NET STOP (or NET START) command. See Starting and stopping the MySQL Cluster Manager agent Windows service, for more information about each of these options.

3.3 Starting the MySQL Cluster Manager Client

This section covers starting the MySQL Cluster Manager client and connecting to the MySQL Cluster Manager agent.

MySQL Cluster Manager 1.4.1 includes a command-line client mcm, located in the installation bin directory. mcm can be invoked with any one of the options shown in the following table:

Long form       Short form    Description
--help          -?            Display mcm client options.
--version       -V            Shows MySQL Cluster Manager agent/client version.
(none)          -W            Shows MySQL Cluster Manager agent/client version, with version of mysql used by mcm.
--address       -a            Host and optional port to use when connecting to mcmd, in host[:port] format; default is 127.0.0.1:1862.
--mysql-help    -I            Show help for mysql client (see following).

The client-server protocol used by MySQL Cluster Manager is platform-independent. You can connect to any MySQL Cluster Manager agent with an mcm client on any platform where it is available. This means, for example, that you can use an mcm client on Microsoft Windows to connect to a MySQL Cluster Manager agent that is running on a Linux host.

mcm actually acts as a wrapper for the mysql client that is included with the bundled MySQL Cluster distribution. Invoking mcm with no options specified is equivalent to the following:

shell> mysql -umcmd -psuper -h 127.0.0.1 -P 1862 --prompt="mcm>"

(These -u and -p options and values are hard-coded and cannot be changed.) This means that you can use the mysql client to run MySQL Cluster Manager client sessions on platforms where mcm itself (or even mcmd) is not available. For more information, see Connecting to the agent using the mysql client.

If you experience problems starting a MySQL Cluster Manager client session because the client fails to connect, see Can't connect to [local] MySQL server, for some reasons why this might occur, as well as suggestions for some possible solutions.

To end a client session, use the exit or quit command (short form: \q). Neither of these commands requires a separator or terminator character.

For more information, see Chapter 4, MySQL Cluster Manager Client Commands.

Connecting to the agent with the mcm client.  You can connect to the MySQL Cluster Manager agent by invoking mcm (or, on Windows, mcm.exe). You may also need to specify a hostname, port number, or both, using the following command-line options:

  • --host=hostname or -h[ ]hostname

    This option takes the name or IP address of the host to connect to. The default is localhost (which may not be recognized on all platforms when starting an mcm client session even if it works for starting mysql client sessions).

    You should keep in mind that the mcm client does not perform host name resolution; any name resolution information comes from the operating system on the host where the client is run. For this reason, it is usually best to use a numeric IP address rather than a hostname for this option.

  • --port=portnumber or -P[ ]portnumber

    This option specifies the TCP/IP port for the client to use. This must be the same port that is used by the MySQL Cluster Manager agent. As mentioned elsewhere, if no agent port is specified in the MySQL Cluster Manager agent configuration file (mcmd.ini), the default number of the port used by the MySQL Cluster Manager agent is 1862, which is also used by default by mcm.

mcm accepts additional mysql client options, some of which may possibly be of use for MySQL Cluster Manager client sessions. For example, the --pager option might prove helpful when the output of get contains too many rows to fit in a single screen. The --prompt option can be used to provide a distinctive prompt to help avoid confusion between multiple client sessions. However, options not shown in the current manual have not been extensively tested with mcm and so cannot be guaranteed to work correctly (or even at all). See mysql Options, for a complete listing and descriptions of all mysql client options.

Note

Like the mysql client, mcm also supports \G as a statement terminator which causes the output to be formatted vertically. This can be helpful when using a terminal whose width is restricted to some number of (typically 80) characters. See Chapter 4, MySQL Cluster Manager Client Commands, for examples.

Connecting to the agent using the mysql client.  As mentioned previously, mcm actually serves as a wrapper for the mysql client. In fact, a mysql client from any recent MySQL distribution (MySQL 5.1 or later) should work without any issues for connecting to mcmd. In addition, since the client-server protocol used by MySQL Cluster Manager is platform-independent, you can use a mysql client on any platform supported by MySQL. (This means, for example, that you can use a mysql client on Microsoft Windows to connect to a MySQL Cluster Manager agent that is running on a Linux host.) To connect to the MySQL Cluster Manager agent using the mysql client, invoke mysql with a hostname, port number, username, and password, using the following command-line options:

  • --host=hostname or -h[ ]hostname

    This option takes the name or IP address of the host to connect to. The default is localhost. Like the mcm client, the mysql client does not perform host name resolution, and relies on the host operating system for this task. For this reason, it is usually best to use a numeric IP address rather than a hostname for this option.

  • --port=portnumber or -P[ ]portnumber

    This option specifies the TCP/IP port for the client to use. This must be the same port that is used by the MySQL Cluster Manager agent. Although the default number of the port used by the MySQL Cluster Manager agent is 1862 (which is also used by default by mcm), this default value is not known to the mysql client, which uses port 3306 (the default port for the MySQL server) if this option is not specified when mysql is invoked.

    Thus, you must use the --port or -P option to connect to the MySQL Cluster Manager agent using the mysql client, even if the agent process is using the MySQL Cluster Manager default port, and even if the agent process is running on the same host as the mysql client. Unless the correct agent port number is supplied to it on startup, mysql is unable to connect to the agent.

  • --user=username or -u[ ]username

    Specifies the username for the user trying to connect. Currently, the only user permitted to connect is mcmd; this is hard-coded into the agent software and cannot be altered by any user. By default, the mysql client tries to use the name of the current system user on Unix systems and ODBC on Windows, so you must supply this option and the username mcmd when trying to access the MySQL Cluster Manager agent with the mysql client; otherwise, mysql cannot connect to the agent.

  • --password[=password] or -p[password]

    Specifies the password for the user trying to connect. If you use the short option form (-p), you must not leave a space between this option and the password. If you omit the password value following the --password or -p option on the command line, the mysql client prompts you for one.

    Specifying a password on the command line should be considered insecure. It is preferable that you either omit the password when invoking the client, then supply it when prompted, or put the password in a startup script or configuration file.

    Currently, the password is hard-coded as super, and cannot be changed or overridden by MySQL Cluster Manager users. Therefore, if you do not include the --password or -p option when invoking mysql, it cannot connect to the agent.

In addition, you can use the --prompt option to set the mysql client's prompt. This is recommended, since allowing the default prompt (mysql>) to be used could lead to confusion between a MySQL Cluster Manager client session and a MySQL client session.
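
As suggested above, the fixed mcmd credentials and a distinctive prompt can also be kept in a MySQL option file rather than typed on the command line. The following is a minimal sketch; the file name is illustrative, and such a file should be protected with restrictive permissions (for example, chmod 600):

```
# ~/.mcm-client.cnf (illustrative name)
[mysql]
user=mcmd
password=super
prompt=mcm>\_
```

The file could then be supplied when invoking the client, for example as mysql --defaults-extra-file=$HOME/.mcm-client.cnf -h127.0.0.1 -P1862.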

Thus, you can connect to a MySQL Cluster Manager agent by invoking the mysql client on the same machine from the system shell, in a manner similar to what is shown here:

shell> mysql -h127.0.0.1 -P1862 -umcmd -p --prompt='mcm> '

For convenience, on systems where mcm itself is not available, you might even want to put this invocation in a startup script. On a Linux or similar system, this script might be named mcm-client.sh, with contents similar to what is shown here:

#!/bin/sh
/usr/local/mysql/bin/mysql -h127.0.0.1 -P1862 -umcmd -p --prompt='mcm> '

In this case, you could then start up a MySQL Cluster Manager client session using something like this in the system shell:

shell> ./mcm-client

On Windows, you can create a batch file with a name such as mcm-client.bat containing something like this:

C:\mysql\bin\mysql.exe -umcmd -psuper -h localhost -P 1862 --prompt="mcm> "

(Adjust the path to the mysql.exe client executable as necessary to match its location on your system.)

If you save this file to a convenient location such as the Windows desktop, you can start a MySQL Cluster Manager client session merely by double-clicking the corresponding file icon on the desktop (or in Windows Explorer); the client session opens in a new cmd.exe (DOS) window.
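
The two approaches can also be combined in a single wrapper that prefers mcm when it is installed and otherwise falls back to the mysql client. This is a sketch only; the script name, host, and port are assumptions to adjust for your system:

```shell
# mcm-client.sh (illustrative name): use mcm when available, otherwise fall
# back to invoking the mysql client with the agent's default port and the
# fixed mcmd user, as described above.
cat > mcm-client.sh <<'EOF'
#!/bin/sh
if command -v mcm >/dev/null 2>&1; then
    exec mcm "$@"
else
    exec mysql -h127.0.0.1 -P1862 -umcmd -p --prompt='mcm> ' "$@"
fi
EOF
chmod +x mcm-client.sh
```

Any extra arguments passed to the wrapper are forwarded to the underlying client.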

3.4 Setting Up MySQL Clusters with MySQL Cluster Manager

This section provides basic information about setting up a new MySQL Cluster with MySQL Cluster Manager. It also supplies guidance on migration of an existing MySQL Cluster to MySQL Cluster Manager.

For more information about obtaining and installing the MySQL Cluster Manager agent and client software, see Chapter 2, MySQL Cluster Manager Installation, Configuration, Cluster Setup.

See Chapter 4, MySQL Cluster Manager Client Commands, for detailed information on the MySQL Cluster Manager client commands shown in this chapter.

3.4.1 Creating a MySQL Cluster with MySQL Cluster Manager

In this section, we discuss the procedure for using MySQL Cluster Manager to create and start a new MySQL Cluster. We assume that you have already obtained the MySQL Cluster Manager and MySQL Cluster software, and that you are already familiar with installing MySQL Cluster Manager (see Chapter 2, MySQL Cluster Manager Installation, Configuration, Cluster Setup).

MySQL Cluster Manager also supports importing existing, standalone MySQL Clusters; for more information, see Section 3.5, “Importing MySQL Clusters into MySQL Cluster Manager”.

We also assume that you have identified the hosts on which you plan to run the cluster and have decided on the types and distributions of the different types of nodes among these hosts, as well as basic configuration requirements based on these factors and the hardware characteristics of the host machines.

Note

You can create and start a MySQL Cluster on a single host for testing or similar purposes, simply by invoking mcmd with the --bootstrap option. See Section 3.2, “Starting and Stopping the MySQL Cluster Manager Agent”.

Creating a new cluster consists of the following tasks:

  • MySQL Cluster Manager agent installation and startup.  Install the MySQL Cluster Manager software distribution, make any necessary edits of the agent configuration files, and start the agent processes as explained in Chapter 2, MySQL Cluster Manager Installation, Configuration, Cluster Setup. Agent processes must be running on all cluster hosts before you can create a cluster. This means that you need to place a complete copy of the MySQL Cluster Manager software distribution on every host. The MySQL Cluster Manager software does not have to be in a specific location, or even the same location on all hosts, but it must be present; you cannot manage any cluster processes hosted on a computer where mcmd is not also running.

  • MySQL Cluster Manager client session startup.  Start the MySQL Cluster Manager client and connect to the MySQL Cluster Manager agent. You can connect to an agent process running on any of the cluster hosts, using the mcm client on any computer that can establish a network connection to the desired host. See Section 3.3, “Starting the MySQL Cluster Manager Client”, for details.

    On systems where mcm is not available, you can use the mysql client for this purpose. See Connecting to the agent using the mysql client.

  • MySQL Cluster software deployment.  The simplest and easiest way to do this is to copy the complete MySQL Cluster distribution to the same location on every host in the cluster. (If you have installed MySQL Cluster Manager 1.4.1 on each host, the MySQL Cluster NDB 7.5.4 distribution is already included, in mcm_installation_dir/cluster.) If you do not use the same location on every host, be sure to note it for each host. Do not yet start any MySQL Cluster processes or edit any configuration files; when creating a new cluster, MySQL Cluster Manager takes care of these tasks automatically.

    On Windows hosts, you should not install as services any of the MySQL Cluster node process programs, including ndb_mgmd.exe, ndbd.exe, ndbmtd.exe, and mysqld.exe. MySQL Cluster Manager manages MySQL Cluster processes independently of the Windows Service Manager and does not interact with the Service Manager or any Windows services when doing so.

    Note

    You can actually perform this step at any time up to the point where the software package is registered (using add package). However, we recommend that you have all required software—including the MySQL Cluster software—in place before executing any MySQL Cluster Manager client commands.

  • Management site definition.  Using the create site command in the MySQL Cluster Manager client, define a MySQL Cluster Manager management site—that is, the set of hosts to be managed. This command provides a name for the site, and must reference all hosts in the cluster. Section 4.2.6, “The create site Command”, provides syntax and other information about this command. To verify that the site was created correctly, use the MySQL Cluster Manager client commands list sites and list hosts.

  • MySQL Cluster software package registration.  In this step, you provide the location of the MySQL Cluster software on all hosts in the cluster using one or more add package commands. To verify that the package was created correctly, use the list packages and list processes commands.

  • Cluster definition.  Execute a create cluster command to define the set of MySQL Cluster nodes (processes) and hosts on which each cluster process runs, making up the MySQL Cluster. This command also uses the name of the package registered in the previous step so that MySQL Cluster Manager knows the location of the binary running each cluster process. You can use the list clusters and list processes commands to determine whether the cluster has been defined as desired.

    If you wish to use SQL node connection pooling, see Setup for mysqld connection pooling before creating the cluster.

  • Initial configuration.  Perform any configuration of the cluster that is required or desired prior to starting it. You can set values for MySQL Cluster Manager configuration attributes (MySQL Cluster parameters and MySQL Server options) using the MySQL Cluster Manager client set command. You do not need to edit any configuration files directly—in fact, you should not do so. Keep in mind that certain attributes are read-only, and that some others cannot be reset after the cluster has been started for the first time. You can use the get command to verify that attributes have been set to the correct values.

  • Cluster startup.  Once you have completed the previous steps, including necessary or desired initial configuration, you are ready to start the cluster. The start cluster command starts all cluster processes in the correct order. You can verify that the cluster has started and is running normally after this command has completed, using the MySQL Cluster Manager client command show status. At this point, the cluster is ready for use by MySQL Cluster applications.
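
Taken together, the tasks above correspond to an mcm client session similar to the following sketch. The site, package, and cluster names, the host addresses, and the installation path shown here are all hypothetical; adjust them to your environment:

```
mcm> create site --hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13 mysite;
mcm> add package --basedir=/usr/local/mysql-cluster mypackage;
mcm> create cluster --package=mypackage
       --processhosts=ndb_mgmd@192.168.0.10,ndbd@192.168.0.11,ndbd@192.168.0.12,mysqld@192.168.0.13
       mycluster;
mcm> set DataMemory:ndbd=500M mycluster;
mcm> start cluster mycluster;
mcm> show status mycluster;
```

Each command is described in detail in Chapter 4, MySQL Cluster Manager Client Commands.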

3.5 Importing MySQL Clusters into MySQL Cluster Manager

It is possible to bring a wild MySQL Cluster—that is, a cluster not created using MySQL Cluster Manager—under the control of MySQL Cluster Manager. The following sections provide an outline of the procedure required to import such a cluster into MySQL Cluster Manager, followed by a more detailed example.

3.5.1 Importing a Cluster Into MySQL Cluster Manager: Basic Procedure

The import process generally consists of the steps listed here:

  1. Prepare the wild cluster for migration.

  2. Verify PID files for cluster processes.

  3. Create and configure in MySQL Cluster Manager a target cluster whose configuration matches that of the wild cluster.

  4. Perform a test run, and then execute the import cluster command.

This expanded listing breaks down each of the tasks just mentioned into smaller steps:

  1. Prepare the wild cluster for migration

    1. It is highly recommended that you take a complete backup of the wild cluster before you make changes to it, using the ndb_mgm client. For more information, see Using The NDB Cluster Management Client to Create a Backup.

    2. Any cluster processes that are under the control of the system's boot-time process management facility, such as /etc/init.d on Linux systems or the Services Manager on Windows platforms, should be removed from that facility's control.

    3. The wild cluster's configuration must meet the following requirements, and it should be reconfigured and restarted if it does not:

      • NodeId must be explicitly assigned for every node.

      • DataDir must be specified for each management and data node, and the data directories for different nodes must not overlap.

      • A free API node not bound to any host must be provisioned, through which the mcmd agent can communicate with the cluster.

    4. Create a MySQL user named mcmd on each SQL node, and grant root privileges to the user.

    5. Make sure that the configuration cache is disabled for each management node. Since the configuration cache is enabled by default, unless the management node has been started with the --config-cache=false option, you need to stop and restart it with that option, in addition to the other options with which it was previously started.

    6. Kill each data node angel process using your system's facility for doing so. Do not kill any non-angel data node daemons.

  2. Verify cluster process PID files.

    1. Verify that each process in the wild cluster has a valid PID file.

    2. If a given process does not have a valid PID file, you must create one for it.

    See Section 3.5.2.2, “Verify All Cluster Process PID Files”, for a more detailed explanation and examples.

  3. Create and configure target cluster under MySQL Cluster Manager control

    1. Install MySQL Cluster Manager and start mcmd on all hosts as the same system user who started the wild cluster processes.

    2. Create a MySQL Cluster Manager site encompassing these hosts, using the create site command.

    3. Add a MySQL Cluster Manager package referencing the MySQL Cluster binaries, using the add package command. Use this command's --basedir option to point to the location of the MySQL Cluster installation directory.

    4. Create the target cluster using the create cluster command, including the same processes and hosts used by the wild cluster. Use the command's --import option to specify that the cluster is a target for import.

      If the wild cluster adheres to the recommendation for node ID assignments given in the description of the create cluster command (that is, node IDs 1 to 48 assigned to data nodes, and 49 and above assigned to other node types), you need not specify the node IDs for the processes in the create cluster command.

      Also, this step may be split into a create cluster command followed by one or more add process commands (see Section 3.5.2.3, “Creating and Configuring the Target Cluster”).

    5. Use import config to copy the wild cluster's configuration data into the target cluster. Use this command's --dryrun option (short form: -y) to perform a test run that only logs the configuration information the command would copy when executed without the option.

      If any ndb_mgmd or mysqld processes in the wild cluster are running on ports other than the default, you must first perform set commands to assign the correct port numbers for them in the target cluster. When all such processes are running on the correct ports and the dry run is successful, you can execute import config (without the --dryrun option) to copy the wild cluster's configuration data. Following this step, you should check the log as well as the configuration of the target cluster to ensure that all configuration attributes were copied correctly and with the correct scope. Correct any inconsistencies with the wild cluster's configuration using the appropriate set commands.

  4. Test and perform migration of wild cluster.

    1. Perform a test run of the proposed migration using import cluster with the --dryrun option, which causes MySQL Cluster Manager to check for errors, but not actually migrate any processes or data.

    2. Correct any errors found using --dryrun. Repeat the dry run from the previous step to ensure that no errors were missed.

    3. When the dry run no longer reports any errors, you can perform the migration using import cluster, but without the --dryrun option.
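
Assuming a target cluster named newcluster that was created with the --import option, the test runs and the actual import in steps 3.5 and 4 above can be sketched in the mcm client as follows (the cluster name is hypothetical):

```
mcm> import config --dryrun newcluster;
mcm> import config newcluster;
mcm> import cluster --dryrun newcluster;
mcm> import cluster newcluster;
```

Each dry run should be repeated until it completes without errors before the corresponding command is executed without --dryrun.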

3.5.2 Importing a Cluster Into MySQL Cluster Manager: Example

As discussed previously (see Section 3.5.1, “Importing a Cluster Into MySQL Cluster Manager: Basic Procedure”), importing a standalone or wild cluster that was not created with MySQL Cluster Manager into the manager requires the completion of four major tasks. The example provided over the next few sections shows all the steps required to perform those tasks.

Sample cluster used in example.  The wild cluster used in this example consists of four nodes—one management node, two data nodes, and one SQL node. Each of these nodes resides on one of three hosts; the IP address for each is shown in the following table:

Node type (executable)         Host IP address
Management node (ndb_mgmd)     192.168.56.102
Data node (ndbd)               192.168.56.103
Data node (ndbd)               192.168.56.104
SQL node (mysqld)              192.168.56.102

We assume that these hosts are on a dedicated network or subnet, and that each of them runs only the MySQL Cluster binaries and the applications providing required system and network services. We assume that on each host the MySQL Cluster software has been installed from a release binary archive (see Installing an NDB Cluster Binary Release on Linux). We also assume that the management node is using /home/ari/bin/cluster/wild-cluster/config.ini as the cluster's global configuration file, which is shown here:

[ndbd default]
NoOfReplicas= 2

[ndb_mgmd]
HostName= 192.168.56.102
DataDir= /home/ari/bin/cluster/wild-cluster/50/data
NodeId= 50


[ndbd]
HostName= 192.168.56.103
DataDir= /home/ari/bin/cluster/wild-cluster/2/data
NodeId=2

[ndbd]
HostName= 192.168.56.104
DataDir= /home/ari/bin/cluster/wild-cluster/3/data
NodeId=3

[mysqld]
HostName= 192.168.56.102
NodeId= 51

[api]
NodeId= 52

Notice that for the import into MySQL Cluster Manager to be successful, the following must be true for the cluster's configuration:

  • NodeId must be explicitly assigned for every node.

  • DataDir must be specified for each management and data node, and the data directories for different nodes must not overlap.

  • A free API node not bound to any host must be provisioned, through which the mcmd agent can communicate with the cluster.

3.5.2.1 Preparing the Standalone Cluster for Migration

The next step in the import process is to prepare the wild cluster for migration. This requires, among other things, removing cluster processes from control by any system service management facility, making sure all management nodes are running with configuration caching disabled, and killing any data node angel processes that may be running. More detailed information about performing these tasks is provided in the remainder of this section.

  1. Before you make any changes to your wild cluster, it is strongly recommended that you back it up using the ndb_mgm client. See Using The NDB Cluster Management Client to Create a Backup for more information.

  2. Any cluster processes that are under the control of a system boot process management facility such as /etc/init.d on Linux systems or the Services Manager on Windows platforms should be removed from this facility's control. Consult your operating system's documentation for information about how to do this. Be sure not to stop any running cluster processes in the course of doing so.

  3. Create a MySQL user named mcmd on each of the wild cluster's SQL nodes. This user is required by MySQL Cluster Manager for running the import config and import cluster commands in the steps that follow. To create this user with root privileges on the SQL nodes of our sample wild cluster, log in to an SQL node with the mysql client as root and execute the SQL statements shown here:

    CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
    
    GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;
    
    

    Keep in mind that this must be done on all the SQL nodes, unless distributed privileges are enabled on the wild cluster.

  4. Make sure every node of the wild cluster has been started with its node ID specified using the --ndb-nodeid option on the command line, not just in the cluster configuration file. This is required for each process to be correctly identified by mcmd during the import. You can check whether this requirement is fulfilled using the ps -ef | grep command, which shows the options with which the process was started:

    shell> ps -ef | grep ndb_mgmd
    ari       8118     1  0 20:51 ?        00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini 
    --configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50
    

    (For clarity's sake, in the command output for the ps -ef | grep command in this and the upcoming sections, we are skipping the line of output for the grep process itself.)

    If the requirement is not fulfilled, restart the process with the --ndb-nodeid option; the restart can also be performed in step 5 or 6 below for any nodes you are restarting in those steps.

  5. Make sure that the configuration cache is disabled for each management node. Since the configuration cache is enabled by default, unless the management node has been started with the --config-cache=false option, you need to stop and restart it with that option, in addition to the other options with which it was previously started.

    On Linux, we can once again use ps to obtain the information we need to accomplish this step. In a shell on host 192.168.56.102, on which the management node resides:

    shell> ps -ef | grep ndb_mgmd
    ari       8118     1  0 20:51 ?        00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini 
    --configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50
    

    The process ID is 8118. The configuration cache is turned on by default, and a configuration directory has been specified using the --configdir option. First, terminate the management node using kill as shown here, with the process ID obtained from ps previously:

    shell> kill -15 8118
    

    Verify that the management node process was stopped—it should no longer appear in the output of another ps command.

    Now, restart the management node with the configuration cache disabled, together with the options with which it was previously started. Also, as stated in step 4 above, make sure that the --ndb-nodeid option is specified at the restart:

    shell> /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false  --ndb-nodeid=50
    MySQL Cluster Management Server mysql-5.7.16-ndb-7.5.4
    2016-11-08 21:29:43 [MgmtSrvr] INFO     -- Skipping check of config directory since config cache is disabled.
    
    Caution

    Do not use 0 or OFF for the value of the --config-cache option when restarting ndb_mgmd in this step. Using either of these values instead of false at this time causes the migration of the management node process to fail at a later point in the import process.

    Verify that the process is running as expected, using ps:

    shell> ps -ef | grep ndb_mgmd
    ari      10221     1  0 19:38 ?        00:00:09 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false --ndb-nodeid=50
    

    The management node is now ready for migration.

    Important

    While our example cluster has only a single management node, it is possible for a MySQL Cluster to have more than one. In such cases, you must make sure the configuration cache is disabled for each management node, following the procedure described in this step.

  6. Kill each data node angel process using the system's facility for doing so. The angel process monitors a data node process during a cluster's operation and, if necessary, attempts to restart the data node process (see this FAQ for details). Before a cluster can be imported, the angel processes must first be stopped. On a Linux system, you can identify angel processes in the output of the ps -ef command executed on the process's host; here is an example of doing so on the host 192.168.56.103 of the sample cluster:

    shell> ps -ef | grep ndbd
    ari      12836     1  0 20:52 ?        00:00:00 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=192.168.56.102
    ari      12838 12836  2 20:52 ?        00:00:00 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=192.168.56.102
    

    While both the actual data node process and its angel process appear in the output as ndbd processes, you can distinguish them by their process IDs. The process ID of the angel process (12836 in the sample output above) appears twice in the command output: once as a process ID in its own right (in the first line of the output), and once as the parent process ID of the actual data node daemon (in the second line). Use the kill command to terminate the process with the identified process ID, like this:

    shell> kill -9 12836
    

    Verify that the angel process has been killed and the other ndbd process (the non-angel data node daemon) is still running by issuing the ps -ef command again, as shown here:

    shell> ps -ef | grep ndbd
    ari      12838     1  0 20:52 ?        00:00:02 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=192.168.56.102
    

    Now repeat this process in a shell on host 192.168.56.104, as shown here:

    shell> ps -ef | grep ndbd
    ari      11274     1  0 20:57 ?        00:00:00 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=192.168.56.102
    ari      11276 11274  0 20:57 ?        00:00:01 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=192.168.56.102
    
    shell> kill -9 11274
    
    shell> ps -ef | grep ndbd
    ari      11276     1  0 20:57 ?        00:00:01 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=192.168.56.102
    

    The wild cluster's data nodes are now ready for migration.
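
The angel-versus-daemon distinction used above can also be made mechanically: the angel's PID is the one that appears as the parent PID of another ndbd line. The following sketch applies that rule to ps-style output; the sample lines are copied from the example, and in practice you would feed in the actual output of ps -ef | grep ndbd:

```shell
# Sample ps -ef style output for the two ndbd processes (from the example):
# column 2 is the PID, column 3 is the parent PID (PPID).
ps_output='ari 12836     1 0 20:52 ?  00:00:00 ./bin/ndbd --ndb-nodeid=2
ari 12838 12836 2 20:52 ?  00:00:00 ./bin/ndbd --ndb-nodeid=2'

# The angel is the ndbd process whose PID appears as the PPID of another line.
angel_pid=$(printf '%s\n' "$ps_output" | awk '
    { pid[NR]=$2; ppid[NR]=$3 }
    END { for (i=1; i<=NR; i++)
              for (j=1; j<=NR; j++)
                  if (pid[i]==ppid[j]) print pid[i] }')
echo "$angel_pid"
```

Here the sketch prints 12836, the angel's PID, which would then be passed to kill as shown above.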

3.5.2.2 Verify All Cluster Process PID Files

You must verify that each process in the wild cluster has a valid PID file. For purposes of this discussion, a valid PID file has the following characteristics:

  • The file name is in the format ndb_node_id.pid, where node_id is the node ID used for the process.

  • The file is located in the data directory used by the process.

  • The first line of the file contains the process ID of the node process, and only that process ID.

  1. To check the PID file for the management node process, log in to a system shell on host 192.168.56.102, change to the management node's data directory as specified by the DataDir parameter in the cluster's configuration file, then check whether the PID file is present. On Linux, you can use the command shown here:

    shell> ls ndb_*.pid
    ndb_50.pid
    

    Check the content of the matching .pid file using a pager or text editor. We use more for this purpose here:

    shell> more ndb_50.pid
    10221
    

    The number shown should match the ndb_mgmd process ID. We can check this on Linux using the ps command:

    shell> ps -ef | grep ndb_mgmd
    ari      10221     1  0 19:38 ?        00:00:09 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false --ndb-nodeid=50
    

    The management node PID file satisfies the requirements listed at the beginning of this section.

  2. Next, we check the PID files for the data nodes, on hosts 192.168.56.103 and 192.168.56.104. Log in to a system shell on 192.168.56.103, then obtain the process ID of the ndbd process on this host, as shown here:

    shell> ps -ef | grep ndbd
    ari      12838     1  0 Nov08 ?        00:10:12 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=192.168.56.102
    

    As specified in the cluster's configuration file, the node's DataDir is /home/ari/bin/cluster/wild-cluster/2/data. Go to that directory to look for a file named ndb_2.pid:

    shell> ls ndb_*.pid
    ndb_2.pid
    

    Now check the content of this file; it contains the process ID of the data node's former angel process (see the earlier instructions on stopping the angel processes):

    shell> more ndb_2.pid
    12836
    

    Change the number in the PID file to the data node's own PID:

    shell> sed -i 's/12836/12838/' ndb_2.pid
    shell> more ndb_2.pid
    12838
    

    Similarly, we locate and adjust the content of the PID file for the remaining data node (node ID 3, whose data directory is /home/ari/bin/cluster/wild-cluster/3/data) on host 192.168.56.104:

    shell> ps -ef | grep ndbd
    ari      11276     1  0 Nov09 ?        00:09:44 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=192.168.56.102
    
    shell> more /home/ari/bin/cluster/wild-cluster/3/data/ndb_3.pid
    11274
    

    Edit the .pid file, so it contains the data node process's own PID:

    shell> cd /home/ari/bin/cluster/wild-cluster/3/data/
    shell> sed -i 's/11274/11276/' ndb_3.pid
    shell> more ndb_3.pid
    11276
    

    The PID file for this data node also meets our requirements now, so we are ready to proceed to the mysqld node running on host 192.168.56.102.

  3. To check the PID file for the mysqld node: by default, it is located in the node's data directory, which is specified by the datadir option either in a configuration file or on the command line when the mysqld process is started. Go to the data directory /home/ari/bin/cluster/wild-cluster/51/data on host 192.168.56.102 and look for the PID file:

    shell> ls *.pid
    localhost.pid
    

    Note that the MySQL Server could have been started with the --pid-file option, which puts the PID file at a specified location. In the following case, the same mysqld node was started with the mysqld_safe script, and the ps command reveals the value of --pid-file in use:

    shell>  ps -ef | grep mysqld
    ari      11999  5667  0 13:15 pts/1    00:00:00 /bin/sh ./bin/mysqld_safe --defaults-file=/home/ari/bin/cluster/wild-cluster.cnf --ndb-nodeid=51
    ari      12136 11999  1 13:15 pts/1    00:00:00 /home/ari/bin/cluster/bin/mysqld --defaults-file=/home/ari/bin/cluster/wild-cluster.cnf 
    --basedir=/home/ari/bin/cluster/ --datadir=/home/ari/bin/cluster/wild-cluster/51/data/ --plugin-dir=/home/ari/bin/cluster//lib/plugin 
    --ndb-nodeid=51 --log-error=/home/ari/bin/cluster/wild-cluster/51/data//localhost.localdomain.err 
    --pid-file=/home/ari/bin/cluster/wild-cluster/51/data//localhost.localdomain.pid
    

    As in this example, you will likely have a PID file that is not named in the format required for cluster import (ndb_node_id.pid); if the --pid-file option was used, the PID file might also not be in the required location (the data directory). Examine the PID file referred to in the last example:

    shell> more /home/ari/bin/cluster/wild-cluster/51/data//localhost.localdomain.pid
    12136
    

    The PID file for the SQL node is at an acceptable location (inside the data directory) and has the correct contents (the right PID), but has the wrong name. We can simply copy it into a correctly named file in the same directory, like this:

    shell> cd /home/ari/bin/cluster/wild-cluster/51/data/
    shell> cp localhost.localdomain.pid ndb_51.pid
    
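To recap, a PID file is ready for import when it sits in the node's data directory, is named ndb_node_id.pid, and contains the PID of a running node process. The following shell function is a minimal sketch of such a check (it is a convenience script written for this discussion, not part of MySQL Cluster Manager):

```shell
#!/bin/sh
# Check one PID file against the import requirements:
#   1. the file name matches ndb_<node_id>.pid
#   2. the file contains a single numeric PID
#   3. a process with that PID exists (kill -0)
check_pidfile() {
    file=$1
    case "$(basename "$file")" in
        ndb_*.pid) ;;
        *) echo "bad name: $file" >&2; return 1 ;;
    esac
    pid=$(cat "$file" 2>/dev/null)
    case "$pid" in
        ''|*[!0-9]*) echo "bad contents: $file" >&2; return 1 ;;
    esac
    kill -0 "$pid" 2>/dev/null || { echo "stale PID $pid: $file" >&2; return 1; }
    echo "ok: $file -> $pid"
}
```

Running, for example, check_pidfile ndb_3.pid in /home/ari/bin/cluster/wild-cluster/3/data should report the data node's PID while the node is running.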

3.5.2.3 Creating and Configuring the Target Cluster

The next task is to create a target cluster. Once this is done, we modify the target cluster's configuration until it matches that of the wild cluster that we want to import. At a later point in the example, we also show how to test the configuration in a dry run before attempting to perform the actual configuration import.

To create and then configure the target cluster, follow these steps:

  1. Install MySQL Cluster Manager and start mcmd on all hosts as the same system user who started the wild cluster processes. Once you have done this, you can start the mcm client (see Section 3.3, “Starting the MySQL Cluster Manager Client”) on any one of these hosts to perform the next few steps.

    Important

    If the mcmd agents are not started by the same system user who started the wild cluster processes, the cluster import will fail because the agents have insufficient rights to perform their tasks.

  2. Create a MySQL Cluster Manager site encompassing all of the wild cluster's hosts, using the create site command, as shown here:

    mcm> create site --hosts=192.168.56.102,192.168.56.103,192.168.56.104 newsite;
    +---------------------------+
    | Command result            |
    +---------------------------+
    | Site created successfully |
    +---------------------------+
    1 row in set (0.15 sec)
    

    We have named this site newsite. You should be able to see it listed in the output of the list sites command, similar to what is shown here:

    mcm> list sites;
    +---------+------+-------+----------------------------------------------+
    | Site    | Port | Local | Hosts                                        |
    +---------+------+-------+----------------------------------------------+
    | newsite | 1862 | Local | 192.168.56.102,192.168.56.103,192.168.56.104 |
    +---------+------+-------+----------------------------------------------+
    1 row in set (0.01 sec)
    
  3. Add a MySQL Cluster Manager package referencing the MySQL Cluster binaries using the add package command; use the command's --basedir option to point to the correct location of the MySQL Cluster executables. The command shown here creates such a package, named newpackage:

    mcm> add package --basedir=/home/ari/bin/cluster newpackage;
    +----------------------------+
    | Command result             |
    +----------------------------+
    | Package added successfully |
    +----------------------------+
    1 row in set (0.70 sec)
    

    You do not need to include the bin directory containing the MySQL Cluster executables in the --basedir path. If the executables are in /home/ari/bin/cluster/bin, it is sufficient to specify /home/ari/bin/cluster; MySQL Cluster Manager automatically checks for the binaries in a bin directory within the directory specified by --basedir.
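In other words, --basedir must point one level above the bin directory. The layout can be confirmed from the shell before adding the package; the function below is a convenience sketch for this purpose, not an mcm feature:

```shell
#!/bin/sh
# Verify that the main cluster executables are present and executable
# under <basedir>/bin, which is where MySQL Cluster Manager looks for
# them when a package is added with --basedir=<basedir>.
check_basedir() {
    basedir=$1
    rc=0
    for prog in ndb_mgmd ndbd mysqld; do
        if [ -x "$basedir/bin/$prog" ]; then
            echo "found: $basedir/bin/$prog"
        else
            echo "missing: $basedir/bin/$prog" >&2
            rc=1
        fi
    done
    return $rc
}
```

Running check_basedir /home/ari/bin/cluster before issuing the add package command confirms the layout assumed in this example.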

  4. Create the target cluster, including at least some of the same processes and hosts used by the standalone cluster. Do not include any processes or hosts that are not part of this cluster. To prevent potentially disruptive process or cluster operations from accidentally interfering with the import process, it is strongly recommended that you create the cluster for import using the --import option for the create cluster command.

    You must also take care to preserve the correct node ID (as listed in the config.ini file shown previously) for each node.

    The following command creates the cluster newcluster for import and includes the management and data nodes, but not the SQL or free API node (which we add in the next step):

    mcm> create cluster --import --package=newpackage \
          --processhosts=ndb_mgmd:50@192.168.56.102,ndbd:2@192.168.56.103,ndbd:3@192.168.56.104 \
           newcluster;
    +------------------------------+
    | Command result               |
    +------------------------------+
    | Cluster created successfully |
    +------------------------------+
    1 row in set (0.96 sec)
    

    You can verify that the cluster was created correctly by checking the output of show status with the --process (-r) option, like this:

    mcm> show status -r newcluster;
    +--------+----------+----------------+--------+-----------+------------+
    | NodeId | Process  | Host           | Status | Nodegroup | Package    |
    +--------+----------+----------------+--------+-----------+------------+
    | 50     | ndb_mgmd | 192.168.56.102 | import |           | newpackage |
    | 2      | ndbd     | 192.168.56.103 | import | n/a       | newpackage |
    | 3      | ndbd     | 192.168.56.104 | import | n/a       | newpackage |
    +--------+----------+----------------+--------+-----------+------------+
    3 rows in set (0.05 sec)
    
  5. If necessary, add any remaining processes and hosts from the wild cluster not included in the previous step using one or more add process commands. We have not yet accounted for two of the nodes from the wild cluster: the SQL node with node ID 51, on host 192.168.56.102, and the API node with node ID 52, which is not bound to any specific host. You can use the following command to add both of these processes to newcluster:

    mcm> add process --processhosts=mysqld:51@192.168.56.102,ndbapi:52@* newcluster;
    +----------------------------+
    | Command result             |
    +----------------------------+
    | Process added successfully |
    +----------------------------+
    1 row in set (0.41 sec)
    

    Once again checking the output from show status -r, we see that the mysqld and ndbapi processes were added as expected:

    mcm> show status -r newcluster;
    +--------+----------+----------------+--------+-----------+------------+
    | NodeId | Process  | Host           | Status | Nodegroup | Package    |
    +--------+----------+----------------+--------+-----------+------------+
    | 50     | ndb_mgmd | 192.168.56.102 | import |           | newpackage |
    | 2      | ndbd     | 192.168.56.103 | import | n/a       | newpackage |
    | 3      | ndbd     | 192.168.56.104 | import | n/a       | newpackage |
    | 51     | mysqld   | 192.168.56.102 | import |           | newpackage |
    | 52     | ndbapi   | *              | import |           |            |
    +--------+----------+----------------+--------+-----------+------------+
    5 rows in set (0.06 sec)
    

    You can also see that, since newcluster was created using the create cluster command's --import option, the status of all processes in this cluster—including those we just added—is import. This means we cannot yet start newcluster or any of its processes. The import status and its effects on newcluster and its cluster processes persist until we have completed importing the wild cluster into newcluster.

    The target newcluster cluster now has the same processes, with the same node IDs, and on the same hosts as the original standalone cluster. We are ready to proceed to the next step.

  6. Next, test the effects of the import config command by running it with the --dryrun option (this step works only if you have created the mcmd user on the cluster's mysqld nodes):

    Important

    Before executing this command, it is necessary to set any non-default ports for the ndb_mgmd and mysqld processes using the set command in the mcm client.

    mcm> import config --dryrun newcluster;
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Command result                                                                                                                                                                      |
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Import checks passed. Please check /home/ari/bin/mcm_data/clusters/newcluster/tmp/import_config.49d541a9_294_0.mcm on host localhost.localdomain for settings that will be applied. |
    +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    1 row in set (6.87 sec)
    
    

    As indicated by the output of import config --dryrun, the file /path-to-mcm-data-repository/clusters/clustername/tmp/import_config.message_id.mcm lists the configuration attributes and values that would be copied to newcluster if the command were run without the --dryrun option. If you open this file in a text editor, you will see a series of set commands that would accomplish this task, similar to what is shown here:

    # The following will be applied to the current cluster config:
    set NoOfReplicas:ndbd=2 newcluster;
    set DataDir:ndb_mgmd:50=/home/ari/bin/cluster/wild-cluster/50/data newcluster;
    set DataDir:ndbd:2=/home/ari/bin/cluster/wild-cluster/2/data newcluster;
    set DataDir:ndbd:3=/home/ari/bin/cluster/wild-cluster/3/data newcluster;
    set basedir:mysqld:51=/home/ari/bin/cluster/ newcluster;
    set datadir:mysqld:51=/home/ari/bin/cluster/wild-cluster/51/data/ newcluster;
    set sql_mode:mysqld:51="NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES" newcluster;
    set ndb_connectstring:mysqld:51=192.168.56.102 newcluster;
    

    After the successful dry run, you are now ready to import the wild cluster's configuration into newcluster, with the command shown here:

    mcm> import config newcluster;
    +------------------------------------------------------------------------------------------------------------------+
    | Command result                                                                                                   |
    +------------------------------------------------------------------------------------------------------------------+
    | Configuration imported successfully. Please manually verify plugin options, abstraction level and default values |
    +------------------------------------------------------------------------------------------------------------------+
    

    As an alternative to importing all the settings with the import config command, you can edit the /path-to-mcm-data-repository/clusters/clustername/tmp/import_config.message_id.mcm file generated by the dry run as needed, and then import the modified settings by executing the file with the mcm client:

    mcm> source /path-to-mcm-data-repository/clusters/clustername/tmp/import_config.message_id.mcm

    You should check the resulting configuration of newcluster carefully against the configuration of the wild cluster. If you find any inconsistencies, you must correct these in newcluster using the appropriate set commands.

3.5.2.4 Testing and Migrating the Standalone Cluster

Testing and performing the migration of a standalone MySQL Cluster into MySQL Cluster Manager consists of the following steps:

  1. Perform a test run of the proposed import using import cluster with the --dryrun option. When this option is used, MySQL Cluster Manager checks for mismatched configuration attributes, missing or invalid processes or hosts, missing or invalid PID files, and other errors, and warns of any it finds, without actually performing any migration of processes or data (this step works only if you have created the mcmd user on the cluster's mysqld nodes):

    mcm> import cluster --dryrun newcluster;
    
    
  2. If errors occur, correct them, and repeat the dry run shown in the previous step until it returns no more errors. The following list contains some common errors you may encounter, and their likely causes:

    • MySQL Cluster Manager requires a specific MySQL user and privileges to manage SQL nodes. If the mcmd MySQL user account is not set up properly, you may see No access for user..., Incorrect grants for user..., or possibly other errors. Follow the instructions in Section 3.5.2.1, “Preparing the Standalone Cluster for Migration” to remedy the issue.

    • As described previously, each cluster process (other than a process whose type is ndbapi) being brought under MySQL Cluster Manager control must have a valid PID file. Missing, misnamed, or invalid PID files can produce errors such as PID file does not exist for process..., PID ... is not running ..., and PID ... is type .... See Section 3.5.2.2, “Verify All Cluster Process PID Files”.

    • Process version mismatches can also produce seemingly random errors whose cause can sometimes prove difficult to track down. Ensure that all nodes are supplied with the same release and version of the MySQL Cluster software.

    • Each data node angel process in the standalone cluster must be stopped prior to import. A running angel process can cause errors such as Angel process pid exists ... or Process pid is an angel process for .... See Section 3.5.2.1, “Preparing the Standalone Cluster for Migration”.

    • The number of processes, their types, and the hosts where they reside in the standalone cluster must be reflected accurately when creating the target site, package, and cluster for import. Otherwise, you may get errors such as Process id reported # processes ..., Process id ... does not match configured process ..., Process id not configured ..., and Process id does not match configured process .... See Section 3.5.2.3, “Creating and Configuring the Target Cluster”.

    • Other factors that can cause specific errors include processes in the wrong state, processes that were started with unsupported command-line options or without required options, and processes having the wrong process ID, or using the wrong node ID.

  3. When import cluster --dryrun no longer warns of any errors, you can perform the import with the import cluster command, this time omitting the --dryrun option.

    mcm> import cluster newcluster;
    +-------------------------------+
    | Command result                |
    +-------------------------------+
    | Cluster imported successfully |
    +-------------------------------+
    1 row in set (5.58 sec)
    

    You can check that the wild cluster has been imported and is now under the management of MySQL Cluster Manager:

    mcm> show status -r newcluster;
    +--------+----------+----------------+---------+-----------+------------+
    | NodeId | Process  | Host           | Status  | Nodegroup | Package    |
    +--------+----------+----------------+---------+-----------+------------+
    | 50     | ndb_mgmd | 192.168.56.102 | running |           | newpackage |
    | 2      | ndbd     | 192.168.56.103 | running | 0         | newpackage |
    | 3      | ndbd     | 192.168.56.104 | running | 0         | newpackage |
    | 51     | mysqld   | 192.168.56.102 | running |           | newpackage |
    | 52     | ndbapi   | *              | added   |           |            |
    +--------+----------+----------------+---------+-----------+------------+
    5 rows in set (0.01 sec)
    

3.6 MySQL Cluster Backup and Restore Using MySQL Cluster Manager

This section describes how to use the NDB native backup and restore functionality implemented in MySQL Cluster Manager to perform a number of common tasks.

3.6.1 Requirements for Backup and Restore

This section provides information about basic requirements for performing backup and restore operations using MySQL Cluster Manager.

Requirements for MySQL Cluster backup.  Basic requirements for performing MySQL Cluster backups using MySQL Cluster Manager are minimal: at least one data node in each node group must be running, and there must be sufficient disk space on the node file systems. Partial backups are not supported.

Requirements for MySQL Cluster restore.  In general, the following requirements apply when you try to restore a MySQL Cluster using MySQL Cluster Manager:

3.6.2 Basic MySQL Cluster Backup and Restore Using MySQL Cluster Manager

This section describes backing up and restoring a MySQL Cluster, with examples of complete and partial restore operations. Note that the backup cluster and restore cluster commands work with NDB tables only; tables using other MySQL storage engines (such as InnoDB or MyISAM) are ignored.

For purposes of example, we use a MySQL Cluster named mycluster whose processes and status can be seen here:

mcm> show status -r mycluster;
+--------+----------+----------+---------+-----------+-----------+
| NodeId | Process  | Host     | Status  | Nodegroup | Package   |
+--------+----------+----------+---------+-----------+-----------+
| 49     | ndb_mgmd | tonfisk  | running |           | mypackage |
| 1      | ndbd     | tonfisk  | running | 0         | mypackage |
| 2      | ndbd     | tonfisk  | running | 0         | mypackage |
| 50     | mysqld   | tonfisk  | running |           | mypackage |
| 51     | mysqld   | tonfisk  | running |           | mypackage |
| 52     | ndbapi   | *tonfisk | added   |           |           |
| 53     | ndbapi   | *tonfisk | added   |           |           |
+--------+----------+----------+---------+-----------+-----------+
7 rows in set (0.08 sec)

You can see whether there are any existing backups of mycluster using the list backups command, as shown here:

mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host    | Timestamp           | Comment |
+----------+--------+---------+---------------------+---------+
| 1        | 1      | tonfisk | 2012-12-04 12:03:52 |         |
| 1        | 2      | tonfisk | 2012-12-04 12:03:52 |         |
| 2        | 1      | tonfisk | 2012-12-04 12:04:15 |         |
| 2        | 2      | tonfisk | 2012-12-04 12:04:15 |         |
| 3        | 1      | tonfisk | 2012-12-04 12:17:41 |         |
| 3        | 2      | tonfisk | 2012-12-04 12:17:41 |         |
+----------+--------+---------+---------------------+---------+
6 rows in set (0.12 sec)

3.6.2.1 Simple backup

To create a backup, use the backup cluster command with the name of the cluster as an argument, similar to what is shown here:

mcm> backup cluster mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Backup completed successfully |
+-------------------------------+
1 row in set (3.31 sec)

backup cluster requires only the name of the cluster to be backed up as an argument; for information about additional options supported by this command, see Section 4.7.2, “The backup cluster Command”. To verify that a new backup of mycluster was created with a unique ID, check the output of list backups, as shown here (the rows with BackupId 4 correspond to the new backup):

mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host    | Timestamp           | Comment |
+----------+--------+---------+---------------------+---------+
| 1        | 1      | tonfisk | 2012-12-04 12:03:52 |         |
| 1        | 2      | tonfisk | 2012-12-04 12:03:52 |         |
| 2        | 1      | tonfisk | 2012-12-04 12:04:15 |         |
| 2        | 2      | tonfisk | 2012-12-04 12:04:15 |         |
| 3        | 1      | tonfisk | 2012-12-04 12:17:41 |         |
| 3        | 2      | tonfisk | 2012-12-04 12:17:41 |         |
| 4        | 1      | tonfisk | 2012-12-12 14:24:35 |         |
| 4        | 2      | tonfisk | 2012-12-12 14:24:35 |         |
+----------+--------+---------+---------------------+---------+
8 rows in set (0.04 sec)

If you attempt to create a backup of a MySQL Cluster in which each node group does not have at least one data node running, backup cluster fails with the error Backup cannot be performed as processes are stopped in cluster cluster_name.

3.6.2.2 Simple complete restore

To perform a complete restore of a MySQL Cluster from a backup with a given ID, follow the steps listed here:

  1. Identify the backup to be used.

    In this example, we use the backup with ID 4, which was created for mycluster previously in this section.

  2. Wipe the MySQL Cluster data.

    The simplest way to do this is to stop and then perform an initial start of the cluster as shown here, using mycluster:

    mcm> stop cluster mycluster;
    +------------------------------+
    | Command result               |
    +------------------------------+
    | Cluster stopped successfully |
    +------------------------------+
    1 row in set (15.24 sec)
    
    mcm> start cluster --initial mycluster;
    +------------------------------+
    | Command result               |
    +------------------------------+
    | Cluster started successfully |
    +------------------------------+
    1 row in set (34.47 sec)
    
  3. Restore the backup.

    This is done using the restore cluster command, which requires the backup ID and the name of the cluster as arguments. Thus, you can restore backup 4 to mycluster as shown here:

    mcm> restore cluster --backupid=4 mycluster;
    +--------------------------------+
    | Command result                 |
    +--------------------------------+
    | Restore completed successfully |
    +--------------------------------+
    1 row in set (16.78 sec)
    

3.6.2.3 Partial restore—missing images

It is possible using MySQL Cluster Manager to perform a partial restore of a MySQL Cluster—that is, to restore from a backup in which backup images from one or more data nodes are not available. This is required if we wish to restore mycluster from backup number 6, since an image for this backup is available only for node 1, as can be seen in the output of list backups in the mcm client:

mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host    | Timestamp           | Comment |
+----------+--------+---------+---------------------+---------+
| 1        | 1      | tonfisk | 2012-12-04 12:03:52 |         |
| 1        | 2      | tonfisk | 2012-12-04 12:03:52 |         |
| 2        | 1      | tonfisk | 2012-12-04 12:04:15 |         |
| 2        | 2      | tonfisk | 2012-12-04 12:04:15 |         |
| 3        | 1      | tonfisk | 2012-12-04 12:17:41 |         |
| 3        | 2      | tonfisk | 2012-12-04 12:17:41 |         |
| 4        | 1      | tonfisk | 2012-12-12 14:24:35 |         |
| 4        | 2      | tonfisk | 2012-12-12 14:24:35 |         |
| 5        | 1      | tonfisk | 2012-12-12 14:31:31 |         |
| 5        | 2      | tonfisk | 2012-12-12 14:31:31 |         |
| 6        | 1      | tonfisk | 2012-12-12 14:32:09 |         |
+----------+--------+---------+---------------------+---------+
11 rows in set (0.08 sec)

To perform a restore of only those nodes for which we have images (in this case, node 1 only), we can use the --skip-nodeid option when executing a restore cluster command. This option causes one or more nodes to be skipped when performing the restore. Assuming that mycluster has been cleared of data (as described earlier in this section), we can perform a restore that skips node 2 as shown here:

mcm> restore cluster --backupid=6 --skip-nodeid=2 mycluster;
+--------------------------------+
| Command result                 |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (17.06 sec)

Because we excluded node 2 from the restore process, no data has been distributed to it. To cause MySQL Cluster data to be distributed to any such excluded or skipped nodes following a partial restore, it is necessary to redistribute the data manually by executing an ALTER ONLINE TABLE ... REORGANIZE PARTITION statement in the mysql client for each NDB table in the cluster. To obtain a list of NDB tables from the mysql client, you can use multiple SHOW TABLES statements or a query such as this one:

SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME)
    FROM INFORMATION_SCHEMA.TABLES
    WHERE ENGINE='ndbcluster';

You can generate the necessary SQL statements using a more elaborate version of the query just shown, such as the one employed here:

mysql> SELECT
    ->     CONCAT('ALTER ONLINE TABLE `', TABLE_SCHEMA,
    ->            '`.`', TABLE_NAME, '` REORGANIZE PARTITION;')
    ->     AS Statement
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE ENGINE='ndbcluster';
+--------------------------------------------------------------------------+
| Statement                                                                |
+--------------------------------------------------------------------------+
| ALTER ONLINE TABLE `mysql`.`ndb_apply_status` REORGANIZE PARTITION;      |
| ALTER ONLINE TABLE `mysql`.`ndb_index_stat_head` REORGANIZE PARTITION;   |
| ALTER ONLINE TABLE `mysql`.`ndb_index_stat_sample` REORGANIZE PARTITION; |
| ALTER ONLINE TABLE `db1`.`n1` REORGANIZE PARTITION;                      |
| ALTER ONLINE TABLE `db1`.`n2` REORGANIZE PARTITION;                      |
| ALTER ONLINE TABLE `db1`.`n3` REORGANIZE PARTITION;                      |
| ALTER ONLINE TABLE `test`.`n1` REORGANIZE PARTITION;                     |
| ALTER ONLINE TABLE `test`.`n2` REORGANIZE PARTITION;                     |
| ALTER ONLINE TABLE `test`.`n3` REORGANIZE PARTITION;                     |
| ALTER ONLINE TABLE `test`.`n4` REORGANIZE PARTITION;                     |
+--------------------------------------------------------------------------+
10 rows in set (0.09 sec)
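The same statements can also be generated outside the mysql client, for example to review or script them before execution. The following shell function is an illustrative sketch (the sample input rows are hypothetical, and executing the generated statements still requires a mysql client connection): it turns tab-separated schema/table pairs, such as those printed by mysql -N for the INFORMATION_SCHEMA query above, into REORGANIZE PARTITION statements:

```shell
#!/bin/sh
# Read "schema<TAB>table" pairs on stdin and emit one
# ALTER ONLINE TABLE ... REORGANIZE PARTITION statement per pair.
make_reorganize_sql() {
    tab=$(printf '\t')
    while IFS=$tab read -r schema table; do
        printf 'ALTER ONLINE TABLE `%s`.`%s` REORGANIZE PARTITION;\n' \
            "$schema" "$table"
    done
}
```

One could then pipe the output of the INFORMATION_SCHEMA query through this function and feed the result back into the mysql client to perform the redistribution.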

3.6.2.4 Partial restore—data nodes added

A partial restore can also be performed when new data nodes have been added to a MySQL Cluster following a backup. In this case, you can exclude the new nodes using --skip-nodeid when executing the restore cluster command. Consider the MySQL Cluster named mycluster as shown in the output of the following show status command:

mcm> show status -r mycluster;
+--------+----------+----------+---------+-----------+-----------+
| NodeId | Process  | Host     | Status  | Nodegroup | Package   |
+--------+----------+----------+---------+-----------+-----------+
| 49     | ndb_mgmd | tonfisk  | stopped |           | mypackage |
| 1      | ndbd     | tonfisk  | stopped | 0         | mypackage |
| 2      | ndbd     | tonfisk  | stopped | 0         | mypackage |
| 50     | mysqld   | tonfisk  | stopped |           | mypackage |
| 51     | mysqld   | tonfisk  | stopped |           | mypackage |
| 52     | ndbapi   | *tonfisk | added   |           |           |
| 53     | ndbapi   | *tonfisk | added   |           |           |
+--------+----------+----------+---------+-----------+-----------+
7 rows in set (0.03 sec)

The output of list backups shows us the available backup images for this cluster:

mcm> list backups mycluster;
+----------+--------+---------+---------------------+---------+
| BackupId | NodeId | Host    | Timestamp           | Comment |
+----------+--------+---------+---------------------+---------+
| 1        | 1      | tonfisk | 2012-12-04 12:03:52 |         |
| 1        | 2      | tonfisk | 2012-12-04 12:03:52 |         |
| 2        | 1      | tonfisk | 2012-12-04 12:04:15 |         |
| 2        | 2      | tonfisk | 2012-12-04 12:04:15 |         |
| 3        | 1      | tonfisk | 2012-12-04 12:17:41 |         |
| 3        | 2      | tonfisk | 2012-12-04 12:17:41 |         |
| 4        | 1      | tonfisk | 2012-12-12 14:24:35 |         |
| 4        | 2      | tonfisk | 2012-12-12 14:24:35 |         |
+----------+--------+---------+---------------------+---------+
8 rows in set (0.06 sec)

Now suppose that, at a later point in time, 2 data nodes have been added to mycluster using an add process command. The show status output for mycluster now looks like this:

mcm> show status -r mycluster;
+--------+----------+----------+---------+-----------+-----------+
| NodeId | Process  | Host     | Status  | Nodegroup | Package   |
+--------+----------+----------+---------+-----------+-----------+
| 49     | ndb_mgmd | tonfisk  | running |           | mypackage |
| 1      | ndbd     | tonfisk  | running | 0         | mypackage |
| 2      | ndbd     | tonfisk  | running | 0         | mypackage |
| 50     | mysqld   | tonfisk  | running |           | mypackage |
| 51     | mysqld   | tonfisk  | running |           | mypackage |
| 52     | ndbapi   | *tonfisk | added   |           |           |
| 53     | ndbapi   | *tonfisk | added   |           |           |
| 3      | ndbd     | tonfisk  | running | 1         | mypackage |
| 4      | ndbd     | tonfisk  | running | 1         | mypackage |
+--------+----------+----------+---------+-----------+-----------+
9 rows in set (0.01 sec)

Since nodes 3 and 4 were not included in the backup, we need to exclude them when performing the restore. You can cause restore cluster to skip multiple data nodes by specifying a comma-separated list of node IDs with the --skip-nodeid option. Assume that we have just cleared mycluster of MySQL Cluster data using the mcm client commands stop cluster and start cluster --initial as described previously in this section; then we can restore mycluster (now having 4 data nodes numbered 1, 2, 3, and 4) from backup number 4 (made when mycluster had only 2 data nodes numbered 1 and 2) as shown here:

mcm> restore cluster --backupid=4 --skip-nodeid=3,4 mycluster;
+--------------------------------+
| Command result                 |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (17.61 sec)

No data is distributed to the skipped (new) nodes; you must force nodes 3 and 4 to be included in a redistribution of the data using ALTER ONLINE TABLE ... REORGANIZE PARTITION as described previously in this section.

For MySQL Cluster Manager 1.4.1 and later: An alternative to generating and running the ALTER ONLINE TABLE ... REORGANIZE PARTITION steps is to make use of the logical backup of the NDB tables' metadata, which is part of the cluster backup created by MySQL Cluster Manager. To do this, before you run the restore cluster step outlined above:

You can then run the restore cluster step, and the data will be redistributed across all the data nodes without the need for further manual intervention.

3.6.2.5 Restoring a Backup to a Cluster with Fewer Data Nodes

Sometimes, you want to transfer data from your cluster to another one that has fewer data nodes—for example, when you want to scale down your cluster or prepare a smaller slave cluster for a replication setup. While the methods described in Section 3.6.2, “Basic MySQL Cluster Backup and Restore Using MySQL Cluster Manager” do not work in that case, starting from MySQL Cluster Manager 1.4.1 you can perform the transfer using just the backup cluster command and the ndb_restore program.

The process starts with creating a backup for the original cluster using the backup cluster command. Next, create a new cluster with fewer data nodes using the create cluster command. Before the NDB table data can be transferred, the metadata for the NDB tables must first be restored to the new cluster. Starting from MySQL Cluster Manager 1.4.1, the backup cluster command also creates a logical backup for the metadata of the NDB tables (see Logical Backup for NDB Table Metadata for details). Use the --all option with the list backups command to list all backups, including the logical backups for the NDB tables' metadata, which are marked by the comment Schema:

mcm> list backups --all mycluster;
+----------+--------+---------+----------------------+---------+
| BackupId | NodeId | Host    | Timestamp            | Comment |
+----------+--------+---------+----------------------+---------+
| 1        | 1      | tonfisk | 2016-09-21 21:13:09Z |         |
| 1        | 2      | tonfisk | 2016-09-21 21:13:09Z |         |
| 1        | 3      | tonfisk | 2016-09-21 21:13:09Z |         |
| 1        | 4      | tonfisk | 2016-09-21 21:13:09Z |         |
| 1        | 50     | tonfisk | 2016-09-21 21:13:12Z | Schema  |
| 2        | 1      | tonfisk | 2016-09-21 21:17:50Z |         |
| 2        | 2      | tonfisk | 2016-09-21 21:17:50Z |         |
| 2        | 3      | tonfisk | 2016-09-21 21:17:50Z |         |
| 2        | 4      | tonfisk | 2016-09-21 21:17:50Z |         |
| 2        | 50     | tonfisk | 2016-09-21 21:17:52Z | Schema  |
+----------+--------+---------+----------------------+---------+
10 rows in set (0.01 sec)

Next, find out the locations of the logical backup file and of the backup files for each data node of the original cluster.

Locations of backup files.  The backup files for each node are to be found under the folder specified by the cluster parameter BackupDataDir for data nodes and the parameter backupdatadir for mysqld nodes. Because the get command is not case sensitive, you can use this single command to check the values of both parameters:

mcm> get BackupDataDir mycluster;
+---------------+----------------+----------+---------+----------+---------+---------+----------+
| Name          | Value          | Process1 | NodeId1 | Process2 | NodeId2 | Level   | Comment  |
+---------------+----------------+----------+---------+----------+---------+---------+----------+
| BackupDataDir | /opt/mcmbackup | ndbmtd   | 1       |          |         | Process |          |
| BackupDataDir | /opt/mcmbackup | ndbmtd   | 2       |          |         | Process |          |
| BackupDataDir | /opt/mcmbackup | ndbmtd   | 3       |          |         | Process |          |
| BackupDataDir | /opt/mcmbackup | ndbmtd   | 4       |          |         | Process |          |
| backupdatadir | /opt/mcmbackup | mysqld   | 50      |          |         | Process | MCM only |
+---------------+----------------+----------+---------+----------+---------+---------+----------+
5 rows in set (0.18 sec)

The backup files for each backup with a given BackupID are found under BackupDataDir/BACKUP/BACKUP-ID/ for each data node, and under backupdatadir/BACKUP/BACKUP-ID/ for each mysqld node. The comment MCM only in the row returned for the parameter backupdatadir indicates that backupdatadir is used by MySQL Cluster Manager only, and that the directory it specifies contains only backups of the NDB tables' metadata. Note that if BackupDataDir is not specified, the get command returns no value for it, and the parameter assumes the value of DataDir, so that the backup files are stored under DataDir/BACKUP/BACKUP-backup_id. Similarly, if backupdatadir has not been specified, the get command again returns no value for it, and the logical backup files for the mysqld node are found at the default location /path-to-mcm-data-repository/clusters/clustername/nodeid/BACKUP/BACKUP-Id.
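The fallback logic for locating a data node's backup directory can be sketched as a small shell helper (the function name and arguments are hypothetical, for illustration only):

```shell
# backup_dir BACKUPDATADIR DATADIR BACKUP_ID
# Prints the directory holding a data node's backup files for the given
# backup ID, falling back to DataDir when BackupDataDir is not set.
backup_dir() {
  base=${1:-$2}               # BackupDataDir if set, else DataDir
  printf '%s/BACKUP/BACKUP-%s\n' "$base" "$3"
}

# BackupDataDir set, as in the sample cluster above:
backup_dir /opt/mcmbackup '' 2        # -> /opt/mcmbackup/BACKUP/BACKUP-2
# BackupDataDir unset, falling back to a DataDir:
backup_dir '' /var/lib/mysql-cluster 2
```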

The process of restoring the backed-up data from the original cluster to the new one consists of the following steps:

  1. Stop the original cluster:

    mcm> stop cluster mycluster;
    +------------------------------+
    | Command result               |
    +------------------------------+
    | Cluster stopped successfully |
    +------------------------------+
    1 row in set (19.54 sec)
    
    mcm> show status mycluster;
    +-----------+---------+---------+
    | Cluster   | Status  | Comment |
    +-----------+---------+---------+
    | mycluster | stopped |         |
    +-----------+---------+---------+
    1 row in set (0.05 sec)
    
  2. Start your new cluster. Make sure the new cluster is operational and that it has at least one free ndbapi slot, so that the ndb_restore utility can connect to the cluster:

    mcm> start cluster newcluster2nodes;
    +------------------------------+
    | Command result               |
    +------------------------------+
    | Cluster started successfully |
    +------------------------------+
    1 row in set (33.68 sec)
    
    mcm> show status -r newcluster2nodes;
    +--------+----------+---------+---------+-----------+-----------+
    | NodeId | Process  | Host    | Status  | Nodegroup | Package   |
    +--------+----------+---------+---------+-----------+-----------+
    | 49     | ndb_mgmd | tonfisk | running |           | mypackage |
    | 1      | ndbmtd   | tonfisk | running | 0         | mypackage |
    | 2      | ndbmtd   | tonfisk | running | 0         | mypackage |
    | 50     | mysqld   | tonfisk | running |           | mypackage |
    | 51     | ndbapi   | *       | added   |           |           |
    +--------+----------+---------+---------+-----------+-----------+
    5 rows in set (0.09 sec)
    
  3. Restore the logical backup of the metadata of the NDB tables onto the new cluster. See Reloading SQL-Format Backups for different ways to restore a logical backup. One way to do it is to open a mysql client, connect it to a mysqld node of the cluster, and then source the logical backup file with the mysql client:

    mysql> source path-to-logical-backup-file/BACKUP-BackupID.mysqld_nodeid.schema.sql

    See Locations of backup files above on how to find the path of the logical backup file. For our sample clusters, this is the command for restoring the NDB table metadata from the backup with BackupID 2:

    mysql> source /opt/mcmbackup/BACKUP/BACKUP-2/BACKUP-2.50.schema.sql
  4. Restore the backup for each data node of the original cluster to the new cluster, one node at a time, using the ndb_restore program:

    shell> ndb_restore -b BackupID -n nodeID -r --backup_path=backup-folder-for-data_node

    See Locations of backup files above on how to find the paths of the data node backup files. For our sample clusters, to restore the data from the backup with BackupID 2 for data nodes 1 to 4 of mycluster, execute the following commands:

    shell> ndb_restore --backupid=2 --nodeid=1 --restore_data --backup_path=/opt/mcmbackup/BACKUP/BACKUP-2/ --disable-indexes
    shell> ndb_restore --backupid=2 --nodeid=2 --restore_data --backup_path=/opt/mcmbackup/BACKUP/BACKUP-2/ --disable-indexes
    shell> ndb_restore --backupid=2 --nodeid=3 --restore_data --backup_path=/opt/mcmbackup/BACKUP/BACKUP-2/ --disable-indexes
    shell> ndb_restore --backupid=2 --nodeid=4 --restore_data --backup_path=/opt/mcmbackup/BACKUP/BACKUP-2/ --disable-indexes

    The --disable-indexes option causes indexes to be ignored during the restores. If indexes were instead restored node by node, they might not be restored in the right order for foreign keys and unique key constraints to work properly. After all the data has been restored, rebuild the indexes by running ndb_restore once more with the --rebuild-indexes option (you only need to run this against the backup files of one of the data nodes):

    shell> ndb_restore --backupid=2 --nodeid=1 --rebuild-indexes --backup_path=/opt/mcmbackup/BACKUP/BACKUP-2/ 

The data and indexes have now been fully restored to the new cluster.
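The per-node restore commands above can also be generated with a short shell loop. The sketch below prints the commands (a dry run) rather than executing them, so they can be reviewed before piping the output to a shell; the function name is hypothetical:

```shell
# gen_restore_cmds BACKUP_ID BACKUP_PATH NODE_ID...
# Prints one ndb_restore --restore_data command per data node, followed by
# a single --rebuild-indexes command run against the first node's files.
gen_restore_cmds() {
  backup_id=$1; backup_path=$2; shift 2
  for node_id in "$@"; do
    echo "ndb_restore --backupid=$backup_id --nodeid=$node_id" \
         "--restore_data --backup_path=$backup_path --disable-indexes"
  done
  echo "ndb_restore --backupid=$backup_id --nodeid=$1" \
       "--rebuild-indexes --backup_path=$backup_path"
}

# Matching the example above (backup 2, data nodes 1 to 4):
gen_restore_cmds 2 /opt/mcmbackup/BACKUP/BACKUP-2/ 1 2 3 4
```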

3.7 Backing Up and Restoring MySQL Cluster Manager Agents

This section explains how to back up configuration data for mcmd agents and how to restore the backed-up agent data. Used together with the backup cluster command, the backup agents command allows you to back up and restore a complete cluster-plus-manager setup.

If no host names are given with the backup agents command, backups are created for all agents of the site:

mcm> backup agents mysite;
+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Agent backup created successfully |
+-----------------------------------+
1 row in set (0.07 sec)

To back up one or more specific agents, specify them with the --hosts option:

mcm> backup agents --hosts=tonfisk mysite;
+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Agent backup created successfully |
+-----------------------------------+
1 row in set (0.07 sec)

If no site name is given, only the agent that the mcm client is connected to is backed up.

The backup for each agent includes the following contents from the agent repository (mcm_data folder):

  • The rep subfolder

  • The metadata files high_water_mark and repchksum

The repository is locked while a backup is in progress, to avoid creating an inconsistent backup. The backup for each agent is created in a subfolder named rep_backup/timestamp under the agent's mcm_data folder, with timestamp reflecting the time the backup began. To store backups at another location, create a symbolic link from mcm_data/rep_backup to the desired location.

To restore the backup for an agent:

  • Wipe the contents of the agent's mcm_data/rep folder

  • Delete the metadata files high_water_mark and repchksum from the mcm_data folder

  • Copy the contents in the mcm_data/rep_backup/timestamp/rep folder back into the mcm_data/rep folder

  • Copy the metadata files high_water_mark and repchksum from the mcm_data/rep_backup/timestamp folder back into the mcm_data folder

  • Restart the agent

The steps are illustrated below:

mysql@tonfisk$ cd mcm_data
mysql@tonfisk$ rm -rf rep/*
mysql@tonfisk$ rm -f high_water_mark repchksum
mysql@tonfisk$ cp rep_backup/timestamp/rep/* ./rep/
mysql@tonfisk$ cp rep_backup/timestamp/high_water_mark ./
mysql@tonfisk$ cp rep_backup/timestamp/repchksum ./
mysql@tonfisk$ mcm1.4.1/bin/mcmd

The backup may be manually restored on just one agent, or on more than one. If the backup is restored for only one agent on, say, host A, host A contacts the other agents of the site and makes them recover their repositories from host A, using the usual mechanism for agent recovery. If the agents on all hosts are restored and restarted manually, the situation is similar to a normal restart of all agents after they were stopped at slightly different points in time.

If configuration changes have been made to the cluster since the restored backup was created, the same changes must be made again after the agent restores have been completed, to ensure that the agents' configurations match those of the actual running cluster. For example: some time after a backup was taken, a set MaxNoOfTables:ndbmtd=500 mycluster command was issued, and soon afterward something corrupted the agent repository; after the agent backup is restored, the same set command has to be rerun in order to update the mcmd agents' configurations. While the command does not effectively change anything on the cluster itself, after it has been run, a rolling restart of the cluster processes using the restart cluster command is still required.

3.8 Restoring a MySQL Cluster Manager Agent with Data from Other Agents

Sometimes, an mcmd agent can fail to restart after a failure because its configuration store has been corrupted (for example, by an improper shutdown of the host). If at least one other mcmd agent is still functioning properly on another host of the cluster, you can restore the failed agent with the following steps:

  • Make sure the mcmd agent has really been stopped.

  • Go to the agent repository (the agent's mcm_data folder).

  • Wipe the contents of the rep folder.

  • Delete the metadata files high_water_mark and repchksum.

  • Delete the manager.lck file.

  • Restart the agent.

The agent then recovers the configuration store from other agents on the other hosts.
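The cleanup steps above can be sketched as a shell helper (the function name is hypothetical; stopping and restarting mcmd is left to the operator):

```shell
# clean_agent_repo MCM_DATA_DIR
# Removes the repository contents, the metadata files, and the lock file
# from an agent's mcm_data directory, per the steps above.
clean_agent_repo() {
  dir=$1
  rm -rf "$dir/rep"/*
  rm -f "$dir/high_water_mark" "$dir/repchksum" "$dir/manager.lck"
}

# Usage (run only after making sure the agent is stopped):
# clean_agent_repo /path/to/mcm_data
# mcm1.4.1/bin/mcmd      # restart; the agent recovers from its peers
```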

However, if all the mcmd agents for the cluster are malfunctioning, you will have to do one of the following:

3.9 Setting Up MySQL Cluster Replication with MySQL Cluster Manager

This section provides sample steps for setting up MySQL Cluster replication with a single replication channel using MySQL Cluster Manager.

Before trying the following steps, it is recommended that you first read NDB Cluster Replication to familiarize yourself with the concepts, requirements, operations, and limitations of MySQL Cluster replication.

  1. Create and start a master cluster:

    mcm> create site --hosts=tonfisk msite;

    mcm> add package --basedir=/usr/local/cluster-mgt/cluster-7.3.2 7.3.2;

    mcm> create cluster -P 7.3.2 -R \
           ndb_mgmd@tonfisk,ndbmtd@tonfisk,ndbmtd@tonfisk,mysqld@tonfisk,mysqld@tonfisk,ndbapi@*,ndbapi@* \
           master;

    mcm> set portnumber:ndb_mgmd=4000 master;

    mcm> set port:mysqld:51=3307 master;

    mcm> set port:mysqld:50=3306 master;

    mcm> set server_id:mysqld:50=100 master;

    mcm> set log_bin:mysqld:50=binlog master;

    mcm> set binlog_format:mysqld:50=ROW master;

    mcm> set ndb_connectstring:mysqld:50=tonfisk:4000 master;

    mcm> start cluster master;
    

  2. Create and start a slave cluster (we begin by creating a new site called ssite just for the slave cluster; you can also skip that and put the master and slave cluster hosts under the same site instead):

    mcm> create site --hosts=flundra ssite;

    mcm> add package --basedir=/usr/local/cluster-mgt/cluster-7.3.2 7.3.2;

    mcm> create cluster -P 7.3.2 -R \
          ndb_mgmd@flundra,ndbmtd@flundra,ndbmtd@flundra,mysqld@flundra,mysqld@flundra,ndbapi@*,ndbapi@* \
          slave;

    mcm> set portnumber:ndb_mgmd=4000 slave;

    mcm> set port:mysqld:50=3306 slave;

    mcm> set port:mysqld:51=3307 slave;

    mcm> set server_id:mysqld:50=101 slave;

    mcm> set ndb_connectstring:mysqld:50=flundra:4000 slave;

    mcm> set slave_skip_errors:mysqld=all slave;

    mcm> start cluster slave;
    

  3. Create a slave account (with the user name myslave and password mypw) on the master cluster with the appropriate privilege by logging into the master replication client (mysqlM) and issuing the following statements:

    
    mysqlM> GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'flundra'
        -> IDENTIFIED BY 'mypw';

  4. Log in to the slave cluster client (mysqlS) and issue the following statements:

    mysqlS> CHANGE MASTER TO
        -> MASTER_HOST='tonfisk',
        -> MASTER_PORT=3306,
        -> MASTER_USER='myslave',
        -> MASTER_PASSWORD='mypw';
    

  5. Start replication by issuing the following statement with the slave cluster client:

    mysqlS> START SLAVE;
    

The above example assumes that the master and slave clusters are created at about the same time, with no data on either before replication starts. If the master cluster has already been operating and contains data when the slave cluster is created, after step 3 above, follow these steps to transfer the data from the master cluster to the slave cluster and to prepare the slave cluster for replication:

  1. Back up your master cluster using the backup cluster command of MySQL Cluster Manager:

    mcm> backup cluster master;

    Note

    Only NDB tables are backed up by the command; tables using other MySQL storage engines are ignored.

  2. Look up the backup ID of the backup you just made by listing all backups for the master cluster:

    mcm> list backups master;
    +----------+--------+---------+---------------------+---------+
    | BackupId | NodeId | Host    | Timestamp           | Comment |
    +----------+--------+---------+---------------------+---------+
    | 1        | 1      | tonfisk | 2014-10-17 20:03:23 |         |
    | 1        | 2      | tonfisk | 2014-10-17 20:03:23 |         |
    | 2        | 1      | tonfisk | 2014-10-17 20:09:00 |         |
    | 2        | 2      | tonfisk | 2014-10-17 20:09:00 |         |
    +----------+--------+---------+---------------------+---------+
    

    From the output, you can see that the latest backup you created has the backup ID 2, and that backup data exists for nodes 1 and 2.

  3. Using the backup ID and the related node IDs, identify the backup files just created under /mcm_data/clusters/cluster_name/node_id/data/BACKUP/BACKUP-backup_id/ in the master cluster's installation directory (in this case, the files under /mcm_data/clusters/master/1/data/BACKUP/BACKUP-2 and /mcm_data/clusters/master/2/data/BACKUP/BACKUP-2), and copy them over to the equivalent places for the slave cluster (in this case, /mcm_data/clusters/slave/1/data/BACKUP/BACKUP-2 and /mcm_data/clusters/slave/2/data/BACKUP/BACKUP-2 under the slave cluster's installation directory). After the copying is finished, use the following command to check that the backup is now available for the slave cluster:

    mcm> list backups slave;
    +----------+--------+---------+---------------------+---------+
    | BackupId | NodeId | Host    | Timestamp           | Comment |
    +----------+--------+---------+---------------------+---------+
    | 2        | 1      | flundra | 2014-10-17 21:19:00 |         |
    | 2        | 2      | flundra | 2014-10-17 21:19:00 |         |
    +----------+--------+---------+---------------------+---------+
    

  4. Restore the backed up data to the slave cluster (note that you need an unused ndbapi slot for the restore cluster command to work):

    mcm> restore cluster --backupid=2 slave;

  5. On the master cluster client, use the following command to identify the correct binary log file and position for replication to start:

    
    mysqlM> SHOW MASTER STATUS\G
    *************************** 1. row ***************************
                 File: binlog.000017
             Position: 2857
         Binlog_Do_DB:
     Binlog_Ignore_DB:
    Executed_Gtid_Set:
    

  6. On the slave cluster client, provide the slave cluster with the master cluster's information, including the binary log file name (with the MASTER_LOG_FILE option) and position (with the MASTER_LOG_POS option) that you discovered in step 5 above:

    mysqlS> CHANGE MASTER TO
        -> MASTER_HOST='tonfisk',
        -> MASTER_PORT=3306,
        -> MASTER_USER='myslave',
        -> MASTER_PASSWORD='mypw',
        -> MASTER_LOG_FILE='binlog.000017',
        -> MASTER_LOG_POS=2857;
    
    

  7. Start replication by issuing the following statement with the slave cluster client:

    mysqlS> START SLAVE;
    

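The file-copy in step 3 above can be sketched as a small shell helper (the function name is hypothetical; the paths follow the layout described in that step):

```shell
# copy_backup SRC_BASE DST_BASE BACKUP_ID NODE_ID...
# Copies the BACKUP-<id> directory from each node's data directory under
# SRC_BASE to the equivalent location under DST_BASE.
copy_backup() {
  src_base=$1; dst_base=$2; backup_id=$3; shift 3
  for node_id in "$@"; do
    src="$src_base/$node_id/data/BACKUP/BACKUP-$backup_id"
    dst="$dst_base/$node_id/data/BACKUP/BACKUP-$backup_id"
    mkdir -p "$dst"
    cp -r "$src"/. "$dst"/
  done
}

# Matching the example in step 3 (backup 2, nodes 1 and 2):
# copy_backup /mcm_data/clusters/master /mcm_data/clusters/slave 2 1 2
```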
As an alternative to these steps, you can also follow the steps described in NDB Cluster Backups With NDB Cluster Replication to copy the data from the master to the slave and to specify the binary log file and position for replication to start.