This section explains how to set up a single-primary InnoDB cluster and configure MySQL Router to achieve high availability. You create and administer your InnoDB clusters using MySQL Shell with the included AdminAPI. Familiarity with MySQL Shell is assumed; see Chapter 18, MySQL Shell User Guide for further information.
This tutorial shows how to use MySQL Shell to create an InnoDB cluster consisting of a MySQL Server instance which provides the seed instance of the InnoDB cluster and holds the initial data set. Two more sandbox MySQL Server instances are created and added to the InnoDB cluster. Then MySQL Router is deployed and used to route connections to the InnoDB cluster, and high availability is tested. These are the steps:
Deploy the sandbox MySQL Server instances.
Create the InnoDB cluster on the seed instance.
Add the remaining instances to the cluster.
Deploy and bootstrap MySQL Router.
Test that failover provides high availability.
MySQL Shell includes the AdminAPI, which is accessed
through the dba global variable and its
associated methods. The dba variable's
methods enable you to administer your cluster; for example, you
use the dba.deploySandboxInstance() method to
add a sandbox MySQL instance.
To list all available dba commands, use the
dba.help() method. You can obtain detailed
information for a specific method using the general format
object.help('methodname'). For example:
mysql-js> dba.help('getCluster')
Retrieves a cluster from the Metadata Store.
SYNTAX
<Dba>.getCluster([name])
WHERE
name: Parameter to specify the name of the cluster to be returned.
DESCRIPTION
If name is not specified, the default cluster will be returned.
If name is specified, and no cluster with the indicated name is found, an error
will be raised.
In addition to this user documentation, all dba methods
are covered in the JavaScript and Python developer
documentation.
Initially deploying and using local sandbox instances of MySQL is a good way to start your exploration of InnoDB cluster. You can fully test out InnoDB cluster locally, prior to deployment on your production servers. MySQL Shell has built-in functionality for creating sandbox instances, and it creates them correctly configured to work with Group Replication in a locally deployed clustered scenario.
Sandbox instances are only suitable for deploying and running on your local machine. In a production environment the MySQL Server instances would be deployed on various hosts on the network. This is explained later in this guide.
The dba module provides several functions for
administration of sandbox instances. For this example setup, you
create three sandbox instances. The AdminAPI provides a
function for that purpose:
dba.deploySandboxInstance().
Start MySQL Shell from a command prompt by issuing the command:
shell> mysqlsh
MySQL Shell provides two scripting languages: JavaScript and
Python. Throughout this guide MySQL Shell is used primarily in
JavaScript mode. When MySQL Shell starts it is in JavaScript
mode by default.
You switch into JavaScript mode, Python mode and SQL mode using
the commands \js, \py, and
\sql. Ensure you are in JavaScript mode by
issuing the \js command, then execute:
mysql-js> dba.deploySandboxInstance(3310)
Semi-colons are not required at the end of the line in JavaScript mode.
The argument passed to
deploySandboxInstance() is the TCP port
number where the MySQL Server instance listens for connections.
By default the sandbox is created in a directory named
$HOME/mysql-sandboxes/port
on Unix systems. For Microsoft Windows systems the directory is
%userprofile%\MySQL\mysql-sandboxes\port.
You are prompted for the root password for the instance.
Each instance has its own password. Using the same password for all sandboxes in this tutorial makes it easier to follow, but remember to use a different password for each instance on production systems.
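If you prefer not to be prompted, the deployment options can be supplied in a dictionary as the second argument. The following is a sketch that assumes the password and sandboxDir options are available in your version (you can confirm with dba.help('deploySandboxInstance')); the port 3340 and the directory path are only illustrative:
mysql-js> dba.deploySandboxInstance(3340, {password: 'sandbox-pw', sandboxDir: '/opt/sandboxes'})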
To add further server instances, use
deploySandboxInstance(). For this example
sandbox cluster, add two more instances using different port
numbers by issuing:
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
You now have three MySQL Server sandbox instances running on ports 3310, 3320, and 3330.
The next step is to create the InnoDB cluster while connected to the seed MySQL Server instance. The seed instance is the instance that you are connected to via MySQL Shell and that you want to replicate to the other instances. In this example, the sandbox instances are blank instances, so we can choose any of them. In a production setup the seed instance would be the one which contains your existing data set and which would be replicated to the other instances in the cluster.
Connect MySQL Shell to the seed instance, in this case the one at port 3310:
mysql-js> \connect root@localhost:3310
The syntax \connect is a shortcut for the
MySQL Shell connect method shell.connect().
Alternatively use the following command:
mysql-js> shell.connect('root@localhost:3310')
Use the createCluster() method to create the
InnoDB cluster with the currently connected instance as the
seed:
mysql-js> var cluster = dba.createCluster('testCluster')
The createCluster() method returns the
created cluster; the above statement assigns it to the
cluster variable. The parameter passed to the
createCluster() method is a symbolic name
given to this InnoDB cluster, in this case
testCluster. This function deploys the metadata
to the selected instance, configures it for Group Replication,
and adds the instance as the seed of the new InnoDB cluster.
After validating that the instance is properly configured, it is added to the InnoDB cluster as the seed instance and the replication subsystem is started.
The provided sandbox instances are pre-configured to work with
Group Replication, but if you use a pre-existing instance, it is
possible that some configuration options might not be set in a
compatible way. The createCluster() command
ensures that the settings are correct and if not, it changes
their values. If a change requires MySQL Server to be restarted,
you are prompted to restart it manually whenever convenient.
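If you would rather check a pre-existing instance yourself before creating the cluster, AdminAPI provides the dba.checkInstanceConfiguration() method (the name may differ in older versions; see dba.help()), which reports incompatible settings without changing them. For example, against the seed instance:
mysql-js> dba.checkInstanceConfiguration('root@localhost:3310')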
In summary, when dba.createCluster() is
executed, the following steps are carried out:
The InnoDB cluster Metadata Schema is created (if it does not already exist) or is updated to the latest version. Schema objects or columns are only added, never removed.
The new InnoDB cluster information, including the specified name and password, is inserted into the InnoDB cluster Metadata.
The seed instance is added to the InnoDB cluster.
The seed instance information is inserted into the InnoDB cluster Metadata.
Once you have created a cluster, obtain the cluster instance variable using a command such as:
mysql-js> var cluster = dba.getCluster("testCluster")
You specify the name of the cluster you wish to obtain the instance variable for. If you do not specify the name of the cluster, the default cluster is returned.
The next step is to add secondary instances to the InnoDB cluster. Any transactions that were executed by the seed instance are re-executed by each secondary instance as it is added. We use the sandbox instances that were created earlier.
The seed instance in this example was recently created, so it is nearly empty. Therefore, there is little data that needs to be replicated from the seed instance to the secondary instances. In a production environment, where you have an existing database on the seed instance, you could use a tool such as MySQL Enterprise Backup to ensure that the secondaries have matching data before replication starts. This avoids the possibility of lengthy delays while data replicates from the primary to the secondaries. Once the cluster is formed, writes to the primary result in data being replicated to the secondaries.
Add the second instance to the InnoDB cluster:
mysql-js> cluster.addInstance('root@localhost:3320')
You are prompted for the root user's password.
Add the third instance:
mysql-js> cluster.addInstance('root@localhost:3330')
You are prompted for the root user's password.
At this point you have created a cluster with three instances: a primary, and two secondaries.
You can have additional details added to the logs when adding an instance to a cluster. Pass in the verbose option to enable additional logging; the previous example would then look like this:
mysql-js> cluster.addInstance('root@localhost:3330', {verbose: true})
You can only specify localhost in
addInstance() if the instance is a sandbox
instance. This also applies to the implicit
addInstance() after issuing
createCluster().
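Before moving on to MySQL Router, you can confirm that all three instances have joined by checking the cluster's status. The output below is an abbreviated sketch and the exact fields vary by MySQL Shell version; expect the seed to report as R/W and the two secondaries as R/O, all with status ONLINE:
mysql-js> cluster.status()
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "status": "OK",
        "topology": {
            "localhost:3310": {"mode": "R/W", "status": "ONLINE"},
            "localhost:3320": {"mode": "R/O", "status": "ONLINE"},
            "localhost:3330": {"mode": "R/O", "status": "ONLINE"}
        }
    }
}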
In order for client applications to handle failover, they need
to be aware of the InnoDB cluster topology. They also need to
know whether an instance is the PRIMARY in
single-primary mode, or is "R/W" in multi-primary mode. While it
is possible for applications to implement that logic, MySQL Router
provides this functionality for you and is designed for
InnoDB cluster.
The recommended deployment of MySQL Router is on the same host as the application. In this tutorial, everything is running on a single host, so you deploy MySQL Router to the same host.
Assuming MySQL Router is already installed (see Installation), the only required step is to bootstrap it with the location of the InnoDB cluster metadata server. The following does this using all default settings:
shell> mysqlrouter --bootstrap root@localhost:3310 --user=mysqlrouter
You are prompted for the instance password and an encryption key for MySQL Router to use. This encryption key is used to encrypt the instance password used by MySQL Router to connect to the cluster. The ports you can use to connect to the InnoDB cluster are also displayed.
Currently only Classic Protocol connections are supported between MySQL Router and InnoDB cluster.
MySQL Router connects to the InnoDB cluster, fetches its metadata
and configures itself for use. The generated configuration
creates two TCP ports: one for read-write sessions (which
redirects connections to "R/W" instances) and one for read-only
sessions (which redirects connections to one of the
SECONDARY instances).
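For reference, these routing definitions are written to the generated mysqlrouter.conf. The following is only a sketch of what the read-write section may look like; the section name, option names (for example mode versus routing_strategy), and port defaults vary between MySQL Router versions:
[routing:testCluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://testCluster/default?role=PRIMARY
mode=read-write
protocol=classic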
Once bootstrapped and configured, start MySQL Router (or set up a service for it to start automatically when the system boots):
shell> mysqlrouter &
You can now connect a MySQL client, such as MySQL Shell, to one
of the incoming MySQL Router ports and see how the client gets
transparently connected to one of the InnoDB cluster
instances. To see which instance you are actually connected to,
simply query the port system variable.
shell> mysqlsh --uri root@localhost:6446
mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.00 sec)
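Assuming the default bootstrap configuration, port 6446 is the read-write port and port 6447 is the read-only port for classic protocol connections. Connecting to the read-only port routes you to one of the secondaries; the instance shown below is illustrative, as either secondary may be chosen:
shell> mysqlsh --uri root@localhost:6447
mysql-js> \sql
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
1 row in set (0.00 sec)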
To test if failover works, simulate an unexpected halt by
killing the PRIMARY instance using the
dba.killSandboxInstance() function and check
that one of the other instances takes over automatically.
mysql-js> dba.killSandboxInstance(3310)
Then you can again check which instance you are connected to.
The first SELECT statement fails as the
connection to the original PRIMARY was lost.
MySQL Shell automatically reconnects for you and when you
issue the command again the new port is confirmed.
mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> SELECT @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)
This shows that the InnoDB cluster provided us with automatic
failover, that MySQL Router has automatically reconnected us to the
new PRIMARY instance, and that we have high
availability.
You can bring the instance that you killed back online.
mysql-js> dba.startSandboxInstance(3310)
mysql-js> cluster.rejoinInstance('root@localhost:3310')
mysql-js> cluster.status()
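The exact status output varies by version, but after a successful rejoin you can expect all three instances to report ONLINE again, with the instance at port 3330 still acting as the primary (R/W) and the rejoined instance at port 3310 now acting as a secondary (R/O).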