This section explains how to work with clusters, and how to handle common administration tasks.
Use the Cluster object's
status() method to check a cluster's status:
mysql-js> cluster.status()

This retrieves the current InnoDB cluster status which the server instance you are connected to is aware of, and outputs a status report. It is important to understand that the instance's state in the cluster influences the information in the status report. A member which has left the cluster provides a different view of the cluster compared to an instance which belongs to the cluster.
The instance status is one of the following:
ONLINE: The instance is online and participating in the cluster.

OFFLINE: The instance may have lost connection to the other instances.

RECOVERING: The instance is attempting to synchronize with the cluster by pulling in transactions it needs before it can become an ONLINE member.

UNREACHABLE: The instance has lost communication with the cluster.

ERROR: The instance has encountered an error during the recovery phase or while applying a transaction.

(MISSING): The state of an instance which is part of the configured cluster, but is currently unavailable.

Note: The MISSING state is specific to InnoDB cluster; it is not a state generated by Group Replication. MySQL Shell uses this state to indicate instances that are registered in the metadata, but cannot be found in the live cluster view.
Once an instance enters ERROR state, the
super_read_only option is set
to ON. To leave the
ERROR state you must manually configure the
instance with
super_read_only=OFF.
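For example, after resolving the underlying problem on the affected instance, you can connect to it and clear the flag in SQL mode. This is a minimal sketch; make sure the cause of the ERROR state has been addressed before re-enabling writes:

mysql-sql> SET GLOBAL super_read_only = OFF;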
The mode indicates either R/W (read-write) or R/O (read-only). In single-primary mode, only
the instance marked "R/W" can execute transactions that update
the database, so it is the PRIMARY. If that instance becomes
unreachable for any reason (like an unexpected halt), one of the
remaining "R/O" instances automatically takes over its place and
becomes the new "R/W" PRIMARY. In
multi-primary mode, multiple instances are marked as "R/W" and
there is no elected PRIMARY.
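By default, createCluster() sets up a single-primary cluster. As an illustration, and assuming a version of MySQL Shell that supports the multiMaster option (an assumption; check the documentation for your version), a multi-primary cluster could be created like this:

mysql-js> var cluster = dba.createCluster("devCluster", {multiMaster: true})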
To check the status of the InnoDB cluster at a later time, you
can get a reference to the InnoDB cluster object by connecting
to any of its instances. However, if you want to make changes to
the InnoDB cluster, you must connect to a "R/W" instance. For
information about how the InnoDB cluster is running, use the
status() method:
mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{
"clusterName": "test",
"defaultReplicaSet": {
"status": "Cluster tolerant to up to ONE failure.",
"topology": {
"localhost:3310": {
"address": "localhost:3310",
"status": "ONLINE",
"role": "HA",
"mode": "R/W",
"leaves": {
"localhost:3320": {
"address": "localhost:3320",
"status": "ONLINE",
"role": "HA",
"mode": "R/O",
"leaves": {}
},
"localhost:3330": {
"address": "localhost:3330",
"status": "ONLINE",
"role": "HA",
"mode": "R/O",
"leaves": {}
}
}
}
}
}
}

As the above output demonstrates, status information includes the InnoDB cluster name, topology, PRIMARY, and more.
To get information about the structure of the InnoDB cluster
itself, use the cluster.describe() function:
mysql-js> cluster.describe();
{
"clusterName": "test",
"adminType": "local",
"defaultReplicaSet": {
"name": "default",
"instances": [
{
"name": "localhost:3310",
"host": "localhost:3310",
"role": "HA"
},
{
"name": "localhost:3320",
"host": "localhost:3320",
"role": "HA"
},
{
"name": "localhost:3330",
"host": "localhost:3330",
"role": "HA"
}
]
}
}

The output from this function shows the structure of the InnoDB cluster, including its name, the default replica set, and the instances that belong to it.
Once a sandbox instance is running, it is possible to change its status at any time using the following methods (a usage sketch follows this list):

Stop: dba.stopSandboxInstance() stops the MySQL Server instance gracefully.

Start: dba.startSandboxInstance() starts a stopped sandbox instance.

Kill: dba.killSandboxInstance() kills the MySQL Server instance process on the local host. This is useful to help simulate an unexpected halt while testing failover.

Delete: dba.deleteSandboxInstance() completely removes the sandbox instance from your file system.
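For example, assuming a sandbox deployed on port 3310 as in the script at the end of this section, the instance could be stopped and started again like this (a sketch; stopSandboxInstance may prompt for the instance's root password):

mysql-js> dba.stopSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3310)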
You can remove an instance from a cluster at any time should you
wish to do so. This can be done with the
removeInstance() method, as in the following
example:
mysql-js> cluster.removeInstance("192.168.1.1:3306")
If an instance leaves the cluster, for example because it lost connection and did not or could not automatically rejoin, it may be necessary to rejoin it to the cluster at a later stage. Because the Group Replication configuration is not persisted in the instance's local configuration file, restarting an instance causes it to leave its Replication Group, and it does not rejoin automatically; the instance must be rejoined to add it back to the cluster.
The command to rejoin an instance to a cluster is
cluster.rejoinInstance().
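For example, assuming the instance at localhost:3320 (a hypothetical address; substitute the address of the instance that left) needs to be added back:

mysql-js> cluster.rejoinInstance("root@localhost:3320")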
If an instance has been configured using dba.configureLocalInstance(), its Group Replication information is persisted to the configuration file, and the instance rejoins the cluster automatically. More information on this can be found in the section Configuring the Instance.
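As an illustration, the following would be run with MySQL Shell on the host of the instance itself, so that the local option file can be updated (the address is hypothetical):

mysql-js> dba.configureLocalInstance("root@localhost:3310")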
When MySQL Router is bootstrapped, it records the bootstrap server addresses in its configuration. These servers contain metadata used by MySQL Router in order to route correctly. If any additional instances are added to the cluster after bootstrapping MySQL Router, they are automatically detected and used for connection routing. If, however, all of the original metadata servers go offline for some reason, MySQL Router is no longer able to route correctly. Consider the following line in a mysqlrouter.conf file:
...
bootstrap_server_addresses=mysql://192.168.56.101:3310,mysql://192.168.56.101:3320,mysql://192.168.56.101:3330
...

There are three original metadata servers specified here. Now if two additional servers (call them D and E) were added, you would have a five-instance cluster, and MySQL Router routes to these additional instances as required. If the original metadata instances, A, B, and C, stopped unexpectedly and left the cluster, you would be left with only instances D and E running. At this point, instances D and E are still alive and form a quorum, so it should be possible to route calls to them. However, because all of the original metadata servers are down (instances A, B, and C), MySQL Router shuts off all routing. In such a situation you can configure MySQL Router manually:
Modify the MySQL Router instance's configuration file to specify the correct metadata servers in the bootstrap_server_addresses option (see the sketch after this list).

Restart the MySQL Router instance; the updated metadata servers are detected and used.
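For example, the updated line might look like the following, assuming instances D and E listen on ports 3340 and 3350 of the same host (hypothetical addresses; substitute the addresses of the metadata servers that are actually reachable):

...
bootstrap_server_addresses=mysql://192.168.56.101:3340,mysql://192.168.56.101:3350
...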
If you want to remove all information associated with a cluster,
you can use the cluster.dissolve() method.
This removes all metadata and configuration associated with the
cluster. Once you have dissolved the cluster you need to create
it again from scratch, using
dba.createCluster().
After calling cluster.dissolve(), the
cluster object is no longer valid.
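For example (a sketch; depending on the MySQL Shell version, the force option may be required to dissolve a cluster that still has registered instances, so check the method's built-in help):

mysql-js> cluster.dissolve({force: true})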
You can automate cluster configuration with scripts. For example:
shell> mysqlsh -f setup-innodb-cluster.js
Any command line options specified after the script file name
are passed to the script and not to
MySQL Shell. You can access those options using the
os.argv array in JavaScript, or the
sys.argv array in Python. In both cases,
the first option picked up in the array is the script name.
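For example, if the shell were invoked as mysqlsh -f setup-innodb-cluster.js extra-option (a hypothetical invocation), the script could inspect its options like this:

// os.argv[0] is the script name; subsequent entries are the extra options
var scriptName = os.argv[0];
var firstOption = os.argv[1];
print('Running ' + scriptName + ' with option: ' + firstOption + '\n');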
The contents of an example script file are shown here:
print('MySQL InnoDB cluster sandbox set up\n');
print('==================================\n');
print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances.\n');
print('The instances will be installed in ~/mysql-sandboxes.\n');
print('They will run on ports 3310, 3320 and 3330.\n\n');
var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});
try {
print('\nDeploying the sandbox instances.');
dba.deploySandboxInstance(3310, {password: dbPass});
print('.');
dba.deploySandboxInstance(3320, {password: dbPass});
print('.');
dba.deploySandboxInstance(3330, {password: dbPass});
print('.\nSandbox instances deployed successfully.\n\n');
print('Setting up InnoDB cluster...\n');
shell.connect('root@localhost:3310', dbPass);
var cluster = dba.createCluster("devCluster");
print('Adding instances to the cluster.');
cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
print('.');
cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
print('.\nInstances successfully added to the cluster.');
print('\nInnoDB cluster deployed successfully.\n');
} catch(e) {
print('\nThe InnoDB cluster could not be created.\n\nError: ' +
      e.message + '\n');
}