When working in a production environment, the MySQL Server instances are running on hosts as part of a network rather than on your local machine as described in previous sections.
The user account used to administer an instance does not have to
be the root account; however, the user needs to be assigned full
read and write privileges on the metadata tables, in addition to
full MySQL administrator privileges (SUPER,
GRANT OPTION, CREATE,
DROP and so on). To give the user
your_user the privileges needed to
administer an InnoDB cluster, issue:
GRANT ALL PRIVILEGES ON mysql_innodb_cluster_metadata.* TO your_user@'%' WITH GRANT OPTION;
GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO your_user@'%' WITH GRANT OPTION;
GRANT SELECT ON performance_schema.* TO your_user@'%' WITH GRANT OPTION;
GRANT SELECT, INSERT, UPDATE, DELETE ON mysql.* TO your_user@'%' WITH GRANT OPTION;
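To verify that the account was set up as intended, you can inspect its grants from MySQL Shell. This is a minimal sketch, assuming a classic session in JavaScript mode and the placeholder account your_user from above:
mysql-js> session.runSql("SHOW GRANTS FOR 'your_user'@'%'")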
If only read operations are needed (for example, for monitoring
purposes), an account with more restricted privileges can be used.
To give the user your_user the
privileges needed to monitor an InnoDB cluster, issue:
GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user@'%';
GRANT SELECT ON performance_schema.global_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_* TO your_user@'%';

When working with a production deployment it is a good idea to activate verbose logging for MySQL Shell initially. This is helpful in finding and resolving any issues that may arise when you are preparing the server to work as part of an InnoDB cluster. To start MySQL Shell with a verbose logging level, type:
shell> mysqlsh --log-level=DEBUG3
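The named level DEBUG3 is the most verbose. The log level can also be given as an integer; based on the option's documented range of 1 to 8, the equivalent of DEBUG3 is assumed to be:
shell> mysqlsh --log-level=8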
The log file is located in
~/.mysqlsh/mysqlsh.log for Unix-based
systems. On Microsoft Windows systems it is located in
%APPDATA%\MySQL\mysqlsh\mysqlsh.log. See
Section 18.5, “MySQL Shell Application Log”.
The cluster.checkInstanceState() function can
be used to validate whether an instance can be added to the cluster;
that is, whether the instance is consistent with the seed instances, meaning that it has not executed any transactions which the cluster has not, and can be recovered to the same state as the rest of the cluster.
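As a minimal sketch of how this check is typically invoked (assuming cluster holds a Cluster object obtained from dba.getCluster() or dba.createCluster(), as shown later in this section):
// Check whether the instance's transaction state allows it to join
mysql-js> cluster.checkInstanceState('[email protected]:3306')
// The returned "state" field is "ok" when the instance can join; the
// "reason" field (for example "recoverable") explains the result.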
Before creating a cluster from remote instances, you need to
check that the servers are suitably configured. This can be done
using the dba.checkInstanceConfiguration()
function. For detailed help on this function, type
dba.help('checkInstanceConfiguration').
The dba.checkInstanceConfiguration() function
checks if the server instances are valid for InnoDB cluster
usage.
The following demonstrates this:
mysql-js> dba.checkInstanceConfiguration('[email protected]:3306')
Please provide the password for '[email protected]:3306':
Validating instance...
The instance '139.59.177.10:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum | CRC32 | NONE | Update the server variable or restart the server |
| enforce_gtid_consistency | OFF | ON | Restart the server |
| gtid_mode | OFF | ON | Restart the server |
| log_bin | 0 | 1 | Restart the server |
| log_slave_updates | 0 | ON | Restart the server |
| master_info_repository | FILE | TABLE | Restart the server |
| relay_log_info_repository | FILE | TABLE | Restart the server |
| transaction_write_set_extraction | OFF | XXHASH64 | Restart the server |
+----------------------------------+---------------+----------------+--------------------------------------------------+
Please fix these issues, restart the server and try again.
{
"config_errors": [
{
"action": "server_update",
"current": "CRC32",
"option": "binlog_checksum",
"required": "NONE"
},
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "0",
"option": "log_bin",
"required": "1"
},
{
"action": "restart",
"current": "0",
"option": "log_slave_updates",
"required": "ON"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
mysql-js>

The report shows the configuration changes required for that instance before it can be added to the cluster.
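Note that in the report above only binlog_checksum carries the action server_update, meaning it can be corrected at runtime without a restart; the other options require a configuration change plus a restart. As a sketch, assuming a classic session in JavaScript mode connected to the instance with sufficient privileges:
mysql-js> session.runSql("SET GLOBAL binlog_checksum = 'NONE'")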
Once the configuration issues have been identified, you can reconfigure your server instance manually. Alternatively, if you can run MySQL Shell directly on the machine where the MySQL instance is running, log in to that machine and run MySQL Shell locally. On the server to be configured, run:
shell> mysqlsh --log-level=DEBUG3 --uri=root@localhost
To configure a server for InnoDB cluster use, call
dba.configureLocalInstance(). This
function runs provisioning scripts for you that modify the MySQL
server's configuration file.
The dba.configureLocalInstance() function can
only configure servers that you are connected to locally. If you try to run
dba.configureLocalInstance() against a remote instance, you get
the following error:
mysql-js> dba.configureLocalInstance('[email protected]:3306')
Dba.configureLocalInstance: This function only works with local instances (RuntimeError)

If MySQL Shell is started locally, the output is similar to:
mysql-js> dba.configureLocalInstance('root@localhost:3306')
Please provide the password for 'root@localhost:3306':
Please specify the path to the MySQL configuration file: /etc/mysql/mysql.conf.d/mysqld.cnf
Validating instance...
The configuration has been updated but it is required to restart the server.
{
"config_errors": [
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "0",
"option": "log_bin",
"required": "1"
},
{
"action": "restart",
"current": "0",
"option": "log_slave_updates",
"required": "ON"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
mysql-js>
As with dba.checkInstanceConfiguration(), the
configuration requirements are identified, but this time the
specified configuration file is modified. For the changes to take
effect, you need to restart the MySQL server. For example:
shell> sudo service mysql restart
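The exact restart command depends on your platform and init system. For example, on a systemd-based host where the service is named mysqld, the equivalent would be:
shell> sudo systemctl restart mysqld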
If dba.configureLocalInstance() is used on
an instance that is already a member of a cluster, its
Group Replication configuration information is persisted to
the server configuration file, and a call to
rejoinInstance() is not required in that
case. When restarted, the instance automatically rejoins
the cluster. This is illustrated in the following example:
shell.connect({host: 'localhost', port: 3333, user: 'root', password: 'somePwd'});
var cluster = dba.createCluster('devCluster');
// Here, configureLocalInstance makes sure the instance is configured for Group Replication
dba.configureLocalInstance('localhost:3334', {password:'somePwd', mycnfPath:'some path'})
cluster.addInstance('localhost:3334', {password:'somePwd'})
dba.configureLocalInstance('localhost:3335', {password:'somePwd', mycnfPath:'some path'})
cluster.addInstance('localhost:3335', {password:'somePwd'})
// A restart here would require using rejoinInstance() to put the instance back into the cluster
dba.killSandboxInstance(3335);
dba.startSandboxInstance(3335);
cluster.rejoinInstance('localhost:3335', {password:'somePwd'})
// Calling configureLocalInstance again, now that the instance is already part of the cluster,
// persists the Group Replication server variables to the configuration file
dba.configureLocalInstance('localhost:3335', {password:'somePwd', mycnfPath:'some path'})
// On a new restart, the instance automatically joins the Cluster (no need to rejoinInstance)
dba.killSandboxInstance(3335);
dba.startSandboxInstance(3335);

Once the server has restarted, you can use MySQL Shell again to check the configuration:
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
Please provide the password for 'root@localhost:3306':
Validating instance...
The instance 'localhost:3306' is valid for Cluster usage
{
"status": "ok"
}
mysql-js>
Log in to the remote instance so that you can use MySQL Shell to configure the instance and ensure the configuration changes are persisted.
shell> mysqlsh --uri [email protected]:3306
Creating a Session to '[email protected]:3306'
Enter password: *********
Classic Session successfully established. No default schema selected.

Now create the cluster:
mysql-js> var cluster = dba.createCluster('devCluster');
A new InnoDB cluster will be created on instance '[email protected]:3306'.
Creating InnoDB cluster 'devCluster' on '[email protected]:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

First, check the instance configuration:
mysql-js> dba.checkInstanceConfiguration('[email protected]:3306')
Please provide the password for '[email protected]:3306':
Validating instance...
The instance '139.59.177.10:3306' is valid for Cluster usage
{
"status": "ok"
}

You can also check the instance state:
mysql-js> cluster.checkInstanceState('[email protected]:3306')
Please provide the password for '[email protected]:3306':
Analyzing the instance replication state...
The instance '139.59.177.10:3306' is valid for the cluster.
The instance is fully recoverable.
{
"reason": "recoverable",
"state": "ok"
}

Check the cluster status:
mysql-js> cluster.status()
{
"clusterName": "devCluster",
"defaultReplicaSet": {
"name": "default",
"status": "Cluster is NOT tolerant to any failures.",
"topology": {}
}
}

You need to add two more instances to the cluster to make it tolerant to a server failure. Group Replication requires a majority of members to be reachable, so a three-member cluster can tolerate the failure of one member while the remaining two still form a majority.
Check the configuration of the next instance to add to the cluster:
mysql-js> dba.checkInstanceConfiguration('[email protected]:3306')
Please provide the password for '[email protected]:3306':
Validating instance...
The instance '139.59.177.11:3306' is valid for Cluster usage
{
"status": "ok"
}

The instance can now be added into the cluster:
mysql-js> cluster.addInstance("[email protected]:3306");
Please provide a password for '[email protected]:3306': *****
A new instance will be added to the InnoDB cluster. Depending on the
amount of data on the cluster this might take from a few seconds to
several hours.
Adding instance 139.59.177.11:3306 to the cluster...
The instance '139.59.177.11:3306' was successfully added to the
cluster.

The next instance can now be added into the cluster:
mysql-js> cluster.addInstance("[email protected]:3306");
Please provide a password for '[email protected]:3306': *****
A new instance will be added to the InnoDB cluster. Depending on the
amount of data on the cluster this might take from a few seconds to
several hours.
Adding instance 139.59.177.12:3306 to the cluster...
The instance '139.59.177.12:3306' was successfully added to the
cluster.

Now recheck the cluster status:
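For example (a sketch; the exact output fields vary between versions):
mysql-js> cluster.status()
// With three ONLINE instances, the defaultReplicaSet status now reports
// that the cluster can tolerate one server failure, and the topology
// object lists all three members.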
When using the createCluster(),
addInstance(), and
rejoinInstance() methods you can optionally
specify a list of approved servers that belong to the cluster,
referred to as a whitelist. By specifying the whitelist explicitly
in this way you can increase the security of your cluster,
because only servers in the whitelist can connect to the
cluster.
By default, if not specified explicitly, the whitelist is
automatically set to the private network addresses that the
server has network interfaces on. To configure the whitelist,
specify the servers to add with the
ipWhitelist option when using the method. For
example:
mysql-js> c.addInstance("root:guidev!@localhost:3320", {ipWhitelist: "10.157.120.0/24, 192.168.1.110"})
This configures the instance to only accept connections from
servers in the 10.157.120.0/24 network and from the server at address
192.168.1.110.
Using the ipWhitelist option configures the
group_replication_ip_whitelist
system variable on the instance.
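To confirm the value that was applied, you can query the system variable on the instance. A minimal sketch, assuming a classic session in JavaScript mode:
mysql-js> session.runSql("SHOW VARIABLES LIKE 'group_replication_ip_whitelist'")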
If an instance (or instances) fails, a cluster can lose its
quorum, which is the ability to vote in a new primary. In this
case you can re-establish quorum using the
cluster.forceQuorumUsingPartitionOf() method, as
shown in the following MySQL Shell example:
// open session to a cluster
mysql-js> cluster = dba.getCluster("devCluster")
// The cluster lost its quorum and its status shows
// "status": "NO_QUORUM"
mysql-js> cluster.forceQuorumUsingPartitionOf("localhost:3310")
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [localhost:3310]
Please provide the password for 'root@localhost:3310': ******
Restoring the InnoDB cluster ...
The InnoDB cluster was successfully restored using the partition from the instance 'root@localhost:3310'.
WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset
are removed or joined back to the group that was restored.
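Following that warning, members that were in the losing partition should be rejoined or removed once quorum is restored. A sketch with illustrative host and port values; the force option shown for removing an unreachable instance is an assumption, not confirmed by this section:
// Rejoin a recovered member to the restored group
mysql-js> cluster.rejoinInstance('root@localhost:3320')
// Remove a member that cannot be recovered (force is assumed to be
// needed when the instance is unreachable)
mysql-js> cluster.removeInstance('root@localhost:3320', {force: true})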
If your cluster suffers from a complete outage, you can ensure
it is reconfigured correctly using
dba.rebootClusterFromCompleteOutage(). An
example of use is as follows:
mysql-js> shell.connect('root@localhost:3310');
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage();

This ensures the cluster is correctly reconfigured after a complete outage. It picks the instance MySQL Shell is connected to as the new seed instance and recovers the cluster based on the existing metadata of that instance.
It is also possible to provide the cluster name as an input parameter:
mysql-js> var cluster = dba.createCluster("devCluster")
...
...
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage("devCluster");
If this process fails, and the cluster metadata has become badly
corrupted, you may need to drop the metadata and create the
cluster again from scratch. You can drop the cluster metadata
using dba.dropMetadataSchema().
The dba.dropMetadataSchema() method should
only be used as a last resort, when it is not possible to
restore the cluster. It cannot be undone.
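A sketch of this last-resort operation, after connecting to one of the instances; you may be prompted to confirm before the mysql_innodb_cluster_metadata schema is dropped:
mysql-js> dba.dropMetadataSchema()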
If changes to the Group Replication configuration are made
without using MySQL Shell, you need to rescan your cluster. For
example, if you create a cluster with three instances, and then
without using MySQL Shell you add a new instance to that Group
Replication group, the AdminAPI is not aware of that
instance. The same applies if you remove an instance from a
Group Replication group without using MySQL Shell. In such
scenarios it is necessary to rescan the cluster with
cluster.rescan().
When cluster.rescan() has been run, any newly
discovered instances are identified. You are prompted to add each
of these instances into your cluster as required, or you can
choose to ignore them.
Instances that no longer belong to the cluster or that are
unavailable are also reported. In this case you are prompted to
remove each such instance, or you can later attempt to add it back
into the cluster using a command such as
cluster.rejoinInstance('instancex.example.com:3340').
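A sketch of the rescan workflow described above, with interactive prompts omitted and using the illustrative host name from the previous paragraph:
// Reconcile the cluster metadata with the live Group Replication group
mysql-js> cluster.rescan()
// If a reported instance becomes reachable again, add it back with:
mysql-js> cluster.rejoinInstance('instancex.example.com:3340')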