5. Percona XtraDB Installation Guide
5.1 Requirements
- It is recommended that you deploy at least 3 nodes. Although Percona XtraDB Cluster will work with 2 nodes, that configuration lacks the fault tolerance expected in production environments.
- For a quick installation, use a RedHat or Ubuntu based Linux distribution.
- Percona XtraDB Cluster replicates only tables that use the MySQL InnoDB storage engine.
- Before proceeding you will need to uninstall the mysql-libs package from the system. This will also remove mysql-client, mysql-server, postfix and all the other MySQL-related packages.
5.2 Installation Procedure
- Keep in mind that Percona XtraDB Cluster is just MySQL modified with the goal of using it as a multi-master database.
- Everything you need is also documented in PerconaXtraDBCluster-5.6.22-25.8.pdf for the cluster. Here is an outline of the process that was followed on RedHat 6.6. For RedHat 7.x or higher, refer to the latest Percona documentation.
- For specific information, see page 31: 5.1 Installing Percona XtraDB Cluster on CentOS.
- The main commands to install everything are:
yum install socat   # note: you may need to add the EPEL repository before installing socat
yum remove mysql-libs
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
yum install Percona-XtraDB-Cluster-full-56
- Create a my.cnf for Node 1, the first bootstrapping node of the cluster. You will need to know the IP addresses of the 3 nodes. You must put your IPs into this configuration.
[mysqld]
datadir=/var/lib/mysql
user=mysql
#######################
####### PERCONA #######
#######################
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.8.6.112,10.8.6.114,10.8.6.116
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=10.8.6.112
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
- Bootstrap Node 1 by running this (NOTE: with RedHat 7 you need a different command; see the attached PDF or the Percona web site):
# /etc/init.d/mysql bootstrap-pxc
- Check the status of the server in mysql:
mysql> show status like 'wsrep%';
and check that the service is ON.
- Create a specific user in MySQL to be used by the Percona replication:
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
- Now create /etc/my.cnf on Node 2.
[mysqld]
datadir=/var/lib/mysql
user=mysql
#######################
####### PERCONA #######
#######################
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# This value determines the amount of transactions a downed Percona node can catch up on using IST when rejoining the cluster
wsrep_provider_options="gcache.size=512M"
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.8.6.112,10.8.6.114,10.8.6.116
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #2 address
wsrep_node_address=10.8.6.114
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
- Start the cluster on Node 2:
[root@dger02 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher ... SUCCESS!
- Create /etc/my.cnf on Node 3 as above, but change wsrep_node_address to that node's IP!
- Start node 3:
[root@dger03 ~]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher ... SUCCESS!
- Test that the cluster is working and that ANY database is replicated. (Note: the mysql system database will not replicate directly because it uses the MyISAM table engine, but DDL statements will be replicated.)
On Node 3, or any node:
mysql> create database perconatest;
Query OK, 1 row affected (0.38 sec)
mysql> use perconatest;
Database changed
mysql> create table a(c int primary key not null auto_increment, a varchar(200));
Query OK, 0 rows affected (0.23 sec)
mysql> insert into a values(NULL,'ciccio');
Query OK, 1 row affected (0.22 sec)
mysql> select * from a;
+---+--------+
| c | a      |
+---+--------+
| 3 | ciccio |
+---+--------+
1 row in set (0.00 sec)
mysql> insert into a values(NULL,'ciccio2');
Query OK, 1 row affected (0.31 sec)
mysql> select * from a;
+---+---------+
| c | a       |
+---+---------+
| 3 | ciccio  |
| 6 | ciccio2 |
+---+---------+
2 rows in set (0.00 sec)
THEN ON NODE 1, for example, check that the table is there:
mysql> select * from a;
+---+---------+
| c | a       |
+---+---------+
| 3 | ciccio  |
| 6 | ciccio2 |
+---+---------+
2 rows in set (0.00 sec)
Important Tips
- At least 2 nodes are required. We strongly recommend a minimum of 3 nodes to ensure that the loss of a single node doesn't stop production.
- You cannot modify a password by writing directly into the "mysql" system database, because such changes won't be replicated. You need to use SQL statements to create/modify users, passwords and permissions.
- If you are developing a new application to be used with Percona XtraDB, be prepared to catch an exception on the commit() call and retry the whole transaction: if something goes wrong with replication, the failure surfaces at commit().
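For example, a replication-safe password change goes through the SQL layer rather than a direct UPDATE of the mysql.user table. The user name and password below are illustrative; syntax as in MySQL 5.6:

```sql
-- Replicated to all nodes because it is issued as an SQL statement,
-- unlike a direct edit of the mysql.user table.
SET PASSWORD FOR 'gerrit'@'localhost' = PASSWORD('new_s3cret');
```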
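The commit-and-retry advice can be sketched as a small wrapper. This is only an illustrative shell sketch: run_transaction is a placeholder for your real commit logic, and the retry count is arbitrary.

```shell
# Illustrative sketch: retry a transaction a few times before giving up.
# run_transaction is a placeholder; it must exit non-zero when the
# commit fails (e.g. on a replication deadlock).
retry_commit() {
  max=5
  attempt=1
  until run_transaction; do
    if [ "$attempt" -ge "$max" ]; then
      echo "transaction failed after $max attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
  echo "transaction committed on attempt $attempt"
}
```

In Java the equivalent is a loop around commit() that catches the deadlock exception and re-runs the whole transaction, not just the commit call.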
5.3 Percona Database Configuration
These steps configure the database section of the Gerrit config file and must be followed once you have completed the installation of the Percona XtraDB cluster with Gerrit.
Procedure
When installing Gerrit with Percona XtraDB using an 'n-nodes' configuration, you need to:
- Create the reviewdb database on one node only (the other nodes will replicate it).
- Install vanilla Gerrit on that node, or on a node that connects to that database node.
- Proceed with the standard installation of GerritMS.
- Usually in a GerritMS-Percona configuration each Gerrit node connects to an individual Percona XtraDB node, possibly sitting on the same host as Gerrit. So in the database section of the gerrit.config file you will find localhost as the hostname to connect to.
- Then, if you want, you can maximise the database access speed from Gerrit to Percona XtraDB by using connection pooling. For this you need to:
- edit the etc/gerrit.config file and
- add or replace this piece of configuration in the database section:
[database]
type = mysql
hostname = localhost
database = reviewdb
username = gerrit
connectionPool = true
poolLimit = 100
poolMinIdle = 50
Depending on the load of the machine you can raise or lower the poolLimit or poolMinIdle properties. Just keep in mind that the default maximum number of connections for a MySQL server is 151, so you need to raise that number if you set poolLimit to a value close to or higher than 150. If you need to raise the maximum number of connections on the MySQL (Percona) server, modify the my.cnf file and add something like:
[mysqld]
...
open_files_limit = 8192   # only if you need to raise the max number of connections to MySQL. Not needed otherwise
max_connections = 1000    # only if you need to raise the max number of connections to MySQL. Not needed otherwise
...
- The last step is to modify the Git MultiSite configuration file (/opt/wandisco/git-multisite/replicator/properties/application.properties) for each node that will access a local master Percona database. Replace the following properties, or add them to the bottom of the file:
gerrit.db.slavemode.sleepTime=0
gerrit.db.mastermaster.retryOnDeadLocks=true
Note: Since Percona XtraDB cluster is based on MySQL server, the configuration is the same as the one for MySQL server.
5.4 Migrating from MySQL to Percona XtraDB Cluster
Requirements
- You will need a dump of the originating MySQL database, obtained using the mysqldump tool.
- If the originating MySQL database tables do not use the InnoDB engine, you need an additional step to convert them to InnoDB, or you can edit the dump file to change every "ENGINE=MyISAM" to "ENGINE=InnoDB".
Migration procedure
Follow these steps to complete the migration to Percona XtraDB Cluster.
- If you have not yet produced a dump from the old MySQL database, create it now:
$ mysqldump -u gerrit -pXXXXXX reviewdb > reviewdb.dmp
- If you need to modify the dump file, then make an additional backup copy of the dump you have just produced.
- Uninstall MySQL and install Percona XtraDB if you need to do so (follow instructions in Percona XtraDB Installation Guide).
- Take a look at the produced dump file: if all the tables use ENGINE=InnoDB, it is fine as-is. Otherwise you need to change the dump file (or convert the tables and redo the dump), replacing ENGINE=MyISAM with ENGINE=InnoDB.
- Since the Percona XtraDB cluster is just a modified version of MySQL, you just have to:
- Connect to a Percona Cluster node.
- Create the new database and quit the client:
[gerrit@dger01 ~]$ mysql -u root -pXXXXXXX
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1172696
Server version: 5.6.22-72.0-56 Percona XtraDB Cluster (GPL), Release rel72.0, Revision 978, WSREP version 25.8, wsrep_25.8.r4150
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database reviewdb;
Query OK, 1 row affected (0.32 sec)
mysql> quit
Bye
- Import the old MySQL database into the new Percona XtraDB cluster:
Note that the "mysql" client here is the Percona modified version.
[gerrit@dger01 ~]$ mysql -u gerrit -pXXXXXX reviewdb < reviewdb.dmp
- At this stage the database should already be fully imported on the other Percona nodes, because Percona XtraDB is a replicated active-active cluster; you do not need to import the database on the other nodes.
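The MyISAM-to-InnoDB edit mentioned in the requirements can be done directly on the dump file with sed. This sketch demonstrates on a tiny sample file; on a real dump you would run only the cp and sed lines (GNU sed assumed for -i):

```shell
# Create a tiny sample dump just for demonstration purposes
printf 'CREATE TABLE a (c int) ENGINE=MyISAM;\n' > reviewdb.dmp
# Keep a backup copy of the original dump before editing it
cp reviewdb.dmp reviewdb.dmp.bak
# Rewrite every MyISAM table definition to InnoDB
sed -i 's/ENGINE=MyISAM/ENGINE=InnoDB/g' reviewdb.dmp
cat reviewdb.dmp
# prints: CREATE TABLE a (c int) ENGINE=InnoDB;
```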
Percona Configuration Options
The default options for most of the Percona settings are generally good. If required however, various Percona specific settings can be used in the my.cnf file to best configure Percona to the level of load required on the database.
Also worth noting: Percona provides a tool that generates a recommended configuration based on your answers to a series of questions: https://tools.percona.com/wizard
wsrep_provider_options
Many options exist for this, a full list of which can be browsed here: https://www.percona.com/doc/percona-xtradb-cluster/5.6/wsrep-provider-index.html
The following are some options customers may be particularly interested in:
- evs.inactive_check_period - how often the node checks for peer inactivity.
- evs.inactive_timeout - the inactivity limit, beyond which a node will be pronounced dead.
- evs.user_send_window - the maximum number of packets in replication at a time. This defaults to 2, but Percona recommends going up to 512 in a WAN environment.
- gcache.size - the size of the transaction cache for replication. The larger this cache, the better the chance that a node that has been down for a period of time can catch up via IST instead of SST. Defaults to 128M, but it may need to be set larger in an environment with a large number of writes.
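Put together, these provider options are all passed through the single wsrep_provider_options variable in the [mysqld] section of my.cnf, separated by semicolons. The values below are purely illustrative, not recommendations:

```
[mysqld]
# Illustrative values only; tune for your own environment
wsrep_provider_options="gcache.size=512M; evs.user_send_window=512; evs.inactive_check_period=PT1S; evs.inactive_timeout=PT30S"
```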
wsrep_auto_increment_control
This is enabled by default and is the cause of the occasional gaps in generated change IDs and patchset numbers, for example 1, 2, 4, 5, 8. Currently the only tested configuration in a multi-master environment is with this left on, but it may be worth investigating whether disabling it would better match the "vanilla Gerrit" experience.
wsrep_debug
Sends debug messages to the error_log. Useful when trying to diagnose a problem. Defaults to false.
wsrep_retry_autocommit
The number of times to retry a transaction in the event of a replication conflict. In most cases a transaction can safely be retried automatically. This currently defaults to one, but we have noticed in GerritMS operation that a system under heavy load for several days can still generate occasional database commit failures due to a deadlock caused by replication. Code has been added to all Gerrit database commits to detect this error and retry, but this may be better configured here.
wsrep_slave_threads
The number of threads that can apply replication transactions in parallel. By default this is set to one, which is the safest option. If however performance becomes an issue, particularly around database replication, this can be used to increase throughput.
wsrep_sst_donor
The name of the preferred "donor" node in the event that the local node needs to recover via the SST mechanism. Because the donor node's database must enter read-only mode while the local node catches up, you may prefer that the donor is not chosen at random but is instead a specified node.
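Unlike the provider options, these wsrep_* variables are set directly in the [mysqld] section of my.cnf. A sketch with illustrative values (the donor node name is hypothetical):

```
[mysqld]
# Illustrative values only
wsrep_retry_autocommit=4
wsrep_slave_threads=8
wsrep_sst_donor=node1
wsrep_debug=1
```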
gcache.size (set via wsrep_provider_options)
The default value is 128M when gcache.size is not set. This value determines the amount of transactions a downed Percona node can catch up on using IST when rejoining the cluster. If too many transactions have taken place between a node going down and rejoining the cluster, SST will be required to synchronise the joining node. This kicks in automatically, and its performance depends on the SST method selected in the my.cnf configuration.
More information on the web:
- Galera Library setting for Percona
- Performance Tuning gcache.size
- Avoiding Using SST when Adding a New Node