Percona Series / XtraDB Cluster, 8.0
I wrote a post about the setup of the 5.7 version. In this post I don’t want to repeat myself, so I will just highlight the differences and new things in the 8.0 setup; it’s strongly recommended to read that post first.
Note: If you read the previous post, keep in mind the setenforce 0 step.
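As a reminder, that means putting SELinux into permissive mode before the installation. A quick sketch (assumption: you are fine with leaving SELinux permissive on these nodes):

```shell
# stop enforcing SELinux for the current boot
setenforce 0
# and make it survive a reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```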
Installation
The only difference here is that we have to use pxc-80 instead of pxc-57. Also, the package name is percona-xtradb-cluster, without a version number, so use that for the yum installation.
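So on a fresh node the installation boils down to something like this (assuming the percona-release helper package from the previous post is already installed):

```shell
# enable the 8.0 repository instead of the 5.7 one
percona-release setup pxc-80
# note: no version number in the package name anymore
yum install -y percona-xtradb-cluster
```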
Systemd file
I’m not a big expert in systemd files, and at this point I didn’t feel that this post would change that, so I just want to point out the important differences. If you run systemctl cat mysql you can see that this file is much bigger than the 5.7 one was.
ExecStartPre=/usr/bin/mysql-systemd start-pre
ExecStartPre=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"
ExecStartPre=/bin/sh -c "VAR=`bash /usr/bin/mysql-systemd galera-recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR"
# Start main service
ExecStart=/usr/sbin/mysqld $_WSREP_START_POSITION
The main point here is that 5.7 used mysqld_safe --basedir=/usr. Now there is the mysql-systemd shell script, where the galera-recovery argument calls the wsrep_recover_position shell function, which returns the flag that systemctl sets as an environment variable. Afterwards, mysqld starts with this flag.
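Conceptually, that recovery step runs mysqld with --wsrep-recover and parses the recovered position out of the log. A simplified sketch of what happens (the real logic lives in /usr/bin/mysql-systemd; the variable names here are made up):

```shell
# run mysqld in recovery mode; it logs the last committed position
# as a line like "WSREP: Recovered position: <uuid>:<seqno>"
log=$(mktemp)
mysqld --user=mysql --wsrep-recover --log-error="$log"

# pull the uuid:seqno pair out of that log line
pos=$(grep 'WSREP: Recovered position' "$log" | tail -n 1 | awk '{print $NF}')

# hand it to systemd; ExecStart picks it up as $_WSREP_START_POSITION
systemctl set-environment _WSREP_START_POSITION="--wsrep-start-position=$pos"
```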
SST security
There was a wsrep_sst_auth option in 5.7; it has been removed in 8.0, so you could say this is insecure now: anyone who has access to the network can connect to the cluster. I checked, and Percona also knows it’s insecure, so they have a post about how to make it secure. At this point I want to say that it’s not something I want to set up, so it will stay insecure for now, but there is an encryption option for the SST.
Encrypt SST
For the encryption we need SSL key and certificate files. These are not something easy to generate properly, so there is a guide on how to create them. It’s important to set the values properly, because otherwise they won’t pass openssl verify.
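For reference, a minimal sketch of generating a self-signed CA plus a server certificate with openssl. The paths and CN values are my own assumptions; the important part is that the CA and the server certificate must not share the same Common Name, otherwise openssl verify fails:

```shell
mkdir -p /opt/ssl && cd /opt/ssl

# CA key and self-signed CA certificate
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca.pem \
    -subj "/CN=pxc-ca"

# server key and signing request, then sign the request with the CA
openssl req -newkey rsa:2048 -days 3650 -nodes \
    -keyout server-key.pem -out server-req.pem -subj "/CN=pxc-server"
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3650 \
    -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# this must print "server-cert.pem: OK"
openssl verify -CAfile ca.pem server-cert.pem
```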
Note: If you create the SSL certificates, you have to change their owner.
chown -R mysql:mysql /opt/ssl
Once we have the certificate and key files, we can set them in /etc/my.cnf. This is the MySQL config file location, and we can notice that it has changed since 5.7, because it’s no longer as separated.
# Spoiler: I know this config isn't good, we will fix it later.
wsrep_provider_options="socket.ssl_key=/opt/ssl/server-key.pem;socket.ssl_cert=/opt/ssl/server-cert.pem;socket.ssl_ca=/opt/ssl/ca.pem"

[sst]
encrypt=4
ssl-key=/opt/ssl/server-key.pem
ssl-ca=/opt/ssl/ca.pem
ssl-cert=/opt/ssl/server-cert.pem
The only option I want to talk about is sst.encrypt, because it’s not obvious. Fortunately they provide documentation for that value as well:
Set encrypt=4 for SST encryption with SSL files generated by MySQL.
You mean generated by MySQL? Hm… OK… rm -rf /opt/ssl. So the new config will be:
wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"

[sst]
encrypt=4
ssl-key=server-key.pem
ssl-ca=ca.pem
ssl-cert=server-cert.pem
Configure the first node and bootstrap
I just set the wsrep_cluster_name to pxc-cluster-2, because it’s our second cluster and I guess it could conflict once we want to set up cluster-to-cluster replication. Also change the wsrep_node_name to something that won’t conflict with the 5.7 setup.
systemctl start mysql@bootstrap.service
systemctl status mysql@bootstrap
mysql@bootstrap.service - Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap
Loaded: loaded (/usr/lib/systemd/system/mysql@.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2020-11-09 22:47:52 UTC; 12s ago
Configure and start the second node
Change the wsrep_cluster_name and wsrep_node_name just like we did on the first node (the node name must be unique); everything else is the same as in the 5.7 setup. Start it.
systemctl start mysql
Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.
Hm, that’s not something we should see here. I guess it’s because of the SSL encryption: "handshake with remote endpoint ssl://X.X.X.13:4567 failed: asio.ssl:67567754: 'invalid padding'"
Finalise the setup
After massive suffering I finally have a working cluster. The problem was the SSL encryption, because the "generated by MySQL" that I mentioned previously means that you generate them… So let me show you the setup of the first cluster again.
# SST method
wsrep_sst_method=xtrabackup-v2

# ---- until this point everything is the same as previously
# ---- from here it will be the same on both nodes
ssl-ca=/opt/ssl/ca.pem
ssl-cert=/opt/ssl/server-cert.pem
ssl-key=/opt/ssl/server-key.pem
wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"

[sst]
encrypt=4
ssl-key=server-key.pem
ssl-ca=ca.pem
ssl-cert=server-cert.pem

[client]
ssl-ca=/opt/ssl/ca.pem
ssl-cert=/opt/ssl/client-cert.pem
ssl-key=/opt/ssl/client-key.pem
So I regenerated the SSL keys and certificates (after having removed them previously) and placed them under /opt/ssl so I could add them. I set up the [client] section with the corresponding keys, and the [mysqld] section as well. The [sst] section uses these keys, but there we don’t have to specify the path, since it uses the already specified keys from [mysqld]; we only have to add the identifier for them. The same goes for wsrep_provider_options.
After that I copied these keys to the second node. I used python2 -m SimpleHTTPServer, since I had a private internal network, and fetched them with wget on the second node. Then I copied the configuration over to /etc/my.cnf.
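In case it’s useful, roughly what that copy looked like (the IP and port are placeholders for my internal network):

```shell
# on the first node: serve the certificate directory over HTTP
# (this blocks, so run it in its own terminal)
cd /opt/ssl
python2 -m SimpleHTTPServer 8000

# on the second node: fetch each file, then fix ownership
mkdir -p /opt/ssl && cd /opt/ssl
for f in ca.pem server-cert.pem server-key.pem client-cert.pem client-key.pem; do
    wget http://X.X.X.13:8000/$f
done
chown -R mysql:mysql /opt/ssl
```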
Finally, I ran systemctl restart mysql@bootstrap on the first node and systemctl start mysql on the second node. Then I stopped the bootstrap on the first node with systemctl stop mysql@bootstrap and started it in normal mode with systemctl start mysql. It’s recommended to wait until the bootstrap stops properly; you can check the state of the stop with ps aux | grep mysql.
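A small sketch of that last switch-over on the first node, with a crude wait loop instead of eyeballing ps aux (assumption: mysqld is the only MySQL server process on the box):

```shell
systemctl stop mysql@bootstrap
# wait until the bootstrapped mysqld has fully exited
while pgrep -x mysqld > /dev/null; do
    sleep 1
done
systemctl start mysql
```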
After the start you can check the cluster state:
mysql -p
mysql> show status like 'wsrep%';
| wsrep_cluster_size | 2 |
At this point we have a working 8.0 cluster alongside the 5.7 one, but they don’t know about each other. Yet.