We wanted to migrate from one database to another. For the sake of simplicity, let’s say we moved from MySQL to PostgreSQL. Our database had 100 million rows in certain tables, so this was not the easiest migration. Many things can go wrong, because even in the best-case scenario it takes at least an hour to move that amount of data. Since this is highly sensitive production data, we wanted to know about every possible issue before it happened. This allowed us to plan failover scenarios.
This post will guide you through setting up Keep Network’s local environment. It follows the original guidance, but in more detail. At the time of writing I used CentOS 8, so the guide targets that OS, but I tried to cover the other ones too with links.
The prerequisites are Go, Geth, the Solidity compiler, the Protobuf compiler, protoc-gen-gogoslick, jq, Docker, Docker Compose, pyenv, and pipenv.
Let’s start with Go. I’m mainly a Go developer, so I prefer installing it from the tar.gz archive; that makes it much easier to switch between versions later. At the moment the version of Go…
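For reference, a tar.gz install on Linux is a short sequence of commands. This is only a sketch; the version number below is a placeholder, so substitute whatever the current release is:

```shell
# Placeholder version -- substitute the current Go release.
GO_VERSION=1.21.5
curl -LO "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz"
# Remove any previous install, then unpack into /usr/local.
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf "go${GO_VERSION}.linux-amd64.tar.gz"
# Make the toolchain available in the current shell.
export PATH=$PATH:/usr/local/go/bin
go version
```

Updating later is just repeating this with a newer tarball, which is exactly why I prefer it over distro packages.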
This is a story about my latest question on Stack Overflow (and probably my last). The question is mostly about implementing a trait in Rust, but I faced some issues along the way. The question itself doesn’t matter at this point, because based on the comments I would have been able to improve it later on.
There are two links: the first one is the crate I’m using, and the second is part of that crate, specifically a trait which I wanted to implement. I had a crystal-clear question about what I wanted to do. So if someone has ever used that crate and the…
Okay… I wasn’t actually hacked; they only tried to hack me. This is the story of me analysing what could really have happened.
Back in the day I used to play Travian. I liked to compete with other players, build the WW as a clan, and win. But what I really liked was building my empire. Now that I’m older, I don’t really have time for such a competitive game, having to spend hours daily to stay in the top 100. …
Although the storage attached to VM instances has a size limit, that doesn’t mean our storage is fixed. If the database gets close to the storage limit, we can enlarge the volume thanks to the flexibility of XFS.
We have two scenarios where I used Couchbase (because it’s easy to scale) to set up an emulated database environment on a mounted XFS disk on Google Cloud Platform. The first scenario is a single-node Couchbase cluster and the second is a multi-node Couchbase cluster where the data is rebalanced between the nodes.
I won’t go into details…
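For reference, growing an XFS volume on GCP is roughly a two-step affair: resize the persistent disk, then grow the filesystem online. This is only a sketch; the disk, zone, and mount-point names are made up, and it assumes the filesystem sits directly on the disk (no partition table):

```shell
# Hypothetical names -- substitute your own disk, zone, and mount point.
gcloud compute disks resize couchbase-data --size=500GB --zone=us-central1-a
# XFS can be grown online, while mounted; xfs_growfs takes the mount point.
sudo xfs_growfs /mnt/couchbase
```

Note that XFS can only be grown, not shrunk, which is worth keeping in mind when you pick the new size.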
Co-author: Robert Boros
The idea behind the zero-downtime migration is to set up replication between the 5.7 and 8.0 servers: initially load the necessary data with the xtrabackup tool, then start the slave with the correct binlog coordinates.
This post will use the binlog-based replication. You can find more details about how this works under the link. …
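The idea above can be sketched in a few statements on the 8.0 side. All host names, credentials, and binlog coordinates below are placeholders; in practice the log file and position come from the xtrabackup_binlog_info file that the backup produces:

```shell
# On the 8.0 replica, point replication at the 5.7 source using the
# binlog coordinates recorded by xtrabackup (placeholder values below).
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='pxc-57-host',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl-password',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=12345;
START SLAVE;
SQL
```

Once the slave catches up, the application can be switched over with essentially no downtime.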
I wrote a post about setting up the 5.7 version. I don’t want to repeat myself here, so I will just highlight the differences and new things in the 8.0 setup; it’s strongly recommended to read that post first.
Note: If you read the previous post, keep in mind the… The only difference here is that we have to use pxc-80 instead of pxc-57. Also, the package name is percona-xtradb-cluster, without a version number. So use this for…
I’m not a big expert in systemd files and at this point I didn’t feel that…
This post will walk you through how to set up Percona XtraDB Cluster 5.7. There are two nodes available which I created previously. The first one is a CentOS 8 server and the second one is a Debian 10 server. With these servers I can explain both installation processes.
Quick note: Creating clusters with an even number of nodes increases the chance of cluster failure, because the consensus algorithm can end up without a majority (a 50/50 split of votes). Prefer an odd number of nodes.
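The quorum arithmetic behind this note can be sketched in a few lines of shell: a majority is floor(n/2) + 1 votes, so a cluster that loses exactly half of an even-sized membership cannot reach it.

```shell
# Majority quorum needed for a cluster of n nodes: floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

# With 2 or 4 nodes, losing half the cluster leaves exactly 50% of the
# votes, which is below quorum, so the surviving half cannot act.
for n in 2 3 4 5; do
  echo "$n nodes -> quorum $(quorum $n), tolerates losing $(( n - $(quorum $n) )) node(s)"
done
```

This is why 3 nodes tolerate one failure but 4 nodes still only tolerate one: the extra node raises the quorum without raising the failure tolerance.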
I didn’t want to spend more time on this, since there is a proper guide. …
If you are not familiar with Percona online schema change, check the previous post about it. It has a thorough explanation of what it is and how it works, along with some real-life scenarios.
But it still doesn’t answer the question: what happens if something goes wrong? As you have seen, a schema change can take 20–40 minutes. In such a long window, anything can happen: a power outage, another service consuming the necessary resources and stopping it, or you simply change your mind and press Ctrl+C. …
If you ever run an ALTER TABLE on larger tables in your Percona Cluster (spoiler: not just in Percona Cluster), you have probably noticed the problem: a lock which doesn’t allow the application to use that table at all.
The solution comes with a Percona Toolkit helper, pt-online-schema-change. The online schema change…
…alters a table’s structure without blocking reads or writes. — Original documentation
By using this tool we can perform ANY ALTER TABLE operation without locking (subject to the restrictions detailed in the original documentation).
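As an illustration, a typical invocation looks something like the following. The database, table, and column names here are placeholders, not from a real schema:

```shell
# Hypothetical schema names -- adapt D= (database) and t= (table) to yours.
pt-online-schema-change \
  --alter "ADD COLUMN created_at DATETIME" \
  --execute \
  D=mydb,t=mytable
```

Running with --dry-run first instead of --execute lets the tool validate the plan without touching the table, which is a sensible habit on production data.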
I’m an RPM-based person so I will only touch…
Gopher, Rustacean, Hobby Hacker