If you have tested Galera synchronous replication with Percona XtraDB Cluster or MariaDB Galera Cluster, you have probably tried to use a load balancer like HAProxy or Galera Load Balancer (glb).
Under very heavy load, you may run into a large number of TCP connections stuck in TIME_WAIT, like this one:
tcp 0 0 127.0.0.1:59035 127.0.0.1:3306 TIME_WAIT
This can lead to TCP port exhaustion, as explained in this post.
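A quick way to see how bad it is (a standard netstat one-liner, not from the original post) is to count sockets per TCP state; a huge TIME_WAIT count signals approaching port exhaustion:

# count sockets per TCP state, skipping the netstat header lines
netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn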
In HAProxy, since version 1.4.19, the nolinger option can also be used on TCP backends. It terminates the connection (TCP RST) as soon as the load balancer has finished the communication.
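As an illustration, a minimal TCP section could look like this (the section name, port and server addresses are made up; only option nolinger matters here):

listen galera 0.0.0.0:3307
    mode tcp
    balance leastconn
    # abortive close (TCP RST) instead of TIME_WAIT
    option nolinger
    server node1 192.168.0.1:3306 check
    server node2 192.168.0.2:3306 check
    server node3 192.168.0.3:3306 check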
The counterpart is that the Aborted_clients status counter in MySQL increases at every connection's end, which makes this counter useless.
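You can watch it grow after each connection teardown with a standard status query:

mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted_clients'"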
This option is now also available on glb (via the -l parameter) if you apply the patch attached to this post.
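For reference, here is roughly what the patch enables under the hood; a minimal C sketch, not the actual patch code:

#include <sys/socket.h>

/* With l_onoff = 1 and l_linger = 0, close() aborts the connection with
 * a TCP RST instead of the normal FIN handshake, so the socket never
 * enters TIME_WAIT. */
static int set_abortive_close(int fd)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
}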
I also provide an RPM package with the patch applied:
Name        : glb
Version     : 0.9.2
Release     : 2
Architecture: x86_64
Install Date: (not installed)
Group       : Productivity/Networking/Routing
Size        : 208489
License     : GNU General Public License version 2 or later (GPL v2 or later)
Signature   : (none)
Source RPM  : glb-0.9.2-2.src.rpm
Build Date  : Wed 27 Feb 2013 17:15:54 CET
Build Host  : percona1
Relocations : (not relocatable)
URL         : http://www.codership.com/products/galera-load-balancer
Summary     : TCP Connection Balancer
Description :
glb is a simple user-space TCP connection balancer made with scalability and
performance in mind. It was inspired by pen, but unlike pen its functionality
is limited only to balancing generic TCP connections.

Features:
 * list of backend servers is configurable in runtime.
 * supports server "draining", i.e. does not allocate new connections to
   server, but does not kill existing ones, waiting for them to end
   gracefully.
 * on Linux 2.6 and higher glb uses epoll API for ultimate performance.
 * glb is multithreaded, so it can utilize multiple CPU cores. In fact even
   on a single core CPU using several threads can significantly improve
   performance when using poll()-based IO.
 * connections are distributed proportionally to weights assigned to backend
   servers.
 * this is a patched version providing SO_LINGER
Example:
[root@macbookair ~]# glbd -K -l --threads 6 --control 127.0.0.1:4444 127.0.0.1:3308 127.0.0.1:3306
glb v0.9.2 (epoll)
Incoming address: 127.0.0.1:3308 , control FIFO: /tmp/glbd.fifo
Control address:  127.0.0.1:4444
Number of threads: 6, max conn: 493, policy: 'least connected', top: NO, nodelay: ON, keepalive: OFF, defer accept: OFF, verbose: OFF, linger: ON, daemon: NO
Destinations: 1
   0: 127.0.0.1:3306 , w: 1.000
Router:
------------------------------------------------------
        Address       :  weight   usage   map  conns
    127.0.0.1:3306    :   1.000   0.000   N/A      0
------------------------------------------------------
Destinations: 1, total connections: 0 of 493 max

Pool: connections per thread:     0     0     0     0     0     0
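Backends can also be managed at runtime through the control address (this uses glb's documented control interface; the addresses below are simply the ones from the example above):

# drain the backend: weight 0 means no new connections are allocated to it
echo "127.0.0.1:3306:0" | nc 127.0.0.1 4444
# dump the current routing table
echo "getinfo" | nc 127.0.0.1 4444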
If you test it, please post a comment.