I recreated rpms for the latest revision (119) of sysbench.
I made them available for Fedora 19 and CentOS 6.
My colleague Kenny hit a bug with Innotop and MySQL 5.6. He submitted a bug report with a patch.
This is a new pre-release rpm of Innotop (1.9.0-3) that includes that patch.
Enjoy it!
[UPDATE] New rpms are now available directly on Innotop’s download page
Under very heavy load, you may run into a large number of TCP connections stuck in TIME_WAIT, like this one:
tcp 0 0 127.0.0.1:59035 127.0.0.1:3306 TIME_WAIT
This can lead to TCP port exhaustion, as explained in this post.
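A quick way to see how close you are to the problem is to count the sockets currently in that state (netstat here; ss works as well):

# count sockets currently in TIME_WAIT
netstat -ant | grep -c TIME_WAIT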
In HAProxy, since version 1.4.19, you can also use the nolinger option on TCP backends. It terminates the connection (TCP RST) as soon as the load balancer has finished the communication.
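For illustration, here is a minimal sketch of such a listener; the name, addresses and balance algorithm are made up for the example:

listen mysql-cluster 127.0.0.1:3307
    mode tcp
    option nolinger
    balance roundrobin
    server node1 192.168.1.10:3306 check
    server node2 192.168.1.11:3306 check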
The counterpart is that the Aborted_clients status counter in MySQL increases with every connection's end, which makes this counter useless.
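You can watch the counter grow with each RST-terminated connection:

mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted_clients';"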
This option is now also available in glb (with the -l parameter) if you apply the patch attached to this post.
I also provide an rpm package with the patch applied:
Name        : glb
Version     : 0.9.2
Release     : 2
Architecture: x86_64
Install Date: (not installed)
Group       : Productivity/Networking/Routing
Size        : 208489
License     : GNU General Public License version 2 or later (GPL v2 or later)
Signature   : (none)
Source RPM  : glb-0.9.2-2.src.rpm
Build Date  : Wed 27 Feb 2013 17:15:54 CET
Build Host  : percona1
Relocations : (not relocatable)
URL         : http://www.codership.com/products/galera-load-balancer
Summary     : TCP Connection Balancer
Description :
glb is a simple user-space TCP connection balancer made with scalability
and performance in mind. It was inspired by pen, but unlike pen its
functionality is limited only to balancing generic TCP connections.

Features:
 * list of backend servers is configurable in runtime.
 * supports server "draining", i.e. does not allocate new connections to
   server, but does not kill existing ones, waiting for them to end
   gracefully.
 * on Linux 2.6 and higher glb uses epoll API for ultimate performance.
 * glb is multithreaded, so it can utilize multiple CPU cores. In fact
   even on a single core CPU using several threads can significantly
   improve performance when using poll()-based IO.
 * connections are distributed proportionally to weights assigned to
   backend servers.
 * this is a patched version providing SO_LINGER
[root@macbookair ~]# glbd -K -l --threads 6 --control 127.0.0.1:4444 127.0.0.1:3308 127.0.0.1:3306
glb v0.9.2 (epoll)
Incoming address: 127.0.0.1:3308 , control FIFO: /tmp/glbd.fifo
Control address:  127.0.0.1:4444
Number of threads: 6, max conn: 493, policy: 'least connected', top: NO, nodelay: ON, keepalive: OFF, defer accept: OFF, verbose: OFF, linger: ON, daemon: NO
Destinations: 1
   0: 127.0.0.1:3306 , w: 1.000
Router:
------------------------------------------------------
        Address       :   weight   usage    map  conns
      127.0.0.1:3306  :    1.000   0.000    N/A      0
------------------------------------------------------
Destinations: 1, total connections: 0 of 493 max
Pool: connections per thread:     0 0 0 0 0 0
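Once it is running, you can also talk to the balancer through its control socket (here 127.0.0.1:4444). A quick sketch, assuming the standard glb control protocol where getinfo dumps the routing table and address:port:weight changes a destination's weight:

echo "getinfo" | nc 127.0.0.1 4444
# set the weight of this destination to 2
echo "127.0.0.1:3306:2" | nc 127.0.0.1 4444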
If you test it, please post a comment.
Codership released a new version of the load balancer for Galera. I made new rpms but I forgot to share them 😉
Here they are!
This new version provides:
* a "single" balancing policy, where all connections are directed to a single destination chosen by highest weight,
* a --top option that forces balancing only between the destinations with the highest weight, and
* a SO_KEEPALIVE option on destination connections (default: on) for timely detection of a destination failure.
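For instance, a hedged sketch combining the new --top option with the earlier example (the addresses are made up; check glbd --help for the exact flags of your build):

glbd --top --threads 6 --control 127.0.0.1:4444 127.0.0.1:3308 127.0.0.1:3306 192.168.1.2:3306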
But this weekend, while preparing SELinux policies for Percona XtraDB Cluster, I noticed that the VM was really slow... really really very very slooooow :'-(
And I found the reason! I first tried adding some kernel parameters like:
noacpi noapic divider=10 notsc
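(On a CentOS 6 guest these are appended to the kernel line in /boot/grub/grub.conf; the kernel version and root device below are made-up examples.)

kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_root noacpi noapic divider=10 notsc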
But that didn’t help.
Then I just enabled IO APIC in the VM's configuration and everything ran much faster! The machine booted faster and, in my case, loading the SELinux policies was faster too!
Have a look at the difference:
Without IO APIC:
[root@node2 ~]# time semodule -i percona-xtradb-cluster-full.pp

real    6m3.646s
user    1m34.430s
sys     3m42.805s
With IO APIC:
[root@node2 ~]# time semodule -i percona-xtradb-cluster-full.pp

real    0m14.611s
user    0m13.829s
sys     0m0.769s
To enable IO APIC from Vagrant, these are the parameters to use in your Vagrantfile:
config.vm.customize ["modifyvm", :id, "--memory", "256", "--ioapic", "on"]
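If the box is already created, you can also flip the setting directly with VBoxManage while the VM is powered off (replace node2 with your VM's name):

VBoxManage modifyvm node2 --ioapic on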