lefred

I am the MySQL Community Manager for EMEA & APAC. I joined the MySQL Community Team in May 2016. I have been an Open Source and MySQL consultant for more than 15 years. My favorite topics are High Availability and Performance.

MySQL facter

Yesterday I started to play with mcollective and added some agents like service and facter. I really liked the facter agent... and then I decided to add facts for MySQL. Of course, I needed to learn some Ruby first :-) A fact is created for the version, and all the other MySQL facts come from the SHOW STATUS statement. All the new facts start with mysql_. I plan to add new facts related to replication, like Seconds_Behind_Master. The current version is available on github here
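
To give an idea of what these custom facts look like, here is a minimal sketch in Ruby. It is not the exact code from the repository, and the mysql command-line invocation is an assumption; it only shows the general approach: one fact for the version, and one mysql_-prefixed fact per SHOW GLOBAL STATUS variable.

    # Minimal sketch (hypothetical, simplified) of MySQL custom facts.
    # The real code on github may differ; the mysql client options below are
    # an assumption and there is no handling for authentication or errors.
    require 'facter'

    # mysql_version fact, taken from SELECT VERSION()
    Facter.add('mysql_version') do
      setcode do
        out = Facter::Util::Resolution.exec("mysql -NBe 'SELECT VERSION()'")
        out.strip unless out.nil?
      end
    end

    # one mysql_* fact per SHOW GLOBAL STATUS variable
    status = Facter::Util::Resolution.exec("mysql -NBe 'SHOW GLOBAL STATUS'")
    unless status.nil?
      status.each_line do |line|
        name, value = line.split(/\s+/, 2)
        next if name.nil? || value.nil?
        Facter.add("mysql_#{name.downcase}") do
          setcode { value.strip }
        end
      end
    end

    # replication facts like mysql_seconds_behind_master could be built the
    # same way from SHOW SLAVE STATUS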

Some usage examples:

with facter:
[root@delvaux facter]# facter mysql_version 2>/dev/null
5.5.10
[root@delvaux facter]# facter mysql_max_used_connections 2>/dev/null
3

with mcollective:
[root@delvaux facter]# mc-facts mysql_version
Report for fact: mysql_version

        5.0.51a-24+lenny5-log                   found 1 times
        5.5.10                                  found 1 times

Finished processing 2 / 2 hosts in 64.36 ms

[root@delvaux facter]# mc-facts mysql_open_files
Report for fact: mysql_open_files

        16                                      found 1 times
        18                                      found 1 times

Finished processing 2 / 2 hosts in 2418.13 ms

[root@delvaux facter]# mc-facts mysql_open_files -v
Determining the amount of hosts matching filter for 2 seconds .... 2
Report for fact: mysql_open_files

        16                                      found 1 times

            delvaux.maladree.be

        18                                      found 1 times

            debian1.maladree.be


---- rpc stats ----
           Nodes: 2 / 2
     Pass / Fail: 0 / 0
      Start Time: Sat Apr 02 00:11:46 +0200 2011
  Discovery Time: 2001.84ms
      Agent Time: 1344.24ms
      Total Time: 3346.08ms

[root@delvaux facter]# mc-facts mysql_threads_connected 
Report for fact: mysql_threads_connected

        2                                       found 2 times

Finished processing 2 / 2 hosts in 3270.86 ms

[root@delvaux facter]# mc-facts mysql_threads_connected -v
Determining the amount of hosts matching filter for 2 seconds .... 2
Report for fact: mysql_threads_connected

        2                                       found 2 times

            debian1.maladree.be
            delvaux.maladree.be


---- rpc stats ----
           Nodes: 2 / 2
     Pass / Fail: 0 / 0
      Start Time: Sat Apr 02 00:12:47 +0200 2011
  Discovery Time: 2001.73ms
      Agent Time: 50.43ms
      Total Time: 2052.15ms

April 1st

My contribution to this heavy day for our RSS readers is: the April Fools' Centipede!
    .:/          .:/           .:/            
  ,,///;,   ,;/,,///;,   ,;/,,///;,   ,;/
 o:::::::;;///o:::::::;;///o:::::::;;/// 
>::::::::;;\\\::::::::;;\\\::::::::;;\\\   
  ''\\\\\'" ';\''\\\\\'" ';\''\\\\\'" ';\ 
Beware, don't believe all the news today :)

Puppet and 64-bit packages

Since I use Puppet to manage my machines (and my customers' machines), I noticed that I had more packages installed than before, and obviously the same behavior for the number of packages to update and the bandwidth consumed during updates. I realized that on 64-bit machines, most of the time, the 32-bit version of the packages managed by Puppet was also installed. This is what I did in my recipes before:
    package { "corosync":
        ensure => "installed",
        require => Yumrepo["clusterlabs"];
    }
This kind of package declaration then installed both versions of the package, in this case corosync, and their dependencies too. To avoid this, I added the hardwaremodel fact to the package name and used an alias to keep my recipes consistent (so existing references to Package["corosync"] keep working):
    package { "corosync.$hardwaremodel":
        ensure => "installed",
        alias => "corosync",
        require => Yumrepo["clusterlabs"];
    }
I hope this helps people who have noticed the same behavior... or not :-)

High Availability Open-Xchange Server

Since I first tested it 4 years ago, I have liked Open-Xchange (even if I'm not a fan of Java apps). I like the layout and also all the features it provides; the calendar is very complete. For a customer where I set it up 4 years ago, I've migrated this service to a cluster running the latest version. The machines are fully installed via kickstart from a PXE boot (using cobbler). This post describes the solution.

The setup is based on CentOS and uses the corosync / pacemaker pair as the cluster stack. The solution consists of two nodes where only one machine provides the service. The components are:

- one IP address balanced between the two nodes
- apache running on the "active/master" server (the server providing the service)
- open-xchange running on one node at a time
- funambol running on one node at a time
- openldap running on both machines in mirroring
- cyrus running on both machines as master/slave
- mysql running on both machines in master/master replication

This is an overview of the crm:

Most of the needed steps are put in some Puppet recipes to help the provisioning (you can find them on my github account). With the cyrus-imapd delivered by default on Red Hat/CentOS, when the cyrus master starts without the slave running, cyrus won't reply for a long time... the bug we are hitting here has been resolved in a newer version. I use cyrus-imapd 2.4.6, packaged by Simon Matter. You can find the source of this package here

MySQL & Friends Meetup at Fosdem 2011

Like last year, I'll be present at the MySQL & Friends Meetup on the Saturday evening of FOSDEM. If you want to share some experience around MySQL, please join! You can register here

As MySQL Community Manager, I am an employee of Oracle and the views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

You can find articles I wrote on Oracle’s blog.