Category devops

CentOS 6 very slow with Vagrant

I use Vagrant to test almost everything. But since I upgraded to VirtualBox 4.2.x with CentOS 6 as guest OS, I had the impression that everything was slower... and I got used to it... But this weekend, while preparing SELinux policies for Percona XtraDB Cluster, I noticed that it was really slow... really, really, very very slooooow :'-( And I found the reason! I first tried to add some kernel parameters like:
noacpi 
noapic 
divider=10 
notsc
But that didn't help. Then I just enabled IO APIC in the VM's configuration and everything was much faster! The machine booted faster and, in my case, loading the SELinux policies was much faster too! Have a look at the difference. Without IO APIC:
[root@node2 ~]# time semodule -i percona-xtradb-cluster-full.pp

real	6m3.646s
user	1m34.430s
sys	3m42.805s
With IO APIC:
[root@node2 ~]# time semodule -i percona-xtradb-cluster-full.pp

real	0m14.611s
user	0m13.829s
sys	0m0.769s
To enable IO APIC from Vagrant, add these parameters to your Vagrantfile:
config.vm.customize ["modifyvm", :id, "--memory", "256", "--ioapic", "on"]
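For reference, on newer Vagrant versions (1.1 and later) the `customize` call moved into a provider block; the same VirtualBox `--ioapic` flag applies. A minimal sketch (the box name is an assumption):

```ruby
# Hypothetical Vagrantfile for Vagrant 1.1+: VirtualBox-specific tweaks
# live in a provider block, with the same modifyvm flags as above.
Vagrant.configure("2") do |config|
  config.vm.box = "centos6"   # assumed box name
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "256", "--ioapic", "on"]
  end
end
```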

MySQL facter

Yesterday I started to play with mcollective. I added some agents like service and facter, and I really liked the facter agent... so I decided to add facts for MySQL. Of course, I needed to learn some Ruby first :-) A fact is created for the version, and all the other MySQL facts come from the SHOW STATUS statement. All the new facts start with mysql_. I plan to add new facts related to replication, like Seconds_Behind_Master. The current version is available on GitHub here
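The idea behind these facts can be sketched in a few lines of Ruby. This is not the actual code from the repository (the parsing helper and the sample data are assumptions): it turns the tab-separated output of `SHOW STATUS` into mysql_-prefixed fact names, and in a real fact file each pair would then be registered with `Facter.add(name) { setcode { value } }`:

```ruby
# Rough sketch: convert tab-separated SHOW STATUS output (as produced by
# `mysql -NBe "SHOW STATUS"`) into a hash of mysql_-prefixed fact names.
def mysql_status_facts(show_status_output)
  show_status_output.each_line.with_object({}) do |line, facts|
    name, value = line.chomp.split("\t", 2)
    next if name.nil? || value.nil?
    facts["mysql_#{name.downcase}"] = value
  end
end

# Example with output captured from a hypothetical server:
sample = "Max_used_connections\t3\nThreads_connected\t2\n"
mysql_status_facts(sample)
# => {"mysql_max_used_connections"=>"3", "mysql_threads_connected"=>"2"}
```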

Some usage examples:

with facter:
[root@delvaux facter]# facter mysql_version 2>/dev/null
5.5.10
[root@delvaux facter]# facter mysql_max_used_connections 2>/dev/null
3
with mcollective:
[root@delvaux facter]# mc-facts mysql_version
Report for fact: mysql_version

        5.0.51a-24+lenny5-log                   found 1 times
        5.5.10                                  found 1 times

Finished processing 2 / 2 hosts in 64.36 ms



[root@delvaux facter]# mc-facts mysql_open_files
Report for fact: mysql_open_files

        16                                      found 1 times
        18                                      found 1 times

Finished processing 2 / 2 hosts in 2418.13 ms

[root@delvaux facter]# mc-facts mysql_open_files -v
Determining the amount of hosts matching filter for 2 seconds .... 2
Report for fact: mysql_open_files

        16                                      found 1 times

            delvaux.maladree.be

        18                                      found 1 times

            debian1.maladree.be


---- rpc stats ----
           Nodes: 2 / 2
     Pass / Fail: 0 / 0
      Start Time: Sat Apr 02 00:11:46 +0200 2011
  Discovery Time: 2001.84ms
      Agent Time: 1344.24ms
      Total Time: 3346.08ms

[root@delvaux facter]# mc-facts mysql_threads_connected 
Report for fact: mysql_threads_connected

        2                                       found 2 times

Finished processing 2 / 2 hosts in 3270.86 ms

[root@delvaux facter]# mc-facts mysql_threads_connected -v
Determining the amount of hosts matching filter for 2 seconds .... 2
Report for fact: mysql_threads_connected

        2                                       found 2 times

            debian1.maladree.be
            delvaux.maladree.be


---- rpc stats ----
           Nodes: 2 / 2
     Pass / Fail: 0 / 0
      Start Time: Sat Apr 02 00:12:47 +0200 2011
  Discovery Time: 2001.73ms
      Agent Time: 50.43ms
      Total Time: 2052.15ms

Puppet and 64bits packages

Since I use Puppet to manage my machines (and the machines of customers), I noticed that I had more packages installed than before, and obviously the same behavior for packages to update and for bandwidth consumption during updates. I realized that on 64-bit machines, most of the time, the 32-bit version of the packages managed by Puppet was also installed. This is what I did in my recipes before:
    package { "corosync":
        ensure => "installed",
        require => Yumrepo["clusterlabs"];
    }
This kind of package declaration then installed both versions of the package, in this case corosync, and their dependencies too. To avoid this, I added the hardwaremodel fact to the package name and used an alias to keep my recipes consistent:
    package { "corosync.$hardwaremodel":
        ensure => "installed",
        alias => "corosync",
        require => Yumrepo["clusterlabs"];
    }
Hope this helps people who have noticed the same behavior... or not :-)

devops… to package or not to package… that is the question!

During Devopsdays in Hamburg, one of the most recurring discussions was "packaging vs. non-packaging: when and what?" I won't try to convince people of what to do when, nor will I claim to have the absolute best solution; this post just illustrates the solution I implemented with @zipkid. Some points aren't finished or implemented yet... or we have not yet decided which direction to follow.

First, let's start with a description of the environment:

A web-based application (J2EE) with a MySQL backend; the product is delivered to us as a tgz archive. There are many interconnections between gateways, applications, databases, map servers, etc., all defined in configuration files. We are using SLES from 10 to 11 SP1 and we maintain a bunch of servers: physical machines of different types (Dell, IBM blades, ...) and virtual machines.

What tools do we use?

- GNU/Linux
- Redmine + kanban board plugin to define the tasks
- a PXE installation system (AutoYaST on SLES, Cobbler on Red Hat/CentOS/Fedora) to (re)install the machines
- Puppet to deploy the configurations
- git to store all our Puppet configurations
- svn to store other things like spec files (this should be migrated to git)
- puppet-dashboard to get an overview of the deployed machines and of Puppet, and to define some variables we use in our recipes
- rpmbuild to ... euh... build the RPMs :)
- JMeter to perform load tests
- Nagios to monitor the systems

What is the process, then?

To define the process, we must first divide it into two categories:
- OS installation and maintenance
- "our business product"
To install a machine, we install a basic image on it (virtual or physical) via PXE boot, using kickstart-like files for Red Hat-based systems or AutoYaST for SLES. We create the node in the dashboard and add some variables if needed, like IP, environment, and task. We add the server to Puppet's autosign file. In the dashboard and in Puppet we have several environments that are linked to git branches; this allows us to test recipes or settings without modifying production. Then Puppet is started and takes care of everything: VLAN interfaces, bonding the interfaces, DNS resolving, installing the needed packages, and changing the configuration files. Nagios checks are also configured by Puppet. For our product, we first create the package (RPM) from the tgz provided by the developers and put it in our own repository. After having installed it on the test servers, we start some load test scenarios.

Back to the big question then: do we package ?

The answer is definitely YES! It lets us keep control of what is installed on the system (package version and release, and no orphaned files). BUT the default configuration files are overridden by the Puppet run: conf files, XML files, shell scripts, and cron jobs are provided by Puppet and kept in git (which gives us version control too). Of course, Puppet runs constantly on every machine to guarantee the desired state, both on production and on the test machines! This is only dangerous if you don't test your Puppet recipes enough during the development phase. We don't run the Puppet client in daemon mode; we start the process via cron jobs to avoid the memory usage issues we encountered with puppetd in daemon mode.

How to improve ?

We would like to improve the load tests and automate the build, installation, and testing of "our product" on the test server. We plan to use Hudson for CI, with JMeter for unit tests, and why not Tsung for bigger load tests? An open question we still have if we deploy a CI system is how to link a build version with a Puppet configuration. Using a new git branch linked to a new environment in Puppet (and puppet-dashboard) doesn't seem to be an optimal solution. We opted instead for a git tag corresponding to the build release; only the latest one in testing is deployed on the test machines. If needed, we can roll back to a previous tag and package. It would also be great to automatically test our Puppet recipes with a tool like cucumber-puppet. I think we are going in the right direction, but the road is still long to a fully automated process with overview and control of all aspects. But we all agree that Puppet has already helped us a lot to maintain all our servers.

This is a schema illustrating the process:

1. the developers provide a tgz with their application (a compiled Java application; they also use Hudson to test their package)
2. the "DEVOPS" machine is started! Devs and Ops collaborate to write the specs for the RPM package and the Puppet recipe (dependencies, configuration settings)
3. test the package build and the Puppet recipe (with cucumber-puppet)
4. add the package to the RPM repository and commit the Puppet recipe to git (and the RPM spec to svn, in our case)
5. the puppetmaster gets updated with the new recipes
6. only in the case of a new machine: the machine is automatically installed via PXE
7. the Puppet client installs the needed packages and configures the system as needed
8. Puppet also configures Nagios, and Nagios automatically starts monitoring the machine and its services; Hudson also starts unit tests and load tests if needed
9. same as point 6
10. Puppet installs the needed packages and configuration on the production machine; it also configures Nagios to monitor the machine and its services