Author archives

2012 in review

31 December 2012 1 comment

The statistics elves have prepared a 2012 annual report for this blog.

Click here to see the complete report.

Categories: Uncategorized

A minimal linux system – Part 3

28 December 2012 Leave a comment

We have managed, using only our bare hands, to:
– build a gcc-based cross-compiler
– build a linux kernel binary
– build a C standard library

Now we need some executables for our system. A minimum set would include a shell interpreter, some basic commands (cat, grep, ls, cp and so on) and an ssh server, because ssh is great.

The most common implementations of these tools on ‘desktop’ linux distributions are provided by GNU coreutils, GNU bash and openssh.


A more convenient way is to use the busybox project: busybox combines a shell interpreter and all the commonly used programs into a single binary, detecting how to behave depending on the name that was used to invoke it.
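The trick is easy to sketch in plain shell: the script below (a hypothetical stand-in for the busybox binary, which does the same thing in C by looking at argv[0]) dispatches on the name it was invoked under:

```shell
#!/bin/sh
# multi-call dispatch: one program, many names - the behaviour is chosen
# from the name used to invoke it (busybox inspects argv[0] the same way)
case "$(basename "$0")" in
    ls)  echo "acting as ls" ;;
    cat) echo "acting as cat" ;;
    *)   echo "unknown applet: $(basename "$0")" ;;
esac
```

Save it as multicall.sh, then `ln -s multicall.sh ls` and `./ls` prints “acting as ls”; busybox’s make install populates the filesystem with hundreds of such symlinks.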

As usual it is simple to build, as long as you provide a good configuration in the .config file (use mine, or run make menuconfig!)

curl "$BUSYBOX_VERSION.tar.bz2" -o busybox-$BUSYBOX_VERSION.tar.bz2
tar -jxvf busybox-$BUSYBOX_VERSION.tar.bz2
ln -s busybox-$BUSYBOX_VERSION.tar.bz2 busybox
pushd busybox
    #edit .config


The zlib compression library is really common and often used, so I decided to embed it as a shared library.

curl "$ZLIB_VERSION.tar.gz" -o zlib-$ZLIB_VERSION.tar.gz
tar -zxvf zlib-$ZLIB_VERSION.tar.gz
ln -s zlib-$ZLIB_VERSION zlib
pushd zlib
	export LDSHARED="$TOOLCHAIN_TARGET-gcc -shared -Wl,-soname,"
	./configure --shared --prefix=/usr/$TOOLCHAIN_TARGET
	make -B
	make install


dropbear is a super lightweight implementation of the ssh2 protocol, which makes it the perfect alternative to openssh! It compiles without any dependencies and works pretty well – just like busybox, several executables are bundled in the same binary. I chose to include the ssh server, the client, and scp.
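On the target, the server will need host keys before it can accept connections. A minimal first-boot sequence could look like the sketch below (the key path is my choice, and dropbearkey would have to be built too, e.g. by adding it to the PROGRAMS list):

```shell
#!/bin/sh
# generate the rsa host key once, then start the ssh server on port 22
mkdir -p /etc/dropbear
[ -f /etc/dropbear/dropbear_rsa_host_key ] || \
    dropbearkey -t rsa -f /etc/dropbear/dropbear_rsa_host_key
dropbear -p 22
```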



curl "$DROPBEAR_VERSION.tar.bz2" -o dropbear-$DROPBEAR_VERSION.tar.bz2
tar -jxvf dropbear-$DROPBEAR_VERSION.tar.bz2
ln -s dropbear-$DROPBEAR_VERSION dropbear
pushd dropbear
	export CFLAGS=-Os

	./configure --host=$TOOLCHAIN_TARGET

	make MULTI=1 PROGRAMS="dropbear scp dbclient"

Now we have all the ingredients: kernel, libraries, executables. In the next episode I’ll try to explain how to integrate all of these in order to get a functional system – a part that I found far more tricky than compiling the binaries themselves…


A minimal linux system – Part 2

28 December 2012 Leave a comment

Ok, we have a cross-compiler toolchain. Now we can deal with the real thing. At this point, I was tempted to compile the standard C and C++ libraries – but this is simply not possible without the header files provided… by the linux kernel.
It is quite logical, because implementations of standard C functions rely on system calls (it seems obvious once said) – so one cannot simply compile a standard library without the kernel headers; that’s why it is called a kernel after all.

So the most dramatic moment of this tutorial begins now: building the linux kernel itself.

Linux kernel


curl "$LINUX_KERNEL_VERSION.tar.bz2" -o linux-$LINUX_KERNEL_VERSION.tar.bz2
tar -jxvf linux-$LINUX_KERNEL_VERSION.tar.bz2
ln -s linux-$LINUX_KERNEL_VERSION linux
pushd linux

#linux kernel compilation options are normally set up by editing a .config file at the root of the linux source tree.
#This configuration file may be easily modified by doing:
# apt-get install libncurses5-dev
# make menuconfig
# I have my own settings right in this file, so
curl "" -o .config

#At this step I can apply various kernel patches, for example Preempt-RT (
#curl "" -o patch-3.6.11-rt25.patch.bz2
#cat patch-3.6.11-rt25.patch.bz2 | bunzip2 | patch -p1

#then build the kernel image itself with our cross toolchain
make ARCH=i386 CROSS_COMPILE=$TOOLCHAIN_TARGET- bzImage


#we also copy some headers that will be needed by the libc headers
mkdir -p $TOOLCHAIN_INSTALL_DIR/include/asm
for h in `find ./include/asm-generic -name "*.h"`; do
    name=`basename $h`
    if [ ! -f "$TOOLCHAIN_INSTALL_DIR/include/asm/$name" ]; then
        ln -s $h $TOOLCHAIN_INSTALL_DIR/include/asm/$name
    fi
done


Standard C library: uClibc

On a decent platform, executables are linked with a standard library, which provides implementations of all standard C functions (no ‘hello world’ without it 🙂 )
We use our freshly-compiled compiler in order to compile it (uh, a lot of compilations, don’t you think?)
There are several implementations of the standard C library: the GNU libc (which may be quite bloated for my use), newlib, dietlibc, musl, uclibc. I chose uClibc because I have no imagination, but I think that any of the listed ones would be fine.

curl "$UCLIBC_VERSION.tar.bz2" -o uClibc-$UCLIBC_VERSION.tar.bz2
tar -jxvf uClibc-$UCLIBC_VERSION.tar.bz2
ln -s uClibc-$UCLIBC_VERSION uClibc
pushd uClibc

#we have to provide a configuration - it can be done by running make menuconfig just like for the linux kernel, or you can reuse mine
curl -o .config

#then build, and copy the headers and libraries to the right places in the toolchain's tree:
make CROSS=$TOOLCHAIN_TARGET-
make CROSS=$TOOLCHAIN_TARGET- PREFIX=$TOOLCHAIN_INSTALL_DIR install

Now the build chain is quite complete: we can build plain executables for our linux system. The only missing piece is the C++ runtime, which is only useful if you are going to run apps written in C++ on your system – as that is not the case with mine, I skipped it. But here are the steps for building it:

C++ runtime

As usual, fetch the sources, configure, and compile !

curl "$UCLIBCXX_VERSION.tar.bz2" -o uClibc++-$UCLIBCXX_VERSION.tar.bz2
tar -jxvf uClibc++-$UCLIBCXX_VERSION.tar.bz2
ln -s uClibc++-$UCLIBCXX_VERSION uClibc++
pushd uClibc++
     #make menuconfig, or get my .config at
     make CROSS_COMPILE=$TOOLCHAIN_TARGET-
     make CROSS_COMPILE=$TOOLCHAIN_TARGET- PREFIX=$TOOLCHAIN_INSTALL_DIR install

A minimal linux system – Part 1

28 December 2012 Leave a comment

As a challenge to myself, in order to understand how one can build a linux system, I started playing at building a running OS from scratch.
The challenge was the following: having a running operating system on a PC (a virtualbox virtual machine was my target platform, for convenience),
with a shell, at least a decent set of standard commands (ls, cp, grep, etc.), networking support, and an ssh server.

The main goals were:
– proving to myself that building a linux OS is easy
– seeing how far one can go towards the lightest linux-based system possible

I aimed at reducing the size of the system, so I had to cut corners whenever possible in order to produce the smallest binaries. That’s the reason why it doesn’t embed fancy hardware detection mechanisms, broad filesystem support,
software package systems like apt, etc.


I chose to compile for x86 PC. By changing some options in the following tutorial, it should be easy to set things up in order to compile for another
target platform – arm, for instance. Kernel configuration and module compilation are the main things that make compiling linux for alternative
platforms a bit difficult. The problem is not the processor itself, but the integration between linux and the underlying hardware (the arch/ directory in the kernel source tree), which
may be hard to configure properly on exotic boards. I didn’t actually run into these issues, as the x86 PC is a baseline platform for linux.
I also chose x86 because I thought it would produce smaller binaries.

Fitted for one machine

The stock kernels and modules shipped by linux distributions (debian, fedora and so on) are configured to potentially work on a large number of
different machines, which is not my case. My goal was to be able to run on one (virtual) machine, so the set of supported hardware in my linux image is limited to this
particular machine:
– one floppy disk unit
– one ide channel with a cdrom drive and one hard disk
– a generic intel pro/1000MT network card
– no sound support

Disclaimer: you should use buildroot!

Some good projects help building cross-compiler based toolchains for various platforms. buildroot ( allows you to easily generate a complete
embedded linux system. The fact is that I actually wanted to do it myself in order to understand how it works – that’s the reason why I did not use such a tool.
If you just want an embedded linux system, you should use buildroot instead of following this tutorial, as it is imo the most proper and standard way to do it.
It automates a lot of things – as I wanted to customize almost everything, I did not use it – but you should know that using buildroot is much more efficient!
Many good tutorials explain how to use it, just google for it!

Let’s begin

Part 1 of this tutorial covers the compilation of the compiler itself: the first thing we need for building executables for another platform is a compiler that can produce code for the target platform. This activity is called cross-compiling. My development machine is a 64-bit ubuntu-based system,
and I want to compile for ia32 (I could also compile for arm or any instruction set supported by gcc) – so I need a custom gcc + binutils.

GNU binutils

binutils is a set of programs for manipulating binaries; among them are a linker and an assembler. These will be used by our compiler. Compiling it is quite easy:

apt-get install curl build-essential flex bison


mkdir -p $TOOLCHAIN_INSTALL_DIR/build_src
curl${BINUTILS_VERSION}.tar.bz2 -o "binutils-${BINUTILS_VERSION}.tar.bz2"
tar -jxvf binutils-${BINUTILS_VERSION}.tar.bz2
ln -s binutils-${BINUTILS_VERSION} binutils
pushd binutils
./configure "--target=$TOOLCHAIN_TARGET" "--prefix=$TOOLCHAIN_INSTALL_DIR"
make install
popd #binutils
popd #toolchain


The compiler suite itself.
Gcc relies on third-party libraries like gmp, mpfr and mpc, which have to be retrieved separately. The gcc makefile just needs the source code of these libs to be
uncompressed inside the gcc source tree in order to be detected and used during the compilation of gcc itself.

apt-get install zip zlib1g-dev libgmp-dev libmpfr-dev


mkdir -p $TOOLCHAIN_INSTALL_DIR/build_src/gcc
pushd $TOOLCHAIN_INSTALL_DIR/build_src/gcc

curl "$GCC_VERSION/gcc-$GCC_VERSION.tar.bz2" -o gcc-$GCC_VERSION.tar.bz2
tar -jxvf gcc-$GCC_VERSION.tar.bz2
ln -s gcc-$GCC_VERSION gcc
pushd gcc

curl "$GMP_VERSION/gmp-$GMP_VERSION.tar.bz2" -o "gmp-$GMP_VERSION.tar.bz2"
tar -jxvf gmp-$GMP_VERSION.tar.bz2
ln -s gmp-$GMP_VERSION gmp

curl "$MPFR_VERSION.tar.bz2" -o "mpfr-$MPFR_VERSION.tar.bz2"
tar -jxvf mpfr-$MPFR_VERSION.tar.bz2
ln -s mpfr-$MPFR_VERSION mpfr

curl "$MPC_VERSION.tar.gz" -o "mpc-$MPC_VERSION.tar.gz"
tar -zxvf mpc-$MPC_VERSION.tar.gz
ln -s mpc-$MPC_VERSION mpc

#gcc's configure script should not be run with its source tree as the current path! - we use a different (empty) directory during compilation
mkdir -p $TOOLCHAIN_INSTALL_DIR/build_obj/gcc
pushd $TOOLCHAIN_INSTALL_DIR/build_obj/gcc

#generate makefile - configure is run from the empty build directory, pointing back at the source tree
$TOOLCHAIN_INSTALL_DIR/build_src/gcc/gcc/configure --target=$TOOLCHAIN_TARGET \
--prefix=$TOOLCHAIN_INSTALL_DIR \
--with-system-zlib \
--disable-nls \
--disable-shared \
--disable-libssp \
--disable-multilib \
--disable-libgcj \
--disable-libada \
--enable-interwork \
--without-headers \
--with-gnu-ld \
--with-gnu-as \
--disable-decimal-float \
--disable-libmudflap \
--disable-libquadmath \
--disable-libgomp \
--disable-libstdc++-v3 \
--disable-libitm \
--without-ppl \
--without-cloog \
--enable-languages=c,c++
make install

#and finally add all produced executables to the path.
export PATH="$TOOLCHAIN_INSTALL_DIR/bin:$PATH"

Note that at this point we do not have support for the c++ language: even though g++ has been created, we bypassed the compilation of the c++ runtime library.
This will be fixed in the next step by relying on a lightweight c++ runtime (uClibc++) instead of the implementation provided with gcc (named libstdc++-v3).
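The stage-1 compiler can already be exercised, as long as we do not link: a freestanding object file needs no libc. A quick sketch of such a check (tool names follow the $TOOLCHAIN_TARGET prefix convention used above):

```shell
#!/bin/sh
# the stage-1 compiler has no libc to link against yet, so compile only (-c)
echo 'int add(int a, int b) { return a + b; }' > t.c
$TOOLCHAIN_TARGET-gcc -ffreestanding -c t.c -o t.o
# disassemble the object to confirm code was generated for the target
$TOOLCHAIN_TARGET-objdump -d t.o
```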


Installing Pootle 2.1.6 on Ubuntu

Pootle is a web-based tool that eases collaborative translation efforts on software projects. It allows people to log into a single interface in order to translate a project’s files, review them, etc.

The tool seems to be used by many well-known open-source projects (mozilla, filezilla, …) – it also seems that the reason for Pootle’s popularity is primarily its ‘open-sourceness’, and visibly the lack of alternative projects that do this particular thing: allow non-developers to contribute text translations to software projects through a web interface.

IMO it lacks some important features concerning synchronization with a project’s sources, for example under subversion. When creating a new project, things have to be done manually on the server, and this involves creating files and symlinks in a really convoluted way. It gives the feel of something quite unfinished, which needs a bunch of extra hand-written scripts in order to work properly.

I found it to be too simplistic: I wanted to let some non-developers provide their translations, and to be able to seamlessly commit their improvements into the project’s repository under subversion.

Anyway, I wanted to give this tool a try, and I found that the package provided for ubuntu was a bit old; that’s the reason why I give you the steps I followed to install pootle 2.1.6 (which is the latest version at the time I write this article).

This installation involves the apache server + mod_wsgi and mysql, so it is intended to be reliable enough for production use, as far as I know.

1) You need python => it should already be installed / if it’s not the case, apt-get install python

2) you need apache2, mysql, and various python libraries.

sudo apt-get install apache2 libapache2-mod-wsgi python-django python-mysqldb python-lxml python-levenshtein pylucene

3) You need ‘translate-toolkit’: this is the library that pootle uses to manage translations. pootle 2.1.6 needs translate-toolkit 1.9.0. Installing it is quite easy:
cd ~
tar -jxvf translate-toolkit-1.9.0.tar.bz2
cd translate-toolkit-1.9.0
python install

4) You need pootle (of course). I install it into /var/www:

cd /var/www
tar -jxvf Pootle-2.1.6.tar.bz2
chown -R www-data:www-data Pootle-2.1.6
ln -s Pootle-2.1.6/ pootle
rm Pootle-2.1.6.tar.bz2

Install it (this copies configuration files into various standard places):

/var/www/pootle/ install

Add execution rights on the script that apache will run through mod_wsgi:

chmod 770 /var/www/pootle/

Ensure that the settings file that will be used is the one that now lives in /etc/pootle:

rm /var/www/pootle/
ln -s /etc/pootle/ /var/www/pootle/

5) mysql database creation
Nothing particular to do, just create a database as usual:

apt-get install mysql-server

mysql -u root -p
CREATE DATABASE pootle CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON pootle.* TO pootle@localhost IDENTIFIED BY 'secretpassword';


6) Pootle configuration

Edit the settings file in /etc/pootle so that it relies on the mysql database you just created:

DATABASE_ENGINE = 'mysql' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = 'pootle' # Or path to database file if using sqlite3.
DATABASE_USER = 'pootle' # Not used with sqlite3.
DATABASE_PASSWORD = 'secretpassword' # Not used with sqlite3.
DATABASE_HOST = '' # Set to empty string for localhost. Not used with sqlite3.

7) memcached configuration

The pootle installation guide recommends using memcached in order to improve performance,
so I just installed and configured it by doing:

sudo apt-get install memcached python-memcache

in the settings file under /etc/pootle:

CACHE_BACKEND = 'memcached://localhost:11211/'
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'

8) Apache configuration

Pootle runs on python, and the pages are served by apache using mod_wsgi. Here is the configuration file for apache.

I put this configuration in /etc/apache2/sites-available/pootle. Static content will be served directly by apache:

<IfModule wsgi_module>

WSGIDaemonProcess pootle_wsgi user=www-data group=www-data processes=10 threads=1 maximum-requests=10000
WSGIProcessGroup pootle_wsgi
WSGIApplicationGroup pootle_wsgi
WSGIPassAuthorization On

WSGIScriptAlias /pootle /var/www/pootle/

# directly serve static files like css and images, no need to go through mod_wsgi and django
Alias /pootle/html /var/www/pootle/html
<Directory /var/www/pootle/html>
Order deny,allow
Allow from all
</Directory>

# Allow downloading translation files directly
Alias /pootle/export /var/www/pootle/po
<Directory /var/www/pootle/po>
Order deny,allow
Allow from all
</Directory>

</IfModule>

Then enable these settings by running:

a2ensite pootle ; /etc/init.d/apache2 reload

Pootle is now reachable at http://localhost/pootle


Installing virtualbox at OVH

26 December 2011 3 comments

Here is how I installed virtualbox, along with an excellent web administration app named phpVirtualBox, on a kimsufi server at OVH.

First of all, bad news: the linux kernel has to be recompiled (procedure here); indeed, installing virtualbox involves installing kernel modules, and module support is disabled by default.

I use the more recent Ubuntu (Oneiric Ocelot) package provided by Oracle (4.1.8), not the package shipped by ubuntu (4.1.2), because it seems to me that the ‘daemon’ part is missing from Ubuntu’s package.

1) Download and install from the virtualbox website
dpkg -i virtualbox-4.1_4.1.8-75467~Ubuntu~oneiric_amd64.deb

2) Create an unprivileged user to run virtualbox in daemon mode (we avoid using root)
useradd vbox

Then create/edit the file /etc/default/virtualbox (the daemon’s configuration file):

By default the server listens on port 18083, locally only. Other parameters are documented here:

3) The vboxweb-service daemon runs virtualbox in ‘daemon’ mode, controllable through a soap interface.

#start automatically at boot
update-rc.d vboxweb-service defaults
#and start it right away
/etc/init.d/vboxweb-service restart

4) Oracle provides a non open-source « extension pack » for rdp protocol support, USB 2.0, etc.
I installed this extension so I could later use the flash rdp client in phpVbox, and thus display the virtual machines’ « screens » through the web client.

#Install the Oracle extensions (for rdp, usb 2.0 support, ...)
vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.8-75467.vbox-extpack

5) Installing phpVirtualBox

It is an excellent web interface for virtualbox:

If apache is not installed on your server:

apt-get install apache2 php5 php5-suhosin php5-xcache php5-gd

Then the installation of the application itself:

unzip -d /var/www
ln -s /var/www/phpvirtualbox-4.1-5 /var/www/vbox

#add the little icon for browsers
wget -O /var/www/favicon.ico

Rename the file vbox/config.php-example => config.php
In this file, fill in the vbox user’s password

Then simply access the server at the address
The default login is admin/admin, and it must be changed as soon as possible in the application’s File/change password menu


Recompiling the linux kernel at OVH

26 December 2011 2 comments

Here is a recipe for recompiling the linux kernel on your OVH server.
Why mutilate your poor server like this? Well, in my case it is simply to get access to module support.

Indeed, modules are disabled by default in the OS images provided by OVH. Even though I have always enjoyed
a good troll from time to time, this time I will let you be the sole judge of the real reasons that drive OVH’s engineers to do this.
The reason usually given is security (!)

The procedure was tested on Ubuntu, but should work without problems if you opted for Debian.
If you opted for something else, sorry, but I cannot help you 😦

1) The ingredients: install the packages needed to compile linux:

sudo apt-get install fakeroot kernel-wedge build-essential makedumpfile kernel-package libncurses5-dev lzma

2) Fetch the OVH-patched linux sources.

cd /usr/src/
tar -jxvf linux-
ln -s /usr/src/linux- /usr/src/linux

3) Also fetch the kernel configuration

cp 2.6-config-xxxx-std-ipv6-64-hz1000 linux-

I chose the configuration with the 1000Hz timer.
4) In my case, I went and re-enabled the module support option:

cd linux-
make menuconfig #and re-enable module support
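As a side note, kernel trees also ship a scripts/config helper that can toggle options without the menuconfig UI; assuming it is present in the OVH tree, the step above could be scripted as:

```shell
#!/bin/sh
# re-enable loadable module support non-interactively
cd /usr/src/linux
./scripts/config --enable MODULES
./scripts/config --enable MODULE_UNLOAD
# let the config machinery pick defaults for newly reachable options
yes "" | make oldconfig
```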

5) Grub must be reconfigured to take our new kernel into account:


Note that this will of course only work if grub2 is the bootloader! If it is not, I can only advise you to install it: apt-get install grub2 should be enough…

6) Then mix it all together. This method produces a magnificent .deb file that just needs to be installed with the dpkg command.

make-kpkg --rootcmd fakeroot --initrd modules kernel-image #go grab a coffee
dpkg -i /usr/src/linux-image-

This command generates a configuration for our new kernel; next, grub’s loading order must be changed so that our kernel is used instead of the old one. To do this, locate the file in /etc/grub.d that corresponds to our new configuration (xx_linux), and give it a lower number than xx_OVHkernel by renaming the files, which in the end gives something like:

root@ks313xxx:/# ls /etc/grub.d/
00_header 10_linux 40_custom README
05_debian_theme 15_OVHkernel 30_os-prober 41_custom
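Concretely, the renumbering is just a rename followed by regenerating grub’s configuration. A sketch (the original number of the OVH script is hypothetical here; check yours with ls):

```shell
#!/bin/sh
# push the OVH entry after 10_linux so our kernel comes first,
# then rebuild /boot/grub/grub.cfg
mv /etc/grub.d/06_OVHkernel /etc/grub.d/15_OVHkernel
update-grub
```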

Finally the best part: you must of course reboot into your new kernel.

shutdown -r now

The machine should become reachable again over ssh after a few minutes.
You can check that the new kernel is indeed installed on the server:

root@ks313xxx:~# uname -a
Linux #4 SMP Fri Nov 18 10:53:35 CET 2011 x86_64 x86_64 x86_64 GNU/Linux

Great! And what about the modules?


The magic of technology.
