2012 in review

31 December 2012 · 1 comment

The statistics elves at WordPress.com have prepared a 2012 annual report for this blog.

Click here to see the complete report.


A minimal linux system – Part 3

28 December 2012 · Leave a comment

We have managed, using only our bare hands, to:
– build a gcc-based cross-compiler
– build a linux kernel binary
– build a C standard library

Now we need some executables for our system. A minimum set would include a shell interpreter, some basic commands (cat, grep, ls, cp and so on) and an ssh server, because ssh is great.

The most common implementations of these tools on ‘desktop’ linux distributions are provided by GNU coreutils, GNU bash and openssh.


A more convenient way is to use the busybox project: busybox combines a shell interpreter and all the commonly used programs into a single binary, and detects how to behave depending on the name that was used to invoke it.
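The dispatch-on-name trick is easy to reproduce with a plain shell script. Here is a toy sketch (not busybox itself; the file names are made up for the example):

```shell
# Toy illustration of busybox-style dispatch: one script, several names.
cat > /tmp/multitool <<'EOF'
#!/bin/sh
# Behave differently depending on the name used to invoke us ($0),
# exactly like busybox selects an applet from argv[0].
case "$(basename "$0")" in
    hello) echo "hello world" ;;
    upper) tr 'a-z' 'A-Z' ;;
    *)     echo "unknown applet" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/multitool
ln -sf /tmp/multitool /tmp/hello
ln -sf /tmp/multitool /tmp/upper
/tmp/hello                 # prints: hello world
echo busybox | /tmp/upper  # prints: BUSYBOX
```

busybox does the same in C: its install step populates the target tree with symlinks (ls, cp, sh, …) that all point to the single busybox binary.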

As usual it is simple to build, as long as you provide the right configuration in the .config file (use mine or run make menuconfig!)

curl "http://www.busybox.net/downloads/busybox-$BUSYBOX_VERSION.tar.bz2" -o busybox-$BUSYBOX_VERSION.tar.bz2
tar -jxvf busybox-$BUSYBOX_VERSION.tar.bz2
ln -s busybox-$BUSYBOX_VERSION busybox
pushd busybox
    #edit .config (or run 'make menuconfig')
    make CROSS_COMPILE=$TOOLCHAIN_TARGET-
popd


The zlib compression library is really common and often used, so I decided to embed it as a shared library.

curl "http://zlib.net/zlib-$ZLIB_VERSION.tar.gz" -o zlib-$ZLIB_VERSION.tar.gz
tar -zxvf zlib-$ZLIB_VERSION.tar.gz
ln -s zlib-$ZLIB_VERSION zlib
pushd zlib
	export LDSHARED="$TOOLCHAIN_TARGET-gcc -shared -Wl,-soname,libz.so.1"
	./configure --shared --prefix=/usr/$TOOLCHAIN_TARGET
	make -B
	make install
popd


dropbear is a super lightweight implementation of the ssh2 protocol, which makes it the perfect alternative to openssh! It compiles without any dependency and works pretty well – just like busybox, several executables are bundled in the same binary. I chose to include the ssh server, the client, and scp.



curl "https://matt.ucc.asn.au/dropbear/dropbear-$DROPBEAR_VERSION.tar.bz2" -o dropbear-$DROPBEAR_VERSION.tar.bz2
tar -jxvf dropbear-$DROPBEAR_VERSION.tar.bz2
ln -s dropbear-$DROPBEAR_VERSION dropbear
pushd dropbear
	export CFLAGS=-Os

	./configure --host=$TOOLCHAIN_TARGET
	make MULTI=1 PROGRAMS="dropbear scp dbclient"
popd

Now we have all the ingredients: kernel, libraries, executables. In the next episode I’ll try to explain how to integrate all of these in order to get a functional system – a part that I found more tricky than compiling the binaries themselves…


A minimal linux system – Part 2

28 December 2012 · Leave a comment

Ok, we have a cross-compiler toolchain. Now we can deal with the real thing. At this point, I was tempted to compile the standard C and C++ libraries – but this is simply not possible without the header files provided… by the linux kernel.
It is quite logical, because implementations of standard C functions rely on system calls (it seems obvious once said) – so one cannot simply compile a standard library without the kernel headers; that’s why it is called a kernel after all.

So the most dramatic moment of this tutorial begins now: building the linux kernel itself.

Linux kernel


curl "http://www.kernel.org/pub/linux/kernel/v3.0/linux-$LINUX_KERNEL_VERSION.tar.bz2" -o linux-$LINUX_KERNEL_VERSION.tar.bz2
tar -jxvf linux-$LINUX_KERNEL_VERSION.tar.bz2
ln -s linux-$LINUX_KERNEL_VERSION linux
pushd linux

#linux kernel compilation options are normally set up by editing a .config file at the root of the linux source tree.
#This configuration file may be easily modified by doing :
# apt-get install libncurses5-dev
# make menuconfig
# I have my own settings right in this file, so
curl "http://julien-rialland.fr/pub/linux/linux-x86-config" -o .config

#At this step I can apply various kernel patches, for example Preempt-RT (http://en.wikipedia.org/wiki/RTLinux)
#curl "http://www.kernel.org/pub/linux/kernel/projects/rt/3.6/patch-3.6.11-rt25.patch.bz2" -o patch-3.6.11-rt25.patch.bz2
#cat patch-3.6.11-rt25.patch.bz2 | bunzip2 | patch -p1


#we also copy some headers that will be needed by the libc headers
for h in `find ./include/asm-generic -name "*.h"`; do
    name=`basename $h`
    if [ ! -f "$TOOLCHAIN_INSTALL_DIR/include/asm/$name" ]; then
        ln -s $h $TOOLCHAIN_INSTALL_DIR/include/asm/$name
    fi
done

#build the kernel image itself (this is the long part)
make ARCH=x86 CROSS_COMPILE=$TOOLCHAIN_TARGET- bzImage
popd


standard C library : uclibc

On a decent platform, executables are linked with a standard library, which provides the implementation of all standard C functions (no ‘hello world’ without it 🙂 )
We use our freshly-compiled compiler in order to compile it (uh, a lot of compilations, don’t you think?)
There are several implementations of the standard C library: the GNU libc (which may be quite bloated for my use), newlib, dietlibc, musl, uclibc. I chose uclibc because I have no imagination, but I think that any of the listed ones would be fine.

curl "http://www.uclibc.org/downloads/uClibc-$UCLIBC_VERSION.tar.bz2" -o uClibc-$UCLIBC_VERSION.tar.bz2
tar -jxvf uClibc-$UCLIBC_VERSION.tar.bz2
ln -s uClibc-$UCLIBC_VERSION uClibc
pushd uClibc

#we have to provide a configuration - it can be done by running make menuconfig just like for the linux kernel, or you can reuse mine
curl http://julien-rialland.fr/pub/uclibc.config -o .config

#build, then copy the headers and libraries at the right places into the toolchain's tree :
make CROSS=$TOOLCHAIN_TARGET-
make CROSS=$TOOLCHAIN_TARGET- PREFIX=$TOOLCHAIN_INSTALL_DIR install
popd

Now the build chain is quite complete, and we can build plain executables for our linux system. The only missing thing is the c++ runtime, which is only useful if you are going to run apps written in c++ on your system – as it is not the case with mine, I skipped it. But here are the steps for building it:

C++ runtime

As usual, fetch the sources, configure, and compile !

curl "http://cxx.uclibc.org/src/uClibc++-$UCLIBCXX_VERSION.tar.bz2" -o uClibc++-$UCLIBCXX_VERSION.tar.bz2
tar -jxvf uClibc++-$UCLIBCXX_VERSION.tar.bz2
ln -s uClibc++-$UCLIBCXX_VERSION uClibc++
pushd uClibc++
     #make menuconfig (the cross-compiler prefix is set in the configuration), or get my .config at julien-rialland.fr
     make
     make install
popd

A minimal linux system – Part 1

28 December 2012 · Leave a comment

As a challenge to myself, in order to understand the way one can build a linux system, I started playing at building a running OS from scratch.
The challenge was the following: have a running operating system on a PC (a virtualbox virtual machine was my target platform, for convenience),
with a shell, at least a decent set of standard commands (ls, cp, grep, etc.), networking support, and an ssh server.

The main goals were:
– proving to myself that building a linux OS is easy
– seeing how far one can go in order to have the lightest linux-based system possible

I aimed at reducing the size of the system, so I had to cut corners whenever possible in order to produce the smallest binaries. That’s the reason why it doesn’t embed fancy hardware detection mechanisms, filesystems support,
software package systems like apt, etc.


I chose to compile for an x86 PC. By changing some options in the following tutorial, it should be easy to set things up to compile for another
target platform – arm for instance. Linux kernel configuration and module compilation are the main things that make compiling linux for alternative
platforms a bit difficult. The matter is not the processor itself, but the integration between linux and the underlying hardware (the arch/ directory in the kernel source tree), which
may be hard to configure properly on exotic boards. Actually I didn’t run into these issues, as an x86 PC is a basic platform for linux.
I also chose x86 because I thought it would produce smaller binaries.

fitted for one machine

The classical kernels and modules that ship with linux distributions (debian, fedora and so on) are configured to potentially work on a large number of
different machines, which is not my case. My goal was to run on one (virtual) machine, so the set of supported hardware in my linux image is limited to this
particular machine:
– one floppy disk unit
– one ide channel with a cdrom drive and one hard disk
– generic intel pro/1000MT network card
– no sound support

Disclaimer : You should use buildroot !

Some good projects help building cross-compiler based toolchains for various platforms. buildroot (http://buildroot.uclibc.org/) allows to easily generate a complete
embedded linux system. The fact is that I actually wanted to do it myself in order to understand how it works – that’s the reason why I did not use such a tool.
If you are just aiming at having an embedded linux system, you should use it instead of following this tutorial, as it is imo the most proper and standard way to do it.
It automates a lot of things – as I wanted to customize almost everything, I did not use it – but you should know that using buildroot is much more efficient!!
Many good tutorials explain how to use it, just google for it!

Let’s begin

Part 1 of this tutorial covers the compilation of the compiler itself: the first thing we need for building executables for another platform is a compiler that can produce code for the target platform. This activity is named cross-compiling. My development machine is a 64-bit ubuntu-based system,
and I want to compile for ia32 (I could also compile for arm or any instruction set supported by gcc) – so I need a custom gcc + binutils.

GNU binutils

binutils are the programs that allow manipulating binaries; among them are a linker and an assembler. These will be used by our compiler. Compiling them is quite easy:

apt-get install curl build-essential flex bison


mkdir -p $TOOLCHAIN_INSTALL_DIR/build_src
pushd $TOOLCHAIN_INSTALL_DIR/build_src
curl ftp://sourceware.org/pub/binutils/releases/binutils-${BINUTILS_VERSION}.tar.bz2 -o "binutils-${BINUTILS_VERSION}.tar.bz2"
tar -jxvf binutils-${BINUTILS_VERSION}.tar.bz2
ln -s binutils-${BINUTILS_VERSION} binutils
pushd binutils
./configure "--target=$TOOLCHAIN_TARGET" "--prefix=$TOOLCHAIN_INSTALL_DIR"
make install
popd #binutils
popd #toolchain


The compiler suite itself.
Gcc relies on third-party libraries like gmp, mpfr and mpc, which have to be retrieved separately. The gcc makefile just needs the source code of these libs to be
uncompressed over the gcc source tree in order to be detected and used during the compilation of gcc itself.

apt-get install zip zlib1g-dev libgmp-dev libmpfr-dev


mkdir -p $TOOLCHAIN_INSTALL_DIR/build_src/gcc
pushd $TOOLCHAIN_INSTALL_DIR/build_src/gcc

curl "ftp://ftp.irisa.fr/pub/mirrors/gcc.gnu.org/gcc/releases/gcc-$GCC_VERSION/gcc-$GCC_VERSION.tar.bz2" -o gcc-$GCC_VERSION.tar.bz2
tar -jxvf gcc-$GCC_VERSION.tar.bz2
ln -s gcc-$GCC_VERSION gcc
pushd gcc

curl "ftp://ftp.gmplib.org/pub/gmp-$GMP_VERSION/gmp-$GMP_VERSION.tar.bz2" -o "gmp-$GMP_VERSION.tar.bz2"
tar -jxvf gmp-$GMP_VERSION.tar.bz2
ln -s gmp-$GMP_VERSION gmp

curl "http://www.mpfr.org/mpfr-current/mpfr-$MPFR_VERSION.tar.bz2" -o "mpfr-$MPFR_VERSION.tar.bz2"
tar -jxvf mpfr-$MPFR_VERSION.tar.bz2
ln -s mpfr-$MPFR_VERSION mpfr

curl "http://www.multiprecision.org/mpc/download/mpc-$MPC_VERSION.tar.gz" -o "mpc-$MPC_VERSION.tar.gz"
tar -zxvf mpc-$MPC_VERSION.tar.gz
ln -s mpc-$MPC_VERSION mpc

#gcc configure script should not be run with its source tree as current path ! - we may use a different (empty) directory during compilation
mkdir -p $TOOLCHAIN_INSTALL_DIR/build_obj/gcc
pushd $TOOLCHAIN_INSTALL_DIR/build_obj/gcc

#generate makefile - configure is invoked from the empty build directory, referencing the source tree
$TOOLCHAIN_INSTALL_DIR/build_src/gcc/gcc/configure --target=$TOOLCHAIN_TARGET \
--prefix=$TOOLCHAIN_INSTALL_DIR \
--with-system-zlib \
--disable-nls \
--disable-shared \
--disable-libssp \
--disable-multilib \
--disable-libgcj \
--disable-libada \
--enable-interwork \
--without-headers \
--with-gnu-ld \
--with-gnu-as \
--disable-decimal-float \
--disable-libmudflap \
--disable-libquadmath \
--disable-libgomp \
--disable-libstdc++-v3 \
--disable-libitm \
--without-ppl \
--without-cloog \
--enable-languages=c,c++
make
make install

#and finally add all produced executables to the path.
export PATH="$TOOLCHAIN_INSTALL_DIR/bin:$PATH"
popd #build_obj/gcc
popd #gcc
popd #build_src/gcc

Note that at this point we do not have support for the c++ language: even though g++ has been created, we bypassed the compilation of the c++ runtime library.
This will be fixed in the next step by relying on a lightweight c++ runtime (uClibc++) instead of the implementation provided with gcc (named libstdc++-v3).


Installing Pootle 2.1.6 on Ubuntu

Pootle is a web-based tool that eases collaborative translation efforts on software projects. It allows people to log in to a single interface in order to translate projects’ files, review them, etc.

The tool seems to be used by many well-known open-source projects (mozilla, filezilla, …) – it also seems that the reason for Pootle’s popularity is primarily its ‘open-sourceness’, and visibly the lack of alternative projects that do the same particular thing: allow non-developers to contribute text translations on software projects through a web interface.

IMO it lacks some important features concerning the synchronization with the sources of a project, for example under subversion. When creating a new project, things have to be done manually on the server, and this involves creating files and symlinks in a way that is really complicated. It gives the taste of something that is quite unfinished and needs a bunch of extra hand-written scripts in order to work properly.

I found it to be too simplistic: I wanted to let some non-developers provide their translations, and be able to seamlessly commit their improvements into the project’s subversion repository.

Anyway, I wanted to give this tool a try, and I found that the package provided for ubuntu was a bit old. That’s the reason why I give you the steps I followed to install pootle 2.1.6 (which is the latest version at the time I write this article).

This installation involves the apache server + mod_wsgi and mysql, so it is intended to be reliable enough for production use, as far as I know.

1) You need python => it should already be installed / if it’s not the case, apt-get install python

2) you need apache2, mysql, and various python libraries.

sudo apt-get install apache2 libapache2-mod-wsgi python-django python-mysqldb python-lxml python-levenshtein pylucene

3) You need ‘translate-toolkit’: this is the library that pootle uses to manage translations. pootle 2.1.6 needs translate-toolkit 1.9.0. Installing it is quite easy:
cd ~
wget http://kent.dl.sourceforge.net/project/translate/Translate%20Toolkit/1.9.0/translate-toolkit-1.9.0.tar.bz2
tar -jxvf translate-toolkit-1.9.0.tar.bz2
cd translate-toolkit-1.9.0
python setup.py install

4) You need pootle (of course). I install it into /var/www:

cd /var/www
wget http://heanet.dl.sourceforge.net/project/translate/Pootle/2.1.6/Pootle-2.1.6.tar.bz2
tar -jxvf Pootle-2.1.6.tar.bz2
chown -R www-data:www-data Pootle-2.1.6
ln -s Pootle-2.1.6/ pootle
rm Pootle-2.1.6.tar.bz2

Install it (copies configuration files into various standard places)

cd /var/www/pootle
python setup.py install

Add execution rights on the script that apache will run through mod_wsgi:

chmod 770 /var/www/pootle/wsgi.py

Ensure that the settings file that will be used is the one that now stands in /etc/pootle:

rm /var/www/pootle/localsettings.py
ln -s /etc/pootle/localsettings.py /var/www/pootle/localsettings.py

5) mysql database creation
Nothing particular to do, just create a database as usual:

apt-get install mysql-server

mysql -u root -p
CREATE DATABASE pootle CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON pootle.* TO pootle@localhost IDENTIFIED BY 'secretpassword';
FLUSH PRIVILEGES;


6) Pootle configuration

Edit the /etc/pootle/localsettings.py file in order to rely on the mysql database you just created:

DATABASE_ENGINE = 'mysql' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = 'pootle' # Or path to database file if using sqlite3.
DATABASE_USER = 'pootle' # Not used with sqlite3.
DATABASE_PASSWORD = 'secretpassword' # Not used with sqlite3.
DATABASE_HOST = '' # Set to empty string for localhost. Not used with sqlite3.

7) memcached configuration

The pootle installation guide recommends using memcached in order to improve performance,
so I just installed/configured it by doing:

sudo apt-get install memcached python-memcache

in /etc/pootle/localsettings.py :

CACHE_BACKEND = 'memcached://localhost:11211/'
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'

8) Apache configuration :

Pootle runs on python, and the pages are served by apache using mod_wsgi. Here is the configuration file for apache.

I put this configuration in /etc/apache2/sites-available/pootle. static content will be served directly by apache :

<IfModule wsgi_module>

WSGIDaemonProcess pootle_wsgi user=www-data group=www-data processes=10 threads=1 maximum-requests=10000
WSGIProcessGroup pootle_wsgi
WSGIApplicationGroup pootle_wsgi
WSGIPassAuthorization On

WSGIScriptAlias /pootle /var/www/pootle/wsgi.py

# directly serve static files like css and images, no need to go through mod_wsgi and django
Alias /pootle/html /var/www/pootle/html

<Directory /var/www/pootle/html>
Order deny,allow
Allow from all
</Directory>

# Allow downloading translation files directly
Alias /pootle/export /var/www/pootle/po
<Directory /var/www/pootle/po>
Order deny,allow
Allow from all
</Directory>

</IfModule>

Then enable these settings by running

a2ensite pootle ; /etc/init.d/apache2 reload

Pootle is now reachable at http://localhost/pootle


Installing virtualbox at OVH

26 December 2011 · 3 comments

Here is how I installed virtualbox, along with an excellent web administration app named phpVirtualBox, on a kimsufi server at OVH.

First of all, bad news: you have to recompile the linux kernel (procedure here). Indeed, installing virtualbox involves installing kernel modules, and module support is disabled by default.

I use the more recent Ubuntu (Oneiric Ocelot) package provided by Oracle (4.1.8), and not the package packaged by ubuntu (4.1.2), because it seems to me that the ‘daemon’ part is missing from Ubuntu’s package.

1) Download and install from the virtualbox website

dpkg -i virtualbox-4.1_4.1.8-75467~Ubuntu~oneiric_amd64.deb

2) Create an unprivileged user to run virtualbox in daemon mode (we avoid using root):
useradd vbox

Then create/edit the /etc/default/virtualbox file (the daemon’s configuration file).

By default the server listens on port 18083, locally only. Other parameters are documented here: http://code.google.com/p/phpvirtualbox/wiki/vboxwebServiceConfigLinux

3) The vboxweb-service daemon allows running virtualbox in ‘daemon’ mode, controllable through a soap interface.

#automatic start at boot
update-rc.d vboxweb-service defaults
#and start it right away
/etc/init.d/vboxweb-service restart

4) Oracle provides a non open-source ‘extension pack’ for rdp protocol support, USB 2.0, etc.
I installed this extension in order to later use the flash rdp client in phpVirtualBox, and thus display the virtual machines’ ‘screens’ through the web client.

#Install the Oracle extensions (for rdp, usb 2.0 support, ...)
wget http://download.virtualbox.org/virtualbox/4.1.8/Oracle_VM_VirtualBox_Extension_Pack-4.1.8-75467.vbox-extpack
vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.8-75467.vbox-extpack

5) Installing phpVirtualBox

It is an excellent web interface for virtualbox: http://code.google.com/p/phpvirtualbox/.

If apache is not installed on your server:

apt-get install apache2 php5 php5-suhosin php5-xcache php5-gd

Then install the application itself:

wget http://phpvirtualbox.googlecode.com/files/phpvirtualbox-4.1-5.zip
unzip phpvirtualbox-4.1-5.zip -d /var/www
ln -s /var/www/phpvirtualbox-4.1-5 /var/www/vbox

#add the little browser icon
wget https://www.virtualbox.org/favicon.ico -O /var/www/favicon.ico

Rename the file vbox/config.php-example => config.php
In this file, set the password of the vbox user.

Then simply access the server at http://ksxxxxxx.kimsufi.com/vbox
The default login is admin/admin, and should be changed as soon as possible in the application’s File/Change password menu.


Recompiling the linux kernel at OVH

26 December 2011 · 2 comments

Here is a recipe for recompiling the linux kernel on your OVH server.
Why mutilate your poor server like this? Well, in my case it is simply to get access to module support.

Indeed, modules are disabled by default in the OS images provided by OVH. Even though I have always enjoyed
a good troll from time to time, this time I’ll let you be the sole judge of the real reasons that push OVH’s engineers to do this.
The often-invoked reason is security (!)

The procedure was tested on Ubuntu, but should work without problems if you opted for Debian.
If you opted for something else, sorry but I cannot help you 😦

1) The ingredients: install the packages needed to compile linux:

sudo apt-get install fakeroot kernel-wedge build-essential makedumpfile kernel-package libncurses5-dev lzma

2) Get the OVH-patched linux sources.

cd /usr/src/
wget ftp://ftp.ovh.net/made-in-ovh/bzImage/old/
tar -jxvf linux-
ln -s /usr/src/linux- /usr/src/linux

3) Also get the kernel configuration:

wget ftp://ftp.ovh.net/made-in-ovh/bzImage/old/
cp 2.6-config-xxxx-std-ipv6-64-hz1000 linux-

I chose the configuration with the 1000Hz timer.

4) In my case, I went and re-enabled the module support option:

cd linux-
make menuconfig #and re-enable module support

5) grub has to be reconfigured to take our new kernel into account.


Beware, this will of course only work if grub2 is the bootloader! If it is not the case, I can only advise you to install it: apt-get install grub2 should be enough…

6) And then mix everything together. This method produces a magnificent .deb file that can simply be installed with the dpkg command.

make-kpkg --rootcmd fakeroot --initrd modules kernel-image #go get a coffee
dpkg -i /usr/src/linux-image-

This command generates a configuration for our new kernel; then grub’s load order has to be modified so that our kernel is picked instead of the old one.
To do this, locate the file in /etc/grub.d that corresponds to our new configuration (xx_linux), and give it a lower number
than xx_OVHkernel by moving the files around, which in the end gives something like:

root@ks313xxx:/# ls /etc/grub.d/
00_header 10_linux 40_custom README
05_debian_theme 15_OVHkernel 30_os-prober 41_custom

Finally, the best part: you have of course to reboot into your new kernel.

shutdown -r now

The machine should become reachable again through ssh after a few minutes.
You can check that the new kernel is properly installed on the server:

root@ks313xxx:~# uname -a
Linux ks313xxx.kimsufi.com #4 SMP Fri Nov 18 10:53:35 CET 2011 x86_64 x86_64 x86_64 GNU/Linux

Great! And what about the modules?


The magic of technology.


Liferay daemon script

5 December 2011 · 2 comments

I’ve googled around looking for a good daemon script for Liferay… and I finally wrote mine.

Here is the script I use for running a Liferay 6.0.6 instance as a sysv daemon under linux. The script has been tested on an Ubuntu 10.04 LTS server.
It just runs a normal liferay-tomcat bundle, unzipped in the /var/liferay6 directory.

#!/bin/bash
### BEGIN INIT INFO
# Provides:          liferay
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $local_fs $remote_fs $network
# Should-Start:      $named
# Should-Stop:       $named
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Liferay portal daemon.
# Description:       Starts the Liferay portal.
# Author:            Julien Rialland <julien.rialland@gmail.com>
### END INIT INFO

#Display name of the application
APP_NAME="Liferay 6.0.6"

#Location of Liferay installation
export LIFERAY_HOME=/var/liferay6

#unprivileged user/group that runs the daemon. The group/user should have been created separately,
#using groupadd/useradd (adjust these values to the account you created)
USER=liferay
GROUP=liferay

###This is the end of the configurable section for most cases, other variable definitions follow :

#Only root user may run this script
if [ `id -u` -ne 0 ]; then
	echo "You need root privileges to run this script"
	exit 1
fi

#tomcat directory
#detection of the tomcat directory within liferay
TOMCAT_DIR=`ls "$LIFERAY_HOME" | grep tomcat | head -1`
export CATALINA_HOME="$LIFERAY_HOME/$TOMCAT_DIR"

#location of pid file
export CATALINA_PID=/var/run/liferay.pid

# guess where JAVA_HOME is if needed (when the environment variable is not defined)
JVM_DIRS="/usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun /usr/lib/jvm/default-java /usr/lib/jvm/java-1.5.0-sun /usr/lib/j2sdk1.5-sun /usr/lib/j2sdk1.5-ibm"
if [ -z "$JAVA_HOME" ]; then
        for jdir in $JVM_DIRS; do
                if [ -r "$jdir/bin/java" -a -z "${JAVA_HOME}" ]; then
                        export JAVA_HOME="$jdir"
                fi
        done
fi

#if JAVA_HOME is still undefined, try to get it by resolving the path to the java program
if [ -z "$JAVA_HOME" ]; then
        javaexe=`which java`
        if [ ! -z "$javaexe" ]; then
                javaexe=`readlink -m "$javaexe"`
                export JAVA_HOME=`readlink -m "$javaexe/../.."`
        fi
fi

#if JAVA_HOME is still undefined, crash the script
if [ -z "$JAVA_HOME" ]; then
	echo 'The JAVA_HOME environment variable could not be determined !'
	exit 1
fi

#extra jvm configuration : enable jmx
#export JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"

#extra jvm configuration : enable remote debugging
#export JAVA_OPTS="$JAVA_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9998" 


#verify that the user that will run the daemon exists
id "$USER" > /dev/null 2>&1
if [ "$?" -ne "0" ]; then
	echo "User $USER does not exist !"
	exit 1
fi

#load utility functions from Linux Standard Base
. /lib/lsb/init-functions

#starts the daemon service
function start {
        log_daemon_msg "Starting $APP_NAME"

        #create work directory if non-existent
        mkdir "$CATALINA_HOME/work" 2>/dev/null

        #clear temp directory
        rm -rf "$CATALINA_HOME/temp" 2>/dev/null
        mkdir "$CATALINA_HOME/temp" 2>/dev/null

        #fix user rights on liferay home dir
        chown -R "$USER":"$GROUP" "$LIFERAY_HOME"
        chmod -R ug=rwx "$LIFERAY_HOME"

        #ensure that pid file is writeable
        mkdir `dirname "$CATALINA_PID"` 2>/dev/null
        chmod ugo=rw `dirname "$CATALINA_PID"`

        su "$USER" -c "$CATALINA_HOME/bin/catalina.sh start"
        status=$?

        log_end_msg $status
        exit $status
}

#stops the daemon service
function stop {
        log_daemon_msg "Stopping $APP_NAME"
        if [ ! -f "$CATALINA_PID" ]; then
            echo "file $CATALINA_PID is missing !"
            unset CATALINA_PID
        fi
        su "$USER" -c "$CATALINA_HOME/bin/catalina.sh stop 10 -force"
        status=$?
        log_end_msg $status
        if [ "$status" = "0" ]; then
            rm -f "$CATALINA_PID"
        fi
        exit $status
}

#restarts the daemon service
function restart {
        "$0" stop
        sleep 3s
        "$0" start
}

#prints service status
function status {
  if [ -f "$CATALINA_PID" ]; then
    pid=`cat "$CATALINA_PID"`
    echo "$APP_NAME is running with pid $pid"
    exit 0
  else
    echo "$APP_NAME is not running (or $CATALINA_PID is missing)"
    exit 1
  fi
}

case "$1" in
	start)
		start
		;;
	stop)
		stop
		;;
	restart)
		restart
		;;
	status)
		status
		;;
	*)
		echo $"Usage: $0 {start|stop|restart|status}"
		exit 1
		;;
esac

Just name the script ‘liferay’ and put it in /etc/init.d. If you want it to run automatically when the server starts up, just run the following commands:

sudo chmod u+x /etc/init.d/liferay
sudo update-rc.d liferay defaults


WTP Eclipse project generation from Maven configuration

The maven ‘eclipse’ plugin is a bit outdated, but very useful. I had some issues when generating the eclipse configuration, specifically for web projects.
Here is a description of how I finally managed to configure the plugin!

This configuration generates an Eclipse project configuration when you run mvn eclipse:eclipse. All you have to do then is import the project into Eclipse through the File>Import menu entry.

This configuration fixes some issues I used to have with mvn-generated Eclipse projects :

  • utf-8 encoding for all text files
  • correct versions in the project’s facets (servlet 3.0, java 1.6, javascript 1.0)
  • Correct web-specific settings (web root directory location, and use of the jar dependencies in the webapp)
  • Spring-enabled project nature

The plugins section of the pom.xml looks like that (the maven-eclipse-plugin declaration, with the WTP facets passed through additionalConfig):

		<!-- Settings for generating eclipse project -->
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-eclipse-plugin</artifactId>
			<configuration>
				<wtpversion>2.0</wtpversion>
				<additionalProjectnatures>
					<projectnature>org.springframework.ide.eclipse.core.springnature</projectnature>
				</additionalProjectnatures>
				<additionalConfig>
					<file>
						<name>.settings/org.eclipse.wst.common.project.facet.core.xml</name>
						<content><![CDATA[
<faceted-project>
	<fixed facet="jst.java"/>
	<fixed facet="jst.web"/>
	<installed facet="jst.java" version="${java.version}"/>
	<installed facet="jst.web" version="${servlet-api.version}"/>
	<installed facet="wst.jsdt.web" version="1.0"/>
</faceted-project>
]]></content>
					</file>
				</additionalConfig>
			</configuration>
		</plugin>


You may also have to ensure that the M2_REPO classpath variable in the Eclipse settings points to your local .m2/repository.


From zero to Liferay portlet in less than 5 minutes (depending on your network connection)

Following this recipe, I can write a new portlet (for demo purposes) in a very short time!

1) Create the directory structure and download a fresh bundle distribution:

mkdir $HOME/liferay
mkdir $HOME/liferay/portlets
mkdir $HOME/liferay/bundles
cd $HOME/liferay/bundles
wget http://sunet.dl.sourceforge.net/project/lportal/Liferay%20Portal/6.0.6/liferay-portal-jetty-6.0.6-20110225.zip
unzip liferay-portal-jetty-6.0.6-20110225.zip

2) You may now want to run liferay: just run the script $HOME/liferay/liferay-portal-6.0/jetty-6.1.24/bin/run.sh

3) Create a new portlet project using the liferay archetype

cd $HOME/liferay/portlets
mvn archetype:generate \
-DarchetypeGroupId=com.liferay.maven.archetypes \
-DarchetypeArtifactId=liferay-portlet-archetype \
-DarchetypeVersion=6.0.6 \
-DgroupId=net.jr.testapp

You just have to modify the liferay.auto.deploy.dir property at the end of the generated pom.xml :


The portlet can be recompiled/deployed easily by running the following command :

mvn clean package liferay:deploy

4) The only thing needed to customize the portlet is to modify the main.js and view.jsp files…

5) More fancy things may be done by turning the project into an Eclipse project, and then importing it using Eclipse:

mvn eclipse:eclipse

It doesn’t take more than 5 minutes, counting the time needed to download the Liferay bundle!
