Debian Wheezy DRBD primary/primary + Corosync + CLVM + OCFS2

How to install:
– DRBD in dual-primary mode,
– Corosync + Pacemaker,
– CLVM + OCFS2,
on Debian Wheezy.

I am going to use two servers, xbox-1 and xbox-2, both running Debian 7 (Wheezy). Each server has the following partition layout:
– /dev/sda1 and /dev/sda2 for boot and root,
– /dev/sda3, which will be replicated as /dev/drbd0.


If not specified otherwise all configurations should be made on both nodes.
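
Both nodes must be able to resolve each other by name. If you do not want to rely on DNS, pin the names in /etc/hosts (the addresses below are placeholders, substitute your own):

```
# /etc/hosts (placeholder addresses, use your own)   xbox-1   xbox-2
```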


1. DRBD

1.1 Install

apt-get install drbd8-utils

1.2 Configure

Put the following in /etc/drbd.conf (or split it under /etc/drbd.d/):

global {
 usage-count yes;
}

common {
 protocol C;

 handlers {
  pri-on-incon-degr "/usr/lib/drbd/; /usr/lib/drbd/; echo b > /proc/sysrq-trigger ; reboot -f";
  pri-lost-after-sb "/usr/lib/drbd/; /usr/lib/drbd/; echo b > /proc/sysrq-trigger ; reboot -f";
  local-io-error "/usr/lib/drbd/; /usr/lib/drbd/; echo o > /proc/sysrq-trigger ; halt -f";
 }

 startup {
  wfc-timeout 0;
  degr-wfc-timeout 120;
  become-primary-on both;
 }

 disk {
  on-io-error detach;
 }

 net {
  allow-two-primaries;   # required for primary/primary
  after-sb-0pri discard-zero-changes;
  after-sb-1pri discard-secondary;
  after-sb-2pri disconnect;
  rr-conflict call-pri-lost;
  max-buffers 8000;
  max-epoch-size 8000;
  sndbuf-size 0;
 }

 syncer {
  rate 100M;
  al-extents 3389;
 }
}


resource sda3 {
 device /dev/drbd0;
 disk /dev/sda3;
 meta-disk internal;
 on xbox-1 {
 }
 on xbox-2 {
 }
}
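
For reference, the on <host> { } stanzas normally carry each node's replication address; a sketch with placeholder addresses (7788 is the customary DRBD port, pick your own values):

```
resource sda3 {
 device /dev/drbd0;
 disk /dev/sda3;
 meta-disk internal;
 on xbox-1 {
  address;   # placeholder
 }
 on xbox-2 {
  address;   # placeholder
 }
}
```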

Prevent drbd from starting automatically (it will be started by corosync)

update-rc.d drbd disable

1.3 Prepare drbd resources

drbdadm create-md sda3

1.4 Start drbd

/etc/init.d/drbd start

On xbox-1 :

drbdadm -- --overwrite-data-of-peer primary sda3

At this point if you run drbd-overview you should see the nodes connected and synchronization started.
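
Once the initial synchronization has finished (or, thanks to become-primary-on both, automatically at the next start of the service), the second node can be promoted as well. On xbox-2:

```
drbdadm primary sda3
```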

2. Corosync and Pacemaker

2.1 Install

apt-get install corosync pacemaker openais

2.2 Configure corosync in /etc/corosync/corosync.conf (in my case I use unicast communication).


totem {
    version: 2
    token: 3000
    token_retransmits_before_loss_const: 10
    join: 60
    consensus: 3600
    vsftype: none
    max_messages: 20
    clear_node_high_bit: yes
    secauth: off
    threads: 0
    rrp_mode: active

    interface {
        ringnumber: 0
        member {
        }
        member {
        }
        # on xbox-1
        # on xbox-2
        mcastport: 5405
    }

    transport: udpu
}

amf {
    mode: disabled
}

service {
    ver:  0
    name: pacemaker
}

aisexec {
    user:  root
    group: root
}

logging {
    fileline: off
    to_stderr: no
    to_syslog: yes
    syslog_facility: daemon
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
        tags: enter|leave|trace1|trace2|trace3|trace4|trace6
    }
}
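
For completeness, with udpu each node is listed as a member and bindnetaddr points at the local network; a sketch of the interface section with placeholder addresses (substitute your own):

```
interface {
    ringnumber: 0
    member {
        memberaddr:
    }
    member {
        memberaddr:
    }
    bindnetaddr:    # the network of the local interface
    mcastport: 5405
}
```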

2.3 Start corosync

/etc/init.d/corosync start
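
A Debian detail: the corosync init script refuses to start unless it is explicitly enabled, so if nothing happens check /etc/default/corosync:

```
# /etc/default/corosync
START=yes
```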

At this point, if you run crm_mon (or crm status), you should see both nodes come online after a short while.

2.4 Some basic two-node cluster settings (all crm commands have to be run on one node only)

crm configure

property $id="cib-bootstrap-options" \
       stonith-enabled="false"
rsc_defaults $id="rsc-options"
op_defaults $id="op-options"
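
The values I use here are assumptions on my part, adjust them to your setup: with only two nodes quorum can never be won by a majority, so it is usually ignored, a little stickiness keeps resources from bouncing between nodes, and a generous default operation timeout avoids spurious failures:

```
property $id="cib-bootstrap-options" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
op_defaults $id="op-options" \
        timeout="240"
```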

3. Add drbd to cluster

crm configure
primitive p-drbd ocf:linbit:drbd \
        params drbd_resource="sda3" \
        op monitor interval="50" role="Master" timeout="30" \
        op monitor interval="60" role="Slave" timeout="30" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100"
ms ms-drbd p-drbd \
        meta master-max="2" clone-max="2" notify="true" interleave="true"

4. DLM (Distributed Lock Manager)

4.1 Install

apt-get install dlm-pcmk

4.2 Configure

crm configure
primitive p-dlm ocf:pacemaker:controld \
        op monitor interval="120" timeout="30" \
        op start interval="0" timeout="90" \
        op stop interval="0" timeout="100"


5. CLVM

5.1 Install

apt-get install clvm

Debian Wheezy is missing the clvm OCF resource agent (/usr/lib/ocf/resource.d/lvm2/clvmd), so I took it from the Ubuntu clvm package.

5.2 Configure

Disable automatic startup

update-rc.d clvm disable

In /etc/lvm/lvm.conf change

locking_type = 3

Cluster configuration

crm configure

primitive p-clvm ocf:lvm2:clvmd \
       params daemon_timeout="30" \
       op monitor interval="60" timeout="30" \
       op start interval="0" timeout="90" \
       op stop interval="0" timeout="100"

6. OCFS2

6.1 Install

apt-get install ocfs2-tools ocfs2-tools-pacemaker

6.2 Config

crm configure
primitive p-o2cb ocf:pacemaker:o2cb \
        op monitor interval="120" timeout="30" \
        op start interval="0" timeout="90" \
        op stop interval="0" timeout="100"

Disable automatic startup

update-rc.d ocfs2 disable
update-rc.d o2cb disable

7. Put everything together

Add the rest of the cluster configuration, which ties everything together:

crm configure
group g-lock p-dlm p-clvm p-o2cb
clone c-lock g-lock \
        meta globally-unique="false" interleave="true"
colocation col-drbd-lock inf: c-lock ms-drbd:Master
order ord-drbd-lock inf: ms-drbd:promote c-lock

At this point, if you run crm status, you should see all cluster resources up and running:

Online: [ xbox-1 xbox-2 ]

Master/Slave Set: ms-drbd [p-drbd]
     Masters: [ xbox-2 xbox-1 ]
Clone Set: c-lock [g-lock]
     Started: [ xbox-1 xbox-2 ]

and you should be able to create LVM volumes on one node and access them on the other.

8. Create LVM volumes and shared file system (all commands have to be run only on one node)

8.1 LVM stuff

pvcreate /dev/drbd0
vgcreate vg0 /dev/drbd0
lvcreate -n sharelv -L 10G vg0

If you run vgdisplay -v on the other node you should see vg0 and sharelv.

8.2 OCFS2

mkfs.ocfs2 /dev/vg0/sharelv

Create the mount point (this directory is needed on both nodes):

mkdir /sharedata

8.3 Cluster config

crm configure

primitive p-vg0 ocf:heartbeat:LVM \
        params volgrpname="vg0" \
        op monitor interval="60s" timeout="40" \
        op start interval="0" timeout="40" \
        op stop interval="0" timeout="40"
primitive p-sharefs ocf:heartbeat:Filesystem \
        params device="/dev/vg0/sharelv" directory="/sharedata" fstype="ocfs2" \
        op monitor interval="60s" timeout="60s" \
        op start interval="0" timeout="90s" \
        op stop interval="0" timeout="90s"
group g-sharedata p-vg0 p-sharefs
clone c-sharedata g-sharedata \
        meta globally-unique="false" interleave="true"
colocation col-lock-sharedata inf: c-sharedata c-lock
order ord-lock-sharedata inf: c-lock c-sharedata

At this point, if you run crm status, you should see all cluster resources up and running:

Online: [ xbox-1 xbox-2 ]

Master/Slave Set: ms-drbd [p-drbd]
     Masters: [ xbox-2 xbox-1 ]
Clone Set: c-lock [g-lock]
     Started: [ xbox-1 xbox-2 ]
Clone Set: c-sharedata [g-sharedata]
     Started: [ xbox-1 xbox-2 ]

and /dev/vg0/sharelv should be mounted read-write on /sharedata on both nodes.
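
A quick sanity check for the shared file system: write a file on one node and read it on the other (the file name is arbitrary):

```
xbox-1# touch /sharedata/hello
xbox-2# ls -l /sharedata/hello
```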


HTC Desire S S-OFF hboot 2.00.002

How to downgrade the hboot in order to be able to use

Most of the links in this post are not valid anymore. If you are interested in the subject, try

The main source for this post was an article on. You will need adb and fastboot; if you don't want to install the full Android SDK, you can download them from or from here.

1. Put the phone in debug mode:
– disable fastboot Settings -> Power -> Fastboot or Settings -> Applications -> Fastboot.
– enable USB debugging Settings -> Applications -> Development -> USB debugging.

2. Unlock the bootloader:
– you will need to login on If you don’t have an account create one.
– go to
– choose All Other Supported Models and follow the instructions.

3. Root and change version:
– start the phone in fastboot mode and connect the usb cable.
– run fastboot oem lock. Reboot the phone.
– download zergRush from or from here.
– extract the content of in the same folder as adb.exe.
– run
adb push zergRush /data/local/tmp
adb push misc_version /data/local/tmp
adb shell chmod 777 /data/local/tmp/zergRush
adb shell chmod 777 /data/local/tmp/misc_version
adb shell /data/local/tmp/zergRush

At this point the adb shell should close and restart with root permissions.
– run adb shell /data/local/tmp/misc_version -s 1.27.405.6.

4. Downgrade hboot:
– download RUU_Saga_HTC_Europe_1.28.401.1.
– run the RUU and follow the instructions on your PC screen.
– after 5 – 10 min the RUU will install Android 2.3.3, HBOOT 0.98.0000, S-ON.

5. Go to and follow the instructions here to S-OFF your phone.

Self signed certificate, fast and easy

Use certtool instead of openssl. It is less flexible but much more user-friendly.

1. Installation:
Certtool is part of GnuTLS. On Debian-based distributions you have to install the gnutls-bin package.

2. Create a private key:

# certtool -p --outfile server.key.pem

3. Generate the self signed certificate:

# certtool -s --load-privkey server.key.pem --outfile server.crt.pem

You will get a prompt to enter the various pieces of information required for a certificate. For a server certificate you only need to fill in the common name with the server name, and the validity period.

For some applications, like openvpn, you may need your own certificate authority (CA). These are the steps required:
– create a CA key
– create a self signed certificate for the CA. Say yes to the questions: “Does the certificate belong to an authority?” and “Will the certificate be used to sign other certificates?”
– create a key
– create a certificate using the CA key, CA certificate and the above key. For openvpn the common name is the user name.

# certtool -p --outfile ca.key.pem
# certtool -s --load-privkey ca.key.pem --outfile ca.crt.pem
# certtool -p --outfile user.key.pem
# certtool -c --load-privkey user.key.pem --load-ca-privkey ca.key.pem --load-ca-certificate ca.crt.pem --outfile user.crt.pem
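
If you want to script this instead of answering the prompts, certtool can read the answers from a template file passed with --template; a minimal sketch (the field values below are made up, adjust them):

```
# user.tmpl: answers for certtool, instead of the interactive prompts
cn = "someuser"
expiration_days = 365
```

Then add --template user.tmpl to the certtool commands above.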

Cisco vpn client on Linux

These are the steps required to install the Cisco vpn client ver. on Ubuntu 11.10 (it should work with any kernel above 2.6.38):

1. Download the necessary software:

– Cisco vpn client.
– patch 01. Get details about this patch here.
– patch 02. Get details here.
– optionally patch 03. This is a patch I wrote; it enables DKMS for building and installing the cisco_ipsec kernel module.

2. Apply patches:

# tar -xzvf vpnclient-linux-x86_64-
# cd vpnclient
# patch -p1 < ../vpnclient-01-a-fseitz.patch
# patch -p1 < ../vpnclient-02-joergensen.patch
# patch -p1 < ../vpnclient-03-dkms.patch

3. Install:

# ./vpn_install