This route was stuck in OSPF:

[email protected]#show ip ospf route

Destination      Mask             Path_Cost  Type2_Cost  Path_Type
                                  2          0           Intra
Adv_Router       Link_State       Dest_Type  State  Tag  Flags
                                  Network    Valid  0    3000
   Paths  Out_Port  Next_Hop      Type  Arp_Index  State
   1      v234                    OSPF  65535      8a 00

But this router wasn’t even advertising the route!

[email protected]#sh ip ospf route

Destination      Mask             Path_Cost  Type2_Cost  Path_Type
                                  2          10          Type2_Ext
Adv_Router       Link_State       Dest_Type  State  Tag  Flags
                                  Ase        Valid  0    1800
   Paths  Out_Port  Next_Hop      Type  State
   1      v201                    OSPF  20 84
   2      v234                    OSPF  00 00

So I added a /29 route to a VE interface and then removed it to reset the routing table!

I copied this article because the site hosting it went offline and I thought it was very useful. I didn’t want to lose this info.

In the past, I’ve shared how we use HAProxy to help increase uptime and distribute load among any number of web servers.

HAProxy is an amazing tool, and I can’t thank the author, Willy Tarreau, enough for making it available and open source.

Of course, while HAProxy has proven to be extremely stable in production and we’ve never had an issue with it at all, it does present its own single point of failure. If your HAProxy server dies, everything that relies on HAProxy goes down with it. While I am not at all concerned about HAProxy itself crashing, I am concerned about a gamma ray hitting the CPU at just the wrong moment and nuking the whole thing.

Let’s take our infrastructure a level higher. Let’s add some redundancy on the HAProxy level to ensure that even if we nuke the front line HAProxy box, nobody’s the wiser.

We’re going to do that with a really cool tool called Pacemaker. My basic idea is that I’m going to take an IP address and a cluster of servers. Each server will run Pacemaker and be part of a Pacemaker cluster. I tell Pacemaker that I want my IP address and HAProxy to always be available, and Pacemaker will start and stop both the IP and HAProxy on the different nodes to ensure they are always available.

It’s actually pretty simple, but there’s nothing cooler than pinging an HA (high availability) IP, kicking over the server it’s sitting on and watching Pacemaker instantly move the services to a different server. In my testing, I only ever lost at most 1 packet as I was pinging an IP during switchover. That just makes me so happy inside.

Enough theory, let’s dive into how to make this wonder work.

Boring Setup Pre-Requisites

If you are on Debian (and have backports enabled), you just need to install corosync and pacemaker to get everything set up and going:

apt-get install corosync pacemaker -t lenny-backports

Run this on every node in your planned cluster.

Corosync is, from my understanding, the layer that communicates with all the different nodes. It ensures that each node has a valid copy of the configuration, and it helps Pacemaker know the status of the cluster. There are other choices for this layer, but Corosync seems to be the most actively developed and mature choice currently available.

Next, we need to make sure that we have some encryption keys for Corosync to communicate with all the nodes.

corosync-keygen
Only run this on one node. Copy this key (/etc/corosync/authkey) to all the other planned nodes, and ensure permissions are set correctly. (chown root:root /etc/corosync/authkey; chmod 400 /etc/corosync/authkey)
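The copy step can be scripted in one pass; a sketch (the node names lb2 and lb3 are placeholders for your own hosts):

```
for node in lb2 lb3; do
    scp /etc/corosync/authkey root@$node:/etc/corosync/authkey
    ssh root@$node "chown root:root /etc/corosync/authkey && chmod 400 /etc/corosync/authkey"
done
```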

Next, you’ll want to configure the Corosync config file. The example file (in /etc/corosync/corosync.conf.example) is a perfect starting point; you’ll probably only need to adjust the interface bindnetaddr value and the multicast address. In many cases, especially if you are hosting with another provider, you can’t use multicast. Fortunately, Corosync supports broadcast just fine.

You’ll need to do each of the following steps on each node. Here’s what works for me:

interface {
           ringnumber: 0
           bindnetaddr: 192.168.5.0   # your subnet ID
           broadcast: yes
           mcastport: 5405
}

Note that bindnetaddr needs to be your subnet ID (the network address) for broadcast to work correctly. (For some reason, I couldn’t find clear documentation on this point when I was setting up my cluster.)
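If you’re not sure what your subnet ID is, it’s just the node’s IP address ANDed with its netmask. A quick shell sketch (the address 192.168.5.17 and netmask here are made-up examples; substitute your own):

```shell
# Compute the subnet ID (network address) for bindnetaddr.
ip=192.168.5.17          # example node IP -- substitute your own
mask=255.255.255.0       # example netmask -- substitute your own

# Split both dotted quads into octets.
oldIFS=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS

# AND each octet pair together to get the network address.
subnet="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "bindnetaddr: $subnet"    # -> bindnetaddr: 192.168.5.0
```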

At the end, you’ll also want to add this section so corosync starts pacemaker:

service {
        name: pacemaker
        ver: 0
}

You’ll also need a log directory, or you’ll get a cryptic error like “parse error in config”:

mkdir /var/log/cluster/

Go ahead and start corosync on each node.

/etc/init.d/corosync start

With any luck, you now have your first Pacemaker high-availability cluster up and running! Hooray! You can verify the status of all your nodes by running crm_mon. If some of your nodes are not listed, go back and make sure everything is set up properly.

The Fun Stuff: Configuration

So now that we have our cluster up and running, we can play around with the configuration and tell it what we want to make highly available across all nodes. For that, we’re going to use the crm command. If you have ever used Cisco IOS commands, you will feel at home with crm; a lot of its navigational structure really reminds me of IOS.

So let’s create a new config!

Type ‘configure’ to enter configuration mode. verify helps make sure the config is valid, show will show you the current configuration, and help will show you the other commands you can run. Let’s make a new configuration and get cracking.

crm(live)configure# cib new config1

Config1 is just a name. We’re making a new configuration we can work on that won’t actually change anything in production until we commit it and switch live over to config1. This lets us tweak things and make complex dependency changes without breaking anything in the cluster.

Let’s go ahead and configure an IP address we want to make highly available. This IP should not already be assigned or set up on any existing hosts, and it needs to be routable to all the nodes in your cluster. If you are in a hosting environment, you will need to get an IP from your provider.

So with our planned HA IP in mind, it’s just a simple matter of giving it to Pacemaker:

crm(config1)configure# primitive failover-ip ocf:heartbeat:IPaddr2 params ip= cidr_netmask=32 op monitor interval=1s

Pretty straightforward: just give it the IP address, and tell it to monitor that IP once a second.
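For illustration, the full command might look like this (192.0.2.10 is just a placeholder address; substitute the HA IP you got from your provider):

```
crm(config1)configure# primitive failover-ip ocf:heartbeat:IPaddr2 \
        params ip="192.0.2.10" cidr_netmask="32" \
        op monitor interval="1s"
```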

Now we’ll make sure the config looks good, verify it, and commit it to live:

crm(config1)configure# show
crm(config1)configure# verify
crm(config1)configure# end
crm(config1)# cib use live
crm(live)# cib commit config1

Congratulations! You’ve just saved your configuration to all nodes in the cluster and made it live.

If you issue status, you should see that your IP address has started, and it will tell you on which node it started.

One thing you will probably want to do is increase the resource “stickiness”. By default, Pacemaker assumes the cost of switching a resource is 0. This means Pacemaker thinks there are no penalties for moving a resource around, and will do so any time a node goes down or becomes available. If you take down the node the IP was assigned to, Pacemaker automatically moves it to another node; when the original node comes back up, Pacemaker will go ahead and move the IP right back, even though nothing is wrong on the node it’s currently on. Since we probably don’t want services moving around unless it’s totally necessary, we set the resource stickiness so Pacemaker will only move a service when it has to:

configure rsc_defaults resource-stickiness=100

Next, we want to make HAProxy available with the IP address. If HAProxy fails on one node, we want Pacemaker to start it (with the right IP) on another node. Pacemaker will handle starting/stopping services/IP addresses on different nodes, but it’s our responsibility to ensure each node has HAProxy installed and the correct configuration.

We’ll use the LSB (Linux Standard Base) class for HAProxy. The LSB class covers the scripts in /etc/init.d. Note that most startup scripts, including the Debian startup script for HAProxy, are not fully LSB compatible. You will probably need to tweak the HAProxy startup script so it returns the right exit codes to Pacemaker and behaves properly. You can find out how to do that here.
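A quick way to spot-check whether your init script is behaving in an LSB-compatible way is to exercise it by hand and watch the exit codes. Per the LSB spec, start and stop should succeed (exit 0) even when the service is already in that state, and status should return 3 when the service is stopped. A sketch (assuming a Debian-style /etc/init.d/haproxy script):

```
/etc/init.d/haproxy start;  echo $?   # expect 0
/etc/init.d/haproxy status; echo $?   # expect 0 (running)
/etc/init.d/haproxy start;  echo $?   # expect 0 (redundant start is not an error)
/etc/init.d/haproxy stop;   echo $?   # expect 0
/etc/init.d/haproxy status; echo $?   # expect 3 (stopped)
/etc/init.d/haproxy stop;   echo $?   # expect 0 (redundant stop is not an error)
```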

In configure mode, with a new configuration, just create a new LSB primitive:

crm(config-2)configure# primitive haproxy lsb:haproxy op monitor interval="1s"

We will want to keep the failover IP and HAProxy together; it doesn’t do us any good if HAProxy runs on one machine but the IP address is on another. So we set the colocation to infinity:

crm(config-2)configure# colocation haproxy-with-public-IPs INFINITY: haproxy failover-ip

And we want to make sure that HAProxy is started after the IP address, so that we can bind to the IP:

crm(config-2)configure# order haproxy-after-IP mandatory: failover-ip haproxy

Do be careful when you set INFINITY on a colocation: if both resources can’t start, neither will start. I had an issue with my cluster where I had an error in my stunnel configuration on only one server, and I had set stunnel and HAProxy to prefer each other with an INFINITY preference. When stunnel couldn’t start, Pacemaker shut down HAProxy even though HAProxy was fine on its own. I switched the preference to 200, so Pacemaker knows I prefer HAProxy and stunnel to run together, but doesn’t consider it a failure if both can’t run for some reason.
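For reference, a softer pairing looks just like the INFINITY version but with a numeric score (the stunnel primitive name here is hypothetical; use whatever you named yours):

```
crm(config-2)configure# colocation stunnel-with-haproxy 200: stunnel haproxy
```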

Now commit your changes to live and your HAProxy setup will automatically distribute itself across several servers for much better uptime, and no single point of failure! Hooray!


You should always test your clustering setup and make sure it’s acting the way you expect. Rip out the power or ethernet cable, shut down the corosync service, mess up your HAProxy config and reboot. Test several different failure modes and make sure Pacemaker responds the way you expect it to. Make sure each node can start up all the necessary services, and make sure your config isn’t doing anything dumb that actually worsens your reliability rather than improving it.
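One simple test harness: ping the HA IP from a separate machine while you break things (192.0.2.10 is a placeholder for your HA IP):

```
ping 192.0.2.10                  # leave this running on another machine
/etc/init.d/corosync stop        # on the node currently holding the IP
crm_mon                          # on a surviving node: watch the resources move
```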

Then sleep like a baby at night because you have just removed a significant single point of failure! :)
