Panty-RAID1 -4- CentOS


So I am working on an older system today, CentOS 4.4 (kernel 2.6.18-93.cc4) to be quite specific, and I needed to add a RAID1 mirror to the existing filesystem because, for whatever reason, when this server was built everything went under a single / partition (why?!).  Well, it just goes to show you: don’t do that, especially if you are locally storing IMAP email!  Anyway, this should be a good tutorial on how to do this should you encounter a similar situation.

I made a remote backup of the /home and /var filesystems just in case… 🙂  Incidentally, those are the two filesystems we are working on here.
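
If you need a quick way to take that backup, something along these lines works; the target host and paths here are just placeholders I made up, so adjust to taste:

rsync -a /home/ backuphost:/backups/home/
rsync -a /var/ backuphost:/backups/var/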

First, (LOL) shut down the system and install the new hard disks…

If you are super-duper lucky (like me), your hardware will see the newly installed disks on restart; validate that this is also true for you…

dmesg | grep hd    (grep for whatever the /dev designation is for your drives)
ide0: BM-DMA at 0xf800-0xf807, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0xf808-0xf80f, BIOS settings: hdc:DMA, hdd:DMA
hda: WDC WD800BB-00JHC0, ATA DISK drive
hdc: WDC WD3200AAJB-00J3A0, ATA DISK drive <-- New drive
hdd: WDC WD3200AAJB-00J3A0, ATA DISK drive <-- New drive

Cool I see my 2 new 320GB disks… 🙂

On to fdisk… I am only going to show one disk, as both are identical for my task and the process is the same no matter what you are doing with your disks.

fdisk /dev/hdc
The number of cylinders for this disk is set to 38913.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/hdc: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start   End     Blocks      Id  System
/dev/hdc1        1       19456   156280288+  fd  Linux raid autodetect
/dev/hdc2        19457   38913   156288352+  fd  Linux raid autodetect

So this is the finished, written fdisk-ing.  I am going to assume you can figure this part out on your own: from the menu, select “n” to add a new partition and make it primary.  For my task I needed 2 equal-sized partitions, so I did that as illustrated above, giving me 2 147GB partitions to use on each disk:

Device Boot      Start   End     Blocks      Id  System
/dev/hdc1        1       19456   156280288+  fd  Linux raid autodetect
/dev/hdc2        19457   38913   156288352+  fd  Linux raid autodetect

Device Boot      Start   End     Blocks      Id  System
/dev/hdd1        1       19456   156280288+  fd  Linux raid autodetect
/dev/hdd2        19457   38913   156288352+  fd  Linux raid autodetect

A word of advice here… see the Id column? Make sure to set it to “fd” (fdisk’s “t” command) or your array will not come back when you reboot… fd is the Linux RAID autodetect partition type.
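
Rather than repeating the fdisk dance on the second disk, you can clone the partition table (fd type and all) from the first one. This is just a sketch assuming your disks are /dev/hdc and /dev/hdd as shown above; double-check which is the source and which is the destination before you press enter:

sfdisk -d /dev/hdc | sfdisk /dev/hdd
fdisk -l /dev/hdd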

Now create the arrays:

mdadm --create /dev/md0 --chunk=64 --level=raid1 --raid-devices=2 /dev/hdc1 /dev/hdd1
mdadm --create /dev/md1 --chunk=64 --level=raid1 --raid-devices=2 /dev/hdc2 /dev/hdd2

Make the file systems:

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
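
Optionally, you can label the new filesystems so /etc/fstab could refer to them by label like the existing entries do. The label names here are just examples I made up; I mount by device name below, so this is purely a nicety:

e2label /dev/md0 var-raid
e2label /dev/md1 home-raid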

Check the array(s):

cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdd2[1] hdc2[0]
      156288256 blocks [2/2] [UU]
md0 : active raid1 hdd1[1] hdc1[0]
      156280192 blocks [2/2] [UU]
      [=====>...............]  resync = 26.7% (41758400/156280192) finish=56.8min speed=33562K/sec

*NOTE – the resync may take a bit of time to complete…
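
If you want to keep an eye on the resync without retyping the command, something like this works (plain old watch from procps, refreshing every 5 seconds):

watch -n 5 cat /proc/mdstat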

Make the temporary mount points:

mkdir /home1 /var1

Now, this part is purely subjective, but I like to do it this way; you can do it however you like.

Edit /etc/fstab adding your new filesystems:

LABEL=/                 /                ext3    defaults        1 1
LABEL=/boot             /boot            ext3    defaults        1 2
/dev/md0                /var             ext3    defaults        1 1
/dev/md1                /home            ext3    defaults        1 1
none                    /dev/pts         devpts  gid=5,mode=620  0 0
none                    /dev/shm         tmpfs   defaults        0 0
none                    /proc            proc    defaults        0 0
none                    /sys             sysfs   defaults        0 0
LABEL=SWAP-hda2         swap             swap    defaults        0 0

Now mount them:

mount -a

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda3              72G   60G  8.4G  88% /
/dev/hda1             194M   12M  173M   7% /boot
/dev/md0              147G   31G  109G  22% /var1
/dev/md1              147G   11G  130G   8% /home
none                  982M     0  982M   0% /dev/shm

There is one more file to make sure you have, /etc/mdadm.conf, and it should look something like this:

cat /etc/mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=75a8bc67:d6c5a49f:9f889373:47899702
 devices=/dev/hdc2,/dev/hdd2
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f4aa0764:0cb7aa5f:6d175968:58c838dd
 devices=/dev/hdc1,/dev/hdd1

There is a neat trick to get this info…

mdadm --examine --scan >> /etc/mdadm.conf

Done!

I would suggest rebooting to ensure that everything comes back as expected…
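
After the reboot, a quick sanity check might look something like this; you want both arrays active with [UU] and the new filesystems mounted where you expect them:

cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1
df -h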


  1. #1 by gmconklin on May 27, 2010 - 2:35 pm

    You could also do this:
    mdadm --detail --scan >> /etc/mdadm.conf

    If you have already done the --examine, it may not produce any results… so in that case use the --detail option.
