SOFTware-pantyRAID


OK… so I completed a build yesterday and, being the great engineer that I am (LOL), I didn't add the final disk set to the VM for the bulk of the filesystem the native application will use. I figured it would be easy enough to create the RAID array without clobbering the whole system and starting over… I was right, and it was a cool learning experience to boot. I have sort of (and I am really stressing that) turned it into an interactive "ghetto-script", but I wrote that part AFTER I did everything manually… so be forewarned.
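Quick aside before the script: since I bolted the disks onto a VM that was already built, the guest has to actually see them before any of this works. If you hot-added them while the VM was running, a SCSI bus rescan usually does the trick (host0 here is just an example, yours may differ):

# Hypothetical example: ask the kernel to rescan the first SCSI host for new disks
echo "- - -" > /sys/class/scsi_host/host0/scan
cat /proc/partitions     # the new disks should show up now

Anyway, on to the script: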

#!/bin/bash
#
# Create a RAID 1 array from (2) new disks added to the system
#
# Determine what the new disks are (if you do not already know)
cat /proc/partitions
echo "OK, your disk choices are listed above..."
echo "Select the first disk, press [ENTER]"
read disk1
echo "Now select the 2nd disk, press [ENTER]"
read disk2
echo "Just to confirm, these are the 2 disks you selected: $disk1 & $disk2, enter [Y/N]"
read yesorno
#
if [[ $yesorno == "Y" || $yesorno == "y" ]]; then
	echo "Great, let's move on"
else
	echo "OK, let's start again..."
	echo "ready?"; sleep 2
	exec bash "$0"	# restart this script from the top
fi
#
echo "OK, there may be some interaction here"
echo "Also keep in mind that this formats the whole of each selected disk"
for i in $disk1 $disk2; do mkfs -t ext3 /dev/$i; done
echo "OK, the filesystems have been created. Let's create the array now"
mdadm --detail --scan
echo "What name would you like to use for your md device? i.e. md4, md5, etc..."
echo "Select something NOT listed above... :)"
read mdname
echo "Also, I need to know what RAID level you desire (0=stripe, 1=mirror, etc...)"
read Rlevel
# NOTE: this assumes a first partition already exists on each disk
# (sdc1/sdd1 style); see the partitioning sketch after the script if not
mdadm --create /dev/$mdname --level=$Rlevel --raid-devices=2 /dev/${disk1}1 /dev/${disk2}1
# Make the ext3 filesystem on the new RAID device now
mkfs -t ext3 /dev/$mdname

# Record the array in /etc/mdadm.conf so it assembles at boot
uuid=`mdadm --detail /dev/$mdname | grep UUID | awk '{print $3}'`
echo "ARRAY /dev/$mdname level=raid$Rlevel num-devices=2 uuid=$uuid" >> /etc/mdadm.conf
# OR the same thing as a single line (commented out so it does not run twice):
# echo "ARRAY /dev/$mdname level=raid$Rlevel num-devices=2 uuid=`mdadm --detail /dev/$mdname | grep UUID | awk '{print $3}'`" >> /etc/mdadm.conf
echo "OK, what are you going to mount this new array on?"
read mountpt
echo "/dev/$mdname                /$mountpt                      ext3    defaults        1 2" >> /etc/fstab
mount -a
df -h
#
#
# EOF
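One thing the script glosses over: the mdadm --create line expects a first partition on each new disk (the sdc1/sdd1 style you'll see in the notes below). If your new disks are completely blank, lay down a single full-disk partition on each one first. Here's a rough sketch using the old piped-fdisk trick (sdc/sdd are placeholders; run fdisk interactively if you'd rather see what it's doing):

# Hypothetical example: one full-size partition per new disk, type fd (Linux raid autodetect)
for d in sdc sdd; do
	echo -e "n\np\n1\n\n\nt\nfd\nw" | fdisk /dev/$d
done
cat /proc/partitions    # sdc1 and sdd1 should now exist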

Beginning of the notes section of the process:
for i in c d; do mkfs -t ext3 /dev/sd$i; done
mdadm:

[root@cent55vm ~]# cat /proc/partitions
major minor  #blocks  name

8     0   15728640 sda
8     1     104391 sda1
8     2   10498477 sda2
8     3    2562367 sda3
8     4    2562367 sda4
8    16   15728640 sdb
8    17     104391 sdb1
8    18   10498477 sdb2
8    19    2562367 sdb3
8    20    2562367 sdb4
8    32   12582912 sdc  <-- NEW Disk
8    48   12582912 sdd  <-- NEW Disk
9     0   10498368 md0
9     2    2562240 md2
9     1    2562240 md1
9     3     104320 md3

[root@cent55vm ~]# for i in c d; do mdadm --query /dev/sd$i; done
/dev/sdc: is an md device which is not active
/dev/sdc: No md super block found, not an md component.
/dev/sdd: is not an md array
/dev/sdd: No md super block found, not an md component.
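Both disks came back clean (no md superblock), so there was nothing to undo here. If either one had previously been a member of some other array, the stale metadata could be wiped first, for example (placeholder device, and obviously destructive):

mdadm --zero-superblock /dev/sdc1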

[root@cent55vm ~]# mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/sd[cd]1
[root@cent55vm ~]# mdadm --stop --scan /dev/md5
mdadm: stopped /dev/md5
[root@cent55vm /]# mkfs -t ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1572864 inodes, 3144688 blocks
157234 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3221225472
96 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
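On that last message: if you'd rather not have the periodic fsck on a big array, both the mount-count and interval checks can be switched off (substitute your own md device):

tune2fs -c 0 -i 0 /dev/md5    # 0 = disable mount-count and time-based checks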
[root@cent55vm ~]# mdadm --query /dev/md5
/dev/md5: 11.100GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md5: No md super block found, not an md component.

[root@cent55vm ~]# cat /etc/mdadm.conf

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=d8f8e0f1:caa5c290:62f0c003:e5ed749b
ARRAY /dev/md3 level=raid1 num-devices=2 uuid=ff5e4f19:159926bd:d6f963ca:e2b5dd18
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=fdddc210:d340c10e:abec408b:83be0dfe
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=9a79ae00:c847b003:d1212b6b:e1698b20
[root@cent55vm ~]# mdadm --detail /dev/md5|grep UUID
UUID : 746ec738:0867caf1:bcb45960:2e9b2dde
[root@cent55vm ~]# echo "ARRAY /dev/md5 level=raid1 num-devices=2 uuid=746ec738:0867caf1:bcb45960:2e9b2dde" >> /etc/mdadm.conf
[root@cent55vm /]# mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Tue Sep 21 09:18:15 2010
Raid Level : raid1
Array Size : 12578752 (12.00 GiB 12.88 GB)
Used Dev Size : 12578752 (12.00 GiB 12.88 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent

Update Time : Tue Sep 21 09:59:58 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 746ec738:0867caf1:bcb45960:2e9b2dde
Events : 0.2

Number   Major   Minor   RaidDevice State
0       8       33        0      active sync   /dev/sdc1
1       8       49        1      active sync   /dev/sdd1
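That's both members active and in sync. I didn't capture it in this session, but while a freshly created mirror is doing its initial resync you can watch the progress from /proc/mdstat:

cat /proc/mdstat             # shows the rebuild/resync progress for each array
watch -n 5 cat /proc/mdstat  # or keep an eye on it until it finishes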
