Posts Tagged LINUX

CIFS it is…


This is the final, fully tested and functional remote backup script for Linux… finally :)

#!/bin/bash
# A straightforward system backup script
#
LOGBASE=/var/log/backup/log
BACKUP_ROOT_DIR="a/facts77 a/can"                       ## Backup dirs; do not prefix with /
NOW=$(date +"%a")                                       ## Get today's day of the week
TSTAMP=$(date +"%l:%M:%S")                              ## Get time stamp H:M:S
TDATE=$(date -I)                                        ## Get today's date
TAPE="/oracle55vm_backup"                               ## Backup device (CIFS mount point)
TAR_ARGS=""                                             ## Extra tar args (e.g. exclude options)
EXCLUDE_CONF=/root/.backup.exclude.conf                 ## Named file for file exclusion
LOGFILE=$LOGBASE/$TDATE.backup.log                      ## Backup Log file
FILELIST=$LOGBASE/$TDATE.backup.file-listing.log        ## Backup Log file list
UNAME="xxx"
PWORD="xxxXXXX"
SYSTEM=$(uname -n | cut -c 1-10)
# Path to binaries
TAR=/bin/tar
MKDIR=/bin/mkdir
#
full_backup(){
local old=$(pwd)
cd /
# Mount the samba destination
mount.cifs //bufvmfacts01/G/oracle55vm_backup $TAPE -o username=$UNAME,password=$PWORD
# Search the directory for files older than 7 days and delete them
find /oracle55vm_backup -type f -mtime +7|xargs -r rm -f
# Run the backup
tar -zcvf $TAPE/$SYSTEM.bak.`date -I`.tgz $BACKUP_ROOT_DIR # gzipping these
cd $old
}
# Make sure all dirs exist
verify_backup_dirs(){
local s=0
for d in $BACKUP_ROOT_DIR
do
if [ ! -d /$d ];
then
echo "Error : /$d directory does not exist!"
s=1
fi
done
# if not; just die
[ $s -eq 1 ] && exit 1
}
# Make some kind of status report
report_backup_info(){
touch $LOGBASE/$TDATE.backup.file-listing.log
cd $TAPE
echo " "
echo "                        **** Backup Report ****"
echo "                        ****    $TDATE    ****"
echo " --------------------------------------------------------------------------------- "
echo " ################################################################################# "
echo " _________________________________________________________________________________ "
echo " "
echo " "
echo "  Backup start time: $TSTAMP"
echo "  Operating System: `cat /etc/redhat-release`"
echo " "
echo "  Size of the complete archive: `tar -ztvf $SYSTEM.bak.$TDATE.tgz|wc -c` Bytes"
echo "  Size of the logged archive:   `cat $FILELIST|wc -c` Bytes"
echo " "
echo "  File count of the completed archive: `tar -ztvf $SYSTEM.bak.$TDATE.tgz|wc -l` Files"
echo "  File count of the logged archive:    `cat $FILELIST|wc -l` Files"
echo " "
echo "  Remote CIFS Directory Listing:"
ls -lh
echo " "
echo "  Disk Summary:"
df -h
echo " "
echo " _________________________________________________________________________________ "
echo "                                                                                   "
echo " ################################################################################# "
echo " --------------------------------------------------------------------------------- "
echo " "
cd -
} > $LOGFILE 2>&1
#
#
# Clean Up
clean_up(){
cd /
umount $TAPE # unmount the cifs mount
# Email the report
mail -s "System Backup $SYSTEM" gconklin@proserve-solutions.com < $LOGFILE
}
#
#
#### MAIN ####
#
# Make sure the log dir exists
[ ! -d $LOGBASE ] && $MKDIR -p $LOGBASE
#
# Verify dirs
verify_backup_dirs
#
#
# Okay let us start backup procedure
# If it is Monday-Friday make a full backup;
# Weekend no backups
full_backup > $FILELIST 2>&1
#
#
# Make the simple report
report_backup_info
#
# Call the Clean UP function
clean_up
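If you want the script to actually honor that exclude file and run itself on the weekday schedule mentioned above, something like this could be bolted on. This is just a sketch: the exclude patterns and the /root/bin/backup.sh path are placeholders, so adjust them to your layout.

# Sample /root/.backup.exclude.conf -- one pattern per line, relative to / (examples only)
cat > /root/.backup.exclude.conf << 'EOF'
a/facts77/tmp
a/can/cache
EOF
# In the script, wire the file into tar via TAR_ARGS, e.g.:
#   TAR_ARGS="--exclude-from=$EXCLUDE_CONF"
#   tar -zcvf $TAPE/$SYSTEM.bak.`date -I`.tgz $TAR_ARGS $BACKUP_ROOT_DIR
#
# Schedule a run at 01:30 Monday through Friday (script path is hypothetical)
echo "30 1 * * 1-5 root /root/bin/backup.sh" >> /etc/crontab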



Pretty Linux?


So what do you think? Is it pretty? LOL :)

I guess I should have added how to actually do this… Here is the quick-n-dirty way:

Add the following to the bottom of your .bashrc file for whatever user:

#
alias ls='ls --color'
LS_COLORS='di=1;96:fi=0:ln=31:pi=5:so=5:bd=5:cd=5:or=31:mi=0:ex=35:*.rpm=94:*.tar=92:*.sh=32:*.log=91:*.gz=93:*.tgz=93'
export LS_COLORS
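To pick the change up in your current shell (and to see the full set of defaults GNU ls ships with), you can do something like this:

# Re-read .bashrc in the running shell
source ~/.bashrc
# Dump the default color database for reference
dircolors -p | less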

Here is a quick legend for the color associations:

di = directory
fi = file
ln = symbolic link
pi = fifo file
so = socket file
bd = block (buffered) special file
cd = character (unbuffered) special file
or = symbolic link pointing to a non-existent file (orphan)
mi = non-existent file pointed to by a symbolic link (visible when you type ls -l)
ex = file which is executable (ie. has ‘x’ set in permissions)

0   = default colour
1   = bold
4   = underlined
5   = flashing text
7   = reverse field
31  = red
32  = green
33  = orange
34  = blue
35  = purple
36  = cyan
37  = grey
40  = black background
41  = red background
42  = green background
43  = orange background
44  = blue background
45  = purple background
46  = cyan background
47  = grey background
90  = dark grey
91  = light red
92  = light green
93  = yellow
94  = light blue
95  = light purple
96  = turquoise
100 = dark grey background
101 = light red background
102 = light green background
103 = yellow background
104 = light blue background
105 = light purple background
106 = turquoise background

Also, you can combine more than one option per directive like this… *.log=91;1;42 which would give you bold, light-red text on a green background for your .log files.
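If you want to play with a directive before committing it to .bashrc, you can set LS_COLORS for a single command; a quick throwaway test (the *.log pattern is just an example):

# Bold, light-red text on a green background for *.log files, this command only
LS_COLORS='*.log=91;1;42' ls --color /var/log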



SOFTware-pantyRAID


OK… so I completed a build yesterday and of course, being the great engineer that I am (LOL), I didn't add the final disk set to the vm for the bulk of the filesystem to be used by the native application. I figured it would be easy enough to create the RAID array without clobbering the whole system and starting over… I was right… and it was a cool learning experience to boot. I have sort-of (and I am really stressing that) created it as an interactive "ghetto-script," but I wrote that part AFTER I did everything manually… so be forewarned.

#!/bin/bash
#
# Create a RAID 1 array from (2) new disks added to the system
#
# Determine what the new disks are (If you do not already know)
cat /proc/partitions
echo "OK your disk choices are listed above..."
echo "Select the first disk, press [ENTER]"
read disk1
echo "Now select the 2nd disk, press [ENTER]"
read disk2
echo "Just to confirm, these are the 2 disks you selected: $disk1 & $disk2, enter [Y/N]"
read yesorno
#
if [[ $yesorno == "Y" || $yesorno == "y" ]]; then
echo "Great, let's move on";
else echo "OK, let's start again...";
echo "ready?"; sleep 2;
exec "$0"   # restart this script from the top
fi
#
echo "OK, there may be some interaction here"
echo "Also keep in mind that this will use each disk whole (no partition table)"
echo "OK, let's create the array now"
mdadm --detail --scan
echo "What name would you like to use for your md device? i.e. md4, md5, etc..."
echo "Select something NOT listed above... :)"
read mdname
echo "Also, I need to know what RAID level u desire (0=stripe, 1=mirror, etc...)"
read Rlevel
mdadm --create /dev/$mdname --level=$Rlevel --raid-devices=2 /dev/$disk1 /dev/$disk2
# Make the ext3 filesystems on the new RAID device now
mkfs -t ext3 /dev/$mdname

uuid=$(mdadm --detail /dev/$mdname | grep UUID | awk '{print $3}')
echo "ARRAY /dev/$mdname level=raid$Rlevel num-devices=2 uuid=$uuid" >> /etc/mdadm.conf
# OR do it in one shot (use one of the two, not both):
# echo "ARRAY /dev/$mdname level=raid$Rlevel num-devices=2 uuid=$(mdadm --detail /dev/$mdname | grep UUID | awk '{print $3}')" >> /etc/mdadm.conf
echo "OK what r u going to mount this new array on"
read mountpt
echo "/dev/$mdname                /$mountpt                 ext3    defaults        1 2" >> /etc/fstab
mount -a
df -h
#
#
# EOF
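Keep in mind the new mirror resyncs in the background after it is created; a couple of quick checks before you trust it (using md5 as an example device name):

# Watch the resync progress
cat /proc/mdstat
# Or poll it every 5 seconds until the rebuild finishes
watch -n 5 cat /proc/mdstat
# Confirm both members are active and the array is clean
mdadm --detail /dev/md5 | grep -E 'State|active sync'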

Beginning of the notes section from the manual process:
for i in c d; do mkfs -t ext3 /dev/sd$i; done
mdadm:

[root@cent55vm ~]# cat /proc/partitions
major minor  #blocks  name

8     0   15728640 sda
8     1     104391 sda1
8     2   10498477 sda2
8     3    2562367 sda3
8     4    2562367 sda4
8    16   15728640 sdb
8    17     104391 sdb1
8    18   10498477 sdb2
8    19    2562367 sdb3
8    20    2562367 sdb4
8    32   12582912 sdc <-- NEW Disk
8    48   12582912 sdd <-- NEW Disk
9     0   10498368 md0
9     2    2562240 md2
9     1    2562240 md1
9     3     104320 md3

[root@cent55vm ~]# for i in c d; do mdadm --query /dev/sd$i; done
/dev/sdc: is an md device which is not active
/dev/sdc: No md super block found, not an md component.
/dev/sdd: is not an md array
/dev/sdd: No md super block found, not an md component.

[root@cent55vm ~]# mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/sd[cd]1
[root@cent55vm ~]# mdadm --stop --scan /dev/md5
mdadm: stopped /dev/md5
[root@cent55vm /]# mkfs -t ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1572864 inodes, 3144688 blocks
157234 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3221225472
96 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@cent55vm ~]# mdadm --query /dev/md5
/dev/md5: 11.100GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md5: No md super block found, not an md component.

[root@cent55vm ~]# cat /etc/mdadm.conf

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=d8f8e0f1:caa5c290:62f0c003:e5ed749b
ARRAY /dev/md3 level=raid1 num-devices=2 uuid=ff5e4f19:159926bd:d6f963ca:e2b5dd18
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=fdddc210:d340c10e:abec408b:83be0dfe
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=9a79ae00:c847b003:d1212b6b:e1698b20
[root@cent55vm ~]# mdadm --detail /dev/md5|grep UUID
UUID : 746ec738:0867caf1:bcb45960:2e9b2dde
[root@cent55vm ~]# echo "ARRAY /dev/md5 level=raid1 num-devices=2 uuid=746ec738:0867caf1:bcb45960:2e9b2dde" >> /etc/mdadm.conf
[root@cent55vm /]# mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Tue Sep 21 09:18:15 2010
Raid Level : raid1
Array Size : 12578752 (12.00 GiB 12.88 GB)
Used Dev Size : 12578752 (12.00 GiB 12.88 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent

Update Time : Tue Sep 21 09:59:58 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 746ec738:0867caf1:bcb45960:2e9b2dde
Events : 0.2

Number   Major   Minor   RaidDevice State
0       8       33        0      active sync   /dev/sdc1
1       8       49        1      active sync   /dev/sdd1



WVC-min… all 3-in-1


I used to be a pure command-line junkie… maybe that was just immaturity as an admin… thinking I was cool because I didn't rely on any type of GUI… Well, I have moved on from that thought process and I use a GUI quite a bit now… well, a lot more than I used to anyway… so I thought I would share the 9-liner that I use to get it all running…

Oh, and the pic here is from my son… the artist's rendering of "Super Diaper Baby." I felt it appropriate since I just took mine off :)

#!/bin/bash
# Virtualmin, Webmin & Cloudmin Installation
yum install -y wget
cd /usr/src
wget http://software.virtualmin.com/gpl/scripts/install.sh
sh install.sh
wget http://cloudmin.virtualmin.com/gpl/scripts/cloudmin-gpl-redhat-install.sh
sh cloudmin-gpl-redhat-install.sh
# END
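The installers are interactive and take a while; once they finish, a quick sanity check that Webmin/Virtualmin actually came up (10000 is the default Webmin port, and Cloudmin rides the same Webmin instance):

# Is Webmin listening on its default port?
netstat -tlnp | grep :10000
# Grab the login page headers (self-signed cert, hence -k)
curl -k -I https://localhost:10000/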



20 lines or less to NFS !


I was looking through the Linux Magazine links on Facebook again and came across another nifty little topic that is fairly usable in almost every scenario: NFS. I thought it was funny that I found several blog postings with lots of replies saying the steps outlined didn't work! I know I am a stickler, but see, I told you that was the case… this one is functional on CentOS v5.5… So in less than 20 lines, you can share all day on your Linux boxes…

# Server-side:
yum install -y nfs-utils nfs-utils-lib
echo "/home/software 192.168.11.141(rw,sync)" >> /etc/exports
echo "/home/scripts 192.168.11.141(rw,sync)" >> /etc/exports
echo "portmap: 192.168.11.0/255.255.255.0" >> /etc/hosts.allow
exportfs -a -v
for i in nfs portmap; do chkconfig $i on; done
for i in tcp udp; do iptables -A INPUT -p $i -m $i -m multiport --dports 1110,2049 -j ACCEPT; done
iptables-save > /etc/sysconfig/iptables
service portmap start
service nfs start
#
# Client-side:
service portmap start
chkconfig portmap on
cd /
mkdir fx6-share fx6-scripts
mount 192.168.11.64:/home/software /fx6-share
# Make it stick:
echo "192.168.11.64:/home/software /fx6-share         nfs     defaults        0 0" >> /etc/fstab
echo "192.168.11.64:/home/scripts /fx6-scripts        nfs     defaults        0 0" >> /etc/fstab
mount -a
# EOF
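A few lines to verify the shares actually work before you bet on those fstab entries; run the first two from the client and the last one on the server:

# From the client: what is the server exporting to us?
showmount -e 192.168.11.64
# Did the mount really come up as NFS?
mount | grep nfs
# On the server: are the RPC services registered?
rpcinfo -p | grep -E 'portmapper|mountd|nfs'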



Nagios Core 3.2.1 “Insta-Install”


I think I already posted something here about Nagios… but maybe not, as I was doing a test install last week following a DEMO of ScienceLogic EM7 appliances and I couldn't find any easy steps… so this one may have slipped through the cracks… I have added the script I just made below… This does nothing more than make Nagios usable; you will need to do all of the customization on your own… I also made a nice vm, but since I am a 'cheapo' on here it is too big to host… I will try to find a free host and put the link up here, as that would save even more time if you plan to use it as a vm guest :)

#!/bin/bash
cd /home
mkdir software
cd  software/
mkdir nagios
cd nagios/
wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.1.tar.gz
mkdir plugins
cd plugins
wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.14.tar.gz
cd ..
mkdir addons
cd addons
for i in nrpe-2.12.tar.gz nsca-2.7.2.tar.gz ndoutils-1.4b9.tar.gz; do wget http://prdownloads.sourceforge.net/sourceforge/nagios/$i; done
cd ..
yum install -y gcc glibc glibc-common httpd php gd gd-devel
/usr/sbin/useradd -m nagios
passwd nagios
/usr/sbin/groupadd nagcmd
/usr/sbin/usermod -a -G nagcmd nagios
/usr/sbin/usermod -a -G nagcmd apache
tar xzf nagios-3.2.1.tar.gz
cd nagios-3.2.1
./configure --with-command-group=nagcmd
make all
make install
make install-init
make install-config
make install-commandmode
sed -i 's/nagios@localhost/networksupport@proserve-solutions.com/' /usr/local/nagios/etc/objects/contacts.cfg
make install-webconf
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
service httpd restart
cd ../plugins
tar xzf nagios-plugins-1.4.14.tar.gz
cd nagios-plugins-1.4.14
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install
chkconfig --add nagios
chkconfig nagios on
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
service nagios start
getenforce # just checks your selinux status
# EOF
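One thing to note: the script pulls down the addons (NRPE, NSCA, NDOUtils) but never builds them. If you also want the check_nrpe plugin on this host, something along these lines should do it; this is only a sketch, and the agent/xinetd setup on the remote hosts is left out:

cd /home/software/nagios/addons
tar xzf nrpe-2.12.tar.gz
cd nrpe-2.12
# NRPE needs the SSL headers to build
yum install -y openssl-devel
./configure --with-nrpe-user=nagios --with-nrpe-group=nagios
make all
# Installs check_nrpe into /usr/local/nagios/libexec
make install-plugin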



Penile-extendr… Oh wait, sorry… vgextendr…


So you should know that I am all about 'quick-n-dirty' and this one is the epitome of that statement… I did this one following some preliminary steps in VMware to add another virtual disk to the settings for this host. I added a 12G Persistent SCSI vdisk to work with… I am assuming you know how to do that… (Edit 'Virtual Machine Settings', 'Add…', 'Add Harddisk', 'Create a new virtual disk', etc…) So make sure you have done that before proceeding or of course nothing will work :)

cat /proc/partitions
major minor  #blocks  name

8     0   12582912 sda
8     1     104391 sda1
8     2   12474472 sda2
8    16   12582912 sdb <-- This is the newly created vdisk, based on size and order…
253     0    2850816 dm-0
253     1    3047424 dm-1
253     2    2097152 dm-2
253     3    4128768 dm-3
/usr/sbin/pvcreate /dev/sdb
vgdisplay|grep Free
#  Free  PE / Size       10 / 320.00 MB
vgextend VolGroup00 /dev/sdb
vgdisplay|grep Free
#  Free  PE / Size       393 / 12.28 GB <-- Oooo… lots bigger now :)
lvextend -l +93 /dev/mapper/VolGroup00-LogVol00   # +93 just to make a nice round number in the remainder…
resize2fs /dev/mapper/VolGroup00-LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/mapper/VolGroup00-LogVol00 to 1474560 (4k) blocks.
The filesystem on /dev/mapper/VolGroup00-LogVol00 is now 1474560 blocks long
vgdisplay|grep Free
Free  PE / Size       300 / 9.38 GB
EOFarce…
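And a couple of after-the-fact sanity checks once the resize finishes (same device names as above):

# Confirm / picked up the new space
df -h /
# Show the logical volume sizes
lvs
# And what is still free in the volume group
vgdisplay | grep Free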

