Dec 2, 2009

Networking : network isolation with iptables and brctl


This is one of the first problems I had to solve in my career : the goal is to isolate part of an existing network, for whatever reason. Let's say your network is 192.168.0.0/16 and you cannot isolate it with VLANs (either you don't have compliant equipment, or your architecture doesn't allow it, ...).

The solution is to use a Linux bridge and filter packets with iptables.

Have a look at the graphic at the beginning of this post : this is what I'll implement with the following scripts. So let's suppose you want to isolate a group of client machines with their own servers (UNIX and Windows servers).
Considering this, there are three things to do :

  • Give admin machines access to the servers and client machines
  • Give the isolated machines access to the internet, e-mail, FTP, etc.
  • Give machines outside the isolated network access to some services (license, Samba, ...)

In order to do that, prepare a machine with 2 Ethernet cards and install any distro you want on it (I like CentOS, but any Debian-like, Suse or other Linux is fine too). Once this is installed, verify that your 2 Ethernet controllers are recognized :

lsmod
lspci
ifconfig -a

The ifconfig -a output should show 2 Ethernet interfaces (usually eth0 and eth1); if not, load the correct Linux module for your network card !

Now, install bridge utilities :
apt-get install bridge-utils #on Debians
yum install bridge-utils #on RHEL / CentOS / Fedora

Let's configure the bridge now. Here is the script you'll have to put in /etc/init.d and link from /etc/rc5.d/ according to your distro (run the runlevel command to check your current runlevel, and see your distro's documentation for running a script at startup).
#!/bin/sh
#
# Start and stop Network bridge
#
case "$1" in
start)
ifconfig eth0 0.0.0.0 promisc
ifconfig eth1 0.0.0.0 promisc

brctl addbr pont
brctl addif pont eth0
brctl addif pont eth1

ifconfig pont 192.168.231.50 netmask 255.255.255.0
route add default gw 192.168.231.254
;;
stop)
brctl delif pont eth0
brctl delif pont eth1
brctl delbr pont
;;
stat)
brctl showstp pont
;;
*)
echo "Usage : /etc/init.d/bridge start | stop | stat"
;;
esac

We configure the bridge with the IP address 192.168.231.50 and add our 2 Ethernet cards to it.

Now that the bridge is up and running, let's focus on packet filtering. As I said, iptables will help us do that, as long as we know the source IP, destination IP, protocol (tcp, udp, ...) and port. Here is the firewalling script, to be put in /etc/init.d :

#!/bin/sh
#
# Start and stop Firewall
#
case "$1" in
start)
PATH="/usr/sbin:$PATH"

#We flush tables
iptables -F
#We erase all user tables
iptables -X

#Default rules : Deny all access
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

#Let's log what goes through our firewall
iptables -t filter -A INPUT -j LOG --log-level notice
iptables -t filter -A OUTPUT -j LOG --log-level notice
iptables -t filter -A FORWARD -j LOG --log-level notice

#Admin machines, access the firewall, and UNIX and Windows servers
awk '{print $1}' /etc/machines_admin | while read ligne
do
#We accept connections from admin machines to our firewall
iptables -A INPUT --source $ligne -p tcp --dport ssh -j ACCEPT
iptables -A OUTPUT --destination $ligne -p tcp --sport ssh -j ACCEPT
#Connections from admin machines to UNIX server : ping and SSH
iptables -A FORWARD --source $ligne --destination 192.168.100.102 -p icmp -j ACCEPT
iptables -A FORWARD --source 192.168.100.102 --destination $ligne -p icmp -j ACCEPT
iptables -A FORWARD --source $ligne --destination 192.168.100.102 -p tcp --dport ssh -j ACCEPT
iptables -A FORWARD --source 192.168.100.102 --destination $ligne -p tcp --sport ssh -j ACCEPT
#Connections from admin machines to Windows server : ping and Dameware remote control
iptables -A FORWARD --source $ligne --destination 192.168.100.100 -p icmp -j ACCEPT
iptables -A FORWARD --source 192.168.100.100 --destination $ligne -p icmp -j ACCEPT
iptables -A FORWARD --source $ligne --destination 192.168.100.100 -p tcp --dport 6129 -j ACCEPT
iptables -A FORWARD --source 192.168.100.100 --destination $ligne -p tcp --sport 6129 -j ACCEPT
#Add here any rule you need for admin machines
done

#Machines behind the firewall need access to the mail server, the internet, and an FTP server
awk '{print $1}' /etc/machines_client | while read ligne
do
#Client machines get access to the mail server (smtp and imap), the proxy (tcp/8080), and LDAP on the mail server (tcp/389)
iptables -A FORWARD --source $ligne --destination 192.168.110.97 -p tcp --dport 8080 -j ACCEPT
iptables -A FORWARD --source 192.168.110.97 --destination $ligne -p tcp --sport 8080 -j ACCEPT
iptables -A FORWARD --source $ligne --destination 192.168.110.112 -p tcp --dport 143 -j ACCEPT
iptables -A FORWARD --source 192.168.110.112 --destination $ligne -p tcp --sport 143 -j ACCEPT
iptables -A FORWARD --source $ligne --destination 192.168.110.112 -p tcp --dport 25 -j ACCEPT
iptables -A FORWARD --source 192.168.110.112 --destination $ligne -p tcp --sport 25 -j ACCEPT
iptables -A FORWARD --source $ligne --destination 192.168.110.112 -p tcp --dport 389 -j ACCEPT
iptables -A FORWARD --source 192.168.110.112 --destination $ligne -p tcp --sport 389 -j ACCEPT
#We authorize client machines to access an FTP server
iptables -A FORWARD --source $ligne --destination 10.243.0.225 -p tcp --dport 21 -j ACCEPT
iptables -A FORWARD --source 10.243.0.225 --destination $ligne -p tcp --sport 21 -j ACCEPT
iptables -A FORWARD --source $ligne --destination 10.243.0.225 -p tcp --dport 20 -j ACCEPT
iptables -A FORWARD --source 10.243.0.225 --destination $ligne -p tcp --sport 20 -j ACCEPT
#From admin machines to client machines, we authorize ping and Dameware remote control
awk '{print $1}' /etc/machines_admin | while read admin
do
iptables -A FORWARD --source $admin --destination $ligne -p tcp --dport 6129 -j ACCEPT
iptables -A FORWARD --source $ligne --destination $admin -p tcp --sport 6129 -j ACCEPT
iptables -A FORWARD --source $admin --destination $ligne -p icmp -j ACCEPT
iptables -A FORWARD --source $ligne --destination $admin -p icmp -j ACCEPT
done
#Add here any rules you need for client machines
done

;;
stop)
iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
;;
stat)
iptables -L
;;
*)
echo "Usage : /etc/init.d/firewall start | stop | stat"
;;
esac

Note that we use 2 files to declare admin and client machines. These files use the same format as /etc/hosts; for example, /etc/machines_admin:
192.168.100.45 PC329
192.168.100.222 PC771

Same thing for /etc/machines_client.
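Both loops above only ever use the first column of those files; the awk-into-while pattern can be tried on a throwaway copy (the sample path is illustrative) :

```shell
# Sample machines file, same "IP hostname" layout as /etc/hosts
cat > /tmp/machines_admin.sample <<'EOF'
192.168.100.45 PC329
192.168.100.222 PC771
EOF

# The extraction used by the firewall script: keep only column 1, the IP
awk '{print $1}' /tmp/machines_admin.sample | while read ligne
do
  echo "would add rules for $ligne"
done
```

The hostname column is ignored by the script; it's only there for humans.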

Now you have a working, logging firewall/bridge :)

Have fun !

Dec 1, 2009

RRD Monitoring for Netapp [2] : Network in and out in KB/s


The second of my posts concerning Netapp monitoring : this time we'll monitor network IN and OUT. Refer to the previous article for the system dependencies and software needed.

With the same Perl script that reads SNMP entries, we'll get the network in and network out values for the Filer.
As with the operations per second, what we get here is the TOTAL network in and out. This is why we have to store it in a "COUNTER" and not a "GAUGE".
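This COUNTER behavior is worth spelling out : at each update, RRD derives a per-second rate from two successive totals by dividing the increase by the elapsed time. The arithmetic, on made-up sample readings :

```shell
# Two successive TOTAL readings from the Filer (e.g. KB since boot), 60 s apart
prev=123456789
curr=123520389
step=60

# A COUNTER DS stores the derived per-second rate, not the raw total
rate=$(( (curr - prev) / step ))
echo "$rate KB/s"   # -> 1060 KB/s
```

A GAUGE would instead store the raw reading itself, which only makes sense for values that are already instantaneous (like the license counts in a later post).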

As usual, a crontab entry reads the value every minute, updates the RRD and creates the graph :
* * * * * cd /root/scripts/; /root/scripts/netapp1_net_rrd
And now the script which does all that :
#!/bin/bash

check="/root/scripts/check_netapp"
args=" -H 192.168.xxx.xxx "
stockage="/data/rrd"
rep_logs="/root/logs"
rep_img="/data/www/monitoring"
declare -ar durees='([0]="1" [1]="10" [2]="30" [3]="90" )'


creation_rrd(){

if [ ! -f $stockage/netapp1_net.rrd ]
then
echo "rrdtool create $stockage/netapp1_net.rrd -s 60 \\" > /tmp/create.sh
echo "DS:netin:COUNTER:180:U:U \\" >> /tmp/create.sh
echo "DS:netout:COUNTER:180:U:U \\" >> /tmp/create.sh
#RRAs apply to every DS, so they only need to be declared once
echo "RRA:MAX:0.5:1:14400 \\" >> /tmp/create.sh
echo "RRA:AVERAGE:0.5:30:960 \\" >> /tmp/create.sh
echo "RRA:AVERAGE:0.5:180:480 \\" >> /tmp/create.sh
echo >> /tmp/create.sh
. /tmp/create.sh
rm -f /tmp/create.sh
fi
}

maj_rrd(){
commande="rrdtool update $stockage/netapp1_net.rrd N"
comm=$(echo $check$args" -v NETIN")
res=$($comm)
netin=$(echo $res| awk '{print $7}' )
comm=$(echo $check$args" -v NETOUT")
res=$($comm)
netout=$(echo $res| awk '{print $7}' )
commande=$commande":"$netin":"$netout
#echo $commande
$($commande)
}

creation_graph(){
for i in ${durees[*]}
do
echo "rrdtool graph $rep_img/netapp1_net_"$i".png \\" > /tmp/graph_netapp1_net.sh
echo "-s \"now -"$i" days\" -e now \\" >> /tmp/graph_netapp1_net.sh
echo "--title=\"Entrees sorties reseau sur Netapp1, les "$i" derniers jours\" \\" >> /tmp/graph_netapp1_net.sh
echo "--vertical-label=\"KOctets / sec \" \\" >> /tmp/graph_netapp1_net.sh
echo "--imgformat=PNG \\" >> /tmp/graph_netapp1_net.sh
echo "--color=BACK#CCCCCC \\" >> /tmp/graph_netapp1_net.sh
echo "--color=CANVAS#343434 \\" >> /tmp/graph_netapp1_net.sh
echo "--color=SHADEB#9999CC \\" >> /tmp/graph_netapp1_net.sh
echo "--width=600 \\" >> /tmp/graph_netapp1_net.sh
echo "--base=1000 \\" >> /tmp/graph_netapp1_net.sh
echo "--height=400 \\" >> /tmp/graph_netapp1_net.sh
echo "--interlaced \\" >> /tmp/graph_netapp1_net.sh
echo "--lower-limit=0 \\" >> /tmp/graph_netapp1_net.sh
echo "DEF:netin=$stockage/netapp1_net.rrd:netin:MAX \\" >> /tmp/graph_netapp1_net.sh
echo "AREA:netin#FE8A06:\" Flux entrant reseau\" \\" >> /tmp/graph_netapp1_net.sh
echo "DEF:netout=$stockage/netapp1_net.rrd:netout:MAX \\" >> /tmp/graph_netapp1_net.sh
echo "LINE1:netout#06FE40:\" Flux sortant reseau\" \\" >> /tmp/graph_netapp1_net.sh
. /tmp/graph_netapp1_net.sh
rm -f /tmp/graph_netapp1_net.sh
done
}

cd /root/scripts
creation_rrd
maj_rrd
creation_graph



Have fun !

RRD Monitoring for Netapp [1] : Operations per second



This is the first of my posts on Netapp Filer monitoring. The main purpose here is to monitor operations per second on these Filers. As usual, we'll script the value collection, then use RRD to store the values over time and graph them !

For that, we'll use a Perl script which does an SNMP call on the Filer to get the values (disk usage, CPU usage, IOs, network IO; here we're interested in operations per second). To run this script, you must have Perl, net-snmp-perl, perl-Config-IniFiles and perl-Crypt-DES installed. Get the check_netapp script here ! I customized this script to read NETIN, NETOUT, OPS/sec and so on. If you need to add new SNMP entries to this script, get them from /vol0/etc/mib/traps.dat on your Netapp. You'll also need utils.pm from Nagios; get it on my site, here.

Once these files are copied to your machine and the libs are installed, here is the crontab entry for our Bash script :
* * * * * cd /root/scripts/; /root/scripts/netapp1_ops_rrd

And here is what netapp1_ops_rrd looks like :
#!/bin/bash

check="/root/scripts/check_netapp"
args=" -H 192.168.xxx.xxx "
stockage="/data/rrd"
rep_logs="/root/logs"
rep_img="/data/www/monitoring"
declare -ar durees='([0]="1" [1]="10" [2]="30" [3]="90" )'


creation_rrd(){

if [ ! -f $stockage/netapp1_ops.rrd ]
then
echo "rrdtool create $stockage/netapp1_ops.rrd -s 60 \\" > /tmp/create.sh
echo "DS:cifsops:COUNTER:180:U:U \\" >> /tmp/create.sh
echo "DS:nfsops:COUNTER:180:U:U \\" >> /tmp/create.sh
echo "DS:fcops:COUNTER:180:U:U \\" >> /tmp/create.sh
#RRAs apply to every DS, so they only need to be declared once
echo "RRA:MAX:0.5:1:14400 \\" >> /tmp/create.sh
echo "RRA:AVERAGE:0.5:30:960 \\" >> /tmp/create.sh
echo "RRA:AVERAGE:0.5:180:480 \\" >> /tmp/create.sh
echo >> /tmp/create.sh
. /tmp/create.sh
rm -f /tmp/create.sh
fi
}

maj_rrd(){
commande="rrdtool update $stockage/netapp1_ops.rrd N"
comm=$(echo $check$args" -v CIFSOPS")
res=$($comm)
cifsops=$(echo $res| awk '{print $6}' )
comm=$(echo $check$args" -v NFSOPS")
res=$($comm)
nfsops=$(echo $res| awk '{print $6}' )
comm=$(echo $check$args" -v FCOPS")
res=$($comm)
fcops=$(echo $res| awk '{print $6}' )
commande=$commande":"$cifsops":"$nfsops":"$fcops
#echo $commande
$($commande)
}

creation_graph(){
for i in ${durees[*]}
do
echo "rrdtool graph $rep_img/netapp1_ops_"$i".png \\" > /tmp/graph_netapp1_ops.sh
echo "-s \"now -"$i" days\" -e now \\" >> /tmp/graph_netapp1_ops.sh
echo "--title=\"Operations par seconde sur Netapp1, les "$i" derniers jours\" \\" >> /tmp/graph_netapp1_ops.sh
echo "--vertical-label=\"OPS / sec \" \\" >> /tmp/graph_netapp1_ops.sh
echo "--imgformat=PNG \\" >> /tmp/graph_netapp1_ops.sh
echo "--color=BACK#CCCCCC \\" >> /tmp/graph_netapp1_ops.sh
echo "--color=CANVAS#343434 \\" >> /tmp/graph_netapp1_ops.sh
echo "--color=SHADEB#9999CC \\" >> /tmp/graph_netapp1_ops.sh
echo "--width=600 \\" >> /tmp/graph_netapp1_ops.sh
echo "--base=1000 \\" >> /tmp/graph_netapp1_ops.sh
echo "--height=400 \\" >> /tmp/graph_netapp1_ops.sh
echo "-E \\" >> /tmp/graph_netapp1_ops.sh
echo "--lower-limit=0 \\" >> /tmp/graph_netapp1_ops.sh
echo "DEF:cifsops=$stockage/netapp1_ops.rrd:cifsops:MAX \\" >> /tmp/graph_netapp1_ops.sh
echo "AREA:cifsops#FE8A06:\" Operations CIFS par sec\" \\" >> /tmp/graph_netapp1_ops.sh
echo "DEF:nfsops=$stockage/netapp1_ops.rrd:nfsops:MAX \\" >> /tmp/graph_netapp1_ops.sh
echo "AREA:nfsops#06FE40:\" Operations NFS par sec\":STACK \\" >> /tmp/graph_netapp1_ops.sh
echo "DEF:fcops=$stockage/netapp1_ops.rrd:fcops:MAX \\" >> /tmp/graph_netapp1_ops.sh
echo "AREA:fcops#F7053E:\" Operations FC par sec\":STACK \\" >> /tmp/graph_netapp1_ops.sh
. /tmp/graph_netapp1_ops.sh
rm -f /tmp/graph_netapp1_ops.sh
done
}

cd /root/scripts
creation_rrd
maj_rrd
creation_graph


As you can see, in RRD I decided to store 3 RRAs per value : 10 days with a precision of 1 minute, 20 days with a 30-minute average, and 60 days with a 3-hour average. You can modify all that as you wish in the rrdtool create command !
What is new compared to the other RRDs I presented is that here we use an array of durations (1, 10, 30, 90) to generate graphs for 1 day, 10 days, 30 days, ... That way we get "MRTG-like" monitoring for our Netapp operations per second.
Another new thing is the values we get from SNMP : for CIFS, for example, we get the TOTAL operations since the Filer last rebooted. This is why we store a "COUNTER" in RRD and not a GAUGE as usual !
We also use areas in the RRD graphs and stack them to get a nice graph like the one on RRD's site, here.

As usual, refer to the post image in order to have a preview of the graph !

Have fun !

Nov 30, 2009

RHEL & CentOS automatic install Kickstart


I use Kickstart to automatically install RHEL (3, 4, 5, 32 or 64 bit) and CentOS from an image shared on the network (NFS).

You can fully automate the method I describe by adding a DHCP/BOOTP server serving a boot image. I don't use that because I already have several networks, each with its own DHCP. Instead I boot a RHEL image and start with this line at the GRUB prompt :

linux ks=nfs:192.168.xx.xx:/path/to/your/kickstart
The advantages of this method are :

  • Nothing to do during the install, meaning time for other things ;p
  • You can customize the default RHEL or CentOS install with the post-install script
  • Greatest speed ever !!! Two or three times faster than installing from DVD or CD, as long as your network is decent (100Mb).

So, the first step is to create ISO images of your favorite distro. For that, use dd or mkisofs :
dd if=/dev/dvd of=image.iso
mkisofs -R -J -o image.iso /mnt/cdrom


Then, share these images (I have one per architecture and distro) on an NFS server.
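On the NFS server side, the corresponding export might look like this in /etc/exports (path and network are illustrative; run exportfs -ra after editing) :

```
/path/to/your/isos 192.168.0.0/255.255.0.0(ro)
```

Read-only is enough here, since the installer only needs to read the ISOs.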

Now the most important, the Kickstart file :
# System authorization information
auth --useshadow --enablemd5 --enablenis --nisdomain=mydomain --nisserver=192.168.xxx.xxx
# License RHEL
key xxxxxxxxxxxxxx # put here your license key
# System bootloader configuration
bootloader --location=mbr
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all --initlabel
# Use graphical install
graphical
# Firewall configuration
firewall --disabled
# Run the Setup Agent on first boot
firstboot --disable
# System keyboard
keyboard fr-latin1
# System language
lang fr_FR
# Installation logging level
logging --level=debug
#We make a network install from NFS Server
nfs --server=192.168.xxx.xxx --dir=/path/to/your/isos/
# Network information
network --bootproto=dhcp --device=eth0 --onboot=on --mtu=4500
# Reboot after installation
reboot
#Root password
rootpw --iscrypted $1$DNIhtN0D$8D.Ard1Aq48KP6NmtxZSx0
# SELinux configuration
selinux --disabled
# System timezone
timezone Europe/Paris
# Install OS instead of upgrade
install
# X Window System configuration information
xconfig --defaultdesktop=GNOME --depth=24 --resolution=1280x1024
# Disk partitioning information, fixed sizes, /data is what remains on disk
part /boot --bytes-per-inode=4096 --fstype="ext3" --size=200
part / --bytes-per-inode=4096 --fstype="ext3" --size=5000
part /var --bytes-per-inode=4096 --fstype="ext3" --size=5000
part /tmp --bytes-per-inode=4096 --fstype="ext3" --size=5000
part /usr --bytes-per-inode=4096 --fstype="ext3" --size=10000
part swap --bytes-per-inode=4096 --fstype="swap" --size=4000
part /data --bytes-per-inode=4096 --fstype="ext3" --grow --size=1

%packages
@base-x
@gnome-desktop
@base
@development-libs
@graphical-internet
@admin-tools
@development-tools
@kde-desktop
@printing
@sound-and-video
@legacy-software-development
@graphics
@office
@system-tools
@editors
@engineering-and-scientific

#--- Post-installation script
%post
#!/bin/bash
chroot /mnt/sysimage

echo "192.168.xxx.xxx netapp" >> /etc/hosts
echo "192.168.xxx.xxx nisserver" >> /etc/hosts

mkdir -p /netapp/vol3
mount 192.168.xxx.xxx:/vol/vol3 /netapp/vol3
cd /netapp/vol3/distros/rpms
#Installing NVIDIA kernel module, RPMs from internet.
rpm -ivh nvidia-graphics-helpers-0.0.26-27.el5.i386.rpm
rpm -ivh nvidia-graphics-devices-1.0-5.0.el5.noarch.rpm
rpm -ivh nvidia-graphics173.14.09-libs-173.14.09-99.el5.i386.rpm
rpm -ivh nvidia-graphics173.14.09-kmdl-2.6.18-53.el5PAE-173.14.09-99.el5.i686.rpm
rpm -ivh nvidia-graphics173.14.09-173.14.09-99.el5.i386.rpm

#Installing software (meaning doing links to autofs mount points)
ln -s /soft/abaqus/ /usr/local/abaqus
ln -s /soft/hyperworks8 /usr/local/hyworks
ln -s /soft/hyperworks7 /usr/local/hyworks7
ln -s /soft/msc /usr/local/msc
ln -s /soft/radioss /usr/local/radioss

#Copy nsswitch and other resolv.conf ....
cp -f /netapp/vol3/distros/post/nsswitch.conf /etc/nsswitch.conf

cd /root
umount /netapp/vol3
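About the rootpw --iscrypted line earlier in the file : it expects an MD5-crypt hash rather than a clear-text password. Assuming openssl is available, such a hash can be generated like this (MySecret is a placeholder) :

```shell
# Generate an MD5-crypt hash ($1$salt$hash) for rootpw --iscrypted
# MySecret is a placeholder; the salt is random, so the output varies
openssl passwd -1 MySecret
```

Paste the resulting $1$...$... string after rootpw --iscrypted.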



Have fun !

Nov 25, 2009

Howto install CVS/CVSNT server on RHEL/CentOS


First, check /etc/services for the definitions of the TCP ports used by CVS :
grep -i cvs /etc/services
cvspserver 2401/tcp # CVS client/server operations
cvspserver 2401/udp # CVS client/server operations

Then you'll have to install either the CVS package or the CVSNT package :
yum install cvs
or
rpm -ivh cvsnt-2.5.04.3510-rh9-rpm.tar.gz

Now, set up the users that will access the CVS server :
vim /etc/passwd
vim /etc/shadow

Then, set up a folder for the CVS data. I usually put this on an NFS share on a filer, so that the data is backed up and you have snapshots of it :
mkdir /home/cvsrepo
cvs -d /home/cvsrepo init
chgrp GID /home/cvsrepo (according to what you did in the /etc/passwd)
chmod g+w /home/cvsrepo

Now add the CVSROOT environment variable for all the users that need it :

echo "export CVSROOT=/home/cvsrepo" >> /root/.bashrc
echo "export CVSROOT=/home/cvsrepo" >> /home/userXXX/.bashrc

Now it's time to set up the CVS server. It will be launched by xinetd, so check that xinetd is running on your system; it runs by default on CentOS 5, RHEL 5, Suse, ...
/etc/init.d/xinetd status
Xinetd launches CVS each time a client connects. This is configured in the /etc/xinetd.d folder; you'll have to add a file like this one :
cat /etc/xinetd.d/cvspserver
service cvspserver
{
disable = no
socket_type = stream
wait = no
user = root
group = systeme
log_type = FILE /var/log/cvspserver
protocol = tcp
env = '$HOME=/usr/local/cvspserver'
log_on_failure += USERID
port = 2401
server = /usr/bin/cvs
server_args = -f --allow-root=/home/cvsrepo pserver
}


Now restart your xinetd, and check in /var/log/messages that your new cvspserver service has been loaded :
/etc/init.d/xinetd restart
cat /var/log/messages

Everything is now OK, your CVS server is up. Supposing you created a user named 'user1' on the CVS server named 'cvsserver', here is the CVSROOT to use on UNIX systems, or on Windows with TortoiseCVS or any other CVS client :
CVSROOT=:pserver:user1@cvsserver:/home/cvsrepo
For security reasons, it's possible to create CVS-only users. For that, go to the CVS folder, then into CVSROOT. There you'll find a "CVS local passwd" file, also simply named passwd. To add users that can only access CVS and not the entire system, use the htpasswd command or a Perl script :
htpasswd passwd user3
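For reference, each line htpasswd appends to CVSROOT/passwd holds up to three colon-separated fields : the CVS login, the crypted password, and optionally the system account the pserver should act as (the entries below are illustrative) :

```
user3:mx3uaqnipCCCc
user4:dGoGYne/rcCOc:cvsowner
```

When the third field is present, CVS performs file operations as that system account instead of a matching /etc/passwd user.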

Have fun !

Nov 18, 2009

Playing with "find" and "dc"


First, here is how to find and print all files that were modified more than 300 days ago and that are bigger than 30k. For each of these files we print : User [tab] Size_in_kb [tab] Modification_date [tab] Access_date [tab] Path_to_the_file

find . -mtime +300 -size +30k -type f -printf "%u\t%k\t%TD\t%AD\t%p\n"
Now, another interesting thing would be to know the total size of all these files. For that, print only the size, followed by "+p", and pipe it to dc :

find . -mtime +300 -size +30k -type f -printf "%k+p\n" | dc
At the end you'll get the total size of your files older than 300 days and bigger than 30kb.
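If dc feels too exotic, awk can compute the same total in one pass (the second line checks the accumulation on fake input) :

```shell
# Same total as the dc version, accumulating the %k sizes in awk
find . -mtime +300 -size +30k -type f -printf "%k\n" | awk '{s += $1} END {print s+0 " kb total"}'

# The accumulation itself, on made-up sizes
printf "12\n30\n" | awk '{s += $1} END {print s}'   # -> 42
```

Unlike the dc trick, awk prints a single final total instead of a running one per line.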

Link aggregation on Alcatel 5022 Omnicore and Alcatel 6400 Switch

Here is how to do a link aggregation (Fiber Channel or Ethernet) between 2 Alcatel network devices.
First, on the Alcatel 6400 Switch, here is a dump of the config :
!========================================!
! File: /flash/working/boot.cfg !
!========================================!
! Stack Manager :
! Chassis :
system name SW6400
system location Some_Building_Some_Place
! Configuration:
! VLAN :
vlan 1 enable name "VLAN 1"
! VLAN SL:
! IP :
ip service all
ip interface "vlan1" address 192.168.230.246 mask 255.255.255.0 vlan 1 ifindex 1
! IPX :
! IPMS :
! AAA :
aaa authentication console "local"
aaa authentication telnet "local"
aaa authentication http "local"
! PARTM :
! AVLAN :
! 802.1x :
! QOS :
! Policy manager :
! Session manager :
! SNMP :
! RIP :
! IPv6 :
! IPRM :
ip static-route 0.0.0.0/0 gateway 192.168.230.254 metric 1
! RIPng :
! Health monitor :
! Interface :
! Udld :
! Port Mapping :
! Link Aggregate :
static linkagg 1 size 2 admin state enable
static linkagg 1 name "TO_CENTRAL"
static agg 1/1 agg num 1
static agg 2/1 agg num 1
! VLAN AGG:
! 802.1Q :
! Spanning tree :
bridge mode 1x1
! Bridging :
! Bridging :
! Port mirroring :
! UDP Relay :
! Server load balance :
! System service :
debug fscollect disable
! SSH :
! Web :
! AMAP :
! LLDP :
! Lan Power :
! NTP :
! RDP :
! VLAN STACKING:
! Ethernet-OAM :


The link aggregation is created with the "static linkagg" commands; then the ports 1/1 and 2/1 are added to the aggregation.
These ports are in fact Fiber Channel links.

Then, here is how to create a link aggregation on an Alcatel 5022 Omnicore (note that the 2 links to be aggregated HAVE to be on the same card, or slot if you prefer : no aggregation between slot 2 port 1 and slot 3 port 1) :
trunk 1 create
trunk 1
slot 3 port 1 add
slot 3 port 2 add
portlist
Port Members :3 -1, 3 -2


You now have a nice link aggregation between your 5022 and 6400 Alcatel equipment.

Have fun !

Multiple kill


Very simple but useful : here is how to kill all processes whose command line matches a keyword :

ps aux | grep -i keyword | awk '{print $2}' | xargs kill -9
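One caveat : the grep in that pipeline matches its own process too (harmless in practice, kill just complains about an already-dead pid). Where pkill is available, it does the same job in one step; the anchored pattern below keeps it from matching anything else :

```shell
# Start a throwaway process, then kill everything whose full command
# line matches the pattern (-f), anchored so only "sleep 300" dies
sleep 300 &
sleep 1
pkill -9 -f '^sleep 300$'
```

Without the anchors, -f would also match any longer command line containing the keyword, just like the grep version.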

RRD Monitoring tool for Disk usage


My last post presented a BASH script which monitored the usage of a FlexLM license server.
Here is how to monitor the disk usage of some users' personal folders. For that we will use 2 scripts.
These are the crontab entries :
0 0 * * * /root/scripts/taille_utilisateurs
0 20 * * * /root/scripts/creation_graphs


I run the second script 20 minutes after the first one; this depends on how long the first script takes to run. The first time you run the "taille_utilisateurs" script, I recommend you "time" it, to be sure to schedule the second one AFTER all the "du -s" on all folders are done.

Here is the first script "taille_utilisateurs" :
#!/bin/bash

stockage="/backup/vol2/rrd"
users_windows="/backup/vol1/home"
users_unix="/backup/vol2/users"
rep_logs="/root/logs"

creation_graphs(){
for i in `ls $1`
do
user=$(echo $i | sed -e 's/\.old*//')
user=$(echo $user | sed -e 's/\.//')
if [ ! -f $stockage/$2/taille_$user.rrd ]
then
rrdtool create $stockage/$2/taille_$user.rrd -s 86400 -b 1157782169 \
DS:$user:GAUGE:100000:U:U \
RRA:AVERAGE:0.5:1:150
echo "Creation du RRD "$stockage/$2/taille_$user.rrd >> $logfile
fi
done
}

maj_rrd(){
for i in `ls $1`
do
# On calcule la taille du repertoire, puis on met a jour le RRD correspondant avec la valeur
user=$(echo $i | sed -e 's/\.old*//')
user=$(echo $user | sed -e 's/\.//')
taille=$(du -sb $1/$i | awk '{print $1}')
rrdtool update $stockage/$2/taille_$user.rrd N:$taille
echo "Mise a jour du RRD" $stockage/$2/taille_$user.rrd "avec la valeur " $taille "le " `date` >> $logfile
echo $taille" "$i >> $rep_logs/tailles
done

}

logfile=$rep_logs/log-`date "+%d-%m-%Y"`
touch $logfile
echo "Debut du script de calcul des tailles le" `date` > $logfile
creation_graphs $users_unix "unix"
creation_graphs $users_windows "windows"
echo "UNIX" > $rep_logs/tailles
maj_rrd $users_unix "unix"
echo "WINDOWS" >> $rep_logs/tailles
maj_rrd $users_windows "windows"


This first script creates an RRD file per user/folder (150 values stored, 1 every 24 hours, meaning roughly half a year of archive) and then updates that file with the current value. Everything is logged to a file, and the sizes of all folders are written to the $rep_logs/tailles file. This file will be used by the second script to build a "Top10" of the most disk-consuming users.
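A quick look at what the two sed calls in the script actually do : the first strips a trailing ".old" suffix from a folder name, the second drops a leading dot, so the result can be used as an RRD DS name :

```shell
# ".jdoe.old" -> strip ".old", then strip the leading dot -> "jdoe"
echo ".jdoe.old" | sed -e 's/\.old*//' | sed -e 's/\.//'   # -> jdoe
```

This matters because RRD data source names only accept a restricted character set.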

Here is the second script, "creation_graphs" :
#!/bin/bash

stockage="/backup/vol2/rrd"
users_windows="/backup/vol1/home"
users_unix="/backup/vol2/users"
rep_logs="/root/logs"
rep_img="/var/www/html/stats"

top10(){
ligne_win=$(grep -in windows /root/logs/tailles | awk -F':' '{print $1}')
#On trace le Top10 unix
echo "rrdtool graph $rep_img/top10_unix.png \\" > /tmp/graph.sh
echo "-s \"now -4 week\" -e now \\" >> /tmp/graph.sh
echo "--title=\"Top 10 des tailles disque Home UNIX\" \\" >> /tmp/graph.sh
echo "--vertical-label=Octets \\" >> /tmp/graph.sh
echo "--imgformat=PNG \\" >> /tmp/graph.sh
echo "--width=800 \\" >> /tmp/graph.sh
echo "--base=1000 \\" >> /tmp/graph.sh
echo "--height=600 \\" >> /tmp/graph.sh
echo "--interlaced \\" >> /tmp/graph.sh
rang=10
declare -ar couleurs='([0]="#FF0000" [1]="#FF6347" [2]="#FF8C00" [3]="#FF00FF" [4]="#DDA0DD" [5]="#9ACD32" [6]="#008000" [7]="#0000FF" [8]="#6A5ACD" [9]="#48D1CC")'
for i in $(head -$ligne_win /root/logs/tailles | sort -n | tail -10 | awk '{print $2}')
do
user=$i
indice=$(expr $rang - 1)
couleur=${couleurs[$indice]}
echo "DEF:$user=$stockage/unix/taille_$user.rrd:$user:AVERAGE \\" >> /tmp/graph.sh
echo "LINE1:"$user$couleur":\""$rang") Taille de $user\" \\" >> /tmp/graph.sh
echo "VDEF:"$user"_MAX="$user",MAXIMUM \\" >> /tmp/graph.sh
rang=$(expr $rang - 1)
done
. /tmp/graph.sh

#Puis le top 10 windows
echo "rrdtool graph $rep_img/top10_windows.png \\" > /tmp/graph.sh
echo "-s \"now -4 week\" -e now \\" >> /tmp/graph.sh
echo "--title=\"Top 10 des tailles disque Home WINDOWS\" \\" >> /tmp/graph.sh
echo "--vertical-label=Octets \\" >> /tmp/graph.sh
echo "--imgformat=PNG \\" >> /tmp/graph.sh
echo "--width=800 \\" >> /tmp/graph.sh
echo "--base=1000 \\" >> /tmp/graph.sh
echo "--height=600 \\" >> /tmp/graph.sh
echo "--interlaced \\" >> /tmp/graph.sh
rang=10
declare -ar couleurs='([0]="#FF0000" [1]="#FF6347" [2]="#FF8C00" [3]="#FF00FF" [4]="#DDA0DD" [5]="#9ACD32" [6]="#008000" [7]="#0000FF" [8]="#6A5ACD" [9]="#48D1CC")'
for i in $(tail -$ligne_win /root/logs/tailles | sort -n | tail -10 | awk '{print $2}')
do
user=$i
indice=$(expr $rang - 1)
couleur=${couleurs[$indice]}
echo "DEF:$user=$stockage/windows/taille_$user.rrd:$user:AVERAGE \\" >> /tmp/graph.sh
echo "LINE1:"$user$couleur":\""$rang") Taille de $user\" \\" >> /tmp/graph.sh
echo "VDEF:"$user"_MAX="$user",MAXIMUM \\" >> /tmp/graph.sh
rang=$(expr $rang - 1)
done
. /tmp/graph.sh
}

top10


This second script generates an RRD graph of the Top10 most disk-consuming users.
To build this Top10, we parse the $rep_logs/tailles file.
It's up to you to adapt this to produce a Top10-20, Top20-30 and so on.
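The Top10 selection in both graphs boils down to sorting the "size name" lines of the tailles file numerically and keeping the tail; on a made-up extract (keep the 2 biggest) :

```shell
# Lines as written by the first script: "size_in_bytes username"
printf "5120 alice\n98304 bob\n7340 carol\n" | sort -n | tail -2 | awk '{print $2}'
# -> carol, then bob (the biggest comes out last, matching the rang countdown)
```

That "biggest last" ordering is why the script counts rang down from 10 while reading the list.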

As usual, an image of the final RRD graph is at the beginning of this post.

Have fun !

RRD Monitoring tool for FlexLM




These are some BASH scripts which generate RRD databases for FlexLM servers.

First, here is the crontab entry :
*/5 * * * * /root/scripts/ansys_lic


Here is the ansys_lic script (which checks the status of a FlexLM server every 5 minutes) :
#!/bin/bash

lmutil="/usr/local/flexlm/lmutil"
license="27000@192.168.100.100"
stockage="/data/rrd"
rep_logs="/root/logs"
rep_img="/data/www/monitoring"
declare -ar features='([0]="abaqus" [1]="cae" [2]="viewer" [3]="standard" [4]="explicit" [5]="foundation" )'
declare -ar features_aff='([0]="abaqus" [1]="cae" [2]="viewer" )'
declare -ar couleurs='([0]="#FF0000" [1]="#00D832" [2]="#1C05EA" [3]="#B505EA" [4]="#DDA0DD" [5]="#9ACD32" [6]="#008000" [7]="#0000FF" [8]="#6A5ACD" [9]="#48D1CC")'
declare -ar durees='([0]="1" [1]="10" [2]="30" [3]="90" )'

creation_rrd(){

if [ ! -f $stockage/abaqus_lic.rrd ]
then
echo "rrdtool create $stockage/abaqus_lic.rrd -s 300 \\" > /tmp/create.sh
for i in ${features[*]}
do
echo "DS:$i:GAUGE:600:U:U \\" >> /tmp/create.sh
done
#RRAs apply to every DS, so they only need to be declared once
echo "RRA:MAX:0.5:1:2880 \\" >> /tmp/create.sh
echo "RRA:AVERAGE:0.5:12:480 \\" >> /tmp/create.sh
echo "RRA:AVERAGE:0.5:60:240 \\" >> /tmp/create.sh
echo >> /tmp/create.sh
. /tmp/create.sh
rm -f /tmp/create.sh
fi
}

maj_rrd(){
$lmutil lmstat -a -c $license > /tmp/abaqus_lic.log
commande="rrdtool update $stockage/abaqus_lic.rrd N"
for i in ${features[*]}
do
used=$(grep $i /tmp/abaqus_lic.log | grep Total |head -1| awk '{print $11}')
commande=$commande":$used"
done
$($commande)
}

creation_graph(){
for d in ${durees[*]}
do
echo "rrdtool graph $rep_img/abaqus_lic_"$d".png \\" > /tmp/graph_abaqus.sh
echo "-s \"now -"$d" day\" -e now \\" >> /tmp/graph_abaqus.sh
echo "--title=\"Occupation du serveur de licences Abaqus, "$d" derniers jours\" \\" >> /tmp/graph_abaqus.sh
echo "--vertical-label=Nombre \\" >> /tmp/graph_abaqus.sh
echo "--imgformat=PNG \\" >> /tmp/graph_abaqus.sh
echo "--color=BACK#CCCCCC \\" >> /tmp/graph_abaqus.sh
echo "--color=CANVAS#343434 \\" >> /tmp/graph_abaqus.sh
echo "--color=SHADEB#9999CC \\" >> /tmp/graph_abaqus.sh
echo "--width=800 \\" >> /tmp/graph_abaqus.sh
echo "--base=1000 \\" >> /tmp/graph_abaqus.sh
echo "--height=600 \\" >> /tmp/graph_abaqus.sh
echo "--interlaced \\" >> /tmp/graph_abaqus.sh
echo "--upper-limit=36 \\" >> /tmp/graph_abaqus.sh
echo "AREA:32#F6FACF:\"32 Licences Max\" \\" >> /tmp/graph_abaqus.sh
rang=0
for i in ${features_aff[*]}
do
couleur=${couleurs[$rang]}
echo "DEF:$i=$stockage/abaqus_lic.rrd:$i:MAX \\" >> /tmp/graph_abaqus.sh
echo "LINE1:"$i$couleur":\" Nombre de $i\" \\" >> /tmp/graph_abaqus.sh
#echo "VDEF:"$i"_MAX="$i",MAXIMUM \\" >> /tmp/graph.sh
rang=$( expr $rang + 1 )
done
. /tmp/graph_abaqus.sh
rm -f /tmp/graph_abaqus.sh
done
}

creation_rrd
maj_rrd
creation_graph

In this script you will have to change the path to the FlexLM lmutil command and the features declared at the beginning.
You can also change the colors of the lines representing the features (the couleurs array) and the durations used for graph generation (1, 10, 30 and 90 days; the durees array).

The script does 3 things : first we create the RRD database (5 minutes between 2 records, and we keep 2880 of these, meaning the last 10 days with 5-minute precision; then 480 averaged records of 60 minutes each, meaning 20 days; and finally 240 averaged records of 5 hours each, meaning 50 days). It's up to you to change this.
After creating the database we update the values in the RRD file, and then we regenerate the PNG graphic.
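The `grep Total | awk '{print $11}'` extraction in maj_rrd relies on the layout of lmstat's feature lines; a made-up line in the usual format shows why field 11 is the in-use count :

```shell
# A typical lmstat feature line (numbers made up)
line="Users of cae:  (Total of 32 licenses issued;  Total of 11 licenses in use)"
echo "$line" | grep Total | awk '{print $11}'   # -> 11, the in-use count
```

If your FlexLM version formats this line differently, adjust the field number accordingly.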

The final result is the image in the beginning of the post; it's the last day graph for license usage.

Have fun !