Friday, 8 January 2016

Automate backup with python script

Automate Backup of a Folder

You may have an important folder on your server that you want to back up regularly. In this blog, I will explain how you can automate the backup with a simple Python script. I have used Ubuntu Linux in this blog, but you can use this method on any flavor of Linux and on Windows as well. On Windows most of the steps change, but the logic remains the same.

Step 1:
Download python script pyCompry.py from https://sourceforge.net/projects/pycompry/

Step 2:
You will need Python 3.3. Install Python 3 (if it is not installed already) on the server by typing the command below in the shell.

apt-get install python3

To confirm that Python is installed correctly, just type "python3 --version" on the CLI. It should return the Python version number.

Step 3:
Now locate the folder that you would like to back up regularly. Ensure that you have the necessary permissions on both the source and destination paths.

python3 ~/pyCompry.py -h
(This will show the help)

Run the command below to execute the backup manually (replace the -i and -o values with your actual paths).

python3 ~/pyCompry.py -i /var/somesourcepath/ -o /mnt/somemountedremotepath/



Step 4:
Now you may use crontab to schedule this script to automate the backup. Type:

sudo crontab -e

Now add the line below to the crontab. Edit it to set the schedule you want and the actual paths.

45 04 * * * python3 ~/pyCompry.py -i /var/somesourcepath/ -o /mnt/somemountedremotepath/

The above line configures crontab to run the script at 04:45 every day, so edit the time fields to whatever schedule you want.
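If you also want to keep a record of each run, the crontab entry can redirect the script's output to a log file. This is just a sketch; the log file path below is an example, not part of the original setup.

45 04 * * * python3 ~/pyCompry.py -i /var/somesourcepath/ -o /mnt/somemountedremotepath/ >> /var/log/pycompry-backup.log 2>&1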


You may mount a remote path from your DR server on this server and schedule the backup to it. Since the script compresses the data, it can save some time on transfers over slow networks.
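As a rough sketch of that setup, an NFS export from the DR server could be mounted before the backup runs. The hostname, export path and mount point below are placeholders, not values from this post.

sudo mkdir -p /mnt/somemountedremotepath
sudo mount -t nfs drserver.example.com:/exports/backups /mnt/somemountedremotepath

Adding a matching entry to /etc/fstab would make the mount persistent across reboots.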

For more details, refer to the wiki:
https://sourceforge.net/p/pycompry/wiki/Home/

Tuesday, 16 September 2014

Install OTRS on Centos with Oracle as Backend Database

In this blog, I wanted to share my experience of deploying OTRS with Oracle as the backend database. During this deployment I came across multiple errors and obstacles, but I managed to fix everything by referring to several guides and forums. I am writing this blog to share the correct method of installing OTRS with Oracle as a backend database, based on my learnings.

Initially I couldn't get OTRS running. I was getting these errors.

"/var/log/httpd/error_log was recorded with errors as "[error] install_driver(Oracle) failed: Can't load '/usr/local/lib64/perl5/auto/DBD/Oracle/Oracle.so' for module DBD::Oracle: libocci.so.12.1: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line..."

AND

" [error] install_driver(Oracle) failed: Attempt to reload DBD/Oracle.pm aborted.\nCompilation failed in require at (eval 169) line 3.\n\n at /opt/otrs//Kernel/System/DB.pm"

After referring to multiple guides and forums, I discovered the correct method of deploying OTRS with Oracle as the backend database.

The correct method of deploying OTRS with Oracle Database.

The packages you will need are Oracle Instant Client 12.1.0.2, OTRS 3.3, httpd, and the required httpd Perl modules. Refer to the OTRS deployment guide for the list of required httpd modules.

Download and install the Oracle Instant Client packages. You will need an active Oracle login to download them.

oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm

To install these packages, log in as root and run this command:
$ rpm -Uvh oracle-instantclient12*.rpm
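To confirm the packages are installed, a quick query against the RPM database helps (this check is my addition, not part of the original steps).

$ rpm -qa | grep oracle-instantclient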

Create oracle.sh under /etc/profile.d/ with the environment variables below.
$ vi /etc/profile.d/oracle.sh 
#Add these environment variables.
export ORACLE_HOME=/usr/lib/oracle/12.1/client64
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH

Source this script to set the required environment variables in your current session (running it with sh would only set them in a subshell).
$ source /etc/profile.d/oracle.sh

Verify if the Oracle Instant Client is functioning.
$ su - root
$ echo $ORACLE_HOME
#It should show you the path /usr/lib/oracle/12.1/client64
$ sqlplus /nolog

If you see a SQL prompt, the Oracle client is working. If not, you probably downloaded the wrong Oracle Instant Client package.

The next step is to install the DBD::Oracle module.
$ yum install perl-DBI
$ wget http://search.cpan.org/CPAN/authors/id/P/PY/PYTHIAN/DBD-Oracle-1.74.tar.gz
$ tar -xvf DBD-Oracle-1.74.tar.gz
$ cd DBD-Oracle-1.74
$ perl Makefile.PL -V 12.1
$ make install
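Before moving on, it is worth checking that Perl can load the freshly built module; if the one-liner below prints a version number, the driver is in place (this verification is my addition).

$ perl -MDBD::Oracle -e 'print $DBD::Oracle::VERSION . "\n"'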

After installing DBD::Oracle, create the necessary links and cache for the DBD libraries.
$ vi /etc/ld.so.conf.d/oracle.conf
#Insert this line
/usr/lib/oracle/12.1/client64/lib

Then run this command
$ ldconfig -v
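To confirm that the dynamic linker now sees the Oracle client libraries, you can inspect the cache (an optional check I have added here).

$ ldconfig -p | grep -i oracle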

Now restart/start httpd
$ /etc/init.d/httpd restart
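If OTRS still does not come up, tailing the Apache error log is the quickest way to see whether the DBD::Oracle errors shown earlier are gone (a troubleshooting suggestion, not an original step).

$ tail -f /var/log/httpd/error_log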

This should bring up OTRS. Please like my post if this helped you.

Thursday, 11 September 2014

Enable Replication on MongoDB Database with very little downtime

In this blog, I wanted to share my experience of enabling replication on a 3.5TB MongoDB database with very little downtime. You may wonder why we couldn't just enable replication by turning on the replica set option in the MongoDB configuration, which would need no downtime at all. The problem is that with a data set this large the initial synchronization never succeeds, because the changes outrun the oplog before the copy can finish. So our only option was to manually synchronize the database files, bring up the secondary server, and then start the replication. Stopping the primary server for the copy would have required a huge downtime given the size of the database and the available bandwidth, so we first had to build a strategy to minimize the database downtime during the copy process.

We had to choose the right tool for copying the database files. We tried several options such as NFS, rsync, etc., but finally chose SCP as the best tool. We first created a list of all the files in the source folder and then created 4 scripts, scpcp1.sh, scpcp2.sh, scpcp3.sh and scpcp4.sh, each containing SCP commands to copy files one by one. Four scripts were created so that 4 concurrent instances of SCP could run, simply to increase the number of parallel transfers.

So the procedure is as follows. Before you begin, make sure the primary MongoDB server is running in standalone mode. To do so, comment out the replica set setting in /etc/mongod.conf.
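For reference, in the legacy mongod.conf format the replica set option is a single line like the one below ('rs0' is just a placeholder name); commenting it out with a leading # runs mongod standalone.

replSet = rs0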

Here is one example SCP command in these scripts. 

 scp -r -p -c arcfour128 root@192.168.1.150:/mongodb/database/mgdbfile.1 /mongodb/database/.

-p preserves the file's created, modified and accessed times. Without this argument, the next rsync pass would copy all the files again.
-c arcfour128 changes the cipher, which drastically speeds up the copy.
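As a rough sketch of how the four scripts could be generated from the file list (the host and paths are taken from the example above; the exact method we used may have differed):

# Build the list of database files on the source server.
ssh root@192.168.1.150 'ls /mongodb/database' > filelist.txt
# Split the list into 4 roughly equal parts: part.00 .. part.03
split -d -n l/4 filelist.txt part.
# Turn each part into one scp script so 4 copies can run concurrently.
i=1
for p in part.0*; do
  awk '{print "scp -p -c arcfour128 root@192.168.1.150:/mongodb/database/" $0 " /mongodb/database/."}' "$p" > "scpcp$i.sh"
  i=$((i+1))
done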

Please note that you will need to add your SSH key to the source server to avoid password authentication. You can do it by following these steps.
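In short, the key setup usually boils down to the standard OpenSSH tooling (the IP is the source server from the examples above):

$ ssh-keygen -t rsa
$ ssh-copy-id root@192.168.1.150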

Once the SSH keys are added, we can run the scripts as background processes. Use the ampersand symbol to start each one in the background, and then disown the processes.
$ sh scpcp1.sh &
$ sh scpcp2.sh &
$ sh scpcp3.sh &
$ sh scpcp4.sh &
$ disown -a

While the copy was in progress, we had to plan for rsync. I am generally not in favor of rsync because it may make changes to the source files, and we had to be sure there would be no writes to the source while rsync was running. To address this concern, we decided to create a new user on the source server with read-only access to the source path.

First we created a normal user named rsyncuser on the source server.
$ useradd rsyncuser
$ passwd rsyncuser

Since the source path is owned by the mongod user and its permissions are 664, the newly created rsyncuser had read-only access. Now we were ready to run rsync.

Copying 3.5TB through SCP took around 8 hours over a 1Gbps network.

Next we issued an rsync dry-run command to see which files had been modified at the source during those 8 hours.

rsync -avzn --del -e "ssh -c arcfour128" --log-file="/home/rsyncuser/rsync2.log" rsyncuser@192.168.1.150:/mongodb/database/ /mongodb/  

This command listed the files that were modified at the source while the copy was in progress. Remember, we started copying the database files while MongoDB was still running, in order to avoid a long downtime.
Once the rsync dry run listed the file names, we copied them individually using scp (with the command given above). We repeated this twice to bring the number of changed files down to a minimum. Once that number was small, we issued db.fsyncLock() on the primary so that it stopped writes to the database. Caution: this command locks the database and makes it non-operational, so be sure to plan this downtime. Keep the mongo shell open until the rsync completes.

rsync -avz --del -e "ssh -c arcfour128" --log-file="/home/rsyncuser/rsync2.log" rsyncuser@192.168.1.150:/mongodb/database/ /mongodb/  

Once the rsync completes, you can start the mongoDB service in replica set mode on the secondary server, and then issue db.fsyncUnlock() on the source server in the shell that is still open.
Check the replication status by issuing rs.status() on any of the nodes.
Please like my post if this helped you.

Wednesday, 10 September 2014

V2V Migrate Windows VMs from Citrix XenServer to RHEV

In this blog, I have described the procedure for migrating a Windows 2008 R2 guest VM from Citrix XenServer to the Red Hat Enterprise Virtualization (RHEV) platform. Since this is essentially a XenServer V2V, the same method can be used for other V2V migrations. It is a rather safe method since there are no changes to the source VM. I have used the disk cloning method to transfer the VM from Citrix XenServer to RHEV. However, after the VM is transferred to RHEV, the Windows guest will initially throw a Blue Screen error. I have included the workaround to get past the blue screen and run the Windows guest normally. The tool used here for cloning is Clonezilla. The Clonezilla live CD can have a blank-screen issue in Citrix XenServer, so I have used the Arch Linux live CD, which is bundled with Clonezilla.

The step-by-step procedure to convert a Windows 2008 R2 guest VM from Citrix XenServer to Red Hat RHEV is given below.

Step 1: Download the Arch Linux live CD ISO image and upload it to your XenServer ISO storage. Shut down the guest VM you want to migrate and boot it from the Arch Linux live CD. If you don't have an ISO mount created in XenServer, you may follow these steps to add one. Note: please choose "Boot Arch Linux (i686)" while booting; booting into x86_64 might fail or end up with a blank screen.

Step 2: Once booted into the live CD, type clonezilla and press Enter. This will take you through the disk cloning process. Follow these steps to create a clone image of your Windows guest. Skip the initial steps in the Clonezilla guide and start from 'Choose "device-image" option'. It is too tedious to map an external disk to the hypervisor and then mount it in Arch Linux, so avoid using a local disk for storing the image. Clonezilla gives you multiple options to store the image on the network; I chose ssh-server and saved it on the RHEV Manager. It might take quite a while to create the image of your disk, depending on the size of the disk and the data on it.

Step 3: Create a new VM in RHEV. Keep the guest disk size the same as the source guest VM in XenServer. For example, if the disk you cloned above is 500GB, create a 500GB disk for the guest VM you create in RHEV.

Step 4: Upload the same Arch Linux ISO image that you downloaded above to your ISO repository in RHEV. Select "Run Once", choose the Arch Linux ISO image and boot from it.

Step 5: Once booted into the Arch Linux live CD, type clonezilla and press Enter. Follow these steps to restore the disk image created in Step 2 to the guest VM newly created in Step 3. Skip the initial steps in the Clonezilla guide and start from 'Choose "device-image" option'. Choose the image that you created in Step 2 and restore it. It might take quite a while to restore the image.

Step 6: This step is important. After you restore the image and boot the Windows guest in RHEV, it will throw a Blue Screen. Don't panic: it is just because the guest VM doesn't have the VirtIO SCSI drivers yet, so it cannot detect the hard disk.

Now shut down the guest VM in RHEV. You need to start the VM again by clicking on 'Run Once'.


Mount the 'virtio-drivers' floppy image.

Step 7: Boot the guest VM into recovery mode using "Launch Startup Repair". If you don't see that option, press F8 immediately after POST and select "Repair Your Computer".

Once booted into recovery mode, select the keyboard layout and click on next.

Step 8: In "System Recovery Options", click on "Load Drivers".
Select the floppy drive and browse to A:\amd64\Win2008R2\.
This will list the drivers available on the floppy disk. Select viostor.inf.

In the next screen, Choose "Red Hat VirtIO SCSI Controller" and click on "Add Drivers...".

This should detect the hard disk and show you "Windows Server 2008 R2". Select "Restore your computer ..." and click "Next".
The next step will try to detect a system restore image. Ignore the error and click "Cancel" to exit the image restore process.
In the next screen, click on "Command Prompt" to open a command window.

Step 9: At the command prompt, type 'diskpart' and press Enter, then type 'list volume' to find the drive letter of your Windows guest disk; use that letter in place of E:\ in the Dism command below.
Now you need to load the VirtIO drivers into your Windows guest. To do so, type the command below and press Enter.
Dism /image:E:\ /Add-Driver /driver:A:\amd64\Win2008R2 /recurse
You will see a success message once the drivers are loaded into Windows guest.

Step 10: Now exit the "Command Prompt" and reboot the Windows guest VM. After the reboot, boot the Windows guest VM into normal mode.
This workaround worked fine in my environment. I had a Windows 2008 R2 guest VM with a single 100GB disk running on Citrix XenServer 6.2. This procedure should work for V2V and P2V conversions from other virtualization software to Red Hat RHEV, and it might also work for the other flavors of Windows 2008 and Windows 2012. You just need to change the path of the drivers in the command in Step 9.

Please like my post if this helped you.

Run VMs from Local Storage (NFS) of a RHEV Host

Red Hat Enterprise Virtualization (RHEV) supports using local storage, but with limitations: to use the local storage of a RHEV-H host, you have to isolate that host and create a separate data center with a separate cluster. This may not be good enough for SMEs that have a few servers in a pool and a limited budget for expensive shared storage. In this blog, I describe a design that allows local storage to be used in a data center with multiple hosts and shared storage. With this design, you can use a host to run VMs from both local storage and shared storage, so VMs running from shared storage can still fail over automatically (HA) to a different host in the same data center and cluster. Before you implement this design, make sure you comply with Red Hat subscription requirements.


Step 1: The prerequisite here is to use Red Hat Enterprise Linux (RHEL) as the hypervisor. The installation procedure is the same as a usual RHEL installation, but keep it a minimal installation so that unused services do not consume hypervisor resources.
Caution: This installation procedure will completely erase all data on your server. Make sure you know what you are doing before you proceed with the installation process.
I have not included all the screenshots of the installation process since they are available on multiple web sites. Please go through these steps and follow them up to point 8.

Use the basic storage


Choose the local hard disk on the server. Caution: This will erase all the data on the server.


Use all Space on the server.

Choose Minimal, select "Customize now" and click Next.


In the package selection, choose the NFS and Virtualization components. Keeping the packages to a minimum reduces resource consumption on the server.


Once the installation is completed, make sure you add a subscription using the subscription-manager command line tool. The steps are given here. Once added, update your installation by issuing the command:

#yum update
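For reference, the registration itself usually looks something like the following; this is a sketch, and the auto-attach option assumes a suitable subscription is available on your account.

#subscription-manager register
#subscription-manager attach --auto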

Step 2: After the installation is completed, you will need to configure the NFS ports on your newly installed server. Log in to the newly installed Red Hat Enterprise Linux 6.5 server and edit /etc/sysconfig/nfs.

The example configuration that works is given below.

#vi /etc/sysconfig/nfs

#
# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
#MOUNTD_NFS_V2="no"
#MOUNTD_NFS_V3="no"
#
#
# Path to remote quota server. See rquotad(8)
#RQUOTAD="/usr/sbin/rpc.rquotad"
# Port rquotad should listen on.
RQUOTAD_PORT=875
# Optional options passed to rquotad
#RPCRQUOTADOPTS=""
#
#
# Optional arguments passed to in-kernel lockd
#LOCKDARG=
# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769
#
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Number of nfs server processes to be started.
# The default is 8.
#RPCNFSDCOUNT=8
# Stop the nfsd module from being pre-loaded
#NFSD_MODULE="noload"
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
#
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
#RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
#
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
#STATDARG=""
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
#RPCIDMAPDARGS=""
#
# Set to turn on Secure NFS mounts.
#SECURE_NFS="yes"
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
#RPCGSSDARGS=""
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
#RPCSVCGSSDARGS=""
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
RDMA_PORT=20049


Then configure the firewall on the newly installed Red Hat Enterprise Linux 6.5 server.

The example configuration that works is given below. If you are changing any port number in the above configuration, change the firewall configuration below accordingly. 

#vi /etc/sysconfig/iptables


# Generated by iptables-save v1.4.7 on Thu Aug 14 23:31:19 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [486:86645]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT

#for nfs exports
# Portmapper (rpcbind on RHEL6)
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT

# mountd; NFS MOUNTD_PORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 892 -j ACCEPT
-A INPUT -p tcp --dport 892 -j ACCEPT

# rquotad; NFS RQUOTAD_PORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 875 -j ACCEPT
-A INPUT -p tcp --dport 875 -j ACCEPT

# NFS STATD_PORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 662 -j ACCEPT
-A INPUT -p tcp --dport 662 -j ACCEPT

# nfsd for nfs and nfs_acl
-A INPUT -p tcp --dport 2049 -j ACCEPT

# nlockmgr; NFS LOCKD_TCPPORT (defined in /etc/sysconfig/nfs)
-A INPUT -p tcp --dport 32803 -j ACCEPT

# NFS LOCKD_UDPPORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 32769 -j ACCEPT

-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu Aug 14 23:31:19 2014


Then create a folder on the storage you would like to use and add it to the exports configuration.

#mkdir /home/data/images/rhev
#chmod 777 /home/data/images/rhev
#vi /etc/exports
Here is a sample /etc/exports configuration. Change the IPs as appropriate.

/home/data/images/rhev 192.168.1.21(rw) 192.168.1.22(rw) 192.168.1.23(rw) 192.168.1.24(rw) 192.168.1.25(rw) 192.168.1.27(rw) 192.168.1.28(rw) 127.0.0.1(rw)

Once done, restart the NFS and iptables services:
#service nfs restart
#service iptables restart
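To verify that the export is visible and the fixed ports are registered, a couple of standard checks help (these verification commands are my addition).

#showmount -e localhost
#rpcinfo -p | grep -E 'mountd|nlockmgr|status|nfs'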

Step 3: After completing Step 2, add the hypervisor to the cluster using RHEV-Manager, configure it appropriately and bring it up. Then go to Storage under your Data Center in RHEV and add a new storage domain as below. It is recommended to include the hypervisor hostname in the storage name so that you don't get confused later about which host it should run from. It is also recommended that all the VMs stored on a hypervisor's local storage be run only from that hypervisor. For example, in the case below, all the VMs stored on LocalStorageOnHost should run only from Host1.


Please like my post if this helped you.