Red Hat Enterprise Virtualization (RHEV) supports local storage, but with limitations: to use the local storage of a RHEV-H host, you have to isolate that host in a separate Data Center with its own cluster. This may not be good enough for SMEs that run a small pool of servers and have a limited budget for expensive shared storage. In this blog, I explain a design that lets you use local storage in a Data Center that also has multiple hosts and shared storage. With this design, a single host can run VMs from both local storage and shared storage, and the VMs running from shared storage can still fail over automatically (HA) to a different host in the same data center and cluster. Before you implement this design, make sure you comply with Red Hat subscription requirements.
Step 1: The prerequisite here is to use Red Hat Enterprise Linux (RHEL) as the hypervisor. The installation procedure is the same as a normal RHEL installation, but keep it a minimal installation so the hypervisor does not waste resources on unused services.
Caution: This installation procedure will completely erase all data on your server. Make sure you know what you are doing before you proceed with the installation process.
I have not included all the screenshots of the installation process, since they are available on many web sites. Please go through these steps and follow them till point 8.
Use the basic storage
Choose the local hard disk on the server. Caution: This will erase all the data on the server.
Choose "Use All Space".
Choose "Minimal", select "Customize now", and click Next.
In the package selection, choose the NFS and Virtualization components. Keeping the package set minimal reduces resource consumption on the server.
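If you missed these components during installation, they can usually be added afterwards with yum. This is only a sketch: the group names below are what I would expect on RHEL 6 media, so confirm them first with yum grouplist before installing.
#yum grouplist | grep -iE 'nfs|virtual'
#yum groupinstall "NFS file server" "Virtualization" "Virtualization Platform"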
Once the installation is completed, make sure you add a subscription using the subscription-manager command line tool. Steps are given here. Once added, update your installation by issuing the command:
#yum update
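For reference, the registration itself comes down to a couple of subscription-manager commands. This is only a sketch; substitute your own Red Hat account, and if auto-attach picks the wrong subscription, attach a specific pool ID instead.
#subscription-manager register --username <your-rh-username>
#subscription-manager attach --auto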
Step 2: After the installation is completed, you will need to pin the NFS-related RPC services (served through portmap/rpcbind) to fixed ports on your newly installed server so they can be allowed through the firewall. Log in to the newly installed Red Hat Enterprise Linux 6.5 server and edit /etc/sysconfig/nfs.
The example configuration that works is given below.
#vi /etc/sysconfig/nfs
#
# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
#MOUNTD_NFS_V2="no"
#MOUNTD_NFS_V3="no"
#
#
# Path to remote quota server. See rquotad(8)
#RQUOTAD="/usr/sbin/rpc.rquotad"
# Port rquotad should listen on.
RQUOTAD_PORT=875
# Optional options passed to rquotad
#RPCRQUOTADOPTS=""
#
#
# Optional arguments passed to in-kernel lockd
#LOCKDARG=
# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769
#
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Number of nfs server processes to be started.
# The default is 8.
#RPCNFSDCOUNT=8
# Stop the nfsd module from being pre-loaded
#NFSD_MODULE="noload"
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
#
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
#RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
#
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
#STATDARG=""
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should use. The default is random.
STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
#RPCIDMAPDARGS=""
#
# Set to turn on Secure NFS mounts.
#SECURE_NFS="yes"
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
#RPCGSSDARGS=""
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
#RPCSVCGSSDARGS=""
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
RDMA_PORT=20049
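Since this host will now double as an NFS server, I would also make sure the relevant services start at boot. This is a standard RHEL 6 step rather than anything specific to this design:
#chkconfig rpcbind on
#chkconfig nfslock on
#chkconfig nfs on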
Then configure the firewall on the newly installed Red Hat Enterprise Linux 6.5 server.
The example configuration that works is given below. If you are changing any port number in the above configuration, change the firewall configuration below accordingly.
#vi /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Thu Aug 14 23:31:19 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [486:86645]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
#for nfs exports
# Portmapper (rpcbind on RHEL6)
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
# mountd; NFS MOUNTD_PORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 892 -j ACCEPT
-A INPUT -p tcp --dport 892 -j ACCEPT
# rquotad; NFS RQUOTAD_PORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 875 -j ACCEPT
-A INPUT -p tcp --dport 875 -j ACCEPT
# NFS STATD_PORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 662 -j ACCEPT
-A INPUT -p tcp --dport 662 -j ACCEPT
# nfsd for nfs and nfs_acl
-A INPUT -p tcp --dport 2049 -j ACCEPT
# nlockmgr; NFS LOCKD_TCPPORT (defined in /etc/sysconfig/nfs)
-A INPUT -p tcp --dport 32803 -j ACCEPT
# NFS LOCKD_UDPPORT (defined in /etc/sysconfig/nfs)
-A INPUT -p udp --dport 32769 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu Aug 14 23:31:19 2014
Then, create a directory on the local storage you would like to export and add it to the exports configuration.
#mkdir /home/data/images/rhev
#chmod 777 /home/data/images/rhev
#vi /etc/exports
Here is a sample configuration of /etc/exports. Change the IPs as appropriate.
/home/data/images/rhev 192.168.1.21(rw) 192.168.1.22(rw) 192.168.1.23(rw) 192.168.1.24(rw) 192.168.1.25(rw) 192.168.1.27(rw) 192.168.1.28(rw) 127.0.0.1(rw)
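If all your hypervisors sit in a single subnet, the same export can be written more compactly with a subnet entry instead of listing every host. This is just an alternative sketch; 192.168.1.0/24 is assumed to match the addresses used in the example above.
/home/data/images/rhev 192.168.1.0/24(rw) 127.0.0.1(rw)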
Once done, restart the NFS and iptables services:
#service nfs restart
#service iptables restart
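At this point you can verify that the export and the fixed ports are in place before moving on to RHEV Manager. These are standard checks, not specific to this setup:
#rpcinfo -p | grep -E 'mountd|nlockmgr|status'
#exportfs -v
#showmount -e localhost
#iptables -L INPUT -n | grep -E '2049|892|662'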
Step 3: After completing step 2, add the hypervisor to the cluster using RHEV Manager, configure it appropriately, and bring it up. Then go to Storage under your Data Center in RHEV and add a new storage domain as shown below. It is recommended to include the hypervisor hostname in the storage domain name so that you do not get confused later about which host the storage belongs to. It is also recommended that all the VMs stored on a hypervisor's local storage be started only from that same hypervisor. For example, in the case below, all the VMs stored on LocalStorageOnHost should run only from Host1.
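Before (or after) adding the storage domain, you can also sanity-check the export from one of the other hypervisors in the cluster. This is only a sketch: 192.168.1.20 stands in for the IP of the hypervisor exporting its local storage, and /mnt/nfscheck is just a throw-away mount point.
#mkdir -p /mnt/nfscheck
#mount -t nfs -o vers=3 192.168.1.20:/home/data/images/rhev /mnt/nfscheck
#touch /mnt/nfscheck/testfile && rm /mnt/nfscheck/testfile
#umount /mnt/nfscheck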
Please like my post if this helped you.