Wednesday, August 1, 2012

MultiNICB and IPMultiNICB

To use NICs with MultiNICB in a service group, first put them into an IPMP group (as shown in the hostname.* files below).

root ~etc #>more hostname.e1000g3
group PROD1 up

root ~etc #>more hostname.e1000g0
sys1 netmask 255.255.255.0 broadcast + group PROD1 up
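
Once both interfaces are plumbed, it's worth verifying the IPMP group at the OS level; each interface should report "groupname PROD1" in its ifconfig output (just a quick sanity check):

root ~etc #>ifconfig e1000g0 | grep groupname
root ~etc #>ifconfig e1000g3 | grep groupname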
We have two systems :-
sys1 & sys2

Add a NIC service group :-

#hagrp -add nicsg
#hagrp -modify nicsg SystemList sys1 0 sys2 1

Add a MultiNICB resource :-

#hares -add mnicb MultiNICB nicsg
#hares -modify mnicb Critical 0
#hares -modify mnicb Device e1000g0 0 e1000g3 1
#hares -modify mnicb MpathdCommand "/usr/lib/inet/in.mpathd -a"
#hares -modify mnicb ConfigCheck 0
#hares -modify mnicb Enabled 1

Add an IPMultiNICB resource :-

#hares -add ipmnicb IPMultiNICB nicsg
#hares -modify ipmnicb Critical 0
#hares -modify ipmnicb Address 192.168.1.100
#hares -modify ipmnicb BaseResName mnicb
#hares -modify ipmnicb NetMask 255.255.255.0
#hares -modify ipmnicb DeviceChoice 0
#hares -modify ipmnicb Enabled 1

#haconf -dump
Online the service group :-
#hagrp -online nicsg -sys sys1
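
Once it comes online, a quick status check never hurts (standard VCS commands):

#hagrp -state nicsg
#hares -state ipmnicb
#hastatus -sum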

NFS service group in VCS

AIM : Build an NFS-type service group that provides redundancy for an NFS share.

Description : We need an NFS share point on one of the nodes of the cluster. Whenever that node goes down, the share point fails over to another node so that the NFS share stays available to the client continuously. That's the whole aim of VCS, right? If anything goes wrong on one node, it should not affect the client.

So here we go :

The 2 most important things to reach our objective :

1. NFS configuration at the operating system level on all the nodes.
2. Hierarchy of resources, i.e. the dependencies between resources.

1. NFS needs a particular set of services NOT to be running at the OS level for it to work properly under VCS. We disable them not just through SMF ( svcadm ) but in the service configuration itself, so they do not get enabled again when the system reboots. So execute the following commands on all the nodes :-


svccfg -s nfs/server setprop "application/auto_enable=false"
svccfg -s nfs/mapid setprop "application/auto_enable=false" 
svccfg -s nfs/nlockmgr setprop "application/auto_enable=false"
svccfg -s nfs/status setprop "application/auto_enable=false"
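
To confirm the change stuck on each node, svccfg can read the property back; every one of these should report false:

#svccfg -s nfs/server listprop application/auto_enable
#svccfg -s nfs/mapid listprop application/auto_enable
#svccfg -s nfs/nlockmgr listprop application/auto_enable
#svccfg -s nfs/status listprop application/auto_enable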

Now we are all set to play with VCS. I mean it's time to decide which resources are required for this service group. In technical language : the "hierarchy of resources".

Keep NFSRestart at the top; all other resources should be children of this resource.

It needs the following 6 resources :-

1. NFSRestart
2. Share

3. DiskGroup
4. Mount

5. IP
6. NIC

Don't get scared about handling this many resources; we have already divided them into 3 sections.

We will take a bottom-up approach. It's not rocket science, trust me. See, my approach is: if anything feels too tough or lengthy, just break it into pieces, and then assemble them at the end.

The first thing we can do is create the service group with a familiar name, just in case we need to recall it: "nfssg".


#hagrp -add nfssg
#hagrp -modify nfssg SystemList sys1 0 sys2 1
#hagrp  -modify nfssg AutoStartList sys1
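
A quick look at the new group before we start hanging resources off it (just a verification step):

#hagrp -display nfssg | egrep 'SystemList|AutoStartList'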


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Later we will make a Mount resource which mounts one of the volumes from a disk group on a mount point, /test.
Then, when the mount point is available, our task is to make a Share resource which NFS-shares this mount point, /test.
You know what, with that we are done with the resources and the service group. Nothing mechanical is needed after that; only a bit of logic, and you will use barely a thousandth of your brain to work out exactly how we should link these resources.



Now resources : (BOTTOM - UP)


Start with the easiest ones, i.e. the NIC and IP resources :

Why this IP ? Because it is used to access the NFS share from the client. ( Remember what we do at the client: mount ip:/nfsshare /mnt, so this IP address will be used by the client to access our share point. )


#hares -add mnicb MultiNICB nfssg
#hares -modify mnicb Critical 0
#hares -modify mnicb Device e1000g0 0
#hares -modify mnicb Enabled 1

#hares -add ipmnicb IPMultiNICB nfssg
#hares -modify ipmnicb Critical 0
#hares -modify ipmnicb Address 192.168.1.100
#hares -modify ipmnicb BaseResName mnicb
#hares -modify ipmnicb NetMask 255.255.255.0
#hares -modify ipmnicb Enabled 1

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now we need a mount point to be shared. This mount point will come from a disk group, as we are using VxVM. So make a DiskGroup resource and name it nfsdg. (Don't confuse it with the group name nfssg.)

#hares -add nfsdg DiskGroup nfssg
#hares -modify nfsdg Critical 0
#hares -modify nfsdg DiskGroup dg1
#hares -modify nfsdg Enabled 1
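
This assumes the disk group dg1 with a volume vol1 (carrying a VxFS file system) already exists on shared storage. If it doesn't, a minimal sketch to create it would look like this; the disk name c1t1d0, the volume size and the names are only placeholders:

#vxdg init dg1 disk01=c1t1d0
#vxassist -g dg1 make vol1 1g
#mkfs -F vxfs /dev/vx/rdsk/dg1/vol1

Also create the mount point directory /test on both nodes (mkdir /test).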



#hares -add nfsmount Mount nfssg
#hares -modify nfsmount Critical 0
#hares -modify nfsmount BlockDevice /dev/vx/dsk/dg1/vol1
#hares -modify nfsmount MountPoint /test
#hares -modify nfsmount FSType vxfs
#hares -modify nfsmount Enabled 1
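
If you like, mount it by hand once to confirm the device path and file system are good, then unmount so VCS can take ownership:

#mount -F vxfs /dev/vx/dsk/dg1/vol1 /test
#umount /test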

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


#hares -add nfsshare Share nfssg
#hares -modify nfsshare Critical 0
#hares -modify nfsshare PathName /test
#hares -modify nfsshare Options %-y
#hares -modify nfsshare Enabled 1
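
Once the group is online (we do that a few steps below), the share can be verified with standard Solaris commands, from the server and from any client:

#share
#showmount -e 192.168.1.100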



The most important resource is the NFSRestart resource; it restarts the NFS services whenever it is called by VCS, usually when the service group is brought online or offline. As it is the most important, we will give it the highest priority and it will be the top resource in the dependency hierarchy. First add it :

#hares -add nfsrestart NFSRestart nfssg
#hares -modify nfsrestart Critical 0
#hares -modify nfsrestart Enabled 1



As we know NFSRestart is the most important, so make it the grandfather, I mean keep it at the top of the dependency tree : NFSRestart -> Share -> Mount -> DiskGroup, and the other one is IP -> NIC. That's it, DONE. We will make 2 dependency trees, not 1, because putting everything in one tree would run into the limit of 5 levels of dependencies in a tree.

#hares -link nfsrestart nfsshare
#hares -link nfsshare nfsmount
#hares -link nfsmount nfsdg

#hares -link ipmnicb mnicb
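
Before saving the configuration, the links can be double-checked:

#hagrp -resources nfssg
#hares -dep | grep nfs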

~~~~~~~~~~~~~DONE ...!!!~~~~~~~~~~~

#haconf -dump -makero

BRING THE SERVICE GROUP ONLINE :-

#hagrp -online nfssg -sys sys1

Clarity of facts :-
1. Here we are working on the NFS server and not on the client. We are providing high availability to the "NFS share".
2. On the client, simply mount it with the "mount" command. If you want to provide HA for that mount point as well, a simple "Mount" type resource will work, with the BlockDevice set to "192.168.1.100:/test".
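
As a rough sketch of point 2, a client-side Mount resource could look like the following; the group name clientsg, resource name clientmnt and mount point /mntnfs are made up for illustration, the key point being FSType nfs and the virtual IP in BlockDevice:

#hares -add clientmnt Mount clientsg
#hares -modify clientmnt Critical 0
#hares -modify clientmnt BlockDevice 192.168.1.100:/test
#hares -modify clientmnt MountPoint /mntnfs
#hares -modify clientmnt FSType nfs
#hares -modify clientmnt Enabled 1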




I/O Fencing

I/O fencing requirements :-

1. An odd number of coordinator disks, at least 3.
2. SCSI-3 persistent reservation support.
3. A separate, deported disk group containing the coordinator disks.
Commands :-

The /etc/vxfenmode file sets the fencing mode.

To disable fencing :-

root ~~ #>cat /etc/vxfenmode
vxfen_mode=disabled

To enable SCSI-3 fencing with the DMP disk policy :-

root ~~ #>cat /etc/vxfenmode
vxfen_mode=scsi3
scsi_disk_policy=dmp

Verify that the disks support SCSI-3 persistent reservations :-

#vxfentsthdw -r -g vxfencoorddg

Initialize the coordinator disk group :-

#vxdg -o coordinator=on init vxfencoorddg disk1=c1t5d0 disk2=c1t5d1 disk3=c1t5d2

Deport the disk group :-

#vxdg deport vxfencoorddg
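
A quick check that the deport worked; a deported disk group shows up in parentheses in the listing:

#vxdisk -o alldgs list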

Create /etc/vxfendg on all systems :-

#echo "vxfencoorddg" > /etc/vxfendg

Create /etc/vxfenmode on all systems :-

#cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode

Start the fencing driver on each system :-

#ls /sbin |grep vxfen
vxfen-shutdown vxfen-startup vxfenadm vxfenconfig vxfend

#/etc/init.d/vxfen start
Or
#/sbin/vxfen-startup

#haconf -dump -makero

Stop VCS :-

#hastop -all -force
Start VCS on each system :-

#hastart

Display fencing membership and state information :-
#vxfenadm -d

See keys on the coordinator disk :-
#vxfenadm -i /dev/vx/rdsk/disk1

Read the existing keys on disks :-
#vxfenadm -s all -f /etc/vxfentab


************************FINISHED********************
SCSI-3 persistent reservation :- it allows the same storage to be accessed by :
1. different systems,
2. the same system through different paths,
3. and it lets the cluster restrict access, so that a node which has been fenced off can no longer write to the storage.

Wednesday, July 18, 2012

VCS Basics

I started exploring VCS a couple of months ago, and the extract of all my experiments is here. These are all working scenarios, and if anything doesn't seem right, please feel free to discuss. I am certified on VCS 5.0 and always look for new challenges and scenarios.
My target through this blog is to give you the actual working environment of all the components of VCS. Any doubts/confusion/suggestions will be highly appreciated.

 

The tutorial will be divided in following groups :-
1. Installation
2. Configuration
3. Service groups
4. Resources
*********************************************************************************
1. Installation

* Requirement :-

1.1) Two nodes with 64 bit Solaris 10 installed.
1.2) SF ( storage foundation, VxVM installed ) 
1.3) HA ( Veritas Cluster Server )

2. Configuration

Here we are planning to bring two nodes, sys1 & sys2, into a cluster. Make entries in the /etc/hosts file so that the hostnames sys1 and sys2 can be resolved properly :-

*Requirement :-
2.1) 3 NICs ( 2 for cluster interconnect and 1 for public access ).

*What is the cluster interconnect ?
It connects the nodes in a cluster through crossover cables, NIC-to-NIC. The nodes communicate using Ethernet (MAC) addresses: no IP is required, and no routers or switches are needed. With more than 2 nodes, use a hub or switch to connect them, with straight-through cables this time.
The third NIC carries the public IP. All 3 links carry heartbeats ( "I am alive" signals ) and are used by each node to make sure all listed nodes are alive in the cluster.

Private networks (cluster interconnect): nxge1 & nxge2. Public NIC: nxge0.

* Three configuration files are required :-
              /etc/llttab , /etc/llthosts and  /etc/gabtab 

They should contain info like this :-

#cat /etc/llttab
set-node sys1
set-cluster 0
link  nxge1  /dev/nxge1 - ether - -
link  nxge2  /dev/nxge2 - ether - -
link-lowpri nxge0 /dev/nxge0 - ether - -

# cat /etc/llthosts
0 sys1
1 sys2

#cat /etc/gabtab
/sbin/gabconfig -c -n 2
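
The "-n 2" tells GAB to seed (form cluster membership) only once 2 nodes are up. If you ever have to start the cluster with just one node available, GAB can be seeded manually, but use this with care:

#gabconfig -c -x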
 
Test LLT/GAB:
On sys1 & sys2 :
#lltstat -nvv | more
#lltconfig
#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   e9ca05 membership 01
Port h gen   e9ca08 membership 01

* Start HA on each node:- 
#hastart

*Check status :-
#hastatus -sum

* NOTE :- Sometimes we need to reboot the nodes after configuring for the first time.