
Thursday, February 7, 2013

Scanning LUNs on Solaris and Mounting Them

Here are the steps to scan LUNs on Solaris and mount them.

Step 1: Display the HBAs
example:
bash-3.00# luxadm -e port
/devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl CONNECTED
/devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED

The output above shows two HBAs.

Step 2: Get the WWNs of the HBAs
bash-3.00# fcinfo hba-port -l |grep HBA
HBA Port WWN: 21000024ff295a34
HBA Port WWN: 21000024ff295a35

Step 3:

            i) Get the LUNs the Solaris OS already knows about
bash-3.00# fcinfo remote-port -sl -p 21000024ff295a34 > 21000024ff295a34.out
bash-3.00# fcinfo remote-port -sl -p 21000024ff295a35 > 21000024ff295a35.out
bash-3.00# cfgadm -al -o show_SCSI_LUN > currentLUNs.out
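
The two fcinfo captures above can also be scripted so that every HBA port on the host is saved automatically. This is a sketch, not from the original post: the WWNs are parsed out of the fcinfo listing shown in Step 2, and fcinfo itself only exists on Solaris (the awk extraction is portable).

```shell
# Sketch: save remote-port details for every HBA port WWN the host reports.
# "HBA Port WWN: <wwn>" lines put the WWN in awk field 4.
fcinfo hba-port -l | awk '/HBA Port WWN:/ {print $4}' |
while read wwn; do
    fcinfo remote-port -sl -p "$wwn" > "$wwn.out"
done
```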

           ii) To get additional information, such as link statistics, for an HBA
bash-3.00# fcinfo hba-port -l 21000024ff295a35
HBA Port WWN: 21000024ff295a35
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3356-02
Firmware Version: 05.03.02
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402R00-1023835638
Driver Name: qlc
Driver Version: 20100301-3.00
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 20000024ff295a37
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
bash-3.00#

Step 4: For each entry in /dev/fc, issue a "luxadm -e dump_map" command to view all the devices visible through that HBA port (or run it only against a specific HBA)
bash-3.00# luxadm -e port
/devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl CONNECTED
/devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED

bash-3.00# luxadm -e dump_map /devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl
bash-3.00# luxadm -e dump_map /devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl

Step 5: Map a LUN from the storage system to the Solaris host and then scan for it
bash-3.00# devfsadm # This scans for new LUNs, but on Solaris 10 new LUNs should be discovered automatically

bash-3.00# cfgadm -al #New LUNs show as unconfigured

Example:
c1::2200000c50401277 disk connected unconfigured unknown
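
To list only the attachment points that still need configuring, the cfgadm output can be filtered. This is a sketch based on the sample line above; in "cfgadm -al" output the occupant state is the fourth column.

```shell
# Sketch: print the Ap_Id of every attachment point whose occupant state
# is "unconfigured" (column 4 of "cfgadm -al" output).
cfgadm -al | awk '$4 == "unconfigured" {print $1}'
```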

Step 6: Configure the LUN using "cfgadm"
bash-3.00# cfgadm -c configure c1::2200000c50401277

bash-3.00# cfgadm -c configure c1 # We can also do it at controller level
bash-3.00# cfgadm -c configure c2

Troubleshooting:

 i) If the LUNs are still not visible to the Solaris host

  • Add the LUN id to /kernel/drv/sd.conf (a plain text file) using your preferred editor
  • Then run "update_drv -f sd"; Solaris 9 and 10 do not require this step
  • Then scan for the LUNs with "devfsadm"

bash-3.00# vi /kernel/drv/sd.conf # Add LUN id
bash-3.00# update_drv -f sd
bash-3.00# devfsadm

ii) If that still does not work, perform a reconfiguration reboot of the host
bash-3.00# reboot -- -r

Step 7: Get the LUN information for mounting by running the luxadm command
bash-3.00# luxadm probe
No Network Array enclosures found in /dev/es
Found Fibre Channel device(s):
Node WWN:200400a0b821eab1 Device Type:Disk device
Logical Path:/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2

Step 8: Label the disk using the format command, selecting the drive that matches the luxadm output
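
An abbreviated format session for this step might look like the following. The disk index and prompts here are illustrative, not captured from a real host:

```
bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c6t600A0B800021E8B90000536B456B26B3d0 <...>
Specify disk (enter its number): 0
format> label
Ready to label disk, continue? y
format> quit
```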

Step 9: Create a new filesystem on the LUN
bash-3.00# newfs /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2

Step 10: Edit /etc/vfstab to auto mount the new LUN on reboot
bash-3.00# vi /etc/vfstab # The entry below, shown in double quotes, must be on a single line
"/dev/dsk/c6t600A0B800021E8B90000536B456B26B3d0s2 /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2 /tibco ufs 1 yes logging"

Step 11: You can mount it manually using the following command
bash-3.00# mount /tibco

Step 12: Verifying the mount point
bash-3.00# mount | grep tibco

/tibco on /dev/dsk/c6t600A0B800021E8B90000536B456B26B3d0s2 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1d80022

Step 13: You are done... start writing files to the disk.


Wednesday, February 6, 2013

Oplocks option in NetApp qtree


Oplocks

    Oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to that file, which improves performance by reducing network traffic.

    By default, oplocks are enabled for each qtree. If you disable oplocks for the entire storage system, oplocks are not sent even if you enable oplocks on a per-qtree basis.

When to use oplocks

    If a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must relinquish the oplock and its access to the file. The redirector must then invalidate cached data and flush writes and locks, resulting in possible loss of data that was to be written.
CIFS oplocks on the storage system are on by default. You might turn CIFS oplocks off under either of the following circumstances (otherwise, you can leave CIFS oplocks on):
  • You are using a database application whose documentation recommends that oplocks be turned off.
  • You are handling critical data and cannot afford even the slightest data loss.

Enabling CIFS oplocks for a specific volume or qtree

    If you've previously disabled CIFS oplocks for a specific volume or qtree, and now you want to reenable them, you can do so by using the qtree oplocks command.

Step 1:
             Ensure that the cifs.oplocks.enable option is set to on. Otherwise, enabling CIFS oplocks for a specific volume or qtree has no effect.
qtree oplocks path enable

Step 2: ( example )

            To enable CIFS oplocks on the "bigdata" qtree in "vol10", use the following commands

netapp1> options cifs.oplocks.enable on
netapp1> qtree oplocks /vol/vol10/bigdata enable

Step 3:
             You can verify the change by using the qtree status command, passing the name of the containing volume if you updated CIFS oplocks for a qtree.
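
For example, to verify the qtree from Step 2 (command only; the status output itself varies by release, so it is not reproduced here):

```
netapp1> qtree status vol10
```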

Tuesday, February 5, 2013

How to create an aggregate using command line

What is an Aggregate in NetApp

            Disks are combined into RAID groups, usually 14+2 using RAID-DP (or smaller n+2 groups). RAID groups are then concatenated into an aggregate. Once the aggregate is set up, disks can be added to it to expand capacity, following the RAID group policy automatically. Volumes are then created within the aggregate and can contain file systems and/or LUNs. Volumes are flexible, so they can be shrunk or grown on the fly.


Creating an Aggregate:
aggr create <aggr_name> [-m] -r <raid_size> -t <raid_type> <total_number_of_disks>


Setup a snap reserve on the Aggregate:
snap reserve -A <aggr_name> <snap_reserve_percentage>


Schedule a snapshot on Aggregate:
snap sched -A <aggr_name> <snap_sched>
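
Putting the three commands together, a hedged example: the aggregate name, RAID size, disk count, reserve percentage, and schedule below are made-up values for illustration, not taken from a real filer.

```
netapp1> aggr create aggr1 -r 16 -t raid_dp 32
netapp1> snap reserve -A aggr1 5
netapp1> snap sched -A aggr1 0 1 4@9,14,19
```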

Friday, February 1, 2013

NetAPP Command Quick Reference



Commands:

sysconfig -a : shows hardware configuration with more verbose information
sysconfig -d : shows information of the disk attached to the filer
version : shows the NetApp ONTAP OS version.
uptime : shows the filer uptime
dns info : this shows the dns resolvers, the no of hits and misses and other info
nis info : this shows the nis domain name, yp servers etc.
rdfile : Like "cat" in Linux, used to read contents of text files
wrfile : Creates/Overwrites a file. Similar to "cat > filename" in Linux
aggr status : Shows the aggregate status
aggr status -r : Shows the raid configuration, reconstruction information of the disks in filer
aggr show_space : Shows the disk usage of the aggregate, WAFL reserve, overheads etc.
vol status : Shows the volume information
vol status -s : Displays the spare disks on the filer
vol status -f : Displays the failed disks on the filer
vol status -r : Shows the raid configuration, reconstruction information of the disks
df -h : Displays volume disk usage
df -i : Shows the inode counts of all the volumes
df -Ah : Shows "df" information of the aggregate
license : Displays/add/removes license on a netapp filer
maxfiles : Displays and adds more inodes to a volume
aggr create : Creates aggregate
vol create : Creates volume in an aggregate
vol offline : Offlines a volume
vol online : Onlines a volume
vol destroy : Destroys and removes a volume
vol size [+|-] : Resize a volume in netapp filer
vol options : Displays/Changes volume options in a netapp filer
qtree create : Creates qtree
qtree status : Displays the status of qtrees
quota on : Enables quota on a netapp filer
quota off : Disables quota
quota resize : Resizes quota
quota report : Reports the quota and usage
snap list : Displays all snapshots on a volume
snap create : Create snapshot
snap sched : Schedule snapshot creation
snap reserve : Display/set snapshot reserve space in volume
/etc/exports : File that manages the NFS exports
rdfile /etc/exports : Read the NFS exports file
wrfile /etc/exports : Write to NFS exports file
exportfs -a : Exports all the filesystems listed in /etc/exports
cifs setup : Setup cifs
cifs shares : Create/displays cifs shares
cifs access : Changes access of cifs shares
lun create : Creates iscsi or fcp luns on a netapp filer
lun map : Maps lun to an igroup
lun show : Show all the luns on a filer
igroup create : Creates netapp igroup
lun stats : Show lun I/O statistics
disk show : Shows all the disk on the filer
disk zero spares : Zeros the spare disks
disk_fw_update : Upgrades the disk firmware on all disks
options : Display/Set options on netapp filer
options nfs : Display/Set NFS options
options timed : Display/Set NTP options on netapp.
options autosupport : Display/Set autosupport options
options cifs : Display/Set cifs options
options tcp : Display/Set TCP options
options net : Display/Set network options
ndmpcopy : Initiates ndmpcopy
ndmpd status : Displays status of ndmpd
ndmpd killall : Terminates all the ndmpd processes.
ifconfig : Displays/Sets IP address on a network/vif interface
vif create : Creates a VIF (bonding/trunking/teaming)
vif status : Displays status of a vif
netstat : Displays network statistics
sysstat -us 1 : begins a 1 second sample of the filer's current utilization (Ctrl-C to end)
nfsstat : Shows nfs statistics
nfsstat -l : Displays nfs stats per client
nfs_hist : Displays nfs histogram
statit : begins/ends a performance workload sampling [-b starts / -e ends]
stats : Displays stats for every counter on netapp. Read stats man page for more info
ifstat : Displays Network interface stats
qtree stats : displays I/O stats of qtree
environment : display environment status on shelves and chassis of the filer
storage show : Shows storage component details
snapmirror initialize : Initialize a snapmirror relation
snapmirror update : Manually update snapmirror relation
snapmirror resync : Resyncs a broken snapmirror
snapmirror quiesce : Quiesces a snapmirror relation
snapmirror break : Breaks a snapmirror relation
snapmirror abort : Abort a running snapmirror
snapmirror status : Shows snapmirror status
lock status -h : Displays locks held by filer
sm_mon : Manage the locks
storage download shelf : Installs the shelf firmware
software get : Download the Netapp OS software
software install : Installs OS
download : Updates the installed OS
cf status : Displays cluster status
cf takeover : Takes over the cluster partner
cf giveback : Gives back control to the cluster partner
reboot : Reboots a filer

Sunday, August 12, 2012

Find Certificate Expiry Time and Date



Certificate expiry Time and Date

1. This script compares the certificate expiry date with the system date and sends a
warning mail when the certificate is exactly 30 days from expiring.

#!/bin/sh

# Expiry date of the certificate, e.g. "Mar 10 2023"
CertExpires=`openssl x509 -in /path/to/cert.pem -inform PEM -text \
-noout -enddate | grep "Not After" | awk '{print $4, $5, $7}'`

# Today's date plus 30 days, in the same "Mon DD YYYY" form
TodayPlus30=`date -ud "+30 day" | awk '{print $2, $3, $6}'`

if [ "$CertExpires" = "$TodayPlus30" ]
then
echo "Your SSL Cert will expire in 30 days." | mail -s "SSL Cert Monitor" email@removed
fi
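
The string comparison above only fires on the one day that is exactly 30 days before expiry. A sketch that compares epoch seconds instead (assuming GNU date with -d/-u support; the certificate path is the same placeholder as above) warns whenever 30 days or fewer remain:

```shell
#!/bin/sh
# Sketch: warn when the certificate expires in 30 days or fewer.
CERT=${1:-/path/to/cert.pem}

# "notAfter=Mar 10 12:00:00 2033 GMT" -> "Mar 10 12:00:00 2033 GMT"
end=`openssl x509 -in "$CERT" -noout -enddate | sed 's/notAfter=//'`

end_epoch=`date -ud "$end" +%s`   # expiry as seconds since the epoch
now_epoch=`date -u +%s`
days_left=$(( (end_epoch - now_epoch) / 86400 ))

if [ "$days_left" -le 30 ]; then
    echo "Your SSL Cert will expire in $days_left day(s)." | mail -s "SSL Cert Monitor" email@removed
fi
```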
2. This script fetches the certificate from a remote host and prints its expiry date and time.

#!/bin/sh
#
# usage: scriptname remote.host.name [port]
#

REMHOST=$1
REMPORT=${2:-443}

# Grab the server certificate from the TLS handshake
echo |\
openssl s_client -connect ${REMHOST}:${REMPORT} 2>&1 |\
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ~/certificate

DATEC=`openssl x509 -in ~/certificate -inform PEM -text -noout \
-enddate | grep "Not After" | awk '{print $4, $5, $7}'`
TIMEC=`openssl x509 -in ~/certificate -inform PEM -text -noout \
-enddate | grep "Not After" | awk '{ print $6 }'`

echo ExpireDate is $DATEC
echo ExpireTime is $TIMEC

rm -f ~/certificate

Friday, April 13, 2012

Howto: Use tar Command Through Network Over SSH Session


How do I use tar command over secure ssh session?

The GNU version of the tar archiving utility (and other older versions of tar) can be used over the network through an SSH session. Do not use the telnet command; it is insecure. You can use Unix/Linux pipes to create archives remotely. The following command backs up the /wwwdata directory to the dumpserver.backup.com (IP 192.168.1.201) host over an SSH session.
The default first SCSI tape drive under Linux is /dev/st0; the non-rewinding version of the same drive is /dev/nst0, which is used below.
# tar zcvf - /wwwdata | ssh root@dumpserver.backup.com "cat > /backup/wwwdata.tar.gz"
OR
# tar zcvf - /wwwdata | ssh root@192.168.1.201 "cat > /backup/wwwdata.tar.gz"
Output:
tar: Removing leading `/' from member names
/wwwdata/
/wwwdata/n/nixcraft.in/
/wwwdata/c/cyberciti.biz/
....
..
...
Password:
You can also use the dd command, which reports how much data was transferred:
# tar cvzf - /wwwdata | ssh root@192.168.1.201 "dd of=/backup/wwwdata.tar.gz"
It is also possible to dump backup to remote tape device:
# tar cvzf - /wwwdata | ssh root@192.168.1.201 "cat > /dev/nst0"
Or you can use mt to rewind the tape and then dump to it using the cat command:
# tar cvzf - /wwwdata | ssh root@192.168.1.201 "mt -f /dev/nst0 rewind; cat > /dev/nst0"
You can restore a tar backup over an SSH session:
# cd /
# ssh root@192.168.1.201 "cat /backup/wwwdata.tar.gz" | tar zxvf -
If you wish to use the above commands in a cron job or script, consider SSH keys to get rid of the password prompts.
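
A minimal sketch of that key setup. The key path is an example of my choosing, and ssh-copy-id must be run once, interactively, against the backup server from the examples above:

```shell
# Sketch: create a dedicated passphrase-less key for the backup cron job.
KEY=${KEY:-$HOME/.ssh/backup_key}
mkdir -p "`dirname "$KEY"`"
ssh-keygen -t rsa -b 4096 -N "" -f "$KEY" -q

# One-time, interactive: install the public key on the backup server,
# then point tar's ssh at the key:
#   ssh-copy-id -i $KEY.pub root@192.168.1.201
#   tar zcvf - /wwwdata | ssh -i $KEY root@192.168.1.201 "cat > /backup/wwwdata.tar.gz"
```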

Monday, April 9, 2012

Steps To Create A Startup Script

Step 1: Create a file
$ touch /etc/init.d/jira

Step 2: Use your favorite editor (e.g. vi) and paste the content below into the file


#!/bin/sh

case "$1" in

start)
/opt/jira/startup.sh
;;

stop)
/opt/jira/stop
# "ps aux | grep java" always exits 0 because grep matches its own
# process, so use pgrep to check whether java is still running
if pgrep java > /dev/null; then
sudo killall java
else
echo "Application is stopped"
fi
;;

restart)

$0 stop && sleep 5
$0 start

;;

reload)

$0 stop
$0 start
;;

*)

echo "Usage: $0 {start|stop|restart|reload}"
exit 1
esac

Step 3: Change the file permissions

chmod +x /etc/init.d/jira
Step 4: Auto-starting the application


 # update-rc.d jira defaults
Adding system startup for /etc/init.d/jira ...
/etc/rc0.d/K20jira -> ../init.d/jira
/etc/rc1.d/K20jira -> ../init.d/jira
/etc/rc6.d/K20jira -> ../init.d/jira
/etc/rc2.d/S20jira -> ../init.d/jira
/etc/rc3.d/S20jira -> ../init.d/jira
/etc/rc4.d/S20jira -> ../init.d/jira
/etc/rc5.d/S20jira -> ../init.d/jira
 


Step 5: Manual start and restart of the application


 /etc/init.d/jira start