UNIX MATRIX - Where there is a shell, there is a way


UNIX Document Center

Saturday, August 8, 2009

Raid Manager 6.22 and A1000 config

Config and setup

First, install the Raid Manager 6.22 (or 6.221) software on the Solaris 8 system.

 # pkgadd -d . SUNWosar SUNWosafw SUNWosamn SUNWosau  
Depending upon your Raid Manager version and SCSI/fibre card type
you will need to patch the system.
The following patches are recommended for Solaris 8.

Solaris 8 & Raid Manager 6.22: 108553-07, 108982-09, 111085-02
Solaris 8 & Raid Manager 6.221: 112125-01, 108982-09, 111085-02
Ultra 60: 106455-09
Fibre channel card: 109571-02
It is probably worth giving the system a reconfigure reboot at this stage.  
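For reference, a minimal sketch of a reconfigure reboot on Solaris (either form should do):

# touch /reconfigure
# init 6

or, equivalently:

# reboot -- -r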

Firmware

The first thing to do is check the firmware of the A1000. This can be done with the raidutil command. (I assume the A1000 is on controller 1; if not, change the controller as appropriate.)

 # raidutil -c c1t0d0 -i  
If the returned values are less than those shown below,
you will have to upgrade the firmware using fwutil.
Product  Revision  0301  
Boot Level        03.01.03.04  
Boot Level Date   07/06/00  
Firmware Level    03.01.03.60  
Firmware Date     06/30/00  
To upgrade the firmware perform the following.   
# cd /usr/lib/osa/fw  
# fwutil 02050632.bwd c1t0d0  
# fwutil 02050632.apd  c1t0d0  
# fwutil 03010233.bwd  c1t0d0  
# fwutil 03010235.apd  c1t0d0  
# fwutil 03010304.bwd  c1t0d0  
# fwutil 03010360.apd  c1t0d0  
You can now run the "raidutil -c c1t0d0 -i" command again to verify the firmware changes. 

Clean up the array

I am assuming that the array is entirely free for our use, and intend to remove any old LUNs that might be lying around.

 # raidutil -c c1t0d0 -X 
The above command resets the array internals. 
We can now remove any old LUNs.  
To do this, run "raidutil -c c1t0d0 -i" and note any LUNs that are configured.  
To delete the LUNs, perform the following command.  
# raidutil -c c1t0d0 -i    
LUNs found on c1t0d0.     
LUN 0    RAID 1    10 MB     
Vendor ID         Symbios    
ProductID         StorEDGE A1000    
Product Revision  0301    
Boot Level        03.01.03.04    
Boot Level Date   07/06/00    
Firmware Level    03.01.03.60    
Firmware Date     06/30/00    
raidutil succeeded!   
# raidutil -c c1t0d0 -D 0 
In the above example we are removing LUN 0.  
Repeat this command, changing the LUN number as appropriate.  
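If there are several old LUNs to remove, a small shell loop saves repetition; this is just a sketch, assuming LUNs 0 to 3 exist and the array is on c1t0d0:

# for lun in 0 1 2 3; do raidutil -c c1t0d0 -D $lun; done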
We can now give the array a name of our choice. (Do not use a period in the name.)  
# storutil -c c1t0d0 -n "dragon_array" 

Creating LUNs

The disks are labelled on the front of the A1000 as controller number and disk number separated by a comma, e.g. 1,0 1,2 and 2,0 etc. We refer to the disks without using the comma, so the first disk on controller 1 is disk 10 and the 3rd disk on controller 2 is disk 23. We will use disks on both controllers when creating the mirrors; I am starting with the disks on each controller as viewed from the left. The next stage is to create the LUNs we require. In the example below I will configure a fully populated (12 disk) system with 18GB drives into the following sizes. Here we will use the raidutil command again.

 # raidutil -c controller -n lun_number -l  raid_type  -s  size  -g  disk_list  
LUN 0   Size 8617MB of a striped/mirror configuration across half of the first two disks.   
# raidutil -c c1t0d0 -n 0 -l 1+0 -s 8617 -g 10,20  
LUN 1   Size 8617MB of a striped/mirror configuration across the second half of the first two disks.   
# raidutil -c c1t0d0 -n 1 -l 1+0 -s 8617 -g 10,20  
LUN 2   Size 8617MB of a striped/mirror configuration across half of the next two disks.   
# raidutil -c c1t0d0 -n 2 -l 1+0 -s 8617 -g 11,21  
LUN 3   Size 8617MB of a striped/mirror configuration across the second half of the next two disks.   
# raidutil -c c1t0d0 -n 3 -l 1+0 -s 8617 -g 11,21  
LUN 4   Size 34468MB of a striped/mirror configuration across the next four disks.   
# raidutil -c c1t0d0 -n 4 -l 1+0 -s 34468 -g 12,13,22,23  
LUN 5   Size 17234MB of a striped/mirror configuration across the next two disks.   
# raidutil -c c1t0d0 -n 5 -l 1+0 -s 17234 -g 14,24  
LUN 6   Size 17234MB of a non-mirrored configuration on the next disk.   
# raidutil -c c1t0d0 -n 6 -l 0 -s 17234 -g 15  
This then leaves disk 25 (the last disk on the second controller) free as a hot spare. 
To set up this disk as a hot spare run 
# raidutil -c c1t0d0 -h 25 

Finishing off

We are now ready to perform a reconfigure reboot of the system. When this is done we can format, partition, newfs and mount the disks in the normal way.
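As a rough sketch of those final steps (the device name c1t0d1s0 and mount point /data0 are examples only; use format to see which cXtXdX names the new LUNs appear under):

# format                 (label and partition the new LUNs)
# newfs /dev/rdsk/c1t0d1s0
# mkdir /data0
# mount /dev/dsk/c1t0d1s0 /data0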

Other commands

The following is a list of possibly useful Raid Manager commands:

  • rm6 (GUI interface)
  • drivutil (drive / LUN management)
  • healthck (health check on a RAID module)
  • lad (list array devices)
  • logutil (log formatting program)
  • nvutil (edit / modify NVSRAM)
  • parityck (parity checker and repair)
  • rdacutil (redundant controller failover/failback and load balancing)
  • storutil (host and naming info)
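For example, lad run on its own should list the arrays and the LUNs behind them, and healthck can be pointed at all RAID modules to confirm they are healthy (the -a flag is from memory of RM 6.22 and may differ slightly on your version):

# lad
# healthck -a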


EMC Client installation and checking

Quick guide to installation and to checking that the EMC SAN is attached and working

Solaris

Installing

==========================================================

Install Emulex driver/firmware, san packages (SANinfo, HBAinfo, lputil), EMC powerpath

Use lputil to update firmware

Use lputil to disable boot bios

Update /kernel/drv/lpfc.conf

Update /kernel/drv/sd.conf (example entries after these steps)

Reboot

Install ECC agent

Note: when adding disks on a different FA, a server reboot seemed to be required.
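By way of illustration, the sd.conf update above normally means adding target/LUN entries so the sd driver probes the devices presented on the Emulex (lpfc) HBA. The targets and LUN counts below are examples only; they depend entirely on your zoning and LUN masking:

name="sd" parent="lpfc" target=0 lun=0;
name="sd" parent="lpfc" target=0 lun=1;
name="sd" parent="lpfc" target=1 lun=0;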

List HBAs:

/usr/sbin/hbanyware/hbacmd listHBAS (use to get WWNs)

/opt/HBAinfo/bin/gethbainfo (script wrapped around hbainfo)

grep 'WWN' /var/adm/messages

HBA attributes:

/opt/EMLXemlxu/bin/emlxadm

/usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:49:28:47

HBA port:

/opt/EMLXemlxu/bin/emlxadm

/usr/sbin/hbanyware/hbacmd PortAttrib 10:00:00:00:c9:49:28:47

HBA firmware:

/opt/EMLXemlxu/bin/emlxadm

Fabric login:

/opt/HBAinfo/bin/gethbainfo (script wrapped around hbainfo)

Adding Additional Disks:

cfgadm -c configure c2

Disk available:

cfgadm -al -o show_SCSI_LUN

echo|format

inq (use to get serial numbers)

Labelling:

format

Partitioning:

vxdiskadm

format

Filesystem:

newfs or mkfs
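For example, once PowerPath is installed the LUNs are usually addressed through their emcpower pseudo-devices; the device name and mount point below are purely illustrative:

newfs /dev/rdsk/emcpower0c
mkdir /data01
mount /dev/dsk/emcpower0c /data01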

Linux

Installing

***********************************************************************

Install Emulex driver, san packages (saninfo, hbanyware), firmware (lputil)

Configure /etc/modprobe.conf

Use lputil to update firmware

Use lputil to disable boot bios

Create a new ram disk so changes to modprobe.conf can take effect (see the sketch after these steps).

Reboot

Install ECC agent
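A sketch of the ram-disk rebuild step referenced above, using RHEL-style paths (adjust the initrd name and kernel version for your distribution):

cd /boot
mkinitrd -f initrd-$(uname -r).img $(uname -r)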

List HBAs:

/usr/sbin/hbanyware/hbacmd listHBAS (use to get WWNs)

cat /proc/scsi/lpfc/*

HBA attributes:

/usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:49:28:47

cat /sys/class/scsi_host/host*/info

HBA port:

/usr/sbin/hbanyware/hbacmd PortAttrib 10:00:00:00:c9:49:28:47

HBA firmware:

lputil

Fabric login:

cat /sys/class/scsi_host/host*/state

Disk available:

cat /proc/scsi/scsi

fdisk -l | grep -i Disk | grep sd

inq (use to get serial numbers)

Labelling:

parted -s /dev/sda mklabel msdos (like labelling in solaris)

parted -s /dev/sda print

Partitioning:

fdisk

parted

Filesystem:

mkfs -j /dev/vx/dsk/datadg/vol01
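If the target is a plain ext3 filesystem rather than VxFS, a labelled filesystem can be made like this (the device and label are examples; the VxFS case is shown in the Veritas section later):

mkfs -t ext3 -L data01 /dev/sdb1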

PowerPath

HBA Info:

/etc/powermt display

Disk Info:

/etc/powermt display dev=all

Rebuild /kernel/drv/emcp.conf:

/etc/powercf -q

Reconfigure powerpath using emcp.conf:

/etc/powermt config

Save the configuration:

/etc/powermt save

Enable and Disable HBA cards used for testing:

/etc/powermt display (get card ID)

/etc/powermt disable hba=3072

/etc/powermt enable hba=3072

Tasks:

Assign IP

Change Domain ID

Change Hostname

Setting the Date and Time

A new switch (before it has an IP address) will need to be configured through the serial port.

Default username and password on new Brocade switch is:

username: admin

password: password

Domain ID must be different on each switch.

Default Domain ID is 1.

One of the switches will need a different Domain ID.

The current setup in the non-classified environment:

J2_SANSW01

IP: 172.16.50.5

Domain: 3

J2_SANSW02

IP: 172.16.50.6

Domain: 1

# Note: Brocade will not allow a hostname longer than the above.

A "switchShow" command will show config of switch


Setting the IP Address

After connecting with serial cable, use the "ipaddrset" command to set the IP address.

Example:

J2_SANSW01:admin> ipaddrset 172.16.50.5

Ethernet IP Address [172.16.50.5]:

Ethernet Subnetmask [255.255.255.0]:

Fibre Channel IP Address [0.0.0.0]:

Fibre Channel Subnetmask [0.0.0.0]:

Gateway IP Address [172.16.50.1]:

Issuing gratuitous ARP...Done.

IP address is being changed...Done.

Committing configuration...Done.

Just hit Enter at each prompt if the value shown is correct.

The Fibre Channel IP is not used in this configuration; leave it blank.

The switch can now be reached via telnet.

To set/change the Domain ID

1. Connect to the switch and log in as admin.

2. Enter the switchdisable command to disable the switch.

3. Enter the "configure" command.

4. Enter y after the Fabric Parameters prompt:

Fabric parameters (yes, y, no, n): [no] y

5. Enter a unique domain ID at the Domain prompt. Use a domain ID value from 1 through 239 for

normal operating mode (FCSW compatible). For example:

Domain: (1..239) [1] 3

6. Respond to the remaining prompts (or press Ctrl-d to accept the other settings and exit).

7. Enter the "switchenable" command to reenable the switch.
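Put together, a typical session looks roughly like this (prompts and values follow the non-classified example above; output lines are omitted):

J2_SANSW01:admin> switchdisable
J2_SANSW01:admin> configure
Fabric parameters (yes, y, no, n): [no] y
Domain: (1..239) [1] 3
(press Ctrl-d to accept the remaining settings)
J2_SANSW01:admin> switchenable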


To set/change switchname

Enter the "switchname" command at the command line, using the following syntax:

switchname "newname"

Where "newname" is the new name for the switch.

Example:

J2_SANSW01:admin> switchname J2_SANSW01

Setting the Date and Time

To synchronize local time with an external source

1. Connect to the switch and log in as admin.

2. Enter the following command:

tsclockserver ipaddr

Example:

J2_SANSW01:admin> tsclockserver 172.16.10.1

Updating Clock Server configuration...done.

To set the date and time manually

1. Connect to the switch and log in as admin.

2. Enter the date command at the command line using the following syntax:

date "MMDDhhmmYY"

The values represent the following:

• MM is the month; valid values are 01 through 12.

• DD is the date; valid values are 01 through 31.

• hh is the hour; valid values are 00 through 23.

• mm is minutes; valid values are 00 through 59.

• YY is the year; valid values are 00 through 99 (values greater than 69 are interpreted as 1970 through 1999, and values less than 70 are interpreted as 2000 through 2069).

Example:
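An illustrative invocation (the values here are made up for the sketch: 28 November 2006, 15:30):

J2_SANSW01:admin> date "1128153006"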

Alternatively, you can synchronize the local time with an external clock server as described above.

Ensure the following OS packages/patches are installed

This one should be on the first Solaris install DVD

SUNWqlc

Download these two from SUN if needed

120965-01

119131-13

# Note: If the HBAs are in the server when Solaris is installed, these should all have been installed already

VIEW HBAs:

Once the OS is updated and a "reboot -- -r" has been done, check whether SUN/Solaris is seeing the HBAs.

Run the luxadm command:

luxadm -e port

Example:

luxadm -e port

/devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl CONNECTED

/devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED

The above shows two HBAs, 2 and 2,1.

OK, good to go.

Before assigning LUNs

Collect Current HBA information

fcinfo hba-port -l |grep HBA

HBA Port WWN: 210000e08b1c829a

HBA Port WWN: 210000e08b1c2395

Collect LUNs Solaris already knows about

fcinfo remote-port -sl -p 210000e08b0c5518 > 210000e08b0c5518.out

fcinfo remote-port -sl -p 210100e08b2c5518 > 210100e08b2c5518.out

cfgadm -al -o show_SCSI_LUN > currentLUNs.out

(Note: newly added LUNs, when available, will be seen here as "unconfigured".)
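One way to use that baseline (the file names simply follow on from the commands above) is to re-run the listing once the new LUNs have been presented and diff the two outputs:

cfgadm -al -o show_SCSI_LUN > newLUNs.out
diff currentLUNs.out newLUNs.out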

Deep-scan the LUNs attached to each HBA

This shows the HBAs:

luxadm -e port

/devices/pci@1c,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl CONNECTED

/devices/pci@1c,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl CONNECTED

Run the following command on each controller:

luxadm -e dump_map /devices/pci@1c,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl

luxadm -e dump_map /devices/pci@1c,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl

Assigning LUNs

Now is the time to assign the LUNs on the storage array.

Configure New LUNs on Solaris

LUNs should "just show up" on Solaris 10.

cfgadm -al

Example output

c1::2200000c50401277 disk connected unconfigured unknown

New LUNs show as unconfigured until cfgadm is used.

When the LUNs appear, configure them:

cfgadm -c configure c1::2200000c50401277
The command can also be run globally for each controller:
cfgadm -c configure c1
cfgadm -c configure c2

It does not affect previously configured LUNs.

If the LUNs do not show up, here are a few things to try.

Check for legacy entries in sd.conf. Solaris 10 does not need them, and they just slow down booting.

Entries in this file can, in my experience, interfere with finding new LUNs on Solaris 10.

Update sd.conf

vi /kernel/drv/sd.conf

Add new LUN IDs created on Hitachi

After Solaris 9 this should not be needed; Solaris 10 does not need it.

Instruct Solaris to re-read sd.conf

update_drv -f sd

Solaris 9 and 10 can do this; Solaris 8 will fail.

Scan the SCSI bus so Solaris can see the new LUNs

devfsadm

Find new LUNs

cfgadm -al -o show_SCSI_LUN

(Note: newly added LUNs, when available, will be seen here as "unconfigured".)

If the LUNs still do not appear, a reconfigure reboot of the server will be needed.

reboot -- -r

Create Label for LUN

This is only valid if you are not running Veritas.

Now run a format command

# Note: the disk only needs to be labeled once; the other servers only need to mount it

Run format command

format

A disk looking like the following should show up.

c6t600A0B800021E8B90000536B456B26B3d0

/scsi_vhci/ssd@g600a0b800021e8b90000536b456b26b3

Select the disk number. It will need a volume name if it does not already have one.

Since this is the export directory for Solaris, "export" is a good volume name. Solaris does not like disks without a volume name (label).

Some commands in format to look at are:

format> current

Current Disk = c6t600A0B800021E8B90000536B456B26B3d0

/scsi_vhci/ssd@g600a0b800021e8b90000536b456b26b3

(SUN and Solaris see this as a known disktype)

format> type

select type 19

format> volname

Enter 8-character volume name (remember quotes)[""]:"export"

Ready to label disk, continue? y

format> save

Saving new disk and partition definitions

Enter file name["./format.dat"]:

format> quit

This only needs to be done from one machine

Get LUN Info for Mounting

run the luxadm command to get LUN info

luxadm probe

Example:

root@j2-apps01 # luxadm probe

No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):

Node WWN:200400a0b821eab1 Device Type:Disk device

Logical Path:/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2

Each server will have a different path to the same LUN.

Create New Filesystem on LUN/Volume

Now create a new filesystem on the LUN.

Run newfs command on LUN found in luxadm probe command:

newfs /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2

Example:

root@j2-apps01 # newfs /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2

newfs: construct a new filesystem /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2: (y/n)? y

/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2: 1073676288 sectors in 32766 cylinders of 512 tracks, 64 sectors

524256.0MB in 10922 cyl groups (3 c/g, 48.00MB/g, 5824 i/g)

super-block backups (for fsck -F ufs -o b=#) at:

32, 98400, 196768, 295136, 393504, 491872, 590240, 688608, 786976, 885344,

Initializing cylinder groups:

...............................................................................

...............................................................................

............................................................

super-block backups for last 10 cylinder groups at:

1072703520, 1072801888, 1072900256, 1072998624, 1073096992, 1073195360,

1073293728, 1073392096, 1073490464, 1073588832,

Now the disk is mountable and Solaris can understand it

This only needs to be done from one machine

A LUN is a physical disk to Solaris at this point

Edit /etc/vfstab to mount the new LUN

Now edit /etc/vfstab to mount the new LUN

The new LUN/disk in this example (from luxadm) is c6t600A0B800021E8B90000536B456B26B3d0s2

The device path is:

/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2

Add the following line to /etc/vfstab:

/dev/dsk/c6t600A0B800021E8B90000536B456B26B3d0s2 /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2 /export/home ufs 1 yes logging

Edit this to match the LUN ID of your system.

This entry will mount the filesystem at boot.

Run mount command to test if LUN mounts:

mount /export/home

There is no output if it works

Run the mount command again to see new LUN and mount point

mount

Example:

root# mount |grep export

/export/home on /dev/dsk/c6t600A0B800021E8B90000536B456B26B3d0s2 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1d80022 on Tue Nov 28 15:33:06 2006

Veritas Volume Setup

If you use Veritas, follow these steps.

# instruct veritas to scan for new luns

vxdctl enable

# check for new luns on veritas level

vxdisk -o alldgs list

c7t2d11s2 auto:none - - online invalid

c7t1d12s2 auto:none - - online invalid

# initialize new disks

/etc/vx/bin/vxdisksetup -i c7t2d11

# initialize a new disk group with disk c7t2d11

vxdg init oraclelogs c7t2d11=c7t2d11

# if group already exists

vxdg -g oraclelogs adddisk c7t1d12=c7t1d12

# check the status of the new disks

vxdisk -o alldgs list

# Check freespace of the diskgroup

# vxdg free

# Size can be gotten from vxdg free

# make a volume of max size (41853696)

vxassist -g oraclelogs make oralogvol01 41853696

# If not all space is used, or task is to grow volume

## After making 19g check free space of new Volume

# vxassist -g oraclelogs maxgrow oralogvol01
Volume oralogvol01 can be extended by 2007040 to: 41852928 (20436Mb)

# grow to the space available
vxassist -g oraclelogs growto oralogvol01 41852928

# check the new volume

vxprint -htr

# create a filesystem on the new volume

mkfs -F vxfs /dev/vx/rdsk/oraclelogs/oralogvol01

mkfs -F vxfs /dev/vx/rdsk/oraclelogs/oralogvol02

# make a mount point

mkdir /oralog01

# mount the new filesystem at the new mount point

mount -F vxfs /dev/vx/dsk/oraclelogs/oralogvol01 /oralog01

# verify the new mounted filesystem

cd /oralog01

ls

# verify the size on solaris level

df -h

# make permanent

vi /etc/vfstab

/dev/vx/dsk/oraclelogs/oralogvol01 /dev/vx/rdsk/oraclelogs/oralogvol01 /oralog01 vxfs 2 yes suid

# test vfstab entry

# umount /oralog01

# mount /oralog01