First install the Raid Manager 6.22 (6.221) software on the Solaris 8 system.

Throughout, I assume the A1000 is on controller 1; if not, change the controller as appropriate. I am also assuming that the array is free for our full use, and I intend to remove any old LUNs that might be lying around.

The disks are labelled on the front of the A1000 as controller number and disk number separated by a comma, e.g. 1,0 1,2 and 2,0. We refer to the disks without the comma, so the first disk on controller 1 is disk 10 and the third disk on controller 2 is disk 23. We will use disks on both controllers when creating the mirrors, starting with the disks on each controller as viewed from the left.

Config and setup
# pkgadd -d . SUNWosar SUNWosafw SUNWosamn SUNWosau
Depending upon your Raid Manager version and SCSI/fibre card type,
you will need to patch the system.
The following patches are recommended for Solaris 8.
Solaris 8 & Raid Manager 6.22:   108553-07, 108982-09, 111085-02
Solaris 8 & Raid Manager 6.221:  112125-01, 108982-09, 111085-02
Ultra 60:                        106455-09
Fibre channel card:              109571-02
It is probably worth giving the system a reconfigure reboot at this stage.
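To check whether a patch is already present before adding it (patch IDs as above):
# showrev -p | grep 108553
# patchadd 108553-07
A reconfigure reboot can be done with:
# reboot -- -r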
Firmware

The first thing to do is check the firmware of the A1000. This can be done with the raidutil command:
# raidutil -c c1t0d0 -i
If the returned values are less than those shown below,
you will have to upgrade the firmware using fwutil.
Product Revision 0301
Boot Level 03.01.03.04
Boot Level Date 07/06/00
Firmware Level 03.01.03.60
Firmware Date 06/30/00
To upgrade the firmware perform the following.
# cd /usr/lib/osa/fw
# fwutil 02050632.bwd c1t0d0
# fwutil 02050632.apd c1t0d0
# fwutil 03010233.bwd c1t0d0
# fwutil 03010235.apd c1t0d0
# fwutil 03010304.bwd c1t0d0
# fwutil 03010360.apd c1t0d0
You can now run the "raidutil -c c1t0d0 -i" command again to verify the firmware changes.
Clean up the array
# raidutil -c c1t0d0 -X
The above command resets the array internals.
We can now remove any old LUNs.
To do this, run "raidutil -c c1t0d0 -i" and note any LUNs that are configured.
To delete the LUNs, perform the following command:
# raidutil -c c1t0d0 -i
LUNs found on c1t0d0.
LUN 0 RAID 1 10 MB
Vendor ID Symbios
ProductID StorEDGE A1000
Product Revision 0301
Boot Level 03.01.03.04
Boot Level Date 07/06/00
Firmware Level 03.01.03.60
Firmware Date 06/30/00
raidutil succeeded!
# raidutil -c c1t0d0 -D 0
In the above example we are removing LUN 0.
Repeat this command, changing the LUN number as appropriate; a quick loop also works.
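For example, to delete several stale LUNs in one go (the LUN numbers here are hypothetical):
# for lun in 0 1 2
> do raidutil -c c1t0d0 -D $lun
> done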
We can now give the array a name of our choice. (Do not use a "." in the name.)
# storutil -c c1t0d0 -n "dragon_array"
Creating LUNs

The next stage is to create the LUNs we require. In the example below I will configure a fully populated (12 disk) system with 18GB drives into the following sizes, using the raidutil command again:
# raidutil -c controller -n lun_number -l raid_type -s size -g disk_list
LUN 0: 8617MB striped/mirror (RAID 1+0) across half of the first two disks.
# raidutil -c c1t0d0 -n 0 -l 1+0 -s 8617 -g 10,20
LUN 1: 8617MB striped/mirror across the second half of the first two disks.
# raidutil -c c1t0d0 -n 1 -l 1+0 -s 8617 -g 10,20
LUN 2: 8617MB striped/mirror across half of the next two disks.
# raidutil -c c1t0d0 -n 2 -l 1+0 -s 8617 -g 11,21
LUN 3: 8617MB striped/mirror across the second half of the next two disks.
# raidutil -c c1t0d0 -n 3 -l 1+0 -s 8617 -g 11,21
LUN 4: 34468MB striped/mirror across the next four disks.
# raidutil -c c1t0d0 -n 4 -l 1+0 -s 34468 -g 12,13,22,23
LUN 5: 17234MB striped/mirror across the next two disks.
# raidutil -c c1t0d0 -n 5 -l 1+0 -s 17234 -g 14,24
LUN 6: 17234MB unmirrored (RAID 0) on the next disk.
# raidutil -c c1t0d0 -n 6 -l 0 -s 17234 -g 15
This leaves disk 25 (the fifth disk on the second controller) free as a hot spare.
To set this disk up as a hot spare, run:
# raidutil -c c1t0d0 -h 25
Finishing off

We are now ready to reboot the system, performing a reconfigure. When this is done we can format, partition, newfs and mount the disks in the normal way, as sketched below.
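A sketch of those steps (device names assumed; each LUN typically appears to Solaris as a separate device on the controller, e.g. c1t0d0 through c1t0d6):
# reboot -- -r
# format                        (label and partition the new LUNs)
# newfs /dev/rdsk/c1t0d1s0
# mount /dev/dsk/c1t0d1s0 /mnt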
Other commands

The following is a list of possibly useful Raid Manager commands.
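The RM6 bundle ships several utilities alongside raidutil; the usual set (check /usr/lib/osa/bin on your install) includes:
lad        list array devices and their LUNs
healthck   report array health status
drivutil   drive and LUN management (list, fail, revive drives)
parityck   check and repair LUN parity
rdacutil   RDAC failover and load-balancing control
storutil   host and array naming (used above)
fwutil     controller firmware downloads (used above)
nvutil     view and modify NVSRAM settings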
EMC Client installation and checking

Quick guide to installing, and to checking that the EMC SAN is attached and working.

Solaris Installing
==========================================================
Install Emulex driver/firmware, san packages (SANinfo, HBAinfo, lputil), EMC PowerPath
Use lputil to update firmware
Use lputil to disable boot bios
Update /kernel/drv/lpfc.conf
Update /kernel/drv/sd.conf
Reboot
Install ECC agent
Note: when adding disks on a different FA, we had to reboot the server(?)

List HBAs:
  /usr/sbin/hbanyware/hbacmd listHBAs (use to get WWNs)
  /opt/HBAinfo/bin/gethbainfo (script wrapped around hbainfo)
  grep 'WWN' /var/adm/messages
HBA attributes:
  /opt/EMLXemlxu/bin/emlxadm
  /usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:49:28:47
HBA port:
  /opt/EMLXemlxu/bin/emlxadm
  /usr/sbin/hbanyware/hbacmd PortAttrib 10:00:00:00:c9:49:28:47
HBA firmware:
  /opt/EMLXemlxu/bin/emlxadm
Fabric login:
  /opt/HBAinfo/bin/gethbainfo (script wrapped around hbainfo)
Adding additional disks:
  cfgadm -c configure c2
Disk available:
  cfgadm -al -o show_SCSI_LUN
  echo | format
  inq (use to get serial numbers)
Labelling:
  format
Partitioning:
  vxdiskadm
  format
Filesystem:
  newfs or mkfs

Linux Installing
***********************************************************************
Install Emulex driver, san packages (saninfo, hbanyware), firmware (lputil)
Configure /etc/modprobe.conf
Use lputil to update firmware
Use lputil to disable boot bios
Create a new ram disk so the changes to modprobe.conf can take effect.
Reboot
Install ECC agent

List HBAs:
  /usr/sbin/hbanyware/hbacmd listHBAs (use to get WWNs)
  cat /proc/scsi/lpfc/*
HBA attributes:
  /usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:49:28:47
  cat /sys/class/scsi_host/host*/info
HBA port:
  /usr/sbin/hbanyware/hbacmd PortAttrib 10:00:00:00:c9:49:28:47
HBA firmware:
  lputil
Fabric login:
  cat /sys/class/scsi_host/host*/state
Disk available:
  cat /proc/scsi/scsi
  fdisk -l | grep -i Disk | grep sd
  inq (use to get serial numbers)
Labelling:
  parted -s /dev/sda mklabel msdos (like labelling in Solaris)
  parted -s /dev/sda print
Partitioning:
  fdisk
  parted
Filesystem:
  mkfs -j -L

PowerPath
HBA info:
  /etc/powermt display
Disk info:
  /etc/powermt display dev=all
Rebuild /kernel/drv/emcp.conf:
  /etc/powercf -q
Reconfigure PowerPath using emcp.conf:
  /etc/powermt config
Save the configuration:
  /etc/powermt save
Enable and disable HBA cards (used for testing):
  /etc/powermt display (get card ID)
  /etc/powermt disable hba=3072
  /etc/powermt enable hba=3072
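The "create a new ram disk" step in the Linux install above would, on a RHEL-style system, look something like this (kernel version taken from uname):
# rebuild the initrd so the updated modprobe.conf settings are included
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)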
Brocade SilkWorm 4100 SAN Switch

Tasks:
Assign IP
Change Domain ID
Change Hostname
Setting the Date and Time
username: admin
password: password
Default Domain ID is 1.
One of the switches will need a different Domain ID.
Switch 1 - IP: 172.16.50.5, Domain: 3
Switch 2 - IP: 172.16.50.6, Domain: 1
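A sketch of changing the Domain ID: it is set through the interactive configure command while the switch is disabled (prompts abbreviated; values as planned above):
J2_SANSW01:admin> switchdisable
J2_SANSW01:admin> configure
Configure...
  Fabric parameters (yes, y, no, n): [no] y
    Domain: (1..239) [1] 3
  (accept the remaining defaults)
J2_SANSW01:admin> switchenable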
J2_SANSW01:admin> ipaddrset 172.16.50.5
Ethernet IP Address [172.16.50.5]:
Ethernet Subnetmask [255.255.255.0]:
Fibre Channel IP Address [0.0.0.0]:
Fibre Channel Subnetmask [0.0.0.0]:
Gateway IP Address [172.16.50.1]:
Issuing gratuitous ARP...Done.
IP address is being changed...Done.
Committing configuration...Done.
Note: we are not using the Fibre Channel IP setup in this configuration, so leave those fields blank.
Set the switch name with the switchname command while the switch is in normal operating mode (FCSW compatible). For example:
J2_SANSW01:admin> switchname J2_SANSW01
To synchronize local time with an external source
1. Connect to the switch and log in as admin.
2. Enter the following command:
tsclockserver ipaddr
J2_SANSW01:admin> tsclockserver 172.16.10.1
Updating Clock Server configuration...done.
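Running tsclockserver with no operand displays the currently configured clock server, which is a quick way to verify the change:
J2_SANSW01:admin> tsclockserver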
To set the date and time manually
1. Connect to the switch and log in as admin.
2. Enter the date command at the command line using the following syntax (the quotes are required): date "MMDDhhmmYY", where:
• MM is the month; valid values are 01 through 12.
• DD is the date; valid values are 01 through 31.
• hh is the hour; valid values are 00 through 23.
• mm is minutes; valid values are 00 through 59.
• YY is the year; valid values are 00 through 99 (values greater than 69 are interpreted as
1970 through 1999, and values less than 70 are interpreted as 2000-2069).
Example (hypothetical values: 27 August, 12:30, year 2009):
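J2_SANSW01:admin> date "0827123009"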
You can resynchronize the local time with an external source at any point using tsclockserver, as shown above.
Add and Configure LUNs in Solaris

Ensure the required OS packages/patches are installed; these should be on the first Solaris install DVD.

View HBAs:
Once the OS is updated and a reconfigure reboot (reboot -- -r) has been done, check that Solaris is seeing the HBAs.
luxadm -e port
/devices/pci@1d,700000/SUNW,qlc@2,1/fp@0,0:devctl CONNECTED
/devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
fcinfo hba-port -l |grep HBA
HBA Port WWN: 210000e08b1c829a
HBA Port WWN: 210000e08b1c2395
fcinfo remote-port -sl -p 210000e08b0c5518 > 210000e08b0c5518.out
fcinfo remote-port -sl -p 210100e08b2c5518 > 210100e08b2c5518.out
(Note: added LUNs, when available, will be shown here as "unconfigured".)
This shows the HBAs:
luxadm -e port
/devices/pci@1c,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl CONNECTED
/devices/pci@1c,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl CONNECTED
luxadm -e dump_map /devices/pci@1c,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl
luxadm -e dump_map /devices/pci@1c,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl
Assign LUNs.
cfgadm -al
Example output
c1::2200000c50401277 disk connected unconfigured unknown
cfgadm -c configure c1::2200000c50401277
The command can also be run globally for each controller:
cfgadm -c configure c1
cfgadm -c configure c2
Update sd.conf
My theory is that stale entries in this file can mess up finding new LUNs in Solaris 10.
vi /kernel/drv/sd.conf
Add the new LUN IDs created on the array; see the sample entries below.
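Entries take this general form (the target and LUN numbers here are illustrative only):
name="sd" class="scsi" target=1 lun=1;
name="sd" class="scsi" target=1 lun=2;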
After Solaris 9 this should not be needed; Solaris 10 does not need it.
update_drv -f sd
Solaris 9 and 10 can do this; on Solaris 8 it will fail.
devfsadm
cfgadm -al -o show_SCSI_LUN
(Note: added LUNs, when available, will be shown here as "unconfigured".)
reboot -r
Create Label for LUN
c6t600A0B800021E8B90000536B456B26B3d0
/scsi_vhci/ssd@g600a0b800021e8b90000536b456b26b3
Since this is the export directory for Solaris, "export" is a good volume name. Solaris does
not like disks without a volume name (label).
format> current
Current Disk = c6t600A0B800021E8B90000536B456B26B3d0
/scsi_vhci/ssd@g600a0b800021e8b90000536b456b26b3
(Sun and Solaris see this as a known disk type)
select type 19
Enter 8-character volume name (remember quotes)[""]:"export"
Ready to label disk, continue? y
Saving new disk and partition definitions
Enter file name["./format.dat"]:
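To confirm the label took, prtvtoc can read back the partition table:
# prtvtoc /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2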
root@j2-apps01 # luxadm probe
No Network Array enclosures found in /dev/es
Node WWN:200400a0b821eab1 Device Type:Disk device
Logical Path:/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2
Create New Filesystem on LUN/Volume
root@j2-apps01 # newfs /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2
newfs: construct a new filesystem /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2: (y/n)? y
/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2: 1073676288 sectors in 32766 cylinders of 512 tracks, 64 sectors
524256.0MB in 10922 cyl groups (3 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98400, 196768, 295136, 393504, 491872, 590240, 688608, 786976, 885344,
Initializing cylinder groups:
...............................................................................
...............................................................................
............................................................
super-block backups for last 10 cylinder groups at:
1072703520, 1072801888, 1072900256, 1072998624, 1073096992, 1073195360,
1073293728, 1073392096, 1073490464, 1073588832,
This only needs to be done from one machine:
/dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2
Add an entry to /etc/vfstab so the filesystem mounts at boot. Once mounted it shows as:
/export/home on /dev/dsk/c6t600A0B800021E8B90000536B456B26B3d0s2 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1d80022 on Tue Nov 28
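The matching /etc/vfstab entry would be along these lines (options taken from the mount output above):
/dev/dsk/c6t600A0B800021E8B90000536B456B26B3d0s2 /dev/rdsk/c6t600A0B800021E8B90000536B456B26B3d0s2 /export/home ufs 2 yes logging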
If you use Veritas follow these steps
# instruct Veritas to scan for new LUNs
vxdctl enable
vxdisk -o alldgs list
c7t1d12s2 auto:none - - online invalid
/etc/vx/bin/vxdisksetup -i c7t2d11
vxdg init oraclelogs c7t2d11=c7t2d11
# if group already exists
vxdg -g oraclelogs adddisk c7t1d12=c7t1d12
vxdisk -o alldgs list
# check free space in the disk group
vxdg free
# make a volume of max size (41853696)
vxassist -g oraclelogs make oralogvol01 41853696
# after making 19GB, check free space of the new volume
vxassist -g oraclelogs maxgrow oralogvol01
Volume oralogvol01 can be extended by 2007040 to: 41852928 (20436Mb)
# growto space available
vxassist -g oraclelogs growto oralogvol01 41852928
vxprint -htr
mkfs -F vxfs /dev/vx/rdsk/oraclelogs/oralogvol01
mkfs -F vxfs /dev/vx/rdsk/oraclelogs/oralogvol02
mkdir /oralog01
mount -F vxfs /dev/vx/dsk/oraclelogs/oralogvol01 /oralog01
cd /oralog01
ls
df -h
vi /etc/vfstab
/dev/vx/dsk/oraclelogs/oralogvol01 /dev/vx/rdsk/oraclelogs/oralogvol01 /oralog01 vxfs 2 yes suid
# umount /oralog01
# mount /oralog01