The first article was somewhat specific to NexentaOS 3.x, but most of what I'll write here applies (currently) 1:1 to OpenIndiana and SmartOS Live (once you get it installed on disk). I would not recommend using the UNIX shell within NexentaStor, the appliance OS that ships with a storage management system (NMS) plus its shell tools (NMC) and web UI (NMV): you risk temporarily messing up the management interface.
At this point I expect that you have configured basic zoning on your FC switches. I simply created a zone for each virtualization node and its target node, containing the respective WWNs of those ports (I did not create zones on a per-port basis). - Maybe I'll wrap up the story of updating my Silkworm 200Es from FabOS 5.1 to 6.2 😉
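For reference, WWN-based zoning on a Brocade switch running FabOS looks roughly like the sketch below. The zone name, config name, and WWNs are made-up placeholders, not values from my setup:

```
# Hypothetical sketch -- zone/config names and WWNs are placeholders.
# Create a zone containing the WWN of one initiator HBA and one target port:
zonecreate "akebono_kodama", "10:00:00:00:c9:aa:bb:cc; 21:00:00:e0:8b:11:22:33"
# Add the zone to a configuration, save, and activate it:
cfgadd "fabric_cfg", "akebono_kodama"
cfgsave
cfgenable "fabric_cfg"
```

If no configuration exists yet, `cfgcreate` is used instead of `cfgadd`; check your FabOS version's documentation for the exact semantics.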
OK, let's enumerate the disks and create a pool:
root@kodama:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t10d0 <DEFAULT cyl 4497 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@a,0
       1. c0t12d0 <SEAGATE-ST3300656SS-0005 cyl 22465 alt 2 hd 6 sec 636>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@c,0
       2. c0t13d0 <SEAGATE-ST3146855SS-0001-136.73GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@d,0
       3. c0t14d0 <SEAGATE-ST3146356SS-0006-136.73GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@e,0
       4. c0t15d0 <SEAGATE-ST3146855SS-0001-136.73GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@f,0
       5. c0t19d0 <SEAGATE-ST3300655SS-0001-279.40GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@13,0
       6. c0t20d0 <SEAGATE-ST3300655SS-0001-279.40GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@14,0
       7. c0t21d0 <SEAGATE-ST3300655SS-0001-279.40GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@15,0
       8. c0t22d0 <SEAGATE-ST3300655SS-0001-279.40GB>
          /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3514@1/pci8086,3478@0/sd@16,0
Specify disk (enter its number): ^C
# zpool create sasmirror1 mirror c0t19d0 c0t20d0 mirror c0t21d0 c0t22d0 spare c0t12d0
Now we can have a look at our newly created pool:
# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
sasmirror1   556G   249K   556G     0%  1.00x  ONLINE  -
syspool     34.2G  9.17G  25.1G    26%  1.00x  ONLINE  -

# zpool status sasmirror1
  pool: sasmirror1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 31 13:06:45 2011
config:

        NAME          STATE     READ WRITE CKSUM
        sasmirror1    ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t19d0   ONLINE       0     0     0
            c0t20d0   ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            c0t21d0   ONLINE       0     0     0
            c0t22d0   ONLINE       0     0     0
        spares
          c0t12d0     AVAIL

errors: No known data errors
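The "scan:" line above comes from a scrub. Since ZFS only verifies checksums when blocks are actually read, it is worth scrubbing data pools periodically; a minimal sketch using the pool name from above:

```shell
# Kick off a scrub that reads and verifies every allocated block in the pool:
zpool scrub sasmirror1

# Check progress (or the result of the last scrub) at any time:
zpool status sasmirror1
```

Scrubs run in the background at low priority, so they can simply be scheduled from cron.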
After creating the data pool we can start carving out ZFS volumes (zVols):
# zfs create -V 20G sasmirror1/akebono-scratchvol
# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
sasmirror1                      433G   114G    31K  /sasmirror1
sasmirror1/akebono-scratchvol  20.6G   135G    16K  -
syspool                        10.2G  23.5G  36.5K  legacy
syspool/dump                   7.00G  23.5G  7.00G  -
syspool/rootfs-nmu-000         1.46G  23.5G  1.03G  legacy
[...]
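As an aside: a zVol's block size can only be set at creation time, while properties like compression can be changed later. A hedged sketch; the 8K volblocksize and the second volume name are purely illustrative choices, not part of my setup:

```shell
# Create a 20G zVol with an explicit 8K volume block size and compression
# enabled (standard ZFS properties; the values here are illustrative):
zfs create -V 20G -o volblocksize=8k -o compression=on sasmirror1/akebono-scratchvol2

# Verify the properties took effect:
zfs get volblocksize,compression sasmirror1/akebono-scratchvol2
```

Matching the volblocksize to the workload's I/O size can matter for performance, but that is a topic of its own.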
sbdadm - SCSI block device administration CLI
Although our zVol is now available, we still have to tell STMF that this volume can be mapped as a LUN; that is what sbdadm is for:
# sbdadm create-lu /dev/zvol/rdsk/sasmirror1/akebono-scratchvol

Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f098680b0000004e632dc60004  21474836480          /dev/zvol/rdsk/sasmirror1/akebono-scratchvol
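If you ever need to recover a GUID you forgot to note down, sbdadm can list all registered logical units:

```shell
# List all logical units currently registered with STMF,
# including their GUIDs, sizes, and backing store paths:
sbdadm list-lu
```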
(Update 2012: you can also use stmfadm to create a LU; it's up to you which one you prefer, though I think sbdadm's output is still nicer.)
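For completeness, the stmfadm equivalent would look like this, using the same zVol path as above:

```shell
# Register the zVol as a logical unit via stmfadm instead of sbdadm;
# it prints the GUID of the newly created LU:
stmfadm create-lu /dev/zvol/rdsk/sasmirror1/akebono-scratchvol
```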
At this point I'd like to highlight the flexibility the original engineers at Sun built into the SCSI Target Mode Framework (STMF):
You can map not only zVols, but also single (image) files on a filesystem, or even whole disks. The latter can make sense when you have a hardware RAID controller and the OS only sees a single virtual disk. Still, zVols tend to be the most tightly integrated backing store for STMF (also in terms of performance); in fact, Nexenta's appliance OS only allows mapping zVols as SCSI LUNs.
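A file-backed LU, for instance, can be set up like this; the path and size below are made up for illustration:

```shell
# Create a 10 GB backing file on a ZFS filesystem
# (hypothetical path; add -n to make the file sparse):
mkfile 10g /sasmirror1/images/testlun.img

# Register the file as a logical unit, just like a zVol:
sbdadm create-lu /sasmirror1/images/testlun.img
```

The resulting LU behaves like any other, but you lose some of the zVol niceties such as per-volume properties and reservations.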
stmfadm - SCSI target mode framework CLI
The GUID you saw previously is what we finally map in STMF. This time I will simply map the LUN to every initiator and every target we have:
# stmfadm add-view 600144f098680b0000004e632dc60004
# stmfadm list-view -l 600144f098680b0000004e632dc60004
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 2
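To get an overview of all LUs and their state in one go, stmfadm can list them verbosely:

```shell
# Show all logical units known to STMF, including operational state,
# size, and the backing data file/zVol:
stmfadm list-lu -v
```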
In the next post I will describe how creating target and host groups lets you precisely map LUNs to a node's HBAs. You should now see the newly mapped LUN from any FC-connected host. - You might need to rescan the bus (e.g. use the vendor-specific script on Linux or refresh Disk Management on Windows).
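On a Linux initiator without the vendor script, a rescan can also be triggered directly through sysfs (host numbers vary per system):

```shell
# Rescan every SCSI host for new LUNs; the "- - -" wildcard means
# all channels, all targets, all LUNs:
for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"
done

# Newly discovered disks then show up in the kernel's device list:
cat /proc/scsi/scsi     # or use lsscsi, if installed
```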
2012.02: Updated a comment and fixed some errors.