Nexenta / illumos as FC target: LUN mapping (3)

While you can also do quite a bit of LUN mapping at the level of an FC switch, I’d like to write here about the possibilities at the level of the OpenSolaris / illumos STMF:

Last time I simply added a view to a LUN for everybody, which isn’t a good idea unless you are using a cluster-aware filesystem (e.g. GFS, OCFS2, or NTFS as a cluster-shared volume). Now we need to restrict access to a LUN: for this, STMF lets us create host groups (hg) and target groups (tg).

Last time I mapped a LUN to everyone; now I want to restrict access to this LUN to a node called ‘akebono’, so let’s create a host group for all the HBAs installed in akebono:

# stmfadm create-hg akebono
# stmfadm list-hg
Host Group: akebono

In the FC world each HBA card has a unique WWNN (worldwide node name), and since there are HBAs with more than one port, each port also has its own WWPN (worldwide port name) – a 64-bit value, comparable to the MAC address that every network controller has. stmfadm allows adding different types of names to a host or target group, be it IQNs (iSCSI qualified names) or WWNs (worldwide names, which are also used in SAS). We need to know the WWPNs of the initiating HBAs. There are several ways to get them (and possibly others):

  • Use vendor-provided tools for each platform (in this case this might be the Qlogic SANsurfer CLI)
  • (Linux/BSD/Unix) Look at the dmesg output when the HBA driver gets loaded
  • Sometimes it’s written on a label on the HBA
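On a Linux initiator there is one more way: you can read the WWPNs straight from sysfs, without any vendor tool. A small sketch – the fc_host class path is standard on modern kernels, but the host numbers (and of course the values) will differ on your machine:

```shell
# Print the WWPN of every FC HBA port the Linux kernel knows about.
# Prints no ports at all on a machine without FC HBAs.
found=0
for f in /sys/class/fc_host/host*/port_name; do
    if [ -e "$f" ]; then
        echo "$(basename "$(dirname "$f")"): $(cat "$f")"
        found=$((found + 1))
    fi
done
echo "FC host ports found: $found"
```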

But if you have a small FC environment, then you can cheat a little:

# stmfadm list-target -v
Target: wwn.50060B0000655664
    Operational Status: Online
    Provider Name     : qlt
    Alias             : qlt3,0
    Protocol          : Fibre Channel
    Sessions          : 1
        Initiator: wwn.210000E08B9BE2DF
            Alias: -
            Logged in since: Sat Sep  3 02:15:56 2011
Target: wwn.2101001B323FE743
    Operational Status: Online
    Provider Name     : qlt
    Alias             : qlt2,0
    Protocol          : Fibre Channel
    Sessions          : 0
Target: wwn.2100001B321FE743
    Operational Status: Online
    Provider Name     : qlt
    Alias             : qlt1,0
    Protocol          : Fibre Channel
    Sessions          : 1
        Initiator: wwn.210000E08B9BF0E1
            Alias: -
            Logged in since: Sat Sep  3 02:15:56 2011
Target: wwn.50060B000065566E
    Operational Status: Online
    Provider Name     : qlt
    Alias             : qlt0,0
    Protocol          : Fibre Channel
    Sessions          : 1
        Initiator: wwn.210000E08B9BF0E1
            Alias: -
            Logged in since: Sat Sep  3 02:15:56 2011

The Nexenta box has 4 HBAs (3 of them are connected, 2 to the same switch), so what we can now see are the WWNs of the targets and those of the (so far) single initiating node. Now we can add them to the host group – don’t forget the prefix, i.e. wwn.<yourWWPN>, because that’s how STMF distinguishes between iSCSI IQNs, FC & SAS WWNs and EUIs (Extended Unique Identifiers):

# stmfadm add-hg-member -g akebono wwn.210000E08B9BF0E1
# stmfadm add-hg-member -g akebono wwn.210000E08B9BE2DF
# stmfadm list-hg -v
Host Group: akebono
        Member: wwn.210000E08B9BF0E1
        Member: wwn.210000E08B9BE2DF

Now we can remove the view for our mapped LUN and re-add it properly, so that only HBAs in the host group akebono will see this LUN and be able to mount it:

# stmfadm list-view -l 600144f098680b0000004e632dc60004
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 2
# stmfadm remove-view -a -l 600144f098680b0000004e632dc60004
# stmfadm list-view -l 600144f098680b0000004e632dc60004
stmfadm: 600144f098680b0000004e632dc60004: no views found

# stmfadm add-view -h akebono 600144f098680b0000004e632dc60004
# stmfadm list-view -l 600144f098680b0000004e632dc60004
View Entry: 0
    Host group   : akebono
    Target group : All
    LUN          : 2

Voilà – that’s it. If you want to further restrict a node to use only, let’s say, 2 out of 4 HBAs, you can create target groups as well – currently akebono will be able to connect to this LUN over every reachable target path (be it FC or any other target type, e.g. iSCSI). It is also possible to group all FC ports together, but be aware that in order to add any target to a target group, you will have to offline it for a short period (which is no problem if you have fully working multipathing):

# stmfadm create-tg fc-ports
# stmfadm add-tg-member -g fc-ports wwn.50060B000065566E
stmfadm: STMF target must be offline

# stmfadm offline-target  wwn.50060B000065566E
# stmfadm add-tg-member -g fc-ports wwn.50060B000065566E

# stmfadm online-target wwn.50060B000065566E

This offline-online procedure seems to be mandatory for every target added to a target group. Later on you can (if you want) add a view to a LUN by also passing ‘-t <targetGroupName>’ along with the host group like before. – It can also be a good thing if you want to manually balance the load across your target-mode HBAs.
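Combining both restrictions, adding the view would look something like this – just a sketch that echoes the combined command instead of executing it, reusing the GUID and the group names from above:

```shell
# Sketch: map the LUN to host group 'akebono' AND restrict it to
# target group 'fc-ports' (GUID taken from the earlier examples).
guid=600144f098680b0000004e632dc60004
map_cmd="stmfadm add-view -h akebono -t fc-ports $guid"
echo "$map_cmd"    # run this on the target box
```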

Next up: Setting up multipathing on Linux (Debian and Scientific Linux) and Windows (2008 R2).

September 5, 2011


Nexenta / illumos as FC target (2)

The first article was a bit specific to NexentaOS 3.x, but most of what I’ll write here can (currently) be used 1:1 on OpenIndiana and SmartOS Live (once you get it installed on disk). I would not recommend using the UNIX shell within NexentaStor, the appliance OS that comes with a storage management system (NMS) and its respective shell tool (NMC) and web UI (NMV): you risk temporarily messing up the management interface.

At this point I expect that you have configured basic zoning on your FC switches. I simply created a zone for each virtualization node and its target node, containing the respective WWNs of those ports (I did not make zones on a port basis). – Maybe I’ll wrap up the story of updating my SilkWorm 200Es from FabOS 5.1 to 6.2 😉

OK, let’s enumerate the disks and create a pool:

root@kodama:~# format
Searching for disks...done

 0. c0t10d0 <DEFAULT cyl 4497 alt 2 hd 255 sec 63>
 1. c0t12d0 <SEAGATE-ST3300656SS-0005 cyl 22465 alt 2 hd 6 sec 636>
 2. c0t13d0 <SEAGATE-ST3146855SS-0001-136.73GB>
 3. c0t14d0 <SEAGATE-ST3146356SS-0006-136.73GB>
 4. c0t15d0 <SEAGATE-ST3146855SS-0001-136.73GB>
 5. c0t19d0 <SEAGATE-ST3300655SS-0001-279.40GB>
 6. c0t20d0 <SEAGATE-ST3300655SS-0001-279.40GB>
 7. c0t21d0 <SEAGATE-ST3300655SS-0001-279.40GB>
 8. c0t22d0 <SEAGATE-ST3300655SS-0001-279.40GB>
Specify disk (enter its number): ^C
# zpool create sasmirror1 mirror c0t19d0 c0t20d0 mirror c0t21d0 c0t22d0 spare c0t12d0

Now we can have a look at our newly created pool:

# zpool list
NAME           SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
sasmirror1     556G   249K   556G     0%  1.00x  ONLINE  -
syspool       34.2G  9.17G  25.1G    26%  1.00x  ONLINE  -

# zpool status sasmirror1
  pool: sasmirror1
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Wed Aug 31 13:06:45 2011

        NAME         STATE     READ WRITE CKSUM
        sasmirror1   ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c0t19d0  ONLINE       0     0     0
            c0t20d0  ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c0t21d0  ONLINE       0     0     0
            c0t22d0  ONLINE       0     0     0
        spares
          c0t12d0    AVAIL   

errors: No known data errors

After creating a data pool we can start carving out ZFS volumes (zVol):

# zfs create -V 20G sasmirror1/akebono-scratchvol
# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
sasmirror1                      433G   114G    31K  /sasmirror1
sasmirror1/akebono-scratchvol  20.6G   135G    16K  -
syspool                        10.2G  23.5G  36.5K  legacy
syspool/dump                   7.00G  23.5G  7.00G  -
syspool/rootfs-nmu-000         1.46G  23.5G  1.03G  legacy

sbdadm – SCSI block device administration CLI

Although our zVol is now available, we still have to tell STMF that this is a volume that can be mapped as a LUN; this is what sbdadm is for:

# sbdadm create-lu /dev/zvol/rdsk/sasmirror1/akebono-scratchvol
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f098680b0000004e632dc60004  21474836480          /dev/zvol/rdsk/sasmirror1/akebono-scratchvol

(Update 2012: You can also use stmfadm to create a LUN; it’s up to you which one you use – I think the output of sbdadm is still nicer.)
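For reference, the stmfadm variant mentioned in the update would be called like this – sketched as echoed strings to run on the target box; the zVol path is the one from above and the GUID in the real output will of course differ:

```shell
# Sketch: create the logical unit with stmfadm instead of sbdadm,
# then list all LUs to see the assigned GUID.
zvol=/dev/zvol/rdsk/sasmirror1/akebono-scratchvol
create_cmd="stmfadm create-lu $zvol"
list_cmd="stmfadm list-lu -v"
printf '%s\n%s\n' "$create_cmd" "$list_cmd"
```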

At this point I’d like to point out the flexibility the original engineers at Sun built into the ‘SCSI target mode framework’ (STMF):

You can map not only zVols but also single (image) files on a filesystem, or even whole disks. The latter might make sense when you have a hardware RAID controller where the OS only sees one virtual disk anyway. But zVols tend to be the most integrated way (also in terms of performance) to work with STMF. In fact, the appliance OS from Nexenta only allows mapping zVols as SCSI LUNs.
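To illustrate the file-backed case, a sketch with a hypothetical path (sbdadm create-lu takes a plain file just like a zVol device node; mkfile is the Solaris tool for creating such files):

```shell
# Sketch: back a logical unit with an ordinary image file instead of
# a zVol. The path below is hypothetical - adjust it to your pool.
img=/sasmirror1/images/scratch.img
echo "mkfile 10g $img"          # create a 10 GB backing file
echo "sbdadm create-lu $img"    # register it with STMF as an LU
```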

stmfadm – SCSI target mode framework CLI

The GUID you saw previously is what we will finally map in STMF – this time I will just map the LUN to every initiator and every target we have:

# stmfadm add-view 600144f098680b0000004e632dc60004
# stmfadm list-view -l 600144f098680b0000004e632dc60004
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 2

In the next post I will write about how creating target and host groups allows you to precisely map LUNs to a node and its HBAs. You should now see the newly mapped LUN from any FC-connected host. – You might need to rescan the bus (e.g. use a vendor-specific script on Linux or refresh Disk Management on Windows).
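If you don’t have a vendor script at hand, a generic rescan can be triggered on a Linux initiator via sysfs – a sketch (run as root; the scsi_host entries are standard, the host numbers vary per machine):

```shell
# Ask every SCSI host on a Linux initiator to rescan its bus.
# "- - -" means: all channels, all targets, all LUNs.
rescanned=0
for h in /sys/class/scsi_host/host*/scan; do
    if [ -w "$h" ]; then
        echo "- - -" > "$h"
        rescanned=$((rescanned + 1))
    fi
done
echo "hosts rescanned: $rescanned"
```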

2012.02: Updated a comment and fixed some errors.

September 4, 2011


Nexenta / illumos as FC target (1)

I have been lucky to pick up some elderly 4 GBit FC hardware, step by step, from used-hardware resellers for a reasonably low price – an interest that originated after deploying an iSCSI SAN at work using the OpenSolaris (soon to be illumos) based NexentaStor appliance OS. Thankfully my employer allowed me to install and test the gear at work, because I don’t have a home setup similar to @tschokko’s.

Since the free-as-in-beer Community Edition doesn’t allow managing FC targets other than via the native OS shell, I went with Nexenta Core 3, their free-as-in-speech distribution of OpenSolaris snv134 plus a ton of patches from later ONNV and illumos. – I chose it because I wanted the kernel most similar to what is used in the commercial edition. In this series I’d like to wrap up how the ride went (i.e. self-documentation…).

Preparing the OS:

Installing the Nexenta Core OS is pretty much straightforward if your hardware is supported, so I won’t comment on it – besides that you should give it a static IP: if you are not a daily Solaris admin, you will otherwise have to do some googling on how to disable NWAM and do manual network configuration. 😉

After installation of NexentaOS 3.0.1 I’d recommend upgrading to the latest bits, but before that you should install sunwlibc, because without it the STMF won’t run (this gets you currently mostly equivalent to what went into NexentaStor 3.1.1):

apt-get update
apt-get install sunwlibc
apt-clone upgrade

You can then reboot into the clone of the updated OS; the original kernel in 3.0.1 has a couple of bugs that were squashed later on – and most importantly you will get the latest open-source ZFS filesystem version (v5) and pool version (v28).

Enabling STMF and switching HBAs to target mode:

root@kodama:~# svcadm enable stmf
root@kodama:~# svcs -v stmf
STATE          NSTATE        STIME    CTID   FMRI
online         -             Aug_31        - svc:/system/stmf:default

Afterwards we have to switch the HBAs into target mode – assuming you have a 4G or 8G QLogic FC HBA, the driver we need is called ‘qlt’ (there is also a driver for Emulex HBAs, where things are a bit different). First let’s see which devices are currently bound to the default initiator driver ‘qlc’:

root@kodama:~# mdb -k
> ::devbindings -q qlc
ffffff03597fe030 pciex1077,2432, instance #0 (driver name: qlc)
ffffff03597fb2c0 pciex1077,2432, instance #1 (driver name: qlc)
ffffff03597fb038 pciex1077,2432, instance #2 (driver name: qlc)
ffffff03597f6ce8 pciex1077,2432, instance #3 (driver name: qlc)
> $q

You can use a command to tell the OS to use qlt instead of qlc – but you can also edit /etc/driver_aliases and replace the occurrences of qlc where pciex1077 appears:


qlt "pciex1077,2432"
qlc "pciex1077,2532"
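The command-line alternative to editing /etc/driver_aliases by hand is update_drv, which can detach the PCI alias from qlc and attach it to qlt – sketched here as echoed strings (run them on the target box; the alias is the one from the mdb output above):

```shell
# Sketch: rebind the 4G QLogic PCI id from the initiator driver (qlc)
# to the target driver (qlt) using update_drv instead of a text editor.
pciid='"pciex1077,2432"'
echo "update_drv -d -i '$pciid' qlc"   # delete the alias from qlc
echo "update_drv -a -i '$pciid' qlt"   # add it to qlt
```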

After you have done this you will have to reboot the system one last time. Enabling STMF (SCSI Target Mode Framework) is important, since it handles the upload of the QLogic target-mode firmware to all your HBAs. Without this firmware your HBAs will keep blinking (~ no link) and stay inoperational. If you did everything right, you should see something like this:

root@kodama:~# fcinfo hba-port
HBA Port WWN: 50060b000065566e
        Port Mode: Target
        Port ID: 10000
        OS Device Name: Not Applicable
        Manufacturer: QLogic Corp.
        Model: HPAE311 (-> this is an HP-branded QLE2460)
        Firmware Version: 5.2.1
        FCode/BIOS Version: N/A
        Serial Number: not available
        Driver Name: COMSTAR QLT
        Driver Version: 20100505-1.05
        Type: F-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 50060b000065566f
HBA Port WWN: 2100001b321fe743
        Port Mode: Target
        Port ID: 10400
root@kodama:~# stmfadm list-target
Target: wwn.50060B0000655664
Target: wwn.2101001B323FE743
Target: wwn.2100001B321FE743
Target: wwn.50060B000065566E

Congratulations, you have a working fibre channel target box! – You might also re-run the same mdb -k command and search for the devbindings of qlt and qlc.

Next up: Carving out volumes and doing LUN mapping

September 2, 2011
