Nexenta / illumos as FC target (1)

I have been lucky to pick up some elderly 4 Gbit FC hardware step by step from used-hardware resellers for a reasonably low price – an interest that originated in deploying an iSCSI SAN at work using the OpenSolaris (soon to be illumos) based NexentaStor appliance OS. Thankfully my employer allowed me to install and test the gear at work – because unlike @tschokko, I don't have a home lab for this.

Since the free-as-in-beer Community Edition doesn't allow managing FC targets other than via the native OS shell, I went with Nexenta Core 3, their free-as-in-speech distribution of OpenSolaris snv_134 plus a ton of patches from later ON and illumos. I chose it because I wanted a kernel as similar as possible to the one used in the commercial edition. In this series I'd like to wrap up how the ride went (i.e. self-documentation…).

Preparing the OS:

Installing the Nexenta Core OS is pretty much straightforward if your hardware is supported, so I won't comment on that – except that you should give it a static IP: if you are not a daily Solaris admin, you will otherwise have to do some googling on how to disable NWAM and configure the network manually. 😉
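If you end up in that situation, the manual route on these snv_134-era builds looks roughly like this (service and file names from memory – verify with svcs on your box):

```
root@kodama:~# svcadm disable svc:/network/physical:nwam
root@kodama:~# svcadm enable svc:/network/physical:default
```

Then put the static config into /etc/hostname.<interface>, /etc/netmasks and /etc/defaultrouter and restart the physical network service or reboot.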

After installing NexentaOS 3.0.1 I'd recommend upgrading to the latest bits, but before that you should install sunwlibc – without it, STMF won't run. This gets you roughly what went into NexentaStor 3.1.1 currently:

apt-get update
apt-get install sunwlibc
apt-clone upgrade

You can then reboot into the clone of the updated OS. The original kernel in 3.0.1 has a couple of bugs that were squashed later on – but most importantly you will get the latest open source ZFS filesystem version (v5) and pool version (v28).
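To confirm what you got after the reboot, zpool upgrade (without arguments) prints the running pool version and lists any pools still on older versions – expect something like:

```
root@kodama:~# zpool upgrade
This system is currently running ZFS pool version 28.
```

zfs upgrade does the same for filesystem versions.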

Enabling STMF and switching HBAs to target mode:

root@kodama:~# svcadm enable stmf
root@kodama:~# svcs -v stmf
STATE          NSTATE        STIME    CTID   FMRI
online         -             Aug_31        - svc:/system/stmf:default

Afterwards we have to switch the HBAs into target mode – assuming you have a 4G or 8G QLogic FC HBA, the driver we need is called 'qlt'. (There is also a driver for Emulex HBAs, where things work a bit differently.) An important first step: check which driver your HBAs are currently bound to:

root@kodama:~# mdb -k
> ::devbindings -q qlc
ffffff03597fe030 pciex1077,2432, instance #0 (driver name: qlc)
ffffff03597fb2c0 pciex1077,2432, instance #1 (driver name: qlc)
ffffff03597fb038 pciex1077,2432, instance #2 (driver name: qlc)
ffffff03597f6ce8 pciex1077,2432, instance #3 (driver name: qlc)
> $q

You can use a command to tell the OS to bind qlt instead of qlc – but you can also simply edit /etc/driver_aliases and replace qlc with qlt on the lines where your pciex1077 device ID appears:


qlt "pciex1077,2432"
qlc "pciex1077,2532"
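If you prefer not to edit driver_aliases by hand, the same rebinding can – as far as I remember – be done with update_drv (check the man page before trusting my quoting):

```
root@kodama:~# update_drv -d -i '"pciex1077,2432"' qlc
root@kodama:~# update_drv -a -i '"pciex1077,2432"' qlt
```

The -d call removes the alias from qlc, the -a call attaches it to qlt.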

After you have done this you will have to reboot the system one last time. Enabling STMF (the SCSI Target Mode Framework) is important because it handles uploading a QLogic target-mode firmware to all your HBAs. Without this firmware your HBAs will keep blinking (~ no link) and stay inoperable. If everything went right, you should see something like this:

root@kodama:~# fcinfo hba-port
HBA Port WWN: 50060b000065566e
        Port Mode: Target
        Port ID: 10000
        OS Device Name: Not Applicable
        Manufacturer: QLogic Corp.
        Model: HPAE311 (-> an HP-branded QLE2460)
        Firmware Version: 5.2.1
        FCode/BIOS Version: N/A
        Serial Number: not available
        Driver Name: COMSTAR QLT
        Driver Version: 20100505-1.05
        Type: F-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 50060b000065566f
HBA Port WWN: 2100001b321fe743
        Port Mode: Target
        Port ID: 10400
root@kodama:~# stmfadm list-target
Target: wwn.50060B0000655664
Target: wwn.2101001B323FE743
Target: wwn.2100001B321FE743
Target: wwn.50060B000065566E

Congratulations, you have a working fibre channel target box! You can also re-run the same mdb -k command and look at the devbindings of qlt and qlc.

Next up: Carving out volumes and doing LUN mapping.

September 2, 2011

Posted In: Uncategorized



BIND / NSD zone files explained

At work there was an old physical Windows box that ran DNS for several small domains. I got the job of transferring the DNS zones to an NSD server on a Linux VM – with a lower memory footprint. I was able to transfer the zones either by copying and fixing the Windows zone files (they needed fixing anyway) from C:\Windows\system32\dns\, or by using dig if you can't copy the zone files but the remote DNS allows zone transfers:

dig @<originating-ns> <zone> AXFR > <zone>.zone

I've been used to editing zone files manually for my private BIND and NSD – while NSD compiles its own DB to be faster, it's the same source format. But my know-how had gotten a bit rusty on that topic, and I learned I can write less to get the same result (less repetitive and error-prone). A big thanks goes to the helping hands in #DNS on freenode IRC, who helped me get back in the seat, fix the strangenesses of the Windows zones and refactor the stuff.

If you have to clean up a zone file anyway, I hope this commented sample gives you some ideas:

; Zone file for <domain> – anything in <angle brackets> is a placeholder

$TTL 86400

@ IN SOA ns1.<domain>. hostmaster.<domain>. (
40 ; serial number
21600 ; refresh
7200 ; retry
691200 ; expire
86400 ) ; default TTL

; NS
     NS ns1.<domain>.
     NS ns2.<domain>.

; MX
     MX 10 mySMTP
     MX 10 <external-mx>.

; A
     A <IP>
sub1 A <IP>
sub2 A <IP>

; CNAME
www2 CNAME @
www3 CNAME sub1
www1 CNAME <external-fqdn>.


About the SOA record: there are people who know more than me about good TTL and timer values. But a word about the mail address you put in the zone file: RFC 2142 says hostmaster@domain should exist (or be forwarded) to someone in charge of this DNS zone. You can put in any other valid address, but the RFC strongly encourages you to have hostmaster@domain available for convenience (as is postmaster@domain, btw).

If you maintain serials manually, I'd recommend using a zone serial of the form YYYYMMDD01 – I read that in some BIND book, and I hope this is still valid.
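As a sketch, that date-based serial can be generated in the shell – the two-digit suffix is the per-day revision counter you bump by hand for a second change on the same day:

```shell
# Build a YYYYMMDDnn zone serial: today's date plus revision "01"
serial="$(date +%Y%m%d)01"
echo "$serial"
```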

Shorter entries

By adding an $ORIGIN variable (don't forget the trailing ".") you can already write less. Here is a correct but long example (i.e. what you get via AXFR from dig):

<domain>.      86400 IN A <IP>
sub1.<domain>. 86400 IN A <IP>

Short but sufficient if $TTL and $ORIGIN set:

@    A <IP>
sub1 A <IP>

Explanation: @ is replaced by the $ORIGIN value, $TTL is also applied automatically without repeating it, and IN can be left out, as it seems to be assumed when not present. For A records pointing at the origin itself you can even leave out the @ – this alone is sufficient:

A <IP>

With CNAMEs you can also use @ if the record should point at the origin:

demo CNAME @
     CNAME @

Be warned, this one won't work as intended – without a trailing "." the origin gets appended to the target name again:

demo CNAME <external-fqdn>

If you have external names – let's say you don't operate your own MX – end the entry with the FQDN, completed with a ".":

     MX 10 <external-mx>.

August 6, 2011


Another Hyper-V and Linux

Since this blog post, Microsoft has released RPM-packaged Linux IC 3.1 supporting RHEL 6/6.1 and CentOS 6 – making things significantly easier – and the in-kernel modules have continued to mature. Expect another bunch of fixes in 3.2 (the merge window for 3.1 was too short to get hv_blkvsc dumped).

Since my last (long ago) post about using Linux on Hyper-V (which I have at work and have to use), I can confirm the situation has improved, albeit I cannot safely recommend production usage of Linux with the Hyper-V drivers from the staging area yet. I still have mixed feelings about their stability. Lots of things will come into Linux 3.0.x.

For those using RHEL5-based distributions: Yes it works.

  • MS announced support for CentOS 5 – meaning that if you are using CentOS you can soon hope for a 'supported by MS' badge.
  • I have seen proper stability using Scientific Linux 5.5 and 5.6 rolling, as well as the CERN variant SLC 5.6.
  • Recommendation: use DKMS (it's in EPEL) and create an RPM package containing the bits for your installed kernel, to save time during deployment if you have multiple machines.
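The DKMS workflow for that is roughly the following – the module name and version here are made up, use whatever the IC source package registers itself as:

```
# dkms add -m hyperv-ic -v 3.1
# dkms build -m hyperv-ic -v 3.1 -k $(uname -r)
# dkms mkrpm -m hyperv-ic -v 3.1
```

The resulting RPM can then be rolled out to the other machines without rebuilding.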

For all the others (Debian mainly, as I use it): I have been tracking Microsoft's effort to get those hv bits into a sane state. They have hired a former Novell employee (KYS) who has done considerable cleanup on the 20k+ LoC, shrinking it by a couple of thousand lines already. Yet there are over 350 patches in the pipeline that will hopefully make it into Linux 3.0.x, but the staging maintainer Greg KH has mentioned that storvsc and netvsc need a bunch more cleanup.

I have an experimental branch of the 'stable' branch (2.6.39.x for now) where I cherry-picked all of those changes that are in staging-next – I just parsed and grep'ed some git log output, spiced it with some sed, and needed only 2 manual fixups where automatic cherry-picking failed. The result can be found on gitorious (use at your own risk). I'll try to track the changes from stable and if possible from hv and report on my results on Squeeze. At least the storage driver proved less prone to random lockups. Netvsc still had its problems in the 2.6.38 series – because of that I kicked in the legacy NIC on all non-RHEL VMs to be safe, but I will retry soon.
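The pick-list itself can be built with a pipeline along these lines (branch names and the staging path are from memory, so treat them as assumptions):

```
$ git log --reverse --oneline v2.6.39..staging/staging-next -- drivers/staging/hv \
    | cut -d' ' -f1 > hv-picks.txt
$ while read c; do git cherry-pick -x "$c" || break; done < hv-picks.txt
```

The -x flag records the original commit ID in each cherry-picked commit message, which makes later re-syncs easier.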

Closing thoughts: the recent rumours that MS is thinking about supporting even Ubuntu – and possibly Debian – suggest they are serious about supporting Linux on their virtualization infrastructure. All the better for me as a consumer of their platform. :-/

June 3, 2011


GPO version control on Windows?!

Sometimes you just need to see things another way to realize there would be a solution to your struggle:

I started using computers in the age of MS-DOS and ran Windows, only later learning there was an alternative – Unix-like OSes – which I tried, often struggled with, and have now almost completely switched to. Nowadays I mostly prefer the way they stick to the principle that configuration is stored, one way or another, in plain text files and not in binary blobs. I learned how to use Subversion and Git (which I now prefer over SVN) and then asked myself why I hadn't realized that this addresses a problem I had struggled with much earlier:

Version control for group policies would be really great, I thought – and I realized not only MS was offering something like this: there is AGPM (Advanced Group Policy Management) from Microsoft and GPOAdmin from Quest. The integration into the Group Policy Management Console might be great, but after reading more, I saw a problem compared to Unix-like OSes, which can integrate configuration management with an existing version control system: these tools bring in their own VCS for GPO change control. Why?

Let me point out why I don’t think it’s a good idea:

  • Git / Subversion / Mercurial etc. are universal SCMs: you put a lot of different things under version control, be it your locally modified version of some code, software packages, Puppet manifests etc., or simply some configuration files saved via etckeeper.
  • AGPM's VCS is yet another special-purpose version control system: I have to set up a version control server only for AGPM – I can't use it for anything else, and even MS' own version control systems don't work together with it.

This means that I have to implement and maintain at least 2 different version control systems: one for our code and one for Group Policies – and I can't use the same version control tools I use anyway as a developer and admin. Why?

It would be great if someone created a GPO management tool that, instead of adding its own way of version control, provided an adapter to a universal version control system. Why reinvent the wheel? Puppet's manifests look more like code and hide less than GPOs do: GPOs have a nice GUI, but in the long term I really prefer the way manifests are written in configuration management systems like Puppet, Chef or others.

April 3, 2011


Hyper-V Ubuntu -> CentOS

I have preferred Debian and Ubuntu over RHEL, mostly because their package collection is much wider. Thus at work I wanted to go with Ubuntu LTS as a deal between Ubuntu's bleeding edge and a more conservative approach to package versions, while Debian Squeeze was not yet out. Happily, I thought, with the arrival of Microsoft's hv modules in the staging area, things would be easy now – especially after 2.6.35, when MS included virtual SMP and shutdown integration. I will tell you more details later, but at the moment I can only give you this advice:

Either you don't see error messages when using Ubuntu on Hyper-V during boot and during usage – and you are lucky – or you see dying network drivers and block devices that just kick out and will not be accessible until a full reboot of the respective VM. MS tested its 2.1 version against RHEL 5 and some SLES versions, which are enterprise Linuxes with mostly older kernels. They work – at least MS has certified that they should – and I know people who don't see problems in bigger setups. MS' Linux IC 2.1 is for now also the way to go when using CentOS.

For now I have to say: hv is fine for nerds and those interested in experimenting. For some it may work on non-enterprise distributions, but be prepared for surprises.

I will have to switch all my Linux-on-Hyper-V VMs to CentOS or RHEL for now; I was successful neither under R1 nor R2 with Ubuntu LTS or Debian squeeze and lenny. I had lots of enthusiasm for Microsoft's hv modules and hoped they were in better shape. But when I just need a server to work, I will not touch a staging driver anymore for something critical – it hurt and cost me enough time.

For the Debian guy in me it's like changing from his beloved, comfortable car with apt and the big bazaar of packages made by the Debian developers to an "also well working" but less attractive Volkswagen transporter: it's older, but at least it works – though almost every extra tool will need to be imported from outside (for example from the remi repo). In the end it's solidity that matters.

I recommend you read the comments here, for example about dying network interfaces:

December 20, 2010


About getting the right (sur-)name

This is not a computer-related blog post, but I "have to" write this rant since I'm regularly reminded of this cultural problem: there happen to be people who have a surname that is often used as a first name, like Philipp or Simon – the latter being my family name. It's understandable to us that people mix up our name and surname. Since I know of more male first names used as surnames, the men of such families are more often confronted with this.

In oral language this isn't a problem, but I've tried several tricks in written correspondence to hint people to my correct name. I haven't found a good answer. If you happen to have such a surname, it won't help to write your surname in CAPITALS after your first name. Even if you abbreviate your first name and then write out your surname, there is no guarantee that the other side will get it right, even by the third time. A good way among colleagues: just tell them your first name and don't mention your surname. 🙂

But one of the most embarrassing things is the usage of forms: it's usual to fill in forms where you have to put your name and surname into clearly differentiated fields. This works quite well, at least in bureaucracy – I've never gotten a letter from an official side that got my name wrong. But I'm regularly confronted with situations where people thought I filled in the form wrongly and swap the names – once more. And I always double-check the naming of the fields!

So please: if you have to transfer names and surnames from a list and you notice that a surname is more commonly used as a first name – DON'T "CORRECT" it by switching name and surname. These people most often haven't misunderstood your form. Just copy the values and it will be right. Thank you!
It saves US, bearing such names, the time to re-explain to YOU that it was really meant this way! 🙂

September 24, 2010


OpenSolaris – where to go?

I've not been that long with *nix OSes, so my interest in OpenSolaris doesn't date back far. Mainly it came out of the combination of the great ZFS filesystem and the COMSTAR framework (Common Multiprotocol SCSI Target). I will post later about why we opted for NexentaStor at work – a commercially supported variant of the ON source made by Nexenta Systems. We knew that OpenSolaris had an uncertain future, but nonetheless the FUD was outweighed by the advantages.

Times have been turbulent – and still are – around OpenSolaris, and I'll try to compile, for even more of an outsider than I am, where you might want to go if you want to play with ZFS or even deploy something on it productively. As you might certainly know, the OpenSolaris distro, "Project Indiana", is dead; a 2010.x release has not happened and never will. Oracle has changed to a more closed development model. They announced they would release the already-CDDL'ed source after each major release – but at the moment we don't know if that is really going to happen.

So where can I go then if I don’t want or can’t go Oracle?

Some people believe in a future of the ON code without Oracle releasing bi-weekly putbacks. Nexenta Systems' Garrett D'Amore started an effort to continue core ON development, with the main task of replacing all remaining closed-source parts of the OpenSolaris code, like the internationalization of libc, some crypto, and drivers. This project is called illumos and is backed by an independent foundation – they try to avoid the dependency mistakes OpenSolaris made. As Jörg Möllenkamp at Oracle has posted on his blog, this did not happen out of idealism: businesses have built their products on the OS/Net source code from Oracle-Sun, and their businesses depend on ON source technologies. E.g. Nexenta Systems heavily uses ZFS and COMSTAR for its storage appliance OS, and Joyent uses its own distribution of the ON kernel and NetBSD for its cloud services. They knew Sun could die or turn off the tap of code and fixes.

So what is illumos, in short?

illumos is a project that aims at replacing the closed-source parts of OpenSolaris to create a fully open OS/Net source base, and later to continue core development – eventually integrating Oracle source releases to keep on track. It's definitely not aimed at replacing any distribution efforts. It's comparable to the Linux kernel, but contains a lot more, like userland applications. You can compile illumos and boot it, but you have to do it yourself (for now).

OK, but I need a distro!

Happily, there are some you can choose from – diversity and some competition help! These aren't all of them, but some of the most important.


What is OpenIndiana?

OpenIndiana tries to provide an upgrade path for OpenSolaris users by compiling the last source drop (build 147) of Oracle's ON. This is an IPS-packaged distro which you can upgrade to from your OpenSolaris release or the latest developer build (134). At this moment it is a developer release including the latest ZFS and zpool versions. It does not include the illumos ON yet, but it will. This project is under the hood of the illumos foundation, led by Alasdair Lumsden.

What is SchilliX?

This distribution, created in major part by Jörg Schilling (Schilly), owns the legacy of being the first compiled distribution of the ON code, even before former Sun released the first "Indiana" distribution, 2008.05. Since Schilly prefers Sun's userland over the GNU tools, they are the primary ones, unlike in OpenSolaris and OpenIndiana. It will also be based on illumos, but at the moment it too incorporates Oracle's last ON code, "snv_147".

Hey but I need something more stable and cleanly installable!

Nexenta Systems creates its own distribution together with the community, called the Nexenta Core Platform. Since the Nexenta guys have a background in Linux iSCSI, they happen to like Debian's apt packaging and the GNU environment. Nexenta's open source distro is the base for their commercial NexentaStor product, which essentially adds storage-specific management tools (which are proprietary) and can be expanded with commercial plugins for easy replication, VM storage management and lots more.

Since it aims to be stable, and Sun's 134 codebase was meant to lead to the stable OSol 2010.x release, Nexenta decided to base its distro on this release. Nexenta has incorporated lots of patches and fixes from later source drops and has thus matured the not-so-stable 134b source code. Am I wrong if I compare it with a RHEL or Debian stable in the Linux world? You won't find the most recent code in Nexenta, but even the RC releases happen to be quite stable.

What you get is ON married with a GNU userland and Debian's apt. Since it uses Sun's libc – unlike what Debian GNU/kFreeBSD has done with GNU libc on the FreeBSD kernel – the packaging needs more adaptation and sometimes NCP-specific patches. It feels like a Debian-based distro, and as a Debian or Ubuntu user you get familiar quite fast. It's the most GNU-ish distro since it uses Ubuntu's 8.04 LTS packages, which is absolutely sufficient for a server distro. Also, by default there is no UI, since Nexenta aims for NCP to be the minimal base for their commercial offering. You can install GNOME on it, but that's not the primary aim. Since Nexenta is backing illumos with money and developers (i.e. Garrett D'Amore), future NCP versions will be based on the illumos core.

Last but not least there is FreeBSD's ZFS port, which includes zpool version 15 in the latest 8.1 release and can be experimentally patched to zpool 28 at your own risk.

This is certainly my personal overview but maybe this can help you in deciding where to go 🙂

September 17, 2010


Ubuntu LTS on Hyper-V: Experiences and what’s to come


At my workplace (besides my studies) we've been into virtualization for more than 3 years, starting back then with a mere 3 GB in the metal – squeezing 2-3 VMs onto a W2k3 + Virtual Server 2005 host. It worked, but it was rather slow – yet enough for Active Directory, print server, DNS etc.

Hyper-V & *nix: Love and hate

In late 2008, VMware was out of our price range for the features we needed, KVM was not ready enough for the necessary Windows virtualization performance, and Xen was an utter beast for those of us who had no UNIX/Linux experience, nor did it integrate 100% well with Windows guests. Therefore we were quite early adopters of Hyper-V. It was not a heart's choice but rather a rational one: it wasn't the best, but it was best for our needs. While it doesn't offer all the things VMware has, and had some rough edges, it worked quite well with Windows guests.

The bad thing (was): due to the paravirtualization nature of Hyper-V, the guest OS needs to be aware of its virtualization – it needs integration drivers, or the little magic that Microsoft calls "enlightenment". The bad news was that these extra Linux components were proprietary and restricted to SUSE Linux Enterprise.

Other OSes were restricted to emulated ATA disks + networking and only 1 vCPU, which limited non-Windows guests on Hyper-V to not-so-critical services. Ubuntu LTS 8.04 was our first production experience on Hyper-V. While it was (and still is) rock solid, we couldn't use the full power.

Times have changed, and MS released the Hyper-V drivers under a dual BSD and GPL license. My early testing (thanks to the "IT from all angles" blog) showed how much performance gain was possible just by switching to the paravirtualized NICs. The SCSI drivers were sometimes a bit unreliable with 2.6.33, randomly leaving the VM with a read-only EXT3 FS. This was all still using Hyper-V v1.

Ubuntu LTS 10.04 itself does not have a 2.6.35 kernel readily available, but a backport is planned and already maintained by Ubuntu in a PPA, using the next Maverick (10.10) kernel. With this kernel you get the (still staging) integration drivers, but they now work reasonably well, and you don't need to recompile your own kernel.
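From memory, pulling that backport kernel in and making sure the hv modules are available early looks roughly like this – package and module names are assumptions, check what the PPA actually ships:

```
$ sudo apt-get install linux-image-server-lts-backport-maverick
$ printf 'hv_vmbus\nhv_storvsc\nhv_netvsc\n' | sudo tee -a /etc/initramfs-tools/modules
$ sudo update-initramfs -u
```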

I’m quite happy that we can now use Linux VMs without significant performance restrictions on Hyper-V.

Just one warning: my personal experience is that the Linux Integration Components (Linux IC) do work with the first release of Hyper-V, but are less optimized for it and appear less stable than on Hyper-V v2 machines. It's also quite interesting that starting with Hyper-V v2, Microsoft expanded "official" support for Linux guests to Red Hat Enterprise Linux, while limiting RHEL compatibility to the v2 release.

I'm looking forward to post-2.6.35 development, in the hope that the "hv" code in the tree matures even more. Guest OS choice makes life easier – and I personally like the fact that Hyper-V got better compatibility with non-Windows guests.

September 7, 2010


Hello world!

Hi – and welcome to this “yet another” Blog.

I hope to post here some ramblings from my computer science and sysadmin life, as well as some more or less interesting other things 🙂

Now that I've switched hosting providers to Cyon, I hope to give my blogging a new start. – You can also follow me on

June 18, 2010
