Restoring FreeNAS in UEFI mode

Introduction: On good and bad luck

… and likely some bad practice: Sometimes good and bad luck arrive together. I had my FreeNAS Mini box humming along on 11.1-U1 and, as I regularly do a couple of days after a release, updated the OS, not expecting any major issues.

On major OS updates I exported the configuration, but on minor version bumps I didn’t always bother, since I don’t regularly change a lot of options on my FreeNAS. Luckily FreeNAS 11.x introduced an additional prompt asking whether to export the config before updating, which I accepted.

This time however the box didn’t come up, and after 10 minutes I connected to the IP KVM of the AsRock board and saw the following message:

failed to read pad2 area of primary vdev
ZFS: i/o error -  all block copies unavailable
ZFS can't read MOS of pool tank
gptzfsboot: failed to mount default pool tank

A quick search revealed that many who encountered this message ended up doing a full restore. Lucky me: I had exported the configuration this time, so things should be easy, and since I also had a spare SSD I didn’t bother digging down the rabbit hole of a full restore.

Restoring – and falling into another trap

Since I didn’t have the time to test whether the Apacer SATA DOM was actually at fault and I wanted access to my data again, I first replaced it with a known-good used 120 GB SATA SSD, then booted the latest 11.2-U2 installer disc over KVM remote media, selecting UEFI mode.

It’s been years since I last saw the actual FreeNAS installer, but the setup is pretty straightforward: select a disk, provide a root password and decide whether to configure UEFI boot or legacy (but well-known) MBR boot.

UEFI support has improved since FreeBSD 11, and I have started to use BIOS mode only when a system doesn’t support UEFI (or where an early UEFI firmware is buggy), so I selected UEFI because – why not?

Although the motherboard wasn’t configured for CSM-only boot, it didn’t want to boot from the new system drive after the otherwise successful install. Then it dawned on me that the loader wasn’t detected by the UEFI firmware, since all I saw was either the legacy boot drives or the EFI-bootable install disc.

Booting the OS temporarily

Dropping into the EFI shell showed that FS0:, the EFI system partition (ESP), was properly discovered, but the Asrock UEFI firmware didn’t like how the bootloader was named. FreeBSD, and hence FreeNAS, puts the EFI loader into efi/boot/BOOTx64.efi on the ESP.

Manually selecting the ESP and loading the bootloader started FreeNAS without any issue, so I went ahead and restored the configuration (which asks for another reboot, hence repeating the manual boot selection) by doing the following in the EFI shell:

FS0:
cd efi\boot

The good part: after booting and waiting for 10 minutes the system configuration was restored, and the things I needed, such as shares, were accessible again. Definitely a moment of relief, even though I’d have had backups.

However, manually selecting the bootloader at every reboot or after updates wasn’t going to cut it in the future.

Fixing bootup: Adding a boot variable using EFI shell

Unfortunately some UEFI vendors don’t search for all possible and legitimate bootloaders. Some only look for EFI\Microsoft\Boot\bootmgr.efi on internal drives (as seen in FreeNAS bug #16280). I also remembered similar issues some years ago with virtual machines on Linux KVM using the TianoCore-based OVMF UEFI firmware.

Unlike Debian 9, for what I can tell, the FreeNAS installer does not set an EFI boot variable which the UEFI firmware could use. The tool that would do this is efibootmgr on Linux, and a command of the same name is part of FreeBSD. efibootmgr is also part of FreeNAS 11.2; however, as of writing, efirt.ko isn’t shipped with this release, hence adding a new boot variable wasn’t going to work from the OS.

From what I can tell, FreeBSD 11.2 and 12.0 were the first releases to ship efirt.ko by default. FreeNAS 11.2 is based on FreeBSD 11.2-STABLE, so I imagine the FreeNAS build configuration currently doesn’t build the necessary kernel module. UPDATE: efirt.ko should be available with a release after FreeNAS 11.2-U2; it was simply not enabled in FreeNAS builds. Once that module is available, you should be able to avoid booting into an EFI shell.

No worries: the EFI shell – which I had never used beyond manually executing an EFI loader – should allow me to do this with the bcfg command. (See: Arch Linux Wiki: Unified Extensible Firmware Interface)

However, in its infinite wisdom, Asrock has decided to limit/castrate its EFI shell to not include this command. Thankfully the TianoCore edk2 repository on GitHub contains pre-built EFI shell binaries for x86-64 that can be loaded separately.

Since I didn’t want to plug a USB drive into it, I temporarily mounted the ESP of the boot drive and dropped the edk2 EFI shell binary onto it:

# camcontrol devlist | grep INTEL
<INTEL SSDSC2CW120A3 400i>         at scbus14 target 0 lun 0 (ada4,pass5)

# gpart show /dev/ada4
=>       40  234441568  ada4  GPT  (112G)
         40     532480     1  efi  (260M)
     532520  233897984     2  freebsd-zfs  (112G)
  234430504      11104        - free -  (5.4M)

mkdir /mnt/efi
mount_msdosfs /dev/ada4p1 /mnt/efi
cd /mnt/efi
fetch https://<url-to-latest-edk2-efi-shell-binary>

Afterwards I rebooted into the AsRock EFI shell and from there started the edk2 EFI shell, which now sits in the root of the ESP (the exact file name depends on the binary you downloaded; here I call it Shell.efi):

FS0:
Shell.efi

Now with the help of the Arch Linux Wiki I was able to construct the bcfg command required:

# Show current entries
bcfg boot dump

# Since no 3rd option is present, add one
bcfg boot add 3 FS0:\efi\boot\BOOTx64.efi "FreeNAS"

Afterwards I rebooted into the UEFI configuration and moved the new boot variable "FreeNAS" to the top of the boot priority. Now booting directly into FreeNAS works.


Once I had found the quirks (FreeNAS not providing efirt.ko, the motherboard’s EFI shell being restrictive), I am happy to say that the system works as expected again. Maybe I can provide some feedback and input towards better UEFI integration in FreeNAS, we’ll see.

Had I remained with CSM boot, the restore procedure would have been faster, but I’m happy that I attempted UEFI during the restore. UEFI isn’t per se "better" than legacy boot, but newer systems that I encounter are starting to default to UEFI, and some even flat out behave buggily when using CSM/legacy boot these days (e.g. AsRock’s J3455B-ITX board).

I have to thank the crew at iXsystems and the community around FreeNAS for providing a fully restorable configuration in the NAS OS of my choice for a couple of years already, for exactly those things I normally don’t want to mess around with.


  1. I’m looking into automating a config export for my backups; it might be possible via the API. In any case I’ll definitely change my habits and export a config before every update.
  2. Explicitly adding an NVRAM Boot Option to the UEFI wouldn’t have been needed at all if the UEFI on my particular system didn’t only look for Windows-style UEFI loaders on ESP partitions.
  3. I’ve learnt that writing Boot Options to the UEFI NVRAM during install isn’t yet supported on any FreeBSD release as of 12.0-RELEASE. UPDATE: Thanks to code contributed by Rebecca Cran (bcran@), the FreeBSD 13 installer will add boot entries. The respective commit was made in December 2019, mostly in r342637. Since then Rebecca has contributed other EFI-related improvements to FreeBSD, thanks Rebecca!
  4. Unfortunately some UEFI implementations don’t keep custom Boot Options across reboots, and some are even buggier; luckily AsRock’s implementation didn’t have this bug.
  5. It should be noted that although FreeBSD’s efibootmgr(8) has the same name as the command in most Linux distributions, comparing the manpages reveals that the CLI parameters DO differ.


Posted In: Uncategorized


MegaRAID drive inconsistent: consistency check stuck

Many server vendors ship their servers with branded MegaRAID RAID controllers. My relationship with the many flavours they come in is sometimes a bit polluted, I dare to say.

An excellent quick guide to the more modern StorCLI utility, available for Linux, Windows, FreeBSD and others, is Thomas Krenn’s Wiki page on StorCLI. More often than not I have to consult the CLI reference made available by (today) Broadcom.

Now I’ve come across a slightly stubborn IBM-branded MegaRAID controller where one of the virtual disks (a RAID5) showed an optimal state but was not consistent.

# /opt/MegaRAID/storcli/storcli64 /c0 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.0409.0000.0000 Nov 06, 2017
Operating system = Linux3.2.0-5-amd64

Product Name = ServeRAID M5110
FW Package Build = 23.34.0-0018
BIOS Version =
FW Version = 3.460.105-6456



DG/VD TYPE State Access Consist Cache Cac sCC Size Name
0/0 RAID1 Optl RW Yes RWBD - ON 278.464 GB
1/1 RAID5 Optl RW No RWBD - ON 3.635 TB
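With more VDs on a controller, the inconsistent ones can be picked out of such a summary with a little awk. A stand-in run over the two captured rows above ($5 is the Consist column):

```shell
# Print the DG/VD ids whose "Consist" column reads "No"
printf '0/0 RAID1 Optl RW Yes RWBD - ON 278.464 GB\n1/1 RAID5 Optl RW No RWBD - ON 3.635 TB\n' |
awk '$5 == "No" { print $1 }'
```

In practice you would pipe the output of storcli64 /c0 show through the same filter (after stripping the header lines).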

I’ve attempted to start a consistency check on that VD but the controller always reported it couldn’t:

root@ieu04-sr21:~# /opt/MegaRAID/storcli/storcli64 /c0 /v1 start cc
VD Operation Status ErrCd ErrMsg
1 CC Failed 255 Start CC not possible

At first I suspected a running patrol read would block it (storcli64 /c0 show pr), but the consistency check wouldn’t start afterwards either. In the end, after trying to resume it, it was finally OK:

# /opt/MegaRAID/storcli/storcli64 /c0 /v1 show cc
VD Operation Progress% Status Estimated Time Left
1 CC - Paused -

# /opt/MegaRAID/storcli/storcli64 /c0 /v1 resume cc
CLI Version = 007.0409.0000.0000 Nov 06, 2017
Operating system = Linux3.2.0-5-amd64
Controller = 0
Status = Success
Description = Resume CC Operation Success

Afterwards the consistency check quickly finished and the alert issued by the monitoring system cleared. Nonetheless I manually started a consistency check to see whether any issues would be found. I guess the consistency check was paused at some point due to a preceding firmware update but wasn’t restarted automatically afterwards.



FreeRADIUS packages for Debian

Although it was announced a bit more than a month ago on the FreeRADIUS-users list, I thought I might mention the package repository that I maintain for FreeRADIUS on Debian: welcome, 1Labs Packages!

There you can find:

  • Point releases for Debian jessie and wheezy of 3.0
  • (Bi-)weekly snapshots for Debian jessie and wheezy of 3.0
  • (Bi-)weekly snapshots for Debian jessie for 3.1 (until 3.1 gets released)

This complements the service that Fajar Nugraha has been providing to the community via an unofficial Ubuntu PPA. A similar offer hasn’t existed for Debian so far (only a rather stale SuSE OBS repository), so this is where I’d like to fill the current gap for anyone interested.
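Using such a repository follows the usual apt pattern; a hypothetical sketch (the real URL, suite and component names are listed on the packages page, <packages-host> is a placeholder):

```
# /etc/apt/sources.list.d/freeradius-1labs.list (placeholder URL and suite)
deb http://<packages-host>/debian jessie main
```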

As of early to mid 2016 (from my point of view and knowledge), Debian and its siblings including Ubuntu only ship FreeRADIUS 2.x, a release branch that has been EoL-ed by the FreeRADIUS project. Citing the commit message right after 2.2.9, preparing for a future 2.2.10 release:

commit c69b7e0abdb69e821133bbe030749bb119466256
Author: Alan T. DeKok <removed>
Date:   Tue Oct 6 09:11:27 2015 -0400

    Bump for 2.2.10

    Which will only be released if there are catastrophic security
    bugs.  Everyone should upgrade to 3.0

This isn’t the only occurrence where the FreeRADIUS main developers emphasize upgrading to 3.0, since their focus has shifted to this stable branch and the upcoming 3.1/3.2. I’d like to point out that the first 3.0.0 release was made in October 2013. Most Linux distributions and the BSDs ship FreeRADIUS 3.0 in some way (even the rather conservative RHEL 7!) but not Debian, nor any derived distribution that gets its source packages from Debian.

As of now bug report 797181 exists in Debian about FreeRADIUS 3; however, in Alan DeKok’s own words:

The Debian maintainer seems to have disappeared, and is unresponsive.  
We need a new Debian maintainer, but the process isn't trivial, and no one has stepped forward.

(freeradius-users from Feb 26 2016)

Becoming a Debian maintainer isn’t a quick thing and requires some effort if you want to provide quality packages. This isn’t something I can afford to spend my time on right now. However, if nobody uses FreeRADIUS on Debian and Ubuntu, issues will go unseen and FreeRADIUS is going to get blamed for not running well on these distributions.

The FreeRADIUS source tree has been shipping Debian packaging definitions for a long time, but the exchange between up- and downstream hasn’t been very active. For some time the Debian packaging information in the FreeRADIUS source tree wasn’t in the best shape. Luckily this has changed over the last months, and (IMHO) all deal-breaking issues have been resolved by a couple of helpful individuals (for example). With the snapshot builds I hope to catch regressions more quickly and get a wider audience testing FreeRADIUS on Debian, in order to make a future Debian maintainer’s life as smooth as possible. For what I can tell, I’ve had rather positive experiences with the bug reports and small pull requests I have been involved with so far.

I can’t give any warranty that my package builds are bug-free or follow all best practices. For me it’s a learning experience, and I have promised on the FreeRADIUS list to keep people informed if I have to, or find a good reason to, cease this effort. For the moment this is how I can help make the FreeRADIUS experience a bit smoother and ease a future Debian maintainer’s effort.



FreeRADIUS 2 to 3.0: Migration experience and nested LDAP groups

Since I (finally!) migrated a FreeRADIUS 2.x server to 3.0, I ran across some syntactical changes in this release branch and am sharing a small bit of the experience here on my blog.

Do I have to redo my config from scratch?

The good news from my experience: no. If you haven’t totally messed up the default configurations, a lot of snippets will still work with quick minor changes. Read twice, enter once, and it’s likely going to fly quickly. As the project strongly recommends: you should really NOT reuse your FreeRADIUS 2.x configuration, but start with the clean default configuration for v3 and then modify it to the desired state. While successfully adding back my previous settings one by one, I unfortunately didn’t spot a difference, which made my LDAP queries not work as I expected.

In FreeRADIUS 2.1.2 modules/ldap had the following default:

ldap {

   groupmembership_filter = "(|(&(objectClass=group)(member=%{control:Ldap-UserDn}))(&(objectClass=top)(uniquemember=%{control:Ldap-UserDn})))"

In FreeRADIUS 3.0.11 these options are split out into their own group section, and the former groupmembership_filter was also split up:

    group {
    #  Filter for group objects, should match all available
    #  group objects a user might be a member of.
    filter = '(objectClass=posixGroup)'

    #  Filter to find group objects a user is a member of.
    #  That is, group objects with attributes that
    #  identify members (the inverse of membership_attribute).
#   membership_filter = "(|(member=%{control:Ldap-UserDn})(memberUid=%{%{Stripped-User-Name}:-%{User-Name}}))"


Now the filter to identify group objects is the short filter option, while membership_filter contains the query for filtering the membership.

In my case the LDAP directory is an Active Directory, where nested groups are not only allowed but quite frequently used. Unfortunately standard queries won’t reveal nested memberships, as this requires a specific matching rule only supported by AD LDAP servers. Thankfully Nasser Heidari blogged about how this could be achieved with FreeRADIUS 2 – and since most of my local changes worked readily with FreeRADIUS 3, I didn’t realize this had to be changed in a minor way.

In short in FreeRADIUS 2.x it was

groupmembership_filter = "(&(objectcategory=group)(member:1.2.840.113556.1.4.1941:=%{control:Ldap-UserDn}))"

In FreeRADIUS 3.0 you want to split this into e.g.

filter = '(objectCategory=group)'
membership_filter = "(member:1.2.840.113556.1.4.1941:=%{control:Ldap-UserDn})"
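Put together, the relevant part of the ldap module configuration then looks roughly like this (an excerpt only; everything else stays at its defaults):

```
ldap {
    # ...
    group {
        # AD: match group objects only
        filter = '(objectCategory=group)'
        # The AD-only LDAP_MATCHING_RULE_IN_CHAIN OID also resolves
        # nested group memberships
        membership_filter = "(member:1.2.840.113556.1.4.1941:=%{control:Ldap-UserDn})"
    }
}
```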

Not a huge difference, but if someone is rubbing his or her eyes during migration: this worked for me.

If you care about not running outdated software that isn’t really receiving much love from the developers any more, I can only encourage you to move to 3.0. Not only have there been a ton of performance improvements – there are definitely improvements in the configuration syntax that I like as well.



Unbricking a Sun Fire v210

These are notes I made on a system a couple of months ago and never got around to polishing up, but finally kicked out the door.

At the time of writing, the Sun Fire v210, an entry-level UltraSPARC IIIi server from around 2003, was already dated; nonetheless I was able to get my hands on such a machine, including another Sun box, a T1000. I just wanted to get my hands dirty on some non-x86 machines, so that’s what I got myself pretty cheaply (being 1U they don’t take up too much space). Unfortunately I had to learn the hard way that some older Sun machines can be locked down so well that you can’t unbrick them without more advanced hackery than I am currently capable of or want to invest my time in – or without the help of similar/identical unlocked hardware.

The situation the box came to me:

  • The onboard system controller ALOM (service/management processor) started correctly but its password was unknown to me.
  • OpenBoot PROM (OBP) had both ‘security-mode=on’ and an active ‘security-password’, with the following result: booting from anything other than the default (auto-boot set to the SCSI disk) required entering a “firmware password”, unknown as well. This excluded booting from the network or an optical drive. (For x86 folks like me: OpenBoot is roughly the equivalent of BIOS/UEFI, but with lots more functionality.)
  • With the disk inserted the box came up with a Solaris 9 that still seemed to boot healthily, yet again: I didn’t have the root password.
  • The machine came without a CD or DVD drive…

If you are in this situation, lots of posts I found mentioned that you may need another Sun machine with a disk and a known root password that you can yank in and boot the locked system off to fix the issues. I didn’t have that luck (the T1000 is SAS-based, the v210 U160 SCSI). Also: mounting a UFS partition from a SPARC system on x86 may fail because of different endianness. If it had been ZFS, things would have been easier since ZFS actually handles this for you. So without any bootable Solarish OS, I currently believe you are out of luck with such a machine.

I’d also say that access to OpenBoot is the first key to getting back to a more sane situation; ALOM is useful too, but without OpenBoot you’re out. It is the key piece that enables you to boot the box from whatever source OpenBoot can boot from.

What didn’t work:

  • In contrast to x86 systems, where you can most often short some jumpers or remove the CMOS battery, most OpenBoot values are stored in persistent storage; only the system clock is buffered by a CR2032 battery.
  • Modifying OpenBoot with security-mode enabled requires either knowing the firmware password or a bootable Solaris OS with access to the command named eeprom(1M).
  • Without access to OpenBoot itself no jumper will reset the ALOM config like mentioned for the Sun Fire V100: On the V210 the equivalent jumper only reboots ALOM and that’s about it.
  • Without the ALOM’s password the sole way to modify its config is via the Solaris- or illumos-only scadm(1M) command. A bootable OS is required for that. Unlike on the T1000, hitting the ESC key early in the ALOM startup doesn’t offer the option to reset it to default settings and forget the passwords.

The magical System Configuration Card

While most LOMs (LAN on motherboard) on x86 systems have their MAC addresses burned in, on these Sun machines the MAC address is stored on a smartcard called the “System Configuration Card” (SCC), inserted at the front of the server. I came to learn that a box without this particular card would be pretty much useless, as you can’t use the onboard NICs without it. Lots of other OpenBoot values are stored on the SCC as well, including the (bloody!) firmware password.

At this point: thanks to a helpful explanation of the SCC’s functionality from a very kind local UNIX and Solaris consultant (who said to me, roughly, “It’s my job, I should know how to fix that.”). Thanks to him I tried powering up the system without any disk drives and with the SCC pulled. OBP greeted me with errors about the missing IDprom but went straight to a standard ‘ok’ prompt – now I had something to work with…

How I unbricked my particular machine:

  • The SCC card had to be removed (the box was powered off – it is NOT hot-pluggable)
  • Next an IDE CD-ROM was connected to the onboard IDE port (thanks to my dad, who kept a nice stock of old computer parts)
  • With the hard disks pulled I powered the server on and let it boot to OBP, ending up with some errors about the missing IDprom but a plain ‘ok’ prompt
  • Inserted the disk with the OS and issued ‘probe-scsi’: the SCSI disk in question was recognized by OBP, OK
  • ‘probe-ide’ showed that the IDE CD-ROM was recognized (I realized a DVD writer didn’t work, that is why I used the CD-ROM drive)
  • Inserted a Solaris 8 CD (9 or 10 should work too, not 11 with its removal of sun4u support) and hit ‘boot cdrom -s’ (‘boot cdrom’ launches the installer, -s goes into single-user mode)
  • The first attempt without the SCC always failed with “card not inserted” (missing IDprom…), but repeating it booted Solaris from the CD
  • After some waiting a Solaris shell was ready

Without the SCC inserted there was no network; nonetheless, things got more interesting from there:

# List disks and get their names (e.g. with format, which lists all disks)
echo | format

# Create temporary folder as mount point
mkdir /mnt/recovery

# Mount the local SCSI disk and switch to its /etc
mount /dev/dsk/c1t0d0s0 /mnt/recovery
cd /mnt/recovery/etc

Now, being chrooted into the mounted disk as root allowed me to modify /etc/passwd. Because both sed and vi on Solaris 8 differed from what I knew from Linux and the BSDs, I opted to make copies of /etc/passwd and /etc/shadow and use sed (there is no -i as in GNU sed) to replace the hash of the root password with nothing. (Source about /etc/passwd)
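The idea, sketched on a stand-in line (the real edit happens on the mounted copy of /etc/shadow, whose fields will of course differ):

```shell
# Empty the password hash in root's shadow entry; with a sed lacking -i you
# redirect the result to a new file and move it back over the original
printf 'root:SOMEHASH:6445::::::\n' |
sed 's/^root:[^:]*:/root::/'
```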

This resulted in an OS with an empty root password. Time to exit the chroot environment and shut down the OS booted from the CD. Next I could insert the SCC again, since that was the thing blocking access to the rest of the system. As expected, the OS was “kind enough” to accept an empty password for the root user the next time it booted from the hard disk.

Unfortunately I didn’t keep notes from that stage, but the manpage says the eeprom binary is platform-specific and located in a platform-specific folder. Anyway:

  • ‘eeprom’ showed that ‘security-mode’ was set to ‘command’
  • ‘eeprom security-mode=none’ makes the system stop asking for a firmware password
  • ‘scadm’ can then also be used to re-set the ALOM password
  • Once done, reboot the OS and voilà: OBP is unlocked! Additionally, ALOM for remote management was now accessible too.

Was it economical? Nope, but it was a fun brain-twisting experience.

2016.02: Fixed specs: the Sun Fire v210 and v240 only have U160 SCSI controllers, not U320 as I initially wrote.



Finding a host via its MAC

Just something quick for when you have to find the IP address of a host of which you only know the MAC address (e.g. it got its IP via DHCP) – and you don’t have access to the DHCP server in charge and its logs about given leases. There are a couple of ways of doing so, and mine is likely not the most ideal one…

A couple of tools exist to scan your local network completely, for example nmap or arp-scan:

# nmap -sn <network>/24

# arp-scan --interface=em0 -l

If you know the MAC address and don’t want to blast the whole network with broadcasts (even if only briefly), you can use arping:

# arping 00:00:00:00:00:00

For sure the IP in use has to be within the same IP subnet, since the lookup sends broadcasts to the local network segment, which ARP resolves to a given MAC address. Since ARP is not used for routed traffic, this won’t help you if that machine is in another subnet.
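If the host has already talked to your machine, the ARP cache alone may be enough. A stand-in filter over arp -an style output (the MAC and sample lines here are made up):

```shell
# Look up the IP that the ARP cache associates with a given MAC
mac="0:c:29:3e:4f:50"
printf '? ( at 0:c:29:3e:4f:50 on em0\n? ( at 8:0:27:aa:bb:cc on em0\n' |
awk -v m="$mac" '$4 == m { gsub(/[()]/, "", $2); print $2 }'
```

In real life you would pipe arp -an into the filter (or ip neigh on Linux, which uses a different column layout).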


OpenVPN tutorial: Full tunnel add-on

Recently I needed a VPN for myself, since I was travelling more and had to use networks with proxies that not only blocked illegal sites but also yielded timeouts on perfectly legal content. Using BSDnow‘s excellent OpenVPN tutorial I went ahead in a breeze. But normally I use OpenVPN to connect to a remote (internal) network; this time I needed it to pass all traffic through the VPN so I could access the net ignoring local proxies.

As the tutorial mentions, adding push “redirect-gateway def1 bypass-dhcp” to the server’s openvpn.conf temporarily overrides the client-side default gateway, forcing all traffic through the VPN – that’s what I needed. Another (later added) line was “topology subnet”, since otherwise each client only gets a /32 per connection, which makes things a bit hairy at the next stage; tun0 on the server side then uses a /24 by default. The “large” missing piece for my use case was pf, to NAT the VPN clients through the box to the internet. Here is the minimal ruleset for /etc/pf.conf I used:

ext_if = "vtnet0"
vpn_if = "tun0"

vpn_net = ""    # the subnet OpenVPN hands out ( is its default)

nat on $ext_if from $vpn_net to any -> ($ext_if)
pass in on $ext_if inet proto tcp from any to ($ext_if) port 22
pass in on $ext_if proto tcp from any to any port 1194

I had to declare vpn_net instead of using e.g. $vpn_if:network, since at boot pf doesn’t yet know which network tun0 uses before OpenVPN comes up. The NAT rule then translates traffic arriving from the VPN network on tun0 to the external interface. The two other rules let me SSH to the VM and allow OpenVPN traffic in.
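For completeness, the two server-side openvpn.conf additions mentioned above, collected in one place (an excerpt, not a complete server config):

```
# Temporarily override the clients' default gateway while connected
push "redirect-gateway def1 bypass-dhcp"
# One shared /24 for all clients instead of per-client /32s
topology subnet
```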

Since this setup needs to forward traffic between (two) network interfaces, I had to enable packet forwarding; in the end all it needed was:

# sysrc gateway_enable="YES"
# sysrc pf_enable="YES"

To avoid a reboot and enable forwarding right now:
# sysctl net.inet.ip.forwarding=1 

Start pf and restart OpenVPN
# service pf start
# service openvpn restart 

Just in case: Make sure pf.conf is really loadable
# pfctl -f /etc/pf.conf

And that’s about it. It’s not a full tutorial, just an add-on to the one over at BSDnow for those situations where you need a full tunnel for $REASON (and no, Tor isn’t always a working solution). A big thanks to Adam McDougall, the original author and helping hand – and the BSDnow crew for their high-quality content!



Java WebStart and >= TLS 1.0 – oh why…

An awful lot of security issues have led Oracle to tighten things when it comes to Java WebStart, which is used in an awful lot of KVM-over-IP solutions. Some of those systems are even very picky about the Java version used. *blimey*

Now I had those shiny new IBM System x x3650 M4 and x3550 M4 machines that I was exploring while documenting settings for their remote service processor. In IBM’s (soon Lenovo’s?) System x M4 series this thing is called IMM2 (Integrated Management Module 2), and once you have installed a (not so cheap) license key you get a shiny remote KVM ability.

I was unfortunate enough to look at the documentation and discover the CLI parameter ‘tls’:

system> help tls

   tls [-options] - configures the minimum TLS level
   -min <1.0 | 1.1 | 1.2> - Selects the minimum TLS level
   -h - Lists usage and options

system> tls
-min 1.0

So I thought: “TLS 1.0 is aging, let’s at least bump it to 1.1.” Until that moment the remote management capability had worked pretty well beyond some random Java quirks. But I had tried out a couple of settings, so at first it wasn’t obvious that it was because of this that Java stopped loading the Avocent KVM-over-IP applets and instead greeted me with this: Remote host closed connection during handshake
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at Source)
	at com.sun.deploy.cache.ResourceProviderImpl.checkUpdateAvailable(Unknown Source)
	at com.sun.deploy.cache.ResourceProviderImpl.isUpdateAvailable(Unknown Source)
	at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
	at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
	at com.sun.javaws.LaunchDownload$ Source)
	at Source)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$ Source)
	at Source)
Caused by: SSL peer shut down incorrectly
	at Source)
	... 20 more

OK, why did that happen? I thought TLS 1.1 had been supported by Java 7 for some time now – right?

  • Java 6 only supported up to TLS 1.0
  • Java 7 (at least up to Update 55) added support for TLS 1.1 and TLS 1.2 but never enabled them by default
    If you want to enable them, open the Java Control Panel (javacpl) and enable the newer TLS versions
  • Java 8 seemingly comes with TLS 1.1/1.2 enabled by default
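The same switch can be flipped without the GUI in the per-user deployment.properties file. A sketch, assuming the documented deployment.security.TLSv1.x property names and the Linux path (Windows and macOS keep the file elsewhere):

```shell
# Enable TLS 1.1/1.2 for Java Web Start / applets for the current user
mkdir -p "$HOME/.java/deployment"
cat >> "$HOME/.java/deployment/deployment.properties" <<'EOF'
```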

If I had properly read the error message (*doh*) I would have realized far more quickly where to look.

Currently you have to either leave the IMM2’s minimum TLS version at 1.0 (the default) or fix your Java to allow newer TLS versions.




“Hidden” CLI interface on Netgear GS110TP

The price difference between cheaper “smart managed” and higher-priced “fully managed” switches is often made up by removing a) serial console access and b) access to a remote CLI. Having worked more often with managed switches, I really appreciate CLI access, since most GUIs I’ve used so far (Netgear, HP-H3C Comware, Cisco IOS) were not much of a pleasure and most often slow. The serial console is of less use, but it becomes very handy if the device doesn’t want to boot, or for initial configuration.

Some vendors restrict or hide CLI access on their larger smart switches – maybe for support or developer purposes – one that I know about is the HP 1910 series that I’ve used (the formerly H3C-based 3Com 2928). It was during a port scan on my GS110TP that I realized there were more than the expected HTTP and HTTPS ports responding. After increasing the scope to a full TCP scan I saw two ports in the upper range that piqued my interest:

# nmap -p 1-65535 -T4 -A -v <ip>
Completed NSE at 08:44, 35.57s elapsed
Nmap scan report for (<ip>)
Host is up (0.011s latency).
Not shown: 65528 closed ports
22/tcp    filtered ssh
23/tcp    filtered telnet
80/tcp    open     http?
|_http-methods: HEAD GET OPTIONS
|_http-title: NETGEAR GS110TP
161/tcp   filtered snmp
443/tcp   open     ssl/https?
| ssl-cert: Subject: commonName=<removed>
4242/tcp  open     vrml-multi-use?
60000/tcp open     unknown
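Filtering a scan like that down to the interesting non-standard open ports is a quick awk job (a stand-in run over the captured lines above):

```shell
# Print open TCP ports above 1024 from nmap's port table
printf '22/tcp    filtered ssh\n80/tcp    open     http?\n443/tcp   open     ssl/https?\n4242/tcp  open     vrml-multi-use?\n60000/tcp open     unknown\n' |
awk '$2 == "open" { split($1, a, "/"); if (a[1] + 0 > 1024) print a[1] }'
```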

For sure the default telnet and ssh ports didn’t return anything interesting, but TCP 4242 and TCP 60000 remained. Apparently 4242 isn’t of much use, possibly a management interface for Netgear, but it seems to have been detected by others on a couple of Netgear switches. During a quick search I came across a post from Koos van den Hout, who had discovered a telnet server on a larger, rackmount GS716T using an older firmware; so at least there was a trace of Netgear having a “hidden” CLI access on some of their larger smart switches. I tried my luck using a telnet client on my tiny 10-port switch, and what I got closely resembled Koos’ GS716T.

(Broadcom FASTPATH Switching) Applying Interface configuration, please wait ...

I continued as follows: since the GS110TP allows neither defining different users nor RADIUS-based management authentication, I tried what Koos suggested and used the default ‘admin’ user found on the larger switches that do have a user name for login. This resulted in a password prompt. To get full access, type ‘enable’ and press enter twice (Cisco IOS – anyone?). I can now confirm that this works on the GS110TP running this firmware, and likely on the GS108Tv2 (which uses the same firmware image):

(Broadcom FASTPATH Switching)
Applying Interface configuration, please wait ...admin
(Broadcom FASTPATH Switching) >
(Broadcom FASTPATH Switching) >?

enable                   Enter into user privilege mode.
help                     Display help for various special keys.
logout                   Exit this session. Any unsaved changes are lost.
passwd                   Change an existing user's password.
ping                     Send ICMP echo packets to a specified IP address.
quit                     Exit this session. Any unsaved changes are lost.
show                     Display Switch Options and Settings.

(Broadcom FASTPATH Switching) >enable
(Broadcom FASTPATH Switching) #show version
Switch: 1

System Description............................. GS110TP
Machine Type................................... GS110TP
Machine Model.................................. GS110TP smartSwitch
Serial Number.................................. [...]
FRU Number.....................................
Part Number.................................... BCM53312
Maintenance Level.............................. A
Manufacturer................................... 0xbc00
Burned In MAC Address.......................... [...]
Software Version...............................
Operating System............................... ecos-2.0
Network Processing Device...................... BCM53312_B0
Additional Packages............................ FASTPATH QOS
                                                FASTPATH IPv6 Management

(Broadcom FASTPATH Switching) #configure
(Broadcom FASTPATH Switching) (Config)#

As you can see at the end, even entering config mode is possible. If you are familiar with the Cisco IOS CLI you’ll notice how similar things are on the Netgear switches (Google tells us FASTPATH is a Broadcom product). You can also have a look at Netgear’s M4100 or M5300 CLI guides to get a closer idea of the command usage, though not all commands are available on this box. If you change things via the CLI, remember to save the running config to the NVRAM’s startup config, which is what the web UI otherwise does automatically for you (#copy system:running nvram:startup-config).

Warning: Some commands cause an instant reboot
However, as Koos already confirmed for the GS716T, certain commands don’t seem to be recognized and may cause an instant reboot of the switch without saving to the NVRAM (e.g. #ip ssh server enable). That might be why Netgear preferred disabling regular CLI access on this firmware: they didn’t want to support it. Still, it is quite useful to know that even on such a small entry-level managed switch there is a CLI available in case you need it.




Quick-tip: Put the UniFi wireless controller on a separate LV

I’ve been running a Ubiquiti UniFi wireless controller in production for a while, knowing that it requires quite some amount of storage. How much heavily depends on how many APs and users you have: larger setups will have to take this into account earlier, others might never notice. As Ubiquiti states, for auditing purposes the UniFi server keeps a lot of data around, which in my current experience can easily grow by a couple of gigabytes per – let’s say – 4 months. In the Debian and Ubuntu packages the controller’s data is principally located in:

  • /var/lib/unifi: Database, AP config files
  • /usr/lib/unifi: Firmware binaries, base config files

On other platforms like FreeBSD it might be elsewhere, e.g. somewhere in /usr/local/…
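
To keep an eye on how much space the controller actually consumes, a quick check could look like the following (paths as above; the Debian/Ubuntu layout is assumed, so the directories may simply not exist on other platforms):

```shell
# Current size of the controller's data and firmware directories
du -sh /var/lib/unifi /usr/lib/unifi 2>/dev/null \
  || echo "unifi directories not present on this machine"

# Largest items inside the data directory, sorted by size
du -sh /var/lib/unifi/* 2>/dev/null | sort -rh | head
```

Running something like this from cron (or via your monitoring of choice) gives you a heads-up before the partition fills.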

The first one is the one to watch a little more closely. While it will initially grow to roughly 3 GB on first run, the MongoDB preallocation files are generally OK, and disabling journaling isn’t worth the risk and hassle during a power outage either. However the data can easily grow and, if you forget about it, fill up the partition and cause other services to stop working (actually 2.4.x seems to continue quite happily as long as you don’t try adopting new APs – that’s when I experienced problems).*

Anyhow, from lessons learned I’d suggest putting at least /var/lib/unifi on a separate partition or – likely easier to grow online – an LVM logical volume, and have it mounted there. If you have something like ZFS, things are even easier: create a new dataset with a quota so it doesn’t outgrow the other available disk space, and set its mountpoint. Of course you have to stop the unifi daemon first, create the new volume/filesystem, move the data over, and be happy.
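
On a Debian/Ubuntu box with LVM already in place, the steps above could look roughly like this – a sketch, not a tested recipe: the volume group name vg0, the 10G size and the ext4 filesystem are assumptions you will want to adapt:

```shell
# service unifi stop                           # stop the controller first
# lvcreate -n unifi -L 10G vg0                 # assumed VG name and size
# mkfs.ext4 /dev/vg0/unifi
# mount /dev/vg0/unifi /mnt
# cp -a /var/lib/unifi/. /mnt/                 # copy the data over
# umount /mnt
# mv /var/lib/unifi /var/lib/unifi.old         # keep the old data until verified
# mkdir /var/lib/unifi
# echo '/dev/vg0/unifi /var/lib/unifi ext4 defaults 0 2' >> /etc/fstab
# mount /var/lib/unifi
# service unifi start
```

Growing the volume later is then a matter of lvextend with a filesystem resize (e.g. lvextend -r), which works online.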

Afterwards, consider looking at Ubiquiti’s pruning script in their FAQ, decide how much data you really need to retain, and prune on a more-or-less regular basis. I hope putting it on an extra volume keeps me out of trouble from now on.

* The config files for a new AP can’t be written to disk, so the newly adopted AP won’t get any valid config, and you might then have to forget, reset and re-adopt the AP to get it working.

