I faced the grueling ordeal of migrating my home server — Solaris 11.1 with ZFS (zpool version 34) on VMware ESXi — to a new machine with an LSI 2308 HBA. If somebody is in the same situation, these notes may provide some assistance.
A long time back I made the choice (probably a mistake) to continue from Solaris 11 Express to Solaris 11.1 and upgraded my zpool beyond version 28, which locked me into Oracle's release of Solaris. All the other Solaris clones (offshoots of OpenSolaris) support only version 28 of zpool, the last one shipped with the open-source release of Solaris. So it is very hard for me to switch to a different flavor of Solaris.
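If you are unsure where your own pool stands, the standard zpool commands will tell you (shown here as a transcript; these need a Solaris/illumos system and a pool name of your own, `tank` is a placeholder):

```
# Show the pool's on-disk version; anything above 28 ties the pool to Oracle Solaris
zpool get version tank

# List the zpool versions the running system supports
zpool upgrade -v
```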
This winter I planned to replace my aging hardware. On my home server I run VMware ESXi (free version), hosting multiple virtual machines. One of the VMs is my NAS, which runs Solaris 11.1 with ZFS on several disks presented to the VM using VT-d (passthrough). I wanted to move the same setup to the new hardware… and that's where I ran into a series of issues.
| Component | Specification |
|---|---|
| CPU | Intel Xeon E5-2690 v2 |
| Memory | 64GB ECC Registered DDR3 |
| Onboard RAID | LSI 2308 |
Issue #1: Firmware Quagmire
The motherboard shipped with an older firmware version that did not recognize the newer Ivy Bridge based Intel E5 v2 CPUs. Symptom: after installing the CPU and turning on the power I would get no BIOS error beeps and nothing on the display. This typically indicates a bad CPU (or, in this case, a non-functional CPU due to firmware incompatibility).
Note: I was, however, able to connect to the IPMI subsystem, which runs independently. The IPMI subsystem on the Supermicro board was set to DHCP, so I was able to quickly scan for the IP address and log in using a web browser. Unfortunately, there is little you can do with the IPMI interface if the CPU is incompatible (at least that's what I initially thought).
I was very disappointed and was pretty sure this would require a motherboard exchange with Supermicro, resulting in 10-14 days of delay. I also sent a mail to firstname.lastname@example.org and am still awaiting a response (12 days as of this blog post). But after googling around I stumbled upon a reference to a beta tool called Supermicro Update Manager (SUM). It allows out-of-band updates to X9 and X10 motherboards through the IPMI interface (if your motherboard has IPMI).
The software is not freely available; you have to request the evaluation software, which I promptly did, and to my surprise I got an instantaneous response from James Chiang @ Supermicro (thanks a lot for being super helpful) with the access code and key for downloading and activating the feature.
In order to use SUM, the IPMI firmware must be version 3.x, and of course I had outdated firmware on the IPMI as well. But since the IPMI interface was up and accessible, I was able to update its firmware easily through the web interface. The latest firmware can be downloaded from Supermicro's website.
Once the IPMI was on the latest revision, I was able to enter the key provided by Supermicro under the “Maintenance” -> “BIOS Update” section of the interface.
With the IPMI at the latest level and SUM activated, updating to the latest BIOS through the web interface is just a click-and-go operation. Upgrading a motherboard's BIOS without a working CPU is a pretty impressive capability — I am glad I bought this Supermicro motherboard. I was able to boot the system after the upgrade…
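For reference, SUM also ships with a command-line tool that can do the same update from another machine on the network. A rough sketch of an out-of-band update (the IP address, credentials, and BIOS image name below are placeholders — substitute your own):

```
# Query the current BIOS over the IPMI/BMC interface (no working CPU required)
./sum -i 192.168.1.50 -u ADMIN -p ADMIN -c GetBiosInfo

# Flash a new BIOS image out-of-band
./sum -i 192.168.1.50 -u ADMIN -p ADMIN -c UpdateBios --file X9SRH.B10
```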
Issue #2: Wrong Firmware on LSI2308
The default firmware that ships with the motherboard is the IR version, which provides the capability to create RAID groups and present them as LUNs. But in this mode it is not possible to expose individual disks, which is what I needed for my ZFS server. There is no need for a RAID subsystem with ZFS, as ZFS includes its own RAID and checksum capabilities.
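With the disks exposed individually, ZFS supplies the redundancy itself. For example, a raidz pool built directly on the raw disks (device names below are illustrative — use the ones your system actually reports):

```
# Create a single-parity raidz pool from three whole disks
zpool create tank raidz c0t5000C500AAAAAAA1d0 c0t5000C500AAAAAAA2d0 c0t5000C500AAAAAAA3d0

# Verify the layout and health
zpool status tank
```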
Fortunately, for exposing individual disks LSI provides another firmware that turns on IT mode. Upgrading the LSI firmware was slightly more involved, but still pretty straightforward.
1. Get the latest firmware from Supermicro's FTP site.
2. Extract the zip file, copy the contents of the UEFI folder to a DOS-formatted USB disk, and plug the disk into the server.
3. Get the SAS ID of the LSI controller. It can be found either on a sticker on the motherboard or by entering the LSI BIOS menu (Ctrl + C) during POST. The SAS ID is a 16-digit hex string split by a colon: “XXXXXXXX:YYYYYYYY”.
4. Boot the server, press F11 to select the boot device, and choose EFI Shell.

Step 4 should (ideally) get you to an EFI Shell prompt. In my case the boot process was getting stuck somewhere and I could not reach the prompt. I resolved the issue by entering the BIOS, reverting all settings to factory defaults, and going through the above process again.

5. Change the drive to fs0 and the directory to UEFI, which contains the firmware, then execute SMC2308T.NSH. It will go through the update process and at the end ask you to enter the last 9 digits of the SAS ID you noted in step 3: “XYYYYYYYY”.
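The EFI Shell portion of the procedure looks roughly like this (a transcript, not a runnable script; `fs0:` assumes the USB disk is the first filesystem the shell maps — yours may differ, check the mapping table the shell prints at startup):

```
Shell> fs0:
fs0:\> cd UEFI
fs0:\UEFI> SMC2308T.NSH
...
Please enter the last 9 digits of the SAS address: XYYYYYYYY
```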
Issue #3: Solaris 11.1 and LSI2308
My woes were not over yet. With ESXi 5.5 I was able to successfully configure passthrough of the LSI 2308 adapter, but Solaris 11.1 would not boot with the PCI adapter attached to the VM. Without the adapter assigned, the VM booted fine.
To isolate the issue, I modified grub to print verbose messages during the boot process by adding “-m verbose -v” after $kern in /rpool/boot/grub/grub.cfg:

```
$multiboot /ROOT/01-03-2014-FixedMPTSAS/@/$kern $kern -m verbose -v\ -B console=graphics -B $zfs_bootfs
```
With this change, pressing Esc during the boot process lists verbose messages along with the name of each service as it starts. I noticed that the VM would always hang after loading the mpt_sas0 driver:
```
pseudo-device: systrace0
systrace0 is /pseudo/systrace@0
pseudo-device: ucode0
ucode0 is /pseudo/ucode@0
pseudo-device: bpf0
bpf0 is /pseudo/bpf@0
pseudo-device: fssnap0
fssnap0 is /pseudo/fssnap@0
IP Filter: v4.1.9, running.
pseudo-device: nsmb0
nsmb0 is /pseudo/nsmb@0
pseudo-device: winlock0
winlock0 is /pseudo/winlock@0
/pci@0,0/pci15ad,7a0@16/pci15d9,691@0 (mpt_sas0): mptsas0 Firmware version v18.104.22.168 (?)
/pci@0,0/pci15ad,7a0@16/pci15d9,691@0 (mpt_sas0): mptsas0: IOC Operational.
/pci@0,0/pci15ad,7a0@16/pci15d9,691@0 (mpt_sas0): mptsas0: Initiator WWNs: 0x500304800eaa9d00-0x500304800eaa9d07
```
After searching the net, I found this issue confirmed in a post at servethehome as well. According to that discussion, the Solaris offshoot OmniOS was able to use the LSI 2308, but Solaris 11 and 11.1 were not. Changing the OS was not an option for me, as I already had a ZFS pool at zpool version 34 that OmniOS would not recognize.
After several failed attempts, the second-to-last one worked; I will skip straight to it and omit the ones that did not. I installed OmniOS in a virtual machine and copied its mpt drivers over to the Solaris 11.1 virtual machine, and that did the trick.
- Move the original files on the Solaris 11.1 server to a backup location (just in case — or create a boot environment):
```
/kernel/drv/mpt.conf
/kernel/drv/mpt_sas.conf
/kernel/drv/amd64/mpt_sas
/kernel/drv/amd64/mpt
/kernel/kmdb/amd64/mpt
/kernel/kmdb/amd64/mpt_sas
```
- Copy the mpt drivers from OmniOS to the same locations on the Solaris 11.1 server:
```
/kernel/drv/mpt.conf
/kernel/drv/mpt_sas.conf
/kernel/drv/mpt
/kernel/drv/mpt_sas
/kernel/drv/amd64/mpt_sas
/kernel/drv/amd64/mpt
/kernel/kmdb/mpt
/kernel/kmdb/mpt_sas
/kernel/kmdb/amd64/mpt
/kernel/kmdb/amd64/mpt_sas
```
- Connect the PCI device representing the LSI 2308 to your VM and reboot. You should be able to see all the disks and the ZFS file systems on them.
Note: OmniOS has a few more of these files than the Solaris 11.1 server does. I am pretty sure not all of them are required, but to save some cycles I never tested which ones are essential and which are not. Copying all of them works just fine.
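The backup-then-replace steps above can be sketched as a small script. This is a hypothetical sketch, not the exact commands I ran: `BACKUP` and `OMNIOS` are placeholder locations (`OMNIOS` is wherever you staged the files pulled from the OmniOS VM), and it only copies files that actually exist at each end.

```shell
#!/bin/sh
# Sketch of the driver swap. BACKUP and OMNIOS are placeholder paths;
# ROOT is the Solaris system root (override for testing on a scratch tree).
ROOT=${ROOT:-}
BACKUP=${BACKUP:-/var/tmp/mpt-backup}
OMNIOS=${OMNIOS:-/var/tmp/omnios-drivers}

# The driver files involved, relative to the system root.
FILES="kernel/drv/mpt.conf kernel/drv/mpt_sas.conf \
       kernel/drv/mpt kernel/drv/mpt_sas \
       kernel/drv/amd64/mpt kernel/drv/amd64/mpt_sas \
       kernel/kmdb/mpt kernel/kmdb/mpt_sas \
       kernel/kmdb/amd64/mpt kernel/kmdb/amd64/mpt_sas"

mkdir -p "$BACKUP"
for f in $FILES; do
    # 1. Preserve the original Solaris file, if present.
    if [ -f "$ROOT/$f" ]; then
        mkdir -p "$BACKUP/$(dirname "$f")"
        cp -p "$ROOT/$f" "$BACKUP/$f"
    fi
    # 2. Drop in the OmniOS copy, if one was extracted.
    if [ -f "$OMNIOS/$f" ]; then
        cp -p "$OMNIOS/$f" "$ROOT/$f"
    fi
done
```

A boot environment (`beadm create`) is still the safer rollback path; the backup directory is just belt-and-braces.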
Note: You can either follow the same steps or feel free to use the drivers that I extracted from OmniOS from this link.