Sun has released Solaris 10/09 as a Virtual Appliance. However, it's not readily usable with VMware ESXi Server 4.0 without a bit of tweaking. So, in a nutshell, here's how to get it working:
- Use the VMware vSphere Client to create a Solaris 10 VM and export it as an OVF appliance. Modify the config file within this new appliance and replace the disk file with the one downloaded as part of the Solaris OVF appliance download from Sun's website here.
- Use the VMware OVF Tool (available here) to deploy the now-modified OVF appliance to the ESXi server.
- Start the VM and modify some boot config files so that the VM can start correctly.
These two threads pointed me in the right direction:
http://communities.vmware.com/message/907527
http://www.mail-archive.com/opensolaris-discuss@opensolaris.org/msg35493.html
Step 1 in detail
The OVF config that comes as part of the Solaris Virtual Appliance download throws up a bunch of errors when using the VMware OVF Tool:
- Line 8: Unsupported value 'http://www.vmware.com/specifications/vmdk.html#sparse' for attribute 'format' on element 'Disk'.
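(You can reproduce these without attempting a deployment; if I remember right, pointing ovftool at the descriptor alone just probes and validates it:)

C:\Program Files\VMware\VMware OVF Tool>ovftool.exe "c:\TEMP\Solaris10_1009_virtual_image\solaris.ovf"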
Give up trying to fix it.
Within the VMware vSphere Client, create a new Solaris 10 VM (with a tiny disk) and export it as an OVF appliance. Modify the OVF config file within this new appliance and replace the disk file with the one downloaded as part of the Solaris OVF appliance download. The config file is an XML file and it's not that difficult to work out what should be where.
I guess you could either move the Solaris disk file (downloaded as part of the Solaris OVF download) to the directory containing the newly created 'template' OVF, or replace the existing OVF config file with the 'template' OVF. I think I did the latter.
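For reference, the parts of the template OVF that matter are the File reference and the Disk entry that points at it. They look roughly like this (the file name, IDs and sizes here are illustrative), and ovf:href is what must end up naming the Sun-supplied disk file:

<References>
  <File ovf:href="Solaris10_1009.vmdk" ovf:id="file1" ovf:size="..."/>
</References>
<DiskSection>
  <Info>Virtual disk information</Info>
  <Disk ovf:capacity="..." ovf:diskId="vmdisk1" ovf:fileRef="file1"
        ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized"/>
</DiskSection>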
Step 2 in detail
Once you have a compatible OVF appliance ready, you should be able to run the VMware OVF Tool and have it migrate the appliance to your ESXi server. Something like the following should do it:

C:\Program Files\VMware\VMware OVF Tool>ovftool.exe --datastore="external_storage" --network="VM Network" "c:\TEMP\Solaris10_1009_virtual_image\solaris.ovf" "vi://root:password@myesxiserver"

If it worked then you should see the following:
Opening OVF source: c:\TEMP\Solaris10_1009_virtual_image\solaris.ovf
Warning: No manifest file
Opening VI target: vi://root@myesxiserver/
Target: vi://myesxiserver/
Disk Transfer Completed
Completed successfully
Step 3 in detail
Unfortunately, starting the VM as-is will fail to boot. The issue is that the boot device has changed, and both /boot/solaris/bootenv.rc and /etc/vfstab need updating, as well as performing a reconfiguration boot.
1. Start the VM and at the grub prompt, enter fail-safe mode and allow the root file system to be mounted as read/write under /a.
2. Identify the new device name using the format command and get ready to use vi. You can brush up your vi skills here. Mine was /pci@0,0/pci15ad,1976@10/sd@0,0, so replace where necessary from now on.
Also, vi may not display correctly, so fix the terminal type with the following command:
# TERM=sun-color; export TERM
3. Now edit /a/boot/solaris/bootenv.rc and update the line that starts with setprop bootpath so that it reads:
setprop bootpath '/pci@0,0/pci15ad,1976@10/sd@0,0:a'
4. Once you've done that, update the boot archive:
# bootadm update-archive -R /a
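Optionally, you can sanity-check the rebuilt archive by listing its contents against the same alternate root:

# bootadm list-archive -R /a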
5. Then edit /a/etc/vfstab (making a copy first) and modify the line mounting the root file system so that instead of /dev/dsk/c0d0s0 and /dev/rdsk/c0d0s0 it reads the following absolute paths. Do not forget to suffix :a and ,raw respectively.
/devices/pci@0,0/pci15ad,1976@10/sd@0,0:a /devices/pci@0,0/pci15ad,1976@10/sd@0,0:a,raw / ufs 1 no -
6. Now we need to force a reconfiguration boot so that the system recreates the /etc/path_to_inst file that contains physical device to logical instance mappings:
# touch /a/reconfigure
7. Reboot the system. Selecting the default grub option (i.e. non fail-safe) should now perform a reconfiguration boot before bringing you to the graphical X login. If it doesn't, something went wrong :-(. Retrace your steps.
# reboot
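If you're curious, the regenerated /etc/path_to_inst should now contain an entry tying the new physical path to an sd driver instance, something along these lines (the instance number here is illustrative):

"/pci@0,0/pci15ad,1976@10/sd@0,0" 0 "sd"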
8. Login and edit /etc/vfstab again so that you can replace those absolute paths with the new logical device names. If you made a backup of this file before the last update, it will be easier to start from that. Before you do anything, use ls -l /dev/dsk to determine the new disk names and update the file as appropriate.
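The entries under /dev/dsk are symlinks into the /devices tree, so the listing lets you match the new logical names against the physical path used earlier. Expect something along these lines (permissions, owners and dates trimmed; slice letters follow the usual s0 = :a, s1 = :b, s7 = :h pattern):

# ls -l /dev/dsk
c3t0d0s0 -> ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a
c3t0d0s1 -> ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:b
c3t0d0s7 -> ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:h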
For example, mine now reads:
/dev/dsk/c3t0d0s1 - - swap - no -
/dev/dsk/c3t0d0s0 /dev/rdsk/c3t0d0s0 / ufs 1 no -
/dev/dsk/c3t0d0s7 /dev/rdsk/c3t0d0s7 /export/home ufs 2 yes -
9. Reboot again and that should be it. The system is now ready for use.