qemu-system-s390x has the apparent quirk that it can only boot via something like
qemu-system-s390x -kernel kernel.debian -initrd initrd.debian -m 512 -nographic --drive file=rootimage.img,format=raw,if=none,id=c1
This means that I think what I want to do is something like the following:
& imageBuilt (RawDiskImage "/srv/vm/bricklin.img") bricklinChroot
    MSDOS
    [ partition EXT4 `mountedAt` "/"
        `addFreeSpace` MegaBytes 100
        `mountOpt` errorReadonly
    , swapPartition (MegaBytes 256)
    ]
  where
    bricklinChroot d = debootstrapped mempty d $ props
        & osDebian (Stable "stretch") S390X
        & Apt.installed [ "linux-image-s390x" ]
This seems to build the image OK (see end of post), but propellor fails because the image is not bootable (the image contents might need adjustment as well, but first things first).
I'm not sure what this style of booting is called, but I see people providing "cloud images" meant to be used this way, with separate initrd and kernel. Is it sensible to customize imageBuilt for this purpose, or would it be better to write my own nonBootableImageBuilt or something like that?
/srv/vm/bricklin.img.chroot apt installed linux-image-s390x ... done
/srv/vm/bricklin.img.chroot cache cleaned ... ok
creating /srv/vm/bricklin.img of size 1.02 gigabytes
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
[snip]
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
kpartx
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 33.8 kB of archives.
After this operation, 76.8 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 kpartx amd64 0.6.4-5 [33.8 kB]
Fetched 33.8 kB in 0s (118 kB/s)
Selecting previously unselected package kpartx.
(Reading database ... 238863 files and directories currently installed.)
Preparing to unpack .../kpartx_0.6.4-5_amd64.deb ...
Unpacking kpartx (0.6.4-5) ...
Setting up kpartx (0.6.4-5) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up swapspace version 1, size = 248 MiB (260042752 bytes)
no label, UUID=65c5b131-98bf-4b8c-afad-9c75405c6391
loop deleted : /dev/loop0
433,093,140 99% 220.62MB/s 0:00:01 (xfr#11289, to-chk=0/14615)
** warning: image is not bootable: no bootloader is installed
loop deleted : /dev/loop0
concave.cs.unb.ca built disk image /srv/vm/bricklin.img ... failed
concave.cs.unb.ca s390x server image (bricklin) ... failed
concave.cs.unb.ca overall ... failed
Here is a simple approach that at least allows the image building to complete. I also managed to boot one of the images on AMD64. It probably needs more testing, and I'm sure there are style and naming issues.
I pushed the changes to
https://salsa.debian.org/bremner/propellor/commits/proposed/direct-boot
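To give the flavour without reading the branch: the qemu-facing part of the idea is to get the kernel and initramfs somewhere qemu can load them from, roughly as in this standalone sketch (not the branch code itself; the function name and destination directory are invented for illustration, and the file name prefixes assume the usual Debian /boot layout).

-- Illustrative sketch only, not the code on the branch: copy the kernel(s)
-- and initramfs(es) out of the chroot the image was built from, so qemu can
-- be pointed at them with -kernel/-initrd.
import Data.List (isPrefixOf)
import System.Directory (copyFile, listDirectory)
import System.FilePath ((</>))

copyBootFiles :: FilePath -> FilePath -> IO ()
copyBootFiles chroot destdir = do
    fs <- listDirectory (chroot </> "boot")
    let wanted f = any (`isPrefixOf` f) ["vmlinuz-", "initrd.img-"]
    mapM_ copyOut (filter wanted fs)
  where
    copyOut f = copyFile (chroot </> "boot" </> f) (destdir </> f)

-- e.g. copyBootFiles "/srv/vm/bricklin.img.chroot" "/srv/vm"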
Code/whitespace review:
Naming ideas: Basically this is using qemu as the bootloader, rather than going through an (emulated) BIOS to start a bootloader. So I'm thinking of names like QemuBootloader, NoBootloader, or NoBIOS. I don't want to bikeshed this too hard; it would be ok to keep the DirectBoot name, but I think Propellor.Property.DirectBoot at least needs a comment explaining what it's for, since it would be confusing for a propellor user to stumble across that module without context.
Your idea to copy the kernel and initrd out of the image so qemu can use them seems to point toward having a Property that gets one of these images booted up using qemu. And then the QemuBootloader name would make a lot of sense, because it would allow for later expansion to other emulators. Not that you have to build such a thing, but it's worth considering that someone may later want to.
(In fact I could use such a thing, but I don't know how I'd want it to work. Should propellor only use the chroot for initial image build, and then ssh into the booted VM and run propellor in there when there are config updates? Or restart the VM when the image is changed?)
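Whatever shape that Property takes, the command it wraps is probably not much more than this sketch; nothing here is existing propellor API, and the paths and memory size are only illustrative, reusing the qemu invocation from the top of this page.

-- Sketch of the command such a Property might wrap; nothing here is
-- propellor API.  It just re-creates the qemu-system-s390x invocation from
-- the top of this page, with the kernel and initrd supplied on the command
-- line instead of read from the image by a bootloader.
import System.Process (callProcess)

bootImage :: FilePath -> FilePath -> FilePath -> IO ()
bootImage kernel initrd img = callProcess "qemu-system-s390x"
    [ "-kernel", kernel
    , "-initrd", initrd
    , "-m", "512"
    , "-nographic"
    , "-drive", "file=" ++ img ++ ",format=raw,if=none,id=c1"
    ]

-- e.g. bootImage "/srv/vm/vmlinuz" "/srv/vm/initrd.img" "/srv/vm/bricklin.img"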
I'm not too attached to the terminology, "direct booting" is just what an unscientific survey of sysadmin types came up with. QemuBoot is more descriptive, although I do wonder if it's really specific to Qemu. As far as I understand (which is just from reading the Xen wiki), Xen dom0 can also (and originally could only) boot this way. In fact I think the last Xen VPS I had worked like this, which was a pain as a guest admin. Maybe the connecting thread is that the boot is controlled by the host rather than the guest VM. HostBoot would be one option, although NoBootloader is also fine with me. I think things like QemuBootloader or whatever would likely be layered on top.
I'm less sure about copying the kernel and initrd than when I first wrote that. If it's reasonable to depend on the existence of the foo.img.chroot, then it's quite convenient to boot from the kernel and initramfs that are already sitting there.
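In that case the helper shrinks to resolving the chroot's top-level /vmlinuz and /initrd.img symlinks (the usual Debian convention, and an assumption here) and handing those paths straight to qemu. Again a sketch, not anything merged:

-- Sketch of the no-copy variant: use the kernel and initramfs in place,
-- inside foo.img.chroot.  getSymbolicLinkTarget needs directory >= 1.3.1.
import System.Directory (getSymbolicLinkTarget)
import System.FilePath ((</>), isAbsolute, makeRelative)

bootFileInChroot :: FilePath -> String -> IO FilePath
bootFileInChroot chroot link = do
    target <- getSymbolicLinkTarget (chroot </> link)
    -- an absolute target such as /boot/vmlinuz-<version> has to be
    -- re-rooted inside the chroot rather than taken from the host
    return $ if isAbsolute target
        then chroot </> makeRelative "/" target
        else chroot </> target

-- e.g. bootFileInChroot "/srv/vm/bricklin.img.chroot" "vmlinuz"
--      bootFileInChroot "/srv/vm/bricklin.img.chroot" "initrd.img"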
I'll have to see how hard it is to get basic networking with just qemu. It might make sense to use libvirt to run the VMs, especially since that's what the production deployment uses. I saw spwhitton had some ideas about libvirt.
Ah right, support for libvirt KVM VMs; your patch is kind of a prerequisite for that. Pulling the kernel and initrd out of the chroot is a good idea!
"Host" is slightly ambiguous around disk image building because a disk image can be built for a Host by
imageBuiltFor
. I've gone ahead and merged it in with the NoBootloader name.