A Virtual Machine Monitor for modern Cloud workloads. Features include CPU, memory and device hotplug; support for running Windows and Linux guests; device offload with vhost-user; and a minimal, compact footprint. Written in Rust with a strong focus on security.
Published by github-actions[bot] over 3 years ago
This release has been tracked through the v15.0 project.
Highlights for cloud-hypervisor
version v15.0 include:
This release is the first in a new version numbering scheme to represent that
we believe Cloud Hypervisor is maturing and entering a period of stability.
With this new release we are beginning our new stability guarantees:
Currently the following items are not guaranteed across updates:
Building on our existing support for rate limiting block activity the network
device also now supports rate limiting. Full details of the controls are in the
IO throttling documentation.
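As an illustrative sketch (flag values and paths are examples; the IO throttling documentation is the authoritative reference for the parameter names), network bandwidth can be capped with token-bucket style parameters on --net:

```shell
# Illustrative sketch: cap the tap-backed NIC at ~10 MB/s with a 100 ms
# refill window (bucket size in bytes, refill time in milliseconds).
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --net tap=tap0,bw_size=10000000,bw_refill_time=100
```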
virtio-net guest offload

The guest is now able to change the offload settings for the virtio-net device. As well as providing a useful control, this mitigates an issue in the Linux kernel where the guest will attempt to reprogram the offload settings even if they are not advertised as configurable (#2528).
--api-socket supports a file descriptor parameter

The --api-socket option can now take an fd= parameter to specify an existing file descriptor to use. This is particularly beneficial for frameworks that need to programmatically control Cloud Hypervisor.
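A hypothetical sketch of such an invocation (fd 3 is assumed to be a UNIX socket already opened and bound by the controlling framework before exec'ing the VMM; all paths are examples):

```shell
# The framework owns the socket's lifecycle; the VMM just inherits fd 3.
cloud-hypervisor --api-socket fd=3 --kernel ./vmlinux --disk path=rootfs.raw
```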
Notable bug fixes include a fix for virtio-pmem (#2277).

Deprecated features will be removed in a subsequent release and users should plan to use alternatives: booting from a bzImage kernel image (use an ELF kernel built with CONFIG_PVH=y instead). Will be removed in v16.0.

Many thanks to everyone who has contributed to our release including some new faces.
Published by github-actions[bot] over 3 years ago
Bug fix release branched off the v0.14.0 release. The following bugs
were fixed in this release:
Published by github-actions[bot] over 3 years ago
This release has been tracked through the 0.14.0 project.
Highlights for cloud-hypervisor
version 0.14.0 include:
A new option, --event-monitor, was added to the VMM which reports structured
events (JSON) over a file or file descriptor at key events in the lifecycle of
the VM. The list of events is limited at the moment but will be further
extended over subsequent releases. The events exposed form part of the Cloud
Hypervisor API surface.
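A minimal sketch of enabling the monitor (the output path is illustrative):

```shell
# Write JSON lifecycle events (boot, shutdown, etc.) to a file as they occur.
cloud-hypervisor \
    --event-monitor path=/tmp/ch-events.json \
    --kernel ./vmlinux \
    --disk path=rootfs.raw
```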
Basic support has been added for running Windows guests atop the MSHV
hypervisor as an alternative to KVM and further improvements have been made to
the MSHV support.
The aarch64 platform has been enhanced with more devices exposed to the running
VM including an enhanced serial UART.
The documentation for the hotplug support has been updated to reflect the use
of the ch-remote
tool and to include details of virtio-mem
based hotplug as
well as documenting hotplug of paravirtualised and VFIO devices.
virtio-console
The --serial
and --console
parameters can now direct the console to a PTY
allowing programmatic control of the console from another process through the
PTY subsystem.
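A sketch of directing both consoles to PTYs (paths are allocated by the host and reported at startup; the kernel and disk arguments are illustrative):

```shell
# The VMM prints the allocated PTY paths so another process can attach.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --serial pty \
    --console pty
```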
The block device performance can now be constrained as part of the VM
configuration allowing rate limiting. Full details of the controls are in the
IO throttling documentation.
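As a sketch (values are illustrative; see the IO throttling documentation for the authoritative parameter names), operations per second can be bounded on --disk:

```shell
# Illustrative sketch: limit the disk to ~4000 IOPS, refilled every 100 ms.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw,ops_size=4000,ops_refill_time=100
```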
Deprecated features will be removed in a subsequent release and users should plan to use alternatives: booting from a bzImage kernel image (use an ELF kernel built with CONFIG_PVH=y instead).

Many thanks to everyone who has contributed to our 0.14.0 release including some new faces.
Published by github-actions[bot] over 3 years ago
This release has been tracked through the 0.13.0 project.
Highlights for cloud-hypervisor
version 0.13.0 include:
It is now possible to use Cloud Hypervisor's VFIO support to pass through PCI devices that do not support MSI or MSI-X and instead rely on INTx interrupts. Most notably this widens the support to most NVIDIA cards with the proprietary drivers.
Through the addition of hugepage_size
on --memory
it is now possible to
specify the desired size of the huge pages used when allocating the guest
memory. The user is required to ensure they have sufficient pages of the
desired size in their pool.
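A sketch of the new control (sizes are illustrative; the host must have enough pages of the requested size pre-allocated in its pool):

```shell
# Back 4 GiB of guest RAM with 1 GiB huge pages.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --memory size=4G,hugepages=on,hugepage_size=1G
```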
It is now possible to provide file descriptors using the fd
parameter to
--net
which point at TAP devices that have already been opened by the user.
This aids integration with libvirt
but also permits the use of MACvTAP
support. This is documented in dedicated macvtap documentation.
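A hypothetical sketch (fd 3 is assumed to be a TAP or MACvTAP device already opened by the caller and inherited by the VMM; the MAC is an example):

```shell
# The caller opens the TAP device; the VMM only receives the descriptor.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --net fd=3,mac=12:34:56:78:90:ab
```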
It is now possible to use VHD (fixed) disk images as well as QCOWv2 and raw disk images with Cloud Hypervisor.
Device threads are now derived from the main VMM thread which allows more
restrictive seccomp filters to be applied to them. The threads also have a
predictable name derived from the device id.
It is now possible to request that the guest VM shut itself down by triggering
a synthetic ACPI power button press from the VMM. If the guest is listening for
such an event (e.g. using systemd) then it will process the event and cleanly
shut down. This functionality is exposed through the HTTP API and can be
triggered via ch-remote --api-socket=<API socket> power-button.
Many thanks to everyone who has contributed to our 0.13.0 release including
some new faces.
Published by github-actions[bot] almost 4 years ago
This release has been tracked through the 0.12.0 project.
Highlights for cloud-hypervisor
version 0.12.0 include:
The use of --watchdog
is now fully supported as is the ability to reboot the
VM from within the guest when running Cloud Hypervisor on an ARM64 system.
vhost-user-net and vhost-user-block self spawning removed

In order to use vhost-user-net or vhost-user-block backends, the user is now responsible for starting the backend and providing the socket for the VMM to use. This functionality was deprecated in the last release and has now been removed.
vhost-user-fs backend removed

The vhost-user-fs backend is no longer included in Cloud Hypervisor; it is instead hosted in its own repository.
The vm.info
HTTP API endpoint has been extended to include the details of the
devices used by the VM including any VFIO devices used.
Many thanks to everyone who has contributed to our 0.12.0 release:
Published by github-actions[bot] almost 4 years ago
This release has been tracked through the 0.11.0 project.
Highlights for cloud-hypervisor
version 0.11.0 include:
io_uring support by default for virtio-block

Provided that the host OS supports it (Linux kernel 5.8+), io_uring will be used for a significantly higher performance block device.
This is the first release where we officially support Windows running as a
guest. Full details of how to setup the image and run Cloud Hypervisor with a
Windows guest can be found in the dedicated Windows
documentation.
vhost-user "self spawning" deprecation

Automatically spawning a vhost-user-net or vhost-user-block backend is now deprecated. Users of this functionality will receive a warning and should make adjustments. The functionality will be removed in the next release.
virtio-mmio removal

Support for using the virtio-mmio transport, rather than using PCI, has been removed. This has been done to simplify the code and significantly reduce the testing burden of the project.
When running on the ARM64 architecture snapshot and restore has now been
implemented.
The time to boot the Linux kernel has been significantly improved by identifying and addressing delays around PCI bus probing, IOAPIC programming and MPTABLE issues. Full details can be seen in #1728.
SIGTERM/SIGINT interrupt signal handling

When the VMM process receives the SIGTERM or SIGINT signal it will cleanly deallocate resources before exiting. The guest VM will not be cleanly shut down, but the VMM process will clean up its resources.
The default logging level was changed to include warnings which should make it
easier to see potential issues. New logging
documentation was also added.
--balloon parameter added

Control of the setup of virtio-balloon has been moved from --memory to its own dedicated parameter. This makes it easier to add more balloon specific controls without overloading --memory.
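A sketch of the dedicated parameter (sizes are illustrative):

```shell
# Start with a 1 GiB balloon inside 4 GiB of guest RAM; the balloon can
# later be resized through the API to reclaim or return guest memory.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --memory size=4G \
    --balloon size=1G
```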
virtio-watchdog support

Support for using a new virtio-watchdog has been added which can be used to have the VMM reboot the guest if the guest userspace fails to ping the watchdog. This is enabled with --watchdog and requires kernel support.
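A minimal sketch of enabling it (kernel and disk paths are illustrative; the guest kernel needs the matching driver and guest userspace must ping the watchdog device):

```shell
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --watchdog
```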
Notable bug fixes:
- CMD.EXE under Windows SAC (#1170)
- virtio-pmem with discard_writes=on no longer marks the guest memory as read-only

Many thanks to everyone who has contributed to our 0.11.0 release including some new faces.
Published by github-actions[bot] about 4 years ago
This release has been tracked through the 0.10.0 project.
Highlights for cloud-hypervisor
version 0.10.0 include:
virtio-block support for multiple descriptors

Some virtio-block device drivers may generate requests with multiple descriptors and support has been added for those drivers.
Support has been added for fine grained control of memory allocation for the guest. This includes controlling the backing of sections of guest memory, assigning to specific host NUMA nodes and assigning memory and vCPUs to specific memory nodes inside the guest. Full details of this can be found in the memory documentation.
Seccomp sandbox improvements

All the remaining threads and devices are now isolated within their own seccomp filters. This provides a layer of sandboxing and enhances the security model of cloud-hypervisor.
A new option (kvm_hyperv) has been added to --cpus to toggle KVM's Hyper-V emulation support. This enables progress towards booting Windows without adding extra emulated devices.
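A sketch of turning the option on (the firmware and disk paths are illustrative placeholders for a Windows-capable boot setup):

```shell
cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=windows.raw \
    --cpus boot=2,kvm_hyperv=on
```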
Notable bug fixes:
- The ch-remote parameter to resize the VM now accepts the standard size suffixes (#1596)
- cloud-hypervisor no longer panics when started with --memory hotplug_method=virtio-mem and no hotplug_size (#1564)
- --memory hotplug_method=virtio-mem (#1593)
- --version shows the version for released binaries (#1669)
- virtio devices are now printed out (#1551)

Many thanks to everyone who has contributed to our 0.10.0 release including some new faces.
Published by github-actions[bot] about 4 years ago
This release has been tracked through the 0.9.0 project.
Highlights for cloud-hypervisor
version 0.9.0 include:
io_uring based block device support

If the io_uring feature is enabled and the host kernel supports it then io_uring will be used for block devices. This results in a very significant performance improvement.
Statistics for the activity of the virtio network and block devices are now exposed through a new vm.counters HTTP API entry point. These take the form of simple counters which can be used to observe the activity of the VM.
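A sketch of querying the counters over the API socket (the socket path is an example):

```shell
# Fetch the per-device activity counters as JSON.
curl --unix-socket /tmp/cloud-hypervisor.sock \
    http://localhost/api/v1/vm.counters
```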
The HTTP API for adding devices now responds with the name that was assigned to the device as well as the PCI BDF.
A topology
parameter has been added to --cpus
which allows the configuration of the guest CPU topology allowing the user to specify the numbers of sockets, packages per socket, cores per package and threads per core.
Our release build is now built with LTO (Link Time Optimization) which results in a ~20% reduction in the binary size.
A new abstraction has been introduced, in the form of a hypervisor
crate so as to enable the support of additional hypervisors beyond KVM
.
Multiple improvements have been made to the VM snapshot/restore support that was added in the last release. This includes persisting more vCPU state and in particular preserving the guest paravirtualized clock in order to avoid vCPU hangs inside the guest when running with multiple vCPUs.
A virtio-balloon
device has been added, controlled through the resize
control, which allows the reclamation of host memory by resizing a memory balloon inside the guest.
The ARM64 support introduced in the last release has been further enhanced with support for using PCI for exposing devices into the guest as well as multiple bug fixes. It also now supports using an initramfs when booting.
The guest can now use Intel SGX if the host supports it. Details can be found in the dedicated SGX documentation.
Seccomp sandbox improvements

The most frequently used virtio devices are now isolated with their own seccomp filters. It is also now possible to pass --seccomp=log which results in the logging of requests that would have otherwise been denied, to further aid development.
Other changes and fixes:
- The virtio-vsock implementation has been resynced with the implementation from Firecracker and includes multiple bug fixes.
- virtio-mmio based devices are now more widely tested (#275).
- virtio-console and the serial (#1521)

Many thanks to everyone who has contributed to our 0.9.0 release including some new faces.
Published by github-actions[bot] over 4 years ago
This release has been tracked through the 0.8.0 project.
Highlights for cloud-hypervisor
version 0.8.0 include:
This release includes the first version of the snapshot and restore feature.
This allows a VM to be paused and then subsequently snapshotted. At a later
point that snapshot may be restored into a new running VM identical to the
original VM at the point it was paused.
This feature can be used for offline migration from one VM host to another, to
allow the upgrading or rebooting of the host machine transparently to the guest
or for templating the VM. This is an experimental feature and cannot be used on
a VM using passthrough (VFIO) devices. Issues with SMP have also been observed
(#1176).
Included in this release is experimental support for running on ARM64.
Currently only virtio-mmio
devices and a serial port are supported. Full
details can be found in the ARM64 documentation.
If the host supports it the guest is now enabled for 5-level paging (aka LA57).
This works when booting the Linux kernel with a vmlinux, bzImage or firmware
based boot. However booting an ELF kernel built with CONFIG_PVH=y
does not
work due to current limitations in the PVH boot process.
With virtio-net
and vhost-user-net
devices the guest can suppress
interrupts from the VMM by using the VIRTIO_RING_F_EVENT_IDX
feature. This
can lead to an improvement in performance by reducing the number of interrupts
the guest must service.
vhost_user_fs improvements

The implementation in Cloud Hypervisor of the VirtioFS server now supports sandboxing itself with seccomp.
Notable bug fixes:
- When the user provides a tap device ahead of creating the VM it is not required to run the cloud-hypervisor binary with CAP_NET_ADMIN (#1273).
- virtio-block or vhost-user-block now correctly adheres to
- When compiled with the acpi feature the MPTABLE will no longer be generated (#1132).
- mmio builds

This is a non-exhaustive list of HTTP API and command line changes:
- socket is now used instead of sock in some cases.
- The ch-remote tool now shows any error message generated by the VMM.
- The wce parameter has been removed from --disk as the feature is always enabled.
- --net has gained a host_mac option that allows the setting of the MAC address of the tap device on the host.

Many thanks to everyone who has contributed to our 0.8.0 release including some new faces.
Published by github-actions[bot] over 4 years ago
This release has been tracked through the 0.7.0 project.
Highlights for cloud-hypervisor
version 0.7.0 include:
Further to our effort to support modifying a running guest we now support hotplug and unplug of the following virtio backed devices: block, network, pmem, virtio-fs and vsock. This functionality is available on the (default) PCI based transport and is exposed through the HTTP API. The ch-remote utility provides a CLI for adding or removing these device types after the VM has booted. Users can use the id parameter on the devices to choose names for devices to ease their removal.
musl libc support

Cloud Hypervisor can now be compiled with the musl C library, and this release contains a static binary compiled using that toolchain.
vhost-user backends

The vhost-user backends for network and block support that are shipped by Cloud Hypervisor have been enhanced to support multiple threads and queues to improve throughput. These backends are used automatically if vhost_user=true is passed when the devices are created.
By passing the --initramfs
command line option the user can specify a file to
be loaded into the guest memory to be used as the kernel initial filesystem.
This is usually used to allow the loading of drivers needed to be able to
access the real root filesystem but it can also be used standalone for a very
minimal image.
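A sketch of a direct boot with an initial ramdisk (all paths and the command line are illustrative):

```shell
cloud-hypervisor \
    --kernel ./vmlinux \
    --initramfs ./initramfs.img \
    --cmdline "console=ttyS0" \
    --disk path=rootfs.raw
```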
virtio-mem
As well as supporting ACPI based hotplug Cloud Hypervisor now supports using
the virtio-mem
hotplug alternative. This can be controlled by the
hotplug_method
parameter on the --memory
command line option. It currently
requires kernel patches to be able to support it.
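A sketch of selecting the virtio-mem method (sizes are illustrative; the guest kernel must support virtio-mem):

```shell
# Reserve an 8 GiB hotpluggable range managed by virtio-mem rather than ACPI.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --memory size=1G,hotplug_method=virtio-mem,hotplug_size=8G
```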
Seccomp sandboxing

Cloud Hypervisor now has support for restricting the system calls that the process can use via the seccomp security API. This is on by default and is controlled by the --seccomp command line option.
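A sketch of running with the filter in log mode during development (paths are illustrative):

```shell
# Denied system calls are logged instead of terminating the process.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw \
    --seccomp=log
```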
With the release of Ubuntu 20.04 we have added it to the list of supported distributions and it is part of our regular testing programme.
This is a non-exhaustive list of HTTP API and command line changes:
- id fields were added for devices to allow them to be named to ease removal.
- --memory's shared and hugepages controls for determining the backing memory were added.
- The --vsock parameter only takes one device as the Linux kernel only supports one; it is no longer vhost-user backed.
- ch-remote has added add-disk, add-fs, add-net, add-pmem and add-vsock subcommands. For removal, remove-device is used. The REST API offers the same functionality.
- size with --pmem is no longer required; instead the size of the backing file is used. A discard_writes option has also been added.
- The parameters to --block-backend have been changed to more closely align with those of --disk.

Many thanks to everyone who has contributed to our 0.7.0 release including some new faces.
Published by github-actions[bot] over 4 years ago
This release has been tracked through the 0.6.0 project.
Highlights for cloud-hypervisor
version 0.6.0 include:
We continued our efforts around supporting dynamically changing the guest resources. After adding support for CPU and memory hotplug, Cloud Hypervisor now supports hot plugging and hot unplugging directly assigned (a.k.a. VFIO) devices into an already running guest. This closes the feature gap for providing complete Kata Containers workload support with Cloud Hypervisor.
We enhanced our shared filesystem support through many virtio-fs
improvements.
By adding support for DAX, parallel processing of multiple requests, FS_IO
,
LSEEK
and the MMIO
virtio transport layer to our vhost_user_fs
daemon, we not only improved our filesystem sharing performance but also made it more stable and
compatible with other virtio-fs
implementations.
When choosing to offload the paravirtualized block and networking I/O to an
external process (through the vhost-user
protocol), Cloud Hypervisor now
automatically spawns its default vhost-user-blk
and vhost-user-net
backends
into their own, separate processes.
This provides a seamless paravirtualized I/O user experience for those who want to run their guest I/O in separate execution contexts.
More and more Cloud Hypervisor services are exposed through the REST API and thus only accessible via relatively cumbersome HTTP calls. In order
to abstract those calls into a more user friendly tool, we created a Cloud
Hypervisor Command Line Interface (CLI) called ch-remote
.
The ch-remote
binary is created with each build and available e.g. at
cloud-hypervisor/target/debug/ch-remote
when doing a debug build.
Please check ch-remote --help
for a complete description of all available
commands.
In addition to the traditional Linux boot protocol, Cloud Hypervisor now
supports direct kernel booting through the PVH ABI.
With the 0.6.0 release, we are welcoming a few new contributors. Many thanks
to them and to everyone that contributed to this release:
Published by cloud-hypervisor-bot over 4 years ago
This is a bugfix release branched off v0.5.0. It contains the following fixes:
Published by cloud-hypervisor-bot over 4 years ago
This release has been tracked through the 0.5.0 project.
Highlights for cloud-hypervisor
version 0.5.0 include:
With 0.4.0 we added support for CPU hot plug, and 0.5.0 adds CPU hot unplug and memory hot plug as well. This allows dynamically resizing Cloud Hypervisor guests, which is needed for e.g. Kubernetes related use cases.
The memory hot plug implementation is based on the same framework as the CPU hot plug/unplug one, i.e. hardware-reduced ACPI notifications to the guest.
Next on our VM resizing roadmap is the PCI devices hotplug feature.
We enhanced our virtio networking and block support by having both devices use multiple I/O queues handled by multiple threads. This improves our default paravirtualized networking and block devices throughput.
We improved our interrupt management implementation by introducing an Interrupt Manager framework, based on the currently on-going rust-vmm vm-device crates discussions. This move made the code significantly cleaner, and allowed us to remove several KVM related dependencies from crates like the PCI and virtio ones.
In order to provide a better developer experience, we worked on improving our build, development and testing tools.
Somewhat similar to Firecracker's excellent devtool, we now provide a dev_cli script.
With this new tool, our users and contributors will be able to build and test Cloud Hypervisor through a containerized environment.
We spent some significant time and efforts debugging and fixing our integration with the Kata Containers project. Cloud Hypervisor is now a fully supported Kata Containers hypervisor, and is integrated into the project's CI.
Many thanks to everyone that contributed to the 0.5.0 release:
Published by cloud-hypervisor-bot almost 5 years ago
This release has been tracked through the 0.4.0 project.
Highlights for cloud-hypervisor
version 0.4.0 include:
As a way to vertically scale Cloud-Hypervisor guests, we now support dynamically
adding virtual CPUs to the guests, a mechanism also known as CPU hot plug.
Through hardware-reduced ACPI notifications, Cloud Hypervisor can now add CPUs
to an already running guest and the high level operations for that process are
documented here
During the next release cycles we are planning to extend Cloud Hypervisor
hot plug framework to other resources, namely PCI devices and memory.
As part of the CPU hot plug feature enablement, and as a requirement for hot
plugging other resources like devices or RAM, we added support for
programmatically generating the needed ACPI tables. Through a dedicated
acpi-tables
crate, we now have a flexible and clean way of generating those
tables based on the VMM device model and topology.
Our objective of moving all Cloud Hypervisor paravirtualized I/O to a vhost-user based framework is getting closer as we've added Rust based implementations for vhost-user-blk and virtiofs backends. Together with the vhost-user-net backend that came with the 0.3.0 release, this will form the default Cloud Hypervisor I/O architecture.
As an initial requirement for enabling live migration, we added support for
pausing and resuming any VMM components. As an intermediate step towards live
migration, the upcoming guest snapshotting feature will be based on the pause
and resume capabilities.
As a way to simplify our device manager implementation, but also in order to
stay away from privileged rings as often as possible, any device that relies on
pin based interrupts will be using the userspace IOAPIC implementation by
default.
In order to allow for a more flexible device model, and also support guests
that would want to move PCI devices, we added support for PCI devices BAR
reprogramming.
The cloud-hypervisor organization

As we wanted to be more flexible on how we manage the Cloud Hypervisor project, we decided to move it under a dedicated GitHub organization. Together with the cloud-hypervisor project, this new organization also now hosts our kernel and firmware repositories. We may also use it to host any rust-vmm crate that we'd need to temporarily fork.
Thanks to GitHub's seamless repository redirections, the move is completely
transparent to all Cloud Hypervisor contributors, users and followers.
Many thanks to everyone that contributed to the 0.4.0 release:
Published by cloud-hypervisor-bot about 5 years ago
This release has been tracked through the 0.3.0 project.
Highlights for cloud-hypervisor
version 0.3.0 include:
We continue to work on offloading paravirtualized I/O to external processes, and we added support for vhost-user-blk backends. This enables cloud-hypervisor users to plug a vhost-user based block device (like SPDK) into the VMM as their paravirtualized storage backend.
The previous release provided support for vhost-user-net backends. Now we also provide a TAP based vhost-user-net backend, implemented in Rust. Together with the vhost-user-net device implementation, this will eventually become the Cloud Hypervisor default paravirtualized networking architecture.
In order to more efficiently and securely communicate between host and guest, we added a hybrid implementation of the VSOCK socket address family over virtio. Credits go to the Firecracker project as our implementation is a copy of theirs.
In anticipation of the need to support asynchronous operations on Cloud Hypervisor guests (e.g. resource hotplug and guest migration), we added an HTTP based API to the VMM. The API will be more extensively documented during the next release cycle.
In order to support potential PCI-free use cases, we added support for the virtio MMIO transport layer. This will allow us to support simple, minimal guest configurations that do not require a PCI bus emulation.
As we want to improve our nested guests support, we added support for exposing a paravirtualized IOMMU device through virtio. This allows for a safer nested virtio and directly assigned devices support.
To add the IOMMU support, we had to make some CLI changes for Cloud Hypervisor users to be able to specify if devices had to be handled through this virtual IOMMU or not. In particular, the --disk
option now expects disk paths to be prefixed with a path=
string, and supports an optional iommu=[on|off]
setting.
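A sketch of the revised syntax (paths are illustrative): the path= prefix is now required, and iommu=on places the device behind the paravirtualized IOMMU.

```shell
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=rootfs.raw,iommu=on
```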
With the latest hypervisor firmware, we can now support the latest Ubuntu 19.10 (Eoan Ermine) cloud images.
After simplifying and changing our guest address space handling, we can now support guests with large amounts of memory (more than 64GB).
Published by cloud-hypervisor-bot about 5 years ago
This release has been tracked through the 0.2.0 project.
Highlights for cloud-hypervisor
version 0.2.0 include:
As part of our general effort to offload paravirtualized I/O to external
processes, we added support for
vhost-user-net backends. This
enables cloud-hypervisor
users to plug a vhost-user
based networking device
(e.g. DPDK) into the VMM as their virtio network backend.
In order to properly implement guest reset and shutdown, we implemented
a minimal version of the hardware-reduced ACPI specification. Together with
a tiny I/O port based ACPI device, this allows cloud-hypervisor
guests to
cleanly reboot and shutdown.
The ACPI implementation is a cloud-hypervisor
build time option that is
enabled by default.
Based on the Firecracker idea of using a dedicated I/O port to measure guest
boot times, we added support for logging guest events through the
0x80
PC debug port. This allows, among other things, for granular guest boot time
measurements. See our debug port documentation
for more details.
We fixed a major performance issue with our initial VFIO implementation: When
enabling VT-d through the KVM and VFIO APIs, our guest memory writes and reads
were (in many cases) not cached. After correctly tagging the guest memory from
cloud-hypervisor
we're now able to reach the expected performance from
directly assigned devices.
We added a shared memory region with DAX support to our virtio-fs shared file system.
This provides better shared filesystem IO performance with a smaller guest
memory footprint.
Thanks to our simple KVM firmware
improvements, we are now able to boot Ubuntu bionic images. We added those to
our CI pipeline.
This release has been tracked through the 0.1.0 project.
Highlights for cloud-hypervisor
version 0.1.0 include:
We added support for the virtio-fs shared file
system, allowing for an efficient and reliable way of sharing a filesystem
between the host and the cloud-hypervisor
guest.
See our filesystem sharing
documentation for more details on how to use virtio-fs with cloud-hypervisor
.
VFIO (Virtual Function I/O) is a kernel framework that exposes direct device
access to userspace. cloud-hypervisor
uses VFIO to directly assign host
physical devices into its guest.
See our VFIO
documentation for more detail on how to directly assign host devices to
cloud-hypervisor
guests.
cloud-hypervisor
supports a so-called split IRQ chip implementation by
implementing support for the IOAPIC.
By moving part of the IRQ chip implementation from kernel space to user space,
the IRQ chip emulation does not always run in a fully privileged mode.
The virtio-pmem implementation emulates a virtual persistent memory device that cloud-hypervisor can e.g. boot from. Booting from a virtio-pmem device allows bypassing the guest page cache and improves the guest memory footprint.
The cloud-hypervisor
linux kernel loader now supports direct kernel boot from
bzImage
kernel images, which is usually the format that Linux distributions
use to ship their kernels. For example, this allows for booting from the host
distribution kernel image.
cloud-hypervisor
now exposes a virtio-console
device to the guest. Although
using this device as a guest console can potentially cut some early boot
messages, it can reduce the guest boot time and provides a complete console
implementation.
The virtio-console
device is enabled by default for the guest console.
Switching back to the legacy serial port is done by selecting
--serial tty --console off
from the command line.
We now run all unit tests from all our crates directly from our CI.
The CI cycle run time has been significantly reduced by refactoring our
integration tests; allowing them to all be run in parallel.