libvirt architecture

Currently libvirt supports two kinds of virtualization, and its internal structure is based on a driver model which simplifies adding new engines:

Xen support

When running in a Xen environment, programs using libvirt have to execute in "Domain 0", which is the primary Linux OS loaded on the machine. That OS kernel provides most if not all of the actual drivers used by the set of domains. It also runs the Xen Store, a database of information shared by the hypervisor, the backend drivers, any running domains, and libxl (aka libxenlight). libxl provides a set of APIs for creating and managing domains, which can be used by applications such as the xl tool provided by Xen or by libvirt. The hypervisor, drivers, kernels and daemons communicate through a shared system bus implemented in the hypervisor. The figure below tries to provide a view of this environment:

The Xen architecture

The library will interact with libxl for all management operations on a Xen system.

Note that the libvirt libxl driver only supports root access.

QEMU and KVM support

The model for QEMU and KVM is essentially the same: KVM is based on QEMU for the process controlling a new domain, and only small details differ between the two. In both cases the libvirt API is provided by a controlling process forked by libvirt in the background, which launches and controls the QEMU or KVM process. That program, called libvirt_qemud, talks through a specific protocol to the library, and connects to the console of the QEMU process in order to control it and report on its status. Libvirt tries to expose all the emulation models of QEMU; the selection is done when creating the new domain, by specifying the targeted architecture and machine type.
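For illustration, the targeted architecture and machine type are selected in the domain description passed to libvirt when the domain is created. A minimal sketch of such a description might look like the following (the domain name, memory size, and disk path are hypothetical, and a real description usually carries more elements):

```xml
<domain type='qemu'>
  <name>demo-guest</name>
  <memory unit='KiB'>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <!-- the targeted architecture and machine type are chosen here -->
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-guest.img'/>
      <target dev='hda'/>
    </disk>
  </devices>
</domain>
```

Changing the arch and machine attributes is what selects a different QEMU emulation model for the new domain.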

The code controlling the QEMU process is available in the qemud/ directory.

Driver based architecture

As the previous section explains, libvirt can communicate with the current hypervisor over different channels, and should also be able to support different kinds of hypervisors. To simplify the internal design and code, ease maintenance, and simplify the support of other virtualization engines, the internals have been structured around one core component, the libvirt.c module, which acts as a front-end for the library API, and a set of hypervisor drivers defining a common set of routines. That way the Xen Daemon access, the Xen Store access, and the hypervisor hypercalls are all isolated in separate C modules, each implementing at least a subset of the common operations defined by the drivers present in driver.h:

Note that a given driver may only implement a subset of those functions (for example, saving a Xen domain's state to disk and restoring it is only possible through the Xen Daemon); in that case the driver entry points for unsupported functions are initialized to NULL.