
Virtiofs Implementation in Unikraft

This is a fork of the Unikraft OS with an implemented kernel layer for the virtiofs shared file system.

Background (Virtiofs, Guest-Host File-Sharing)

Virtiofs is a file-system technology that allows a guest to mount a file system physically located on the host, while the host retains parallel access to it. Overall, virtiofs allows multiple guests and the host to securely share a single file system, located on the host, with parallel read-write access.

This is an alternative to remote file-sharing mechanisms like NFS or SMB, which guests and the host can use over a virtualized network. Virtiofs, however, is more efficient in several ways. A significant one is its use of shared memory for communication, which is possible because the guests and the host run on the same machine.
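From the guest's perspective, the share appears as just another mountable file system. The sketch below illustrates this; it assumes this fork registers the file system with vfscore under the name "virtiofs" and that "myfs" is the tag the hypervisor assigned to the device, so both names are placeholders rather than confirmed details of this implementation.

```c
/* Hypothetical sketch: mounting a virtiofs share from the guest.
 * Assumes an fs driver registered as "virtiofs" with vfscore;
 * "myfs" is the device tag chosen on the hypervisor side. */
#include <stdio.h>
#include <sys/mount.h>

int mount_shared_dir(void)
{
	/* mount(): device/tag, mountpoint, fs name, flags, data */
	if (mount("myfs", "/shared", "virtiofs", 0, NULL) != 0) {
		perror("mount virtiofs");
		return -1;
	}
	return 0;
}
```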

A common use case for guest-host file sharing is development. During development it is often useful to run the software in a dedicated virtual environment, which can be a VM. The source files then have to be brought from the host into the VM. Additionally, test and log data might need to be transferred between the VM and the development host.

Implementation

Here we describe the two main parts of the work we performed on Unikraft to enable virtiofs support.

Virtiofs Kernel Layer

The architecture of the kernel layer is as follows:

Figure 1: red denotes components of the virtiofs kernel layer; blue denotes components that existed previously.

As part of the thesis, the virtiofs driver (plat/drivers/virtio/virtio_fs.c) and the ukfuse component (lib/ukfuse/) have been implemented (see Figure 1).
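To make the division of labor concrete, the sketch below shows the shape of a FUSE request as ukfuse might assemble one for the driver to enqueue on the device's virtqueue. The struct layouts and constants follow the published FUSE wire protocol (as in <linux/fuse.h>); the fuse_build_init() helper is an illustrative assumption, not necessarily this fork's actual ukfuse API.

```c
/* Hypothetical sketch of building a FUSE_INIT request, the first
 * message a FUSE client sends to negotiate the protocol version. */
#include <stdint.h>
#include <string.h>

#define FUSE_INIT           26	/* opcode from the FUSE protocol */
#define FUSE_KERNEL_VERSION  7

struct fuse_in_header {
	uint32_t len;     /* total length of this request */
	uint32_t opcode;
	uint64_t unique;  /* request id, echoed in the reply */
	uint64_t nodeid;
	uint32_t uid, gid, pid;
	uint32_t padding;
};

struct fuse_init_in {
	uint32_t major, minor;
	uint32_t max_readahead;
	uint32_t flags;
};

struct fuse_init_req {
	struct fuse_in_header hdr;
	struct fuse_init_in   body;
};

void fuse_build_init(struct fuse_init_req *req, uint64_t unique)
{
	memset(req, 0, sizeof(*req));
	req->hdr.len    = sizeof(*req);
	req->hdr.opcode = FUSE_INIT;
	req->hdr.unique = unique;
	req->body.major = FUSE_KERNEL_VERSION;
	req->body.minor = 31; /* e.g. protocol version 7.31 */
	/* The virtio-fs driver would then enqueue this device-readable
	 * buffer together with a device-writable buffer for the
	 * fuse_out_header and the reply body. */
}
```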

For a more in-depth explanation and analysis, see the thesis paper here.

Virtio Subsystem Upgrade

The second major part of the work has been an upgrade of the virtio subsystem (Figure 1) to support the modern virtio standard for PCI devices, which is how the hypervisor presents virtiofs devices to Unikraft.

The architectural diagram for these changes is as follows:

Figure 2: red color denotes changes made to the 'virtio subsystem'.

The virtio_pci_modern.c component (plat/drivers/virtio/virtio_pci_modern.c) has been added; it is used instead of the legacy virtio_pci.c for modern virtio PCI devices.
Furthermore, functionality for scanning PCI capability lists has been added to pci_bus_x86.c (plat/common/x86/pci_bus_x86.c) in order to work with modern virtio devices.
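For illustration, the following sketch walks a device's PCI capability list the way the virtio 1.x specification prescribes for locating the vendor-specific virtio capabilities (common/notify/ISR/device configuration). The pci_conf_read8()/pci_conf_read16() accessors are assumed placeholders, not the actual helpers in pci_bus_x86.c.

```c
/* Hypothetical sketch of the capability walk needed for modern
 * virtio PCI devices (virtio 1.x spec, section 4.1.4). */
#include <stdint.h>

#define PCI_STATUS           0x06
#define PCI_STATUS_CAP_LIST  0x10	/* device has a capability list */
#define PCI_CAPABILITY_LIST  0x34	/* offset of the first capability */
#define PCI_CAP_ID_VNDR      0x09	/* vendor-specific; used by virtio */

#define VIRTIO_PCI_CAP_COMMON_CFG 1	/* cfg_type of the common config */

/* Assumed config-space accessors, not pci_bus_x86.c's actual API. */
extern uint8_t  pci_conf_read8(uint32_t dev, uint8_t off);
extern uint16_t pci_conf_read16(uint32_t dev, uint8_t off);

/* Return the config-space offset of the first virtio capability of
 * the wanted cfg_type, or 0 if the device does not expose one. */
uint8_t virtio_pci_find_cap(uint32_t dev, uint8_t cfg_type)
{
	uint8_t pos;

	if (!(pci_conf_read16(dev, PCI_STATUS) & PCI_STATUS_CAP_LIST))
		return 0; /* no capability list at all */

	pos = pci_conf_read8(dev, PCI_CAPABILITY_LIST) & ~0x3;
	while (pos) {
		uint8_t id   = pci_conf_read8(dev, pos);     /* cap_vndr */
		uint8_t next = pci_conf_read8(dev, pos + 1); /* cap_next */

		/* byte 3 of struct virtio_pci_cap is cfg_type */
		if (id == PCI_CAP_ID_VNDR &&
		    pci_conf_read8(dev, pos + 3) == cfg_type)
			return pos;

		pos = next & ~0x3;
	}
	return 0;
}
```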

For a more in-depth explanation and analysis, see the thesis paper here.

Results (Performance Improvement)

We have implemented a custom set of benchmarks (here and in this repo at lib/benchmarks/) for common file-system operations to evaluate virtiofs performance. These operations are: sequential/random read and write, file creation, file deletion, and directory listing.

With these benchmarks, we have measured the speed of our virtiofs implementation and compared it against 9pfs (a virtiofs alternative that had previously been implemented in Unikraft) and against the native Linux host, where the file operations have been measured on the Linux host directly rather than from the guest through virtiofs/9pfs. Virtiofs has two ways of performing reads and writes, labeled 'FUSE' and 'DAX' in the plots.

The results are as follows.

Read:

For reads and writes, the 'buffer size' on the X axis is the amount of data we gave to each POSIX read or write request. The less data we give to each request, the larger the overall number of requests. This drives down performance because each request entails an expensive guest-host context switch.
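To make the relationship concrete, here is a minimal sketch of a sequential-read benchmark in the spirit of the ones above; the file path and sizes are illustrative, not taken from lib/benchmarks/.

```c
/* Minimal sketch: read a file in fixed-size chunks. Halving buf_size
 * doubles the number of read() calls, and with virtiofs/9pfs each
 * call costs a guest-host round trip. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	size_t buf_size = 4096; /* the "buffer size" on the X axis */
	char *buf = malloc(buf_size);
	long nreqs = 0;
	ssize_t n;

	int fd = open("/shared/testfile", O_RDONLY);
	if (fd < 0 || !buf)
		return 1;

	while ((n = read(fd, buf, buf_size)) > 0)
		nreqs++; /* one guest-host round trip per request */

	/* e.g. a 64 MiB file: 16384 requests at 4 KiB vs 512 at 128 KiB */
	printf("%ld requests\n", nreqs);
	close(fd);
	free(buf);
	return 0;
}
```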

Write:

Create/Remove/List:

The main takeaways are:

  • Virtiofs through DAX is significantly faster than 9pfs for buffer sizes < 128 KiB.
    • For example, with 4 KiB buffers virtiofs is faster than 9pfs by:
      • ~17 times for sequential reads (from 73 MiB/s to 1287 MiB/s).
      • ~106 times for sequential writes (from 12 MiB/s to 947 MiB/s).
  • Virtiofs with DAX is faster than native Linux for smaller buffers.
    • This is only because the guest is a unikernel, where system calls (file-system operations) have less overhead than in a conventional OS like the host's Linux.
  • 9pfs is preferable for:
    • file-removal operations.
    • reads with buffers > 128 KiB.
  • In all other cases, virtiofs provides better performance.

For a more in-depth analysis and discussion of the results, see the thesis paper here.

