Running KVM on Linux Mint 19: random jots
General
Exactly like my previous post from 14 years ago, these are random jots that I took as I set up a QEMU/KVM-based virtual machine on my Linux Mint 19 computer. This time, the purpose was to prepare myself for moving a server from an OpenVZ container to KVM.
Other version details, for the record: libvirt version 4.0.0, QEMU version 2.11.1, Virtual Machine manager 1.5.1.
Installation
Install some relevant packages:
# apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients virt-manager virt-viewer ebtables ovmf
This installed a few services: libvirt-bin, libvirtd, libvirt-guests, virtlogd, qemu-kvm and ebtables, as well as a couple of sockets, virtlockd.socket and virtlogd.socket, with their attached services.
My regular username on the computer was added automatically to the “libvirt” group; however, that doesn’t take effect until one logs out and in again. Without belonging to this group, one gets the error message “Unable to connect to libvirt qemu:///system” when attempting to run the Virtual Machine Manager. Or in more detail: “libvirtError: Failed to connect socket to ‘/var/run/libvirt/libvirt-sock’: Permission denied”.
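To check whether the group is already effective in the current session, list the session’s groups (a generic check, nothing libvirt-specific):

```shell
# Print "yes" if the libvirt group is effective in this session,
# "no" otherwise (in which case: log out and in again, or use sg).
if id -nG | tr ' ' '\n' | grep -qx libvirt ; then
    echo yes
else
    echo no
fi
```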
The lazy and temporary solution is to run the Virtual Machine Manager with “sg”. So instead of the usual command for starting the GUI tool (NOT as root):
$ virt-manager &
Use “sg” (or start a session with the “newgrp” command):
$ sg libvirt virt-manager &
This is necessary only until next time you log in to the console. I think. I didn’t get that far. Who logs out?
There’s also a command-line utility, virsh. For example, to list all running machines:
$ sudo virsh list
Or just “sudo virsh” for an interactive shell.
Note that without root permissions, virsh connects to the per-user qemu:///session instance rather than qemu:///system, so the list is simply empty. This is really misleading.
General notes
- Virtual machines are called “domains” in several contexts (within virsh in particular).
- To get the mouse out of the graphical window, use Ctrl-Alt.
- For networking to work, some rules related to virbr0 are automatically added to the iptables firewall. If these are absent, go “systemctl restart libvirtd” (don’t do this with virtual machines running, of course).
- These iptables rules are important in particular for WAN connections. Apparently, they allow virbr0 to make DNS queries to the local machine (adding rules to the INPUT and OUTPUT chains). In addition, the FORWARD rule allows forwarding anything to and from virbr0 (as long as the correct address mask is matched). Plus a whole lot of stuff around POSTROUTING (masquerading). Quite disgusting, actually.
- There are two Ethernet interfaces related to KVM virtualization: vnet0 and virbr0 (typically). For sniffing, virbr0 is a better choice, as it’s the virtual machine’s own bridge to the system, so there is less noise. This is also the interface that has an IP address of its own.
- A vnetN pops up for each virtual machine that is running; virbr0 is there regardless.
- The configuration files are kept as fairly readable XML files in /etc/libvirt/qemu
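For the record, this is roughly what such a file looks like. A heavily trimmed, hypothetical example (the name, sizes and path are made up, and a real file has many more elements); editing is best done with “virsh edit”, so that libvirtd notices the change:

```xml
<!-- Trimmed, hypothetical domain definition; not a complete file -->
<domain type='kvm'>
  <name>testvm</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/testvm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```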
- The images are typically held at /var/lib/libvirt/images, owned by root with 0600 permissions.
- The libvirtd service runs /usr/sbin/libvirtd as well as two processes of /usr/sbin/dnsmasq. When a virtual machine runs, it also runs an instance of qemu-system-x86_64 on its behalf.
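A quick way to see which of these processes are around (the trailing echo is just so the command always prints something):

```shell
# List libvirt-related process names currently running on the host
ps -e -o comm= | grep -E 'libvirtd|dnsmasq|qemu' || echo "none running"
```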
Creating a new virtual machine
Start the Virtual Machine Manager. The GUI is good enough for my purposes.
$ sg libvirt virt-manager &
- Click on the “Create new virtual machine” and choose “Local install media”. Set the other parameters as necessary.
- As for storage, choose “Select or create custom storage” and create a qcow2 volume in a convenient location on the disk (/var/lib/libvirt/images is hardly a good place for it, as it’s on the root partition).
- In the last step, choose “customize configuration before install”.
- Network selection: Virtual network ‘default’: NAT.
- Change the NIC, Disk and Video to VirtIO as mentioned below.
- Click “Begin Installation”.
Do it with VirtIO
That is, use Linux’ paravirtualization drivers rather than emulating real hardware.
To edit a machine’s settings, go to View > Details.
This is lspci’s response with a default virtual machine:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 04)
00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20)
00:04.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 01)
00:05.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
00:05.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
00:05.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
00:05.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
00:06.0 Communication controller: Red Hat, Inc Virtio console
00:07.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
Cute, but the important interfaces (disk and network, in particular) are emulations of real hardware. In other words, this will run really slowly.
Testing link speed: on the host machine (10.1.1.3 in this example), start a listener that discards everything:
$ nc -l 1234 < /dev/null > /dev/null
And on the guest:
$ dd if=/dev/zero bs=128k count=4k | nc -q 0 10.1.1.3 1234
4096+0 records in
4096+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 3.74558 s, 143 MB/s
Quite impressive for hardware emulation, I must admit. But it can get better.
Things to change from the default settings:
- NIC: Choose “virtio” as device model, keep “Virtual network ‘default’” as NAT.
- Disk: On “Disk bus”, don’t use IDE, but rather “VirtIO” (it will appear as /dev/vda etc.).
- Video: Don’t use QXL, but Virtio (without 3D acceleration, which wasn’t supported on my machine). Actually, I’m not so sure about this one. For example, Ubuntu’s installation live boot gave me a black screen occasionally with Virtio.
Note that it’s possible to use a VNC server instead of “Display spice”.
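In the domain XML, these choices boil down to a few elements. A sketch of the relevant fragments after the change (attributes trimmed; the “was” comments reflect the defaults listed by lspci above):

```xml
<!-- Fragments only, not a complete domain definition -->
<disk type='file' device='disk'>
  <target dev='vda' bus='virtio'/>  <!-- was bus='ide' -->
</disk>
<interface type='network'>
  <model type='virtio'/>            <!-- was the emulated rtl8139 -->
</interface>
<video>
  <model type='virtio'/>            <!-- was type='qxl' -->
</video>
<graphics type='vnc' port='-1'/>    <!-- instead of type='spice' -->
```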
After making these changes:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Red Hat, Inc Virtio GPU (rev 01)
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 01)
00:05.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
00:05.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
00:05.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
00:05.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
00:06.0 Communication controller: Red Hat, Inc Virtio console
00:07.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
00:08.0 SCSI storage controller: Red Hat, Inc Virtio block device
Try the speed test again?
$ dd if=/dev/zero bs=128k count=4k | nc -q 0 10.1.1.3 1234
4096+0 records in
4096+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.426422 s, 1.3 GB/s
Almost ten times faster.
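The nc test says nothing about disk performance. A similarly crude check for the VirtIO disk’s write throughput inside the guest (the path is an arbitrary scratch file; conv=fdatasync makes dd flush before reporting the timing, so the figure isn’t just the page cache):

```shell
# Write 64 MiB of zeros and sync before dd prints its throughput figure
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm /tmp/ddtest
```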
Preparing a live Ubuntu ISO for ssh
$ sudo su
# apt install openssh-server
# passwd ubuntu
During the installation of openssh-server, there’s a question about which configuration file to use. Choose the package maintainer’s version.
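With sshd running and a password set, all that remains is to find the guest’s IP address, either with “virsh domifaddr” on the host, or from within the guest (shown here; the address is handed out by libvirt’s dnsmasq on the ‘default’ network):

```shell
# Inside the guest: list IPv4 addresses; the one on the NATed
# interface (not 127.0.0.1) is what you ssh to from the host.
ip -4 addr show | grep inet
```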