Computer lessons

How to install KVM virtualization systems. The elastic hypervisor


The other day, an interesting report was released by Principled Technologies, a company that specializes, among other things, in all kinds of testing of hardware and software environments. The document " " explains that the ESXi hypervisor can run more virtual machines on the same hardware than the RHEV KVM hypervisor.

It is clear that the study is biased (at least if you look at the title), but since there are not many such documents, we decided to pay attention to it.

For testing, we used a Lenovo x3650 M5 rack server, on which the Microsoft SQL Server 2016 DBMS with an OLTP load was running in virtual machines. The main performance indicator was OPM (orders per minute), which displays a quantitative assessment of executed transactions.

If you do not use Memory Overcommit techniques, the OPM results for 15 virtual machines on a single host are approximately the same on both hypervisors:

But when the number of virtual machines increases, vSphere performs much better:

Crosses mark machines that simply did not start on RHV; the product console produced the following error:

Despite enabling memory optimization techniques in Red Hat Virtualization Manager (RHV-M), such as memory ballooning and kernel same-page merging (KSM), the sixteenth virtual machine still refused to start on KVM:

Well, on vSphere they continued to increase the number of VMs until they ran into a lack of resources:

It turned out that with the overcommit techniques enabled, 24 virtual machines were launched on vSphere versus only 15 on RHV. This led to the conclusion that VMware vSphere can run 1.6 times more virtual machines:

This is hardly an objective test, but it is obvious that in this case ESXi handles memory overcommit and other VM resource optimizations better than KVM.


Tags: VMware, Red Hat, Performance, RHV, vSphere, ESXi, KVM
Tags: KVM, oVirt, Open Source, Update

Recall that RHEV is based on the Kernel-based Virtual Machine (KVM) hypervisor and supports the open cloud architecture OpenStack. Let's see what's new in the updated RHEV version 3.4.

Infrastructure

  • SNMP configuration service to support third-party monitoring systems.
  • Saving the settings of the RHEV cloud installation for the possibility of its recovery in case of failure or for the purpose of replication in other clouds.
  • Rewritten and improved RHEV authentication services.
  • The ability to hot add a processor to a VM (Hot Plug CPU). This requires support from the OS.
  • Non-root users now have access to logs.
  • New installer based on TUI (textual user interface).
  • IPv6 support.
  • Ability to select a connection to the VM console in Native Client or noVNC mode.
  • Ability to change some settings of a running virtual machine.
  • Full support for RHEL 7 as a guest OS.
  • Ability to enable/disable KSM (Kernel Samepage Merging) at the cluster level.
  • Ability to reboot a VM from RHEVM or using a console command.

Networking

  • Tighter integration with OpenStack infrastructure:
    • Security and scalability improvements for networks deployed with Neutron.
    • Support for Open vSwitch technology (extensible virtual switch) and SDN network capabilities.
  • Network Labels - labels that can be used when accessing devices.
  • Correct numbering order for virtual network adapters (vNICs).
  • iproute2 support.
  • A single point of configuration for the network settings of multiple hosts on a specified network.

Storage capabilities

  • Mixed storage domains - the ability to simultaneously use disk devices from iSCSI, FCP, NFS, Posix and Gluster storage to organize the storage of virtual machines.
  • Multiple Storage Domains - the ability to distribute the disks of one virtual machine across several storage domains within the data center.
  • The ability to specify disks that will participate in creating snapshots, as well as those that will not.
  • The mechanism for restoring a VM from a backup has been improved - it is now possible to specify a snapshot of the state to which you want to roll back.
  • Asynchronous task management of Gluster storages.
  • Read-Only Disk for Engine - This feature gives Red Hat Enterprise Virtualization Manager the ability to use read-only disks.
  • Access via multiple paths (multipathing) for iSCSI storages.

Virtualization Tools

  • Guest OS agents (ovirt-guest-agent) for OpenSUSE and Ubuntu.
  • SPICE Proxy - the ability to use proxy servers for user access to their VMs (if, for example, they are located outside the infrastructure network).
  • SSO (Single Sign-On) Method Control - the ability to switch between different end-to-end authentication mechanisms. For now there are only two options: guest agent SSO and without SSO.
  • Supports multiple versions of one virtual machine template.

Scheduler and service level enhancements

  • Improvements to the virtual machine scheduler.
  • Affinity/Anti-Affinity groups (rules for the existence of virtual machines on hosts - place machines together or separately).
  • Power-Off Capacity - a power policy that allows you to turn off the host and prepare its virtual machines for migration to another location.
  • Even Virtual Machine Distribution - the ability to distribute virtual machines among hosts based on the number of VMs.
  • High-Availability Virtual Machine Reservation - a mechanism that allows you to guarantee the recovery of virtual machines in the event of a failure of one or more host servers. It works on the basis of calculating the available capacity of the computing resources of the cluster hosts.

Interface improvements

  • Fixes for bugs where the interface did not always reflect events occurring in the infrastructure.
  • Support for low screen resolutions (previously some elements of the management console were not visible at low resolutions).

You can download Red Hat Enterprise Virtualization 3.4 from this link. Documentation is available.


Tags: Red Hat, RHEV, Update, Linux, KVM

The new version of the RHEL OS has many new interesting features, many of which relate to virtualization technologies. Some major new features in RHEL 7:

  • Native support for packaged Docker applications.
  • Kernel patching utility (Technology Preview) - patching the kernel without rebooting the OS.
  • Direct and indirect integration with Microsoft Active Directory.
  • For boot, root and user data partitions, the default file system is now XFS.
    • For XFS, the maximum file system size has been increased from 100 TB to 500 TB.
    • For ext4 this size has been increased from 16 TB to 50 TB.
  • Improved OS installation process (new wizard).
  • Ability to manage Linux servers using Open Linux Management Infrastructure (OpenLMI).
  • Improvements to NFS and GFS2 file systems.
  • New features of KVM virtualization technology.
  • Ability to run RHEL 7 as a guest OS.
  • Improvements to NetworkManager and a new command-line utility, nmcli, for performing network tasks.
  • Supports Ethernet network connections at speeds up to 40 Gbps.
  • Supports WiGig (IEEE 802.11ad) wireless technology (at speeds up to 7 Gbps).
  • A new Team Driver mechanism that virtually combines network devices and ports into a single interface at the L2 level.
  • New dynamic service FirewallD, which is a flexible firewall that has an advantage over iptables and supports several network trust zones.
  • GNOME 3 in classic desktop mode.

Red Hat provides more details about the new features of RHEL 7.

In terms of virtualization, Red Hat Enterprise Linux 7 has the following major innovations:

  • Technology preview of the virtio-blk-data-plane feature, which allows QEMU I/O commands to be executed in a separate, optimized thread.
  • A technology preview of PCI Bridge technology has appeared, allowing support for more than 32 PCI devices in QEMU.
  • QEMU Sandboxing - improved isolation between guest OSes of the RHEL 7 host.
  • Support for “hot” adding virtual processors to machines (vCPU Hot Add).
  • Multiple Queue NICs - each vCPU has its own transmit and receive queues, which allows you to avoid using other vCPUs (for Linux guest OSes only).
  • Page Delta Compression technology allows the KVM hypervisor to perform migrations faster.
  • KVM now comes with support for paravirtualized Microsoft OS features, such as Memory Management Unit (MMU) and Virtual Interrupt Controller. This allows Windows guests to run faster (these features are disabled by default).
  • Supports EOI Acceleration technology based on the Advanced Programmable Interrupt Controller (APIC) interface from Intel and AMD.
  • Technological preview of USB 3.0 support in guest operating systems on KVM.
  • Support for Windows 8, Windows 8.1, Windows Server 2012 and Windows Server 2012 R2 guest operating systems on the KVM hypervisor.
  • I/O Throttling functions for guest operating systems on QEMU.
  • Support for Ballooning and transparent huge pages technologies.
  • The new virtio-rng device is available as a random number generator for guest OSes.
  • Support for hot migration of guest OSes from a Red Hat Enterprise Linux 6.5 host to a Red Hat Enterprise Linux 7 host.
  • Support for designating NVIDIA GRID and Quadro devices as a second device in addition to the emulated VGA.
  • Para-Virtualized Ticketlocks technology, which improves performance when there are more vCPUs than physical CPUs on the host.
  • Improved PCIe device error handling.
  • New Virtual Function I/O (VFIO) driver improves security.
  • Supports Intel VT-d Large Pages technology when using the VFIO driver.
  • Improved delivery of accurate time to virtual machines on KVM.
  • Support for QCOW2 version 3 format images (see the qemu-img example after this list).
  • Improved Live Migration statistics - total time, expected downtime and bandwidth.
  • Dedicated thread for Live Migration, allowing hot migration to not impact guest OS performance.
  • Emulation of AMD Opteron G5 processors.
  • Support for new instructions of Intel processors for guest operating systems on KVM.
  • Support for VPC and VHDX virtual disk formats in read-only mode.
  • New features of the libguestfs utility for working with virtual disks of machines.
  • New Windows Hardware Quality Labs (WHQL) drivers for Windows guest operating systems.
  • Integration with VMware vSphere: Open VM Tools, 3D graphics drivers for OpenGL and X11, as well as an improved communication mechanism between the guest OS and the ESXi hypervisor.
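
As a quick illustration of the QCOW2 version 3 item above, here is a hedged sketch with qemu-img (the file name and size are arbitrary; compat=1.1 selects the version 3 on-disk format):

$ qemu-img create -f qcow2 -o compat=1.1 vm-disk.qcow2 20G
$ qemu-img info vm-disk.qcow2    # "compat: 1.1" in the output confirms a version 3 image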

Release Notes of the new OS version are available at this link. You can also read about the virtualization functions in the new release of RHEL 7. Source code for the Red Hat Enterprise Linux 7 rpm packages is now available only through the Git repository.


Tags: Linux, QEMU, KVM, Update, RHEL, Red Hat

Ravello has found an interesting way to use nested virtualization in its product Cloud Application Hypervisor, which allows you to universalize the deployment of VMs of different virtualization platforms in public clouds of various service providers.

The main component of this system is HVX technology - its own hypervisor (based on Xen), which is part of the Linux OS and launches guest virtual machines unmodified, using binary translation techniques. These machines can then be hosted in Amazon EC2, HP Cloud, Rackspace clouds, and even private clouds managed by VMware vCloud Director (support for the latter is expected soon).

The Ravello product is a SaaS service, and such nested VM bundles can simply be uploaded to any of the supported hosting services, regardless of the hypervisor it uses. A virtual network between machines is created through an L2 overlay over the existing L3 infrastructure of the hoster using a GRE-like protocol (UDP-based only):

The mechanics of the proposed Cloud Application Hypervisor service are as follows:

  • The user uploads virtual machines to the cloud (machines created on ESXi/KVM/Xen platforms are supported).
  • Describes multi-machine applications using a special GUI or API.
  • Publishes its VMs to one or more supported clouds.
  • The resulting configuration is saved as a snapshot in the Ravello cloud (then, if something happens, it can be restored or uploaded) - this storage can be created either on the basis of cloud storage Amazon S3, CloudFiles, or on the basis of its own block storage or NFS volumes.
  • Each user can then obtain a multi-machine configuration of their application on demand.

The obvious question that comes up first is: what about performance? Well, first of all, the Cloud Application Hypervisor solution is designed for development and testing teams for which performance is not a critical factor.

And secondly, the performance test results for such nested configurations are not that bad:

For those interested in HVX technology, there is a good overview video in Runglish:


Tags: Ravello, Nested Virtualization, Cloud, HVX, VMware, ESXi, KVM, Xen, VMachines, Amazon, Rackspace

The new version of the open virtualization platform RHEV 3.0 is based on the Red Hat Enterprise Linux distribution version 6 and, traditionally, the KVM hypervisor.

New features in Red Hat Enterprise Virtualization 3.0:

  • The Red Hat Enterprise Virtualization Manager management tool is now built on Java, running on the JBoss platform (previously .NET was used, and, accordingly, it was tied to Windows, but now you can use Linux for the management server).
  • A self-service portal for users that allows them to self-deploy virtual machines, create templates, and administer their own environments.
  • New RESTful API that allows you to access all components of the solution from third-party applications.
  • An advanced administration mechanism that provides the ability to granularly assign permissions, delegate authority based on user roles, and hierarchical privilege management.
  • Support for local server disks as storage for virtual machines (but Live Migration is not supported for them).
  • Integrated reporting mechanism that allows you to analyze historical performance data and build forecasts for the development of virtual infrastructure.
  • Optimization for WAN connections, including dynamic compression technologies (picture compression) and automatic adjustment of desktop effects and color depth. In addition, the new version of SPICE has expanded support for desktops with Linux guest operating systems.
  • Updated KVM hypervisor based on the latest Red Hat Enterprise Linux 6.1, released in May 2011.
  • Supports up to 160 logical CPUs and 2 TB of memory for host servers, 64 vCPUs and 512 GB of memory for virtual machines.
  • New features for administering large installations of RHEV 3.0.
  • Support for large memory pages (Transparent Huge Pages, 2 MB instead of 4 KB) in guest OSes, which improves performance thanks to fewer page-table lookups.
  • Optimization of the vhost-net component. The KVM network stack has now been moved from user mode to kernel mode, which significantly increases performance and reduces network latency.
  • Using the functions of the sVirt library, which ensures hypervisor security.
  • A paravirtualized x2apic controller has appeared, which reduces the overhead of maintaining a VM (especially effective for intensive loads).
  • Async-IO technology to optimize I/O and improve performance.

You can download the final release of Red Hat Enterprise Virtualization 3.0 from this link.

And finally, a short video review of Red Hat Enterprise Virtualization Manager 3.0 (RHEV-M):


Tags: Red Hat, Enterprise, Update, KVM, Linux

Well done NetApp! Roman, we are waiting for translation into Russian)


Tags: Red Hat, KVM, NetApp, Storage, NFS

ConVirt 2.0 Open Source allows you to manage Xen and KVM hypervisors included in free and commercial editions of Linux distributions, deploy virtual servers from templates, monitor performance, automate administrator tasks and configure all aspects of virtual infrastructure. ConVirt 2.0 supports the functions of hot migration of virtual machines, "thin" virtual disks (growing as they are filled with data), control of resources of virtual machines (including running ones), extensive monitoring functions and tools for intelligent placement of virtual machines on host servers (manual load balancing).

ConVirt 2.0 currently exists only in the Open Source edition, but the developers promise to soon release the ConVirt 2.0 Enterprise edition, which will differ from the free version in the following features:

Feature comparison of ConVirt 2.0 Open Source and ConVirt 2.0 Enterprise, grouped by area:

Architecture

  • Multi-platform Support
  • Agent-less Architecture
  • Universal Web Access
  • Datacenter-wide Console

Administration

  • Start, Stop, Pause, Resume
  • Maintenance Mode
  • Snapshot
  • Change Resource Allocation on a Running VM

Monitoring

  • Real-time Data
  • Historical Information
  • Server Pools
  • Storage Pools
  • Alerts and Notifications

Provisioning

  • Templates-based Provisioning
  • Template Library
  • Integrated Virtual Appliance Catalogs
  • Thin Provisioning
  • Scheduled Provisioning

Automation

  • Intelligent Virtual Machine Placement
  • Live Migration
  • Host Private Networking
  • SAN, NAS Storage Support

Advanced Automation

  • High Availability
  • Backup and Recovery
  • VLAN Setup
  • Storage Automation
  • Dynamic Resource Allocation
  • Power Saving Mode

Security

  • SSH Access
  • Multi-user Administration
  • Auditing
  • Fine Grained Access Control

Integration

  • Open Repository
  • Command Line Interface
  • Programmatic API

Tags: Xen, KVM, Convirt, Citrix, Red Hat, Free, Open Source,

Convirture, the company behind the 2007 XenMan project (a GUI for managing the Xen hypervisor), recently released the free product Convirture ConVirt 1.0, the new name under which XenMan now ships.

With ConVirt, you can manage Xen and KVM hypervisors using the following capabilities:

  • Managing multiple hosting servers.
  • Snapshots.
  • Hot migration of virtual machines between hosts (Live Migration).
  • VM backup.
  • Simple monitoring of hosts and virtual machines.
  • Support for Virtual Appliances.

You can download Convirture ConVirt 1.0 from this link:

Convirture ConVirt 1.0
Tags: Xen, KVM

In this introductory article, I will briefly introduce all the software tools used in the service development process. They will be discussed in more detail in the following articles.

Why Debian? This operating system is close and understandable to me, so there was no agonizing or tossing about when choosing a distribution. It does not have any particular advantages over Red Hat Enterprise Linux, but the decision was made to work with a familiar system.

If you are planning to independently deploy an infrastructure using similar technologies, I would advise you to take RHEL: thanks to good documentation and well-written application programs it will be, if not an order of magnitude, then certainly twice as simple, and thanks to the developed certification system you will easily find a number of specialists who know this OS at the proper level.

We, again, decided to use Debian Squeeze with a set of packages from Sid/Experimental and some packages backported and compiled with our patches.
There are plans to publish a repository with packages.

When choosing virtualization technology, two options were considered - Xen and KVM.

Also, the fact that there was a huge number of developers, hosters, and commercial solutions based on Xen was taken into account - the more interesting it was to implement a solution based on KVM.

The main reason why we decided to use KVM is the need to run virtual machines with FreeBSD and, in the future, MS Windows.

To manage virtual machines, it turned out to be extremely convenient to use libvirt and the products built on its API: virsh, virt-manager, virt-install, etc.

This is a system that stores the settings of virtual machines, manages them, keeps statistics on them, makes sure that the interface of the virtual machine is raised when starting, connects devices to the machine - in general, it does a lot of useful work and a little more than that.

Of course, the solution is not perfect. The disadvantages include:

  • Absolutely insane error messages.
  • Inability to change part of the virtual machine configuration on the fly, although QMP (QEMU Monitor Protocol) allows this.
  • Sometimes, for some unknown reason, it is impossible to connect to libvirtd - it stops responding to external events.

The main problem in implementing the service at the very beginning was the limitation of resources for virtual machines. In Xen, this problem was solved with the help of an internal scheduler that distributes resources between virtual machines - and, best of all, the ability to limit disk operations was also implemented.

There was nothing like this in KVM until the advent of the kernel resource allocation mechanism (cgroups). As usual in Linux, access to these functions was implemented through a special file system, cgroup, in which, using normal write() system calls, one could add a process to a group, assign it a relative weight, pin it to a particular core, or specify the disk bandwidth that the process may use.
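
For illustration, here is a minimal cgroup v1 sketch of exactly those write() calls (the group name, the QEMU_PID variable and the limits are illustrative, and the legacy hierarchy is assumed to be mounted under /sys/fs/cgroup):

$ sudo mkdir /sys/fs/cgroup/cpu/vm101 /sys/fs/cgroup/blkio/vm101
$ echo $QEMU_PID | sudo tee /sys/fs/cgroup/cpu/vm101/tasks      # add the VM process to the CPU group
$ echo 512 | sudo tee /sys/fs/cgroup/cpu/vm101/cpu.shares       # its relative CPU weight
$ echo $QEMU_PID | sudo tee /sys/fs/cgroup/blkio/vm101/tasks    # and to the block I/O group
$ echo "8:0 10485760" | sudo tee /sys/fs/cgroup/blkio/vm101/blkio.throttle.read_bps_device    # cap reads on device 8:0 to 10 MB/s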

The benefit is that all this is implemented inside the kernel, and it can be used not only on servers but also on the desktop (which was demonstrated in the famous "The ~200 Line Linux Kernel Patch That Does Wonders"). And in my opinion, this is one of the most significant changes in the 2.6 branch, not counting my favorite #12309 and not the addition of yet another file system. Well, perhaps, except for POHMELFS (but purely because of the name).

My attitude towards this utility library is very ambiguous.

On the one hand it looks something like this:

And this thing is also damn difficult to assemble from source, much less into a package: sometimes it seems to me that Linux From Scratch is a little easier to build from scratch.

On the other hand, it is a very powerful thing that allows you to create images for virtual machines, modify them, compress them, install grub, modify the partition table, manage configuration files, transfer hardware machines to a virtual environment, transfer virtual machines from one image to another, transfer virtual machines from an image to hardware and, to be honest, here my imagination lets me down a little. Oh, yes: you can also run a daemon inside a Linux virtual machine and access the virtual machine data live, and do all this in shell, python, perl, java, ocaml. This is a short and by no means exhaustive list of what you can do with this library.

Interestingly, most of the code, as well as the documentation for the project, is generated at build time. OCaml and Perl are widely used. The code itself is written in C, which is then wrapped in OCaml, and the repetitive pieces of code are auto-generated. Work with images is carried out by launching a special service image (the supermin appliance), to which commands are sent over a channel. This rescue image contains a certain set of utilities, such as parted, mkfs and others useful for a system administrator.
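
To make this less abstract, here are a couple of hedged one-liners using the tools that ship with the library (the image path is illustrative):

$ virt-df -a /var/lib/libvirt/images/vm.img                             # free space per file system inside the image
$ guestfish --ro -a /var/lib/libvirt/images/vm.img -i cat /etc/fstab    # inspect the image, mount it read-only and print a file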

Recently I even started using it at home, when I extracted the data I needed from the nandroid image. But this requires a yaffs-enabled kernel.

Other

Below are some more links to descriptions of the software used - read and study them yourself if you are interested.

There comes a time in the life of a system administrator when you have to build an enterprise infrastructure from scratch or remake an existing one that has been inherited. In this article I will talk about how to properly deploy a hypervisor based on Linux KVM and libvirt with LVM (Logical Volume Manager) support.

We'll go through all the intricacies of hypervisor management, including console and GUI utilities, resource expansion, and migrating virtual machines to another hypervisor.

First, let's understand what virtualization is. The official definition is: “Virtualization is the provision of a set of computing resources or their logical combination, abstracted from the hardware implementation, while providing logical isolation from each other of computing processes running on the same physical resource.” That is, in human terms, having one powerful server, we can turn it into several medium-sized servers, and each of them will perform its task assigned to it in the infrastructure, without interfering with others.

System administrators who work closely with virtualization in the enterprise, masters and virtuosos of their craft, are divided into two camps. Some are adherents of the high-tech but insanely expensive VMware for Windows. Others are fans of open source and free solutions based on Linux KVM. We could list the advantages of VMware for a long time, but here we will focus on virtualization based on Linux KVM.

Virtualization technologies and hardware requirements

Now there are two popular virtualization technologies: Intel VT and AMD-V. Intel VT (from Intel Virtualization Technology) implements real addressing mode virtualization; the corresponding hardware I/O virtualization is called VT-d. This technology is often referred to by the abbreviation VMX (Virtual Machine eXtension). AMD created its own virtualization extensions and initially called them AMD Secure Virtual Machine (SVM). When the technology reached the market, it became known as AMD Virtualization (abbreviated as AMD-V).

Before putting the hardware into operation, make sure that the equipment supports one of these two technologies (you can look at the specifications on the manufacturer’s website). If virtualization support is available, it must be enabled in the BIOS before deploying the hypervisor.

Other hypervisor requirements include support for hardware RAID (1, 5, 10), which increases the fault tolerance of the hypervisor when hard drives fail. If there is no hardware RAID support, you can use software RAID as a last resort. But RAID is a must-have!

The solution described in this article hosts three virtual machines and runs successfully on the minimum requirements: Core 2 Quad Q6600 / 8 GB DDR2 PC6400 / 2 × 250 GB HDD SATA (hardware RAID 1).

Installing and configuring a hypervisor

I will show you how to configure a hypervisor using Debian Linux 9.6.0 x86-64 as an example. You can use any Linux distribution you like.

When you decide on the choice of hardware and it is finally delivered, the time will come to install the hypervisor. When installing the OS, we do everything as usual, with the exception of disk partitioning. Inexperienced administrators often select the “Automatically partition all disk space without using LVM” option. Then all data will be written to one volume, which is not good for several reasons. First, if your hard drive fails, you will lose all your data. Secondly, changing the file system will cause a lot of trouble.

In general, to avoid unnecessary steps and waste of time, I recommend using disk partitioning with LVM.

Logical Volume Manager

Logical Volume Manager (LVM) is a subsystem, available on Linux and OS/2, built on top of Device Mapper. Its task is to present different areas of one hard drive, or areas from several hard drives, as a single logical volume. LVM assembles physical volumes (PV, Physical Volume) into a volume group (VG, Volume Group), which in turn is divided into logical volumes (LV, Logical Volume).
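
In command form the same hierarchy looks roughly like this (a sketch with illustrative device and group names; the installer steps below achieve the same result through menus):

$ sudo pvcreate /dev/sdb1                          # mark a partition as a physical volume (PV)
$ sudo vgcreate vg_sata /dev/sdb1                  # build a volume group (VG) from it
$ sudo lvcreate -n vg_sata_root -L 10G vg_sata     # carve a 10 GB logical volume (LV) out of the group
$ sudo lvs                                         # list the logical volumes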

Now all Linux distributions with kernel 2.6 and higher have support for LVM2. To use LVM2 on an OS with kernel 2.4, you need to install a patch.

After the system has detected hard drives, the hard drive partition manager will launch. Select Guided - use entire disk and set up LVM.


Now we select the disk on which our volume group will be installed.



The system will offer options for media layout. Select “Write all files to one partition” and move on.




After saving the changes, we will get one logical group and two volumes in it. The first is the root partition and the second is the swap file. Here many will ask the question: why not choose the markup manually and create the LVM yourself?

I will answer simply: when creating a logical group VG, the boot partition is not written to VG, but is created as a separate partition with the ext2 file system. If this is not taken into account, the boot volume will end up in a logical group. This will doom you to agony and suffering when restoring the boot volume. This is why the boot partition is sent to a non-LVM volume.



Let's move on to the configuration of the logical group for the hypervisor. Select the item “Logical Volume Manager Configuration”.



The system will notify you that all changes will be written to disk. We agree.



Let's create a new group - for example, call it vg_sata.



INFO

The servers use SATA, SSD, SAS, SCSI, NVMe media. When creating a logical group, it is good practice to specify not the host name, but the type of media that is used in the group. I advise you to name the logical group like this: vg_sata, vg_ssd, vg_nvme and so on. This will help you understand what media the logical group is built from.




Let's create our first logical volume. This will be the volume for the root partition of the operating system. Select the “Create logical volume” item.



Select a group for the new logical volume. We only have one.



We assign a name to the logical volume. When assigning a name, it is most correct to use a prefix in the form of the name of a logical group - for example, vg_sata_root, vg_ssd_root, and so on.



Specify the size of the new logical volume. I advise you to allocate 10 GB for the root, but less is possible, since a logical volume can always be expanded.
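
If the root volume does run out of space later, it can be grown without reinstalling; a minimal sketch, assuming the volume is formatted with ext4:

$ sudo lvextend -L +5G /dev/vg_sata/vg_sata_root    # add 5 GB to the logical volume
$ sudo resize2fs /dev/vg_sata/vg_sata_root          # grow the ext4 file system to match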



By analogy with the example above, we create the following logical volumes:

  • vg_sata_home - 20 GB for user directories;
  • vg_sata_opt - 10 GB for installing application software;
  • vg_sata_var - 10 GB for frequently changing data, for example system logs and other programs;
  • vg_sata_tmp - 5 GB for temporary data; allocate more if you expect a large amount of temporary data. In our example this volume was not created, as it was unnecessary;
  • vg_sata_swap - equal to the amount of RAM. This is a section for swap, and we create it as a safety net - in case the RAM on the hypervisor runs out.

After creating all the volumes, we complete the work of the manager.



Now we have several volumes to create operating system partitions. It is not difficult to guess that each partition has its own logical volume.



We create a partition of the same name for each logical volume.



Save and record the changes made.



After saving the disk layout changes, the basic system components will begin to be installed, and then you will be prompted to select and install additional system components. Of all the components, we will need ssh-server and standard system utilities.



After installation, the GRUB boot loader will be generated and written to disk. We install it on the physical disk where the boot partition is saved, that is /dev/sda.




Now we wait until the boot loader finishes writing to disk, and after the notification we reboot the hypervisor.





After the system reboots, log into the hypervisor via SSH. First of all, under root, install the utilities necessary for work.

$ sudo apt-get install -y sudo htop screen net-tools dnsutils bind9utils sysstat telnet traceroute tcpdump wget curl gcc rsync

Configure SSH to your liking. I advise you to immediately do authorization using keys. Restart and check the functionality of the service.

$ sudo nano /etc/ssh/sshd_config
$ sudo systemctl restart sshd; sudo systemctl status sshd

Before installing virtualization software, you need to check the physical volumes and the state of the logical group.

$ sudo pvscan
$ sudo lvs

We install virtualization components and utilities to create a network bridge on the hypervisor interface.

$ sudo apt-get update; sudo apt-get upgrade -y
$ sudo apt install qemu-kvm libvirt-bin libvirt-dev libvirt-daemon-system libvirt-clients virtinst bridge-utils

After installation, we configure the network bridge on the hypervisor. Comment out the existing network interface settings and add new ones:

$ sudo nano /etc/network/interfaces

The content will be something like this:

auto br0
iface br0 inet static
    address 192.168.1.61
    netmask 255.255.255.192
    gateway 192.168.1.1
    broadcast 192.168.0.61
    dns-nameservers 127.0.0.1
    dns-search site
    bridge_ports enp2s0
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
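
After applying the new settings, it is worth checking that the bridge actually came up; a short sketch using the bridge-utils installed earlier:

$ sudo systemctl restart networking    # or simply reboot the hypervisor
$ brctl show br0                       # the physical interface should be listed under the bridge
$ ip a show br0                        # the bridge should carry the static address configured above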

We add our user, under whom we will work with the hypervisor, to the libvirt and kvm groups (for RHEL the group is called qemu).

$ sudo gpasswd -a iryzhevtsev kvm
$ sudo gpasswd -a iryzhevtsev libvirt

Now we need to initialize our logical group to work with the hypervisor, launch it and add it to startup when the system starts.

$ sudo virsh pool-list
$ sudo virsh pool-define-as vg_sata logical --target /dev/vg_sata
$ sudo virsh pool-start vg_sata; sudo virsh pool-autostart vg_sata
$ sudo virsh pool-list

INFO

For an LVM group to work properly with QEMU-KVM, you must first activate the logical group via the virsh console.

Now download the distribution for installation on guest systems and put it in the desired folder.

$ sudo wget https://mirror.yandex.ru/debian-cd/9.5.0/amd64/iso-cd/debian-9.5.0-amd64-netinst.iso
$ sudo mv debian-9.5.0-amd64-netinst.iso /var/lib/libvirt/images/; ls -al /var/lib/libvirt/images/

To connect to virtual machines via VNC, edit the /etc/libvirt/libvirtd.conf file:

$ sudo grep "listen_addr = " /etc/libvirt/libvirtd.conf

Let's uncomment and change the line listen_addr = "0.0.0.0" . We save the file, reboot the hypervisor and check that all services have started and are working.


Let's say you are a young, but still poor student, which means that of all possible platforms you only have a PC on Windows and PS4. One fine day you decide to come to your senses and become a programmer, but wise people on the Internet told you that you cannot become a normal engineer without Linux. You cannot install Fedora as your main and only system, because Windows is still needed for games and VKontakte, and fear or lack of experience prevents you from installing Linux as a second system on your hard drive.

Or, let’s say, you have already grown up, now you are the head of servers in a large company, and one fine day you notice that most of the servers are not even half loaded. You cannot place more applications and data on servers for security reasons, and the costs of supporting and maintaining a growing server farm are rapidly increasing.

Or, let’s say, you already have a beard and glasses, you’re a technical director, and you’re not happy that it takes two months for developers to get a new server to deploy a new application. How to move forward quickly in such conditions?

Or maybe you are an architect who designed a new complex system for processing business analytics. Your system includes things like ElasticSearch, Kafka, Spark and much more, and each component must live separately, be configured intelligently and communicate with other components. As a good engineer, you understand that it is not enough to simply install this entire zoo directly on your system. You need to try to deploy an environment as close as possible to the future production environment, and preferably so that your developments will then work seamlessly on production servers.

And what to do in all these difficult situations? Correct: use virtualization.

Virtualization allows you to install many operating systems completely isolated from each other and running side by side on the same hardware.

A little history. The first virtualization technologies appeared already in the 60s, but the real need for them appeared only in the 90s, as the number of servers grew more and more. It was then that the problem arose of effectively recycling all the hardware, as well as optimizing the processes of updating, deploying applications, ensuring security and restoring systems in the event of a disaster.

Let's leave behind the scenes the long and painful history of the development of various technologies and methods of virtualization - for the curious reader, at the end of the article there will be additional materials on this topic. The important thing is what it all ultimately came to: three main approaches to virtualization.

Approaches to virtualization

Regardless of the approach and technology, when using virtualization there is always a host machine and a hypervisor installed on it that controls the guest machines.

Depending on the technology used, a hypervisor can be either separate software installed directly on the hardware, or part of the operating system.

An attentive reader who loves buzzwords will start muttering in a couple of paragraphs that his favorite Docker containers are also considered virtualization. We’ll talk about container technologies another time, but yes, you’re right, attentive reader, containers are also some kind of virtualization, only at the resource level of the same operating system.

There are three ways for virtual machines to interact with hardware:

Dynamic translation

In this case, the virtual machines have no idea that they are virtual. The hypervisor intercepts all commands from the virtual machine on the fly, processes them, replacing them with safe ones, and then returns them back to the virtual machine. This approach obviously suffers from some performance issues, but it allows you to virtualize any OS, since the guest OS does not need to be modified. Dynamic translation is used in the products of VMware, the leader in commercial virtualization software.

Paravirtualization

In the case of paravirtualization, the source code of the guest OS is specifically modified so that all instructions are executed as efficiently and securely as possible. At the same time, the virtual machine is always aware that it is a virtual machine. One of the benefits is improved performance. The downside is that this way you cannot virtualize, for example, macOS or Windows, or any other OS whose source code you do not have access to. Paravirtualization in one form or another is used, for example, in Xen and KVM.

Hardware virtualization

Processor developers realized in time that the x86 architecture is poorly suited for virtualization, since it was initially designed for one OS at a time. Therefore, after dynamic translation from VMWare and paravirtualization from Xen appeared, Intel and AMD began to release processors with hardware support for virtualization.

At first, this did not provide much of a performance boost, since the main focus of the first releases was improving the processor architecture. However, now, more than 10 years after the advent of Intel VT-x and AMD-V, hardware virtualization is in no way inferior and even in some ways superior to other solutions.

Hardware virtualization is used and required by KVM (Kernel-based Virtual Machine), which we will use from here on.

Kernel-based Virtual Machine

KVM is a virtualization solution built right into the Linux kernel that is as functional as other solutions and superior in usability. Moreover, KVM is an open source technology, which is nevertheless moving forward at full speed (both in terms of code and in terms of marketing) and is built by Red Hat into its products.

This, by the way, is one of the many reasons why we insist on Red Hat distributions.

The creators of KVM initially focused on supporting hardware virtualization and did not reinvent many things. A hypervisor, in essence, is a small operating system that must be able to work with memory, networking, etc. Linux is already very good at doing all this, so using the Linux kernel as a hypervisor is a logical and beautiful technical solution. Each KVM virtual machine is just a separate Linux process, security is provided using SELinux/sVirt, resources are managed using CGroups.
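
You can see this for yourself on any host with running guests; a quick check (the exact process name depends on the distribution, for example qemu-kvm or qemu-system-x86_64):

$ ps -ef | grep qemu    # each running guest shows up as an ordinary process on the host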

We'll talk more about SELinux and CGroups in another article, don't be alarmed if you don't know these words.

KVM doesn't just work as part of the Linux kernel: since kernel version 2.6.20, KVM has been a core component of Linux. In other words, if you have Linux, then you already have KVM. Convenient, right?

It is worth saying that in the field of public cloud platforms, Xen dominates a little more than completely. For example, AWS EC2 and Rackspace use Xen. This is due to the fact that Xen appeared earlier than everyone else and was the first to achieve a sufficient level of performance. But there is good news: in November 2017 Amazon announced new instance types running on a KVM-based hypervisor, which will gradually replace Xen for the largest cloud provider.

Although KVM uses hardware virtualization, for some I/O device drivers KVM can use paravirtualization, which provides performance gains for certain use cases.
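
Inside a Linux guest you can check whether these paravirtualized (virtio) drivers are actually in use; a hedged example:

$ lsmod | grep virtio    # on a typical KVM guest you will see virtio_net, virtio_blk, virtio_pci and friends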

libvirt

We have almost reached the practical part of the article, all that remains is to consider another open source tool: libvirt.

libvirt is a set of tools that provides a single API to many different virtualization technologies. Using libvirt, in principle, it doesn’t matter what the “backend” is: Xen, KVM, VirtualBox or anything else. Moreover, you can use libvirt inside Ruby (and also Python, C++ and much more) programs. You can also connect to virtual machines remotely via secure channels.

By the way, libvirt is being developed by Red Hat. Have you already installed Fedora Workstation as your main system?

Let's create a virtual machine

libvirt is just an API, but it is up to the user how to interact with it. There are a lot of options. We will use several standard utilities. We remind you: we insist on using Red Hat distributions (CentOS, Fedora, RHEL) and the commands below were tested on one of these systems. There may be slight differences for other Linux distributions.

First, let's check whether hardware virtualization is supported. In fact, it will work without its support, only much slower.

egrep --color=auto "vmx|svm|0xc0f" /proc/cpuinfo   # if nothing is displayed, then there is no support :(

Since KVM is a Linux kernel module, you need to check whether it is already loaded, and if not, then load it.

lsmod | grep kvm     # expect kvm plus kvm_intel or kvm_amd; if nothing is displayed, load the modules
modprobe kvm
modprobe kvm_intel   # or: modprobe kvm_amd

It is possible that hardware virtualization is disabled in the BIOS. Therefore, if the kvm_intel/kvm_amd modules are not loaded, then check the BIOS settings.

Now let's install the necessary packages. The easiest way to do this is to install a group of packages at once:

yum group list "Virtual*"

The list of groups depends on the OS used. My group was called Virtualization. To manage virtual machines from the command line, use the virsh utility. Check if you have at least one virtual machine using the virsh list command. Most likely no.
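
A hedged sketch of installing that group and starting the libvirt daemon (the group name may differ on your system, and newer releases use dnf instead of yum):

$ sudo yum group install "Virtualization"
$ sudo systemctl enable --now libvirtd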

If you don’t like the command line, then there is also virt-manager - a very convenient GUI for virtual machines.

virsh can create virtual machines only from XML files, the format of which can be studied in the libvirt documentation. Fortunately, there is also virt-manager and the virt-install command. You can figure out the GUI yourself, but here is an example of using virt-install:

sudo virt-install --name mkdev-vm-0 \
    --location ~/Downloads/CentOS-7-x86_64-Minimal-1511.iso \
    --memory=1024 --vcpus=1 \
    --disk size=8

Instead of specifying the disk size, you can create the disk in advance through virt-manager, or through virsh and an XML file. I used the CentOS 7 Minimal image mentioned above, which is easy to find on the CentOS website.

Now one important question remains: how to connect to the created machine? The easiest way to do this is through virt-manager - just double-click on the created machine and a window with a SPICE connection will open. The OS installation screen awaits you there.

By the way, KVM supports nested virtualization: virtual machines inside a virtual machine. We need to go deeper!

After you install the OS manually, you will immediately wonder how this process can be automated. For that we need Kickstart, a mechanism for automatic first-time OS configuration. It is driven by a simple text file in which you can specify the OS configuration, down to various scripts to be executed after installation.

But where can you get such a file? Why not write it from scratch? Of course not: since we have already installed CentOS 7 inside our virtual machine, we just need to connect to it and grab the file /root/anaconda-ks.cfg - the Kickstart config for reproducing the OS that was just installed. You just need to copy it and edit the contents.
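
One hedged way to pull that file out is the libguestfs tooling (assuming the libguestfs tools are installed on the host, the guest is shut down, and the domain name matches the machine created above):

$ sudo virt-copy-out -d mkdev-vm-0 /root/anaconda-ks.cfg .    # copy the Kickstart config into the current directory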

But just copying a file is boring, so we'll add something else to it. The fact is that by default we will not be able to connect to the console of the created virtual machine from the command line of the host machine. To do this, you need to edit the GRUB bootloader config. Therefore, at the very end of the Kickstart file we will add the following section:

%post --log=/root/grubby.log
/sbin/grubby --update-kernel=ALL --args="console=ttyS0"
%end

%post , as you might guess, will be executed after the OS is installed. The grubby command will update the GRUB config to add the ability to connect to the console.

By the way, you can also specify the ability to connect via the console right during the creation of the virtual machine. To do this, you need to pass one more argument to the virt-install command: --extra-args="console=ttyS0" . After this, you can install the OS itself in interactive text mode from the terminal of your host machine, connecting to the virtual machine via virsh console immediately after its creation. This is especially convenient when you create virtual machines on a remote hardware server.

Now you can apply the created config! virt-install allows you to pass additional arguments when creating a virtual machine, including the path to the Kickstart file.

sudo virt-install --name mkdev-vm-1 \
    --location ~/Downloads/CentOS-7-x86_64-Minimal-1511.iso \
    --initrd-inject /path/to/ks.cfg \
    --extra-args ks=file:/ks.cfg \
    --memory=1024 --vcpus=1 --disk size=8

After the second virtual machine is created (fully automatically), you can connect to it from the command line using the virsh console vm_id command. You can find out vm_id from the list of all virtual machines printed by virsh list.
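
The whole flow, sketched for the machine created above:

$ sudo virsh list --all            # find the name or ID of the new machine
$ sudo virsh console mkdev-vm-1    # attach to its console (exit with Ctrl+])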

One of the benefits of using KVM/libvirt is the amazing documentation, including that produced by Red Hat. The dear reader is invited to study it with due curiosity.

Of course, creating virtual machines like this manually in the console, and then setting them up only using Kickstart is not the most convenient process. In future articles, we will look at many cool tools that make system configuration easier and completely automated.

What's next?

It is impossible to fit everything worth knowing about virtualization into one article. We looked at several options for using virtualization and its advantages, delved a little deeper into the details of its operation and got acquainted with the best, in our opinion, solution for these tasks (KVM), and even created and configured a virtual machine.

It is important to understand that virtual machines are the building blocks of modern cloud architectures. They allow applications to automatically grow to unlimited sizes, in the fastest possible way and with maximum utilization of all resources.

No matter how powerful and rich in services AWS is, its foundation is virtual machines on top of Xen. Every time you create a new droplet on DigitalOcean, you are creating a virtual machine. Almost all the sites you use are hosted on virtual machines. The simplicity and flexibility of virtual machines allows not only to build production systems, but also makes local development and testing ten times easier, especially when the system involves many components.

We learned how to create one single machine - not bad for testing one application. But what if we need several virtual machines at once? How will they communicate with each other? How will they find each other? To do this, we will need to understand how networks generally work, how they work in the context of virtualization, and which components are involved in this work and need to be configured - in the next article in the series.

Original: Welcome to KVM virtualization - Thorough introduction
Author: Igor Ljubuncic
Date of publication: May 4, 2011
Translation: A. Krivoshey
Translation date: July 2011

If you've read my articles on virtualization, you know that I used to mostly use VMware and VirtualBox, but now it's time to try something new. Today I would like to introduce a new series of notes about KVM. Next, perhaps I will switch to Xen or some other system, but now the hero of our topic is KVM.
In this guide, we will talk about KVM (Kernel-based Virtual Machine) technology, which was created by RedHat, and which is open source, being a free alternative to its commercial counterparts. We'll learn how to download, install, and configure KVM, what tools it has for managing virtual machines, how to work with KVM on the command line, write scripts, and much more. In addition, we will touch on creating advanced (including network) configurations, as well as other interesting things. Now let's begin.

KVM Glossary

First, let's talk a little about how KVM works. Nothing too fancy, just a quick introduction so you know the basic terminology.
KVM uses hardware virtualization technology, supported by modern processors from Intel and AMD and known as Intel VT and AMD-V. Using a kernel module loaded into memory, KVM, with the help of a user-mode driver (a modified driver from QEMU), emulates a layer of hardware on top of which virtual machines can be created and run. KVM can function without hardware virtualization (if it is not supported by the processor), but in this case it works in pure emulation mode using QEMU and the performance of virtual machines is greatly reduced.
To manage KVM, you can use a graphical utility similar to products from VMware and VirtualBox, as well as the command line.
The most popular GUI is the Virtual Machine Manager (VMM), created by RedHat. It is also known by its package name as virt-manager and contains several utilities, including virt-install, virt-clone, virt-image and virt-viewer, for creating, cloning, installing and viewing virtual machines. VMM also supports Xen virtual machines.
The basic KVM command line interface is provided by the virsh utility. In certain cases, you can use support utilities such as virt-install to create your virtual machines. Ubuntu has a special utility ubuntu-vm-builder, developed by Canonical, with which you can create Ubuntu builds.
If you would like to learn more about KVM, further information can be found at the following addresses:

Advantages and Disadvantages of KVM

Do you need KVM? It depends on what you need it for.
If you haven't used virtual machines before, or have started them a few times just for fun, then mastering KVM can be difficult. This program is controlled primarily from the command line and is not as user friendly as VMware or VirtualBox. We can say that in terms of the graphical interface, KVM lags behind its competitors by several years, although in fact it is at least not inferior to them in terms of capabilities. KVM capabilities are most in demand when used for commercial purposes in a business environment.
Further, if your processor does not support hardware virtualization, then KVM will operate in a very slow and inefficient software emulation mode. In addition, it is known that KVM conflicts with VirtualBox, but this case will be discussed in a separate note.
Based on the above, we can conclude that KVM is more suitable for people who engage in virtualization for professional purposes. It is unlikely that it will become your favorite home toy, but if you decide to spend some effort to study it, the knowledge gained from this will allow you to be familiar with virtualization technologies. Unlike VMware and VirtualBox, which initially assume that the user will work with the program using a graphical interface, KVM is focused on using the command line and writing scripts.
To summarize, we can say that the advantages of KVM lie in the use of the latest virtualization technologies, the absence of any license restrictions in use, and a powerful command line interface. If your processor doesn't support hardware virtualization, you don't want to write scripts, and you prefer easier-to-administer systems like VMware Server, ESXi, or VirtualBox, then KVM is not for you.

Test platform

KVM can be used on any Linux distribution. However, the main developer and sponsor of KVM is RedHat. For example, RHEL comes out of the box with KVM, so you can find it on any RedHat-based distribution such as CentOS, Scientific Linux, or Fedora.
Since I mainly use Ubuntu at home, I will test KVM on this system, installed on my relatively new HP laptop, equipped with an i5 processor with support for hardware virtualization.
In this article, I will tell you how to install KVM on 64-bit Ubuntu Lucid (LTS).

Preparing for installation

First you need to check if your processor supports hardware virtualization. This is done using the following command:

$ egrep -c "(vmx|svm)" /proc/cpuinfo

If the output is a non-zero number, everything is fine. In addition, you need to check that virtualization technology is activated in the BIOS.
Naturally, after activating it, you must reboot the machine for the changes to take effect. To check, run the kvm-ok command:
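
As a hedged example, with virtualization enabled the output of kvm-ok typically looks something like this (exact wording may vary between versions):

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used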

Downloading and installing KVM

For KVM to work, you need to install the following packages (for distributions with apt):

$ apt-get install qemu-kvm libvirt-bin

$ apt-get install bridge-utils virt-manager python-virtinst

P.S. Packages may be named differently on different distributions. For example, virt-install might be called python-virt-install or python-virtinst. The dependencies for virt-clone, virt-image and virt-viewer should be installed automatically. Contrary to what is written in most manuals, the bridge utilities do not need to be installed. They are only needed if you are going to create a network bridge between virtual and physical network cards. Most manuals also state that most wireless network interfaces do not work with bridges. This may be true for some particular case, but for me the bridge works great with wireless adapters, so let’s hope that everything will work for you too.
I highly recommend VMM (virt-manager). Moreover, it is better to install all support utilities, including virt-viewer, virt-install, virt-image and virt-clone.
And one last thing. You may prefer ubuntu-vm-builder:

$ apt-get install ubuntu-vm-builder

In addition, there will likely be a large number of dependencies installed, so the download may take a significant amount of time.
P.S. On RedHat use yum install, on SUSE - zypper install.

Conflict with VirtualBox

I will again express a different opinion from what is stated in most guides: KVM and VirtualBox can be installed together on the same system. But you won’t be able to run them at the same time. In other words, the kernel module of one of the virtual machines must be unloaded from RAM. But this is not a reason to refuse installation. Just try to see if they work for you. If not, this problem can be fixed. Later I will post a separate guide dedicated to fixing this problem. I now have both virtual machines installed and running.

Using KVM

Well, now the most interesting part. We will begin our acquaintance with KVM with its graphical interface, which differs little from its analogues, such as the VMware console and especially VirtualBox.

Virtual Machine Manager (VMM)

When you first launch the program, you will see two categories, both not connected. These are links to standard KVM modules that are not yet working. To use them, right-click and select "connect".

To add a new connection, select File > Add Connection from the menu. This will open a window in which you can set the hypervisor type and connection type. VMM can use both local and remote connections, including QEMU/KVM and Xen. In addition, all authentication methods are supported.

You can also check the autoconnect box. The next time you start the program, these connections will be ready to use. This is similar to the VMware Server startup interface. Just for example:

Kernel vs Usermode

You may ask, what is the difference between normal/default and Usermode? When using Usermode, the virtual machine can be run without administrative privileges, but its network functionality will be limited.

We continue to study VMM

Let's briefly look at the remaining functions of the program.
Network functionality can be viewed or changed by opening Host Details. I plan to consider this issue in detail in a separate guide. There we will install utilities for the network bridge.

Similarly, you can change the parameters of the disk subsystem:

Changing Presets

VMM has a small set of options, changing which you can better customize the program to your needs. For example, you can activate the display of the VMM icon in the system tray, set the statistics collection interval, activate data collection for disk and network metrics, configure keyboard capture, console scaling, audio system options, etc.

You will then be able to view more detailed information about the virtual machine. For example, below is the CPU, disk, and network usage output for an Ubuntu guest.

The system tray icon looks like this:

Now we are ready to create a new virtual machine.

Creating a virtual machine

You can create a virtual machine from the command line, but first we’ll use VMM. The first step should be intuitive. Enter a name and specify the location of the installation disk. This can be either a local device in the form of a CD/DVD disk or ISO image, or an HTTP or FTP server, NFS or PXE.

We use local media. Now you need to set whether it will be a physical device or an image. In our case, an ISO is used. Next you need to select the guest OS type and version. This does not require pinpoint precision, but the right choice will improve the performance of the virtual machine.

The fourth stage is a virtual disk. You can create a new image or use an existing one. You must select the disk size and specify whether the disk image will be created immediately at a given size, or its size will increase dynamically as needed. It should be noted that allocating all the space for a disk image at once improves performance and reduces file system fragmentation.

Next we will pay more attention to the disk subsystem. However, note that when running in Usermode, you will not have write access to /var, where virtual disk images are stored by default. Therefore, you will need to set a different location for the images. This issue will be covered in more detail in a separate article.
Stage 5 is the output of summary data with the ability to configure some advanced options. Here you can change the network type, set fixed MAC addresses, select the virtualization type and target architecture. If you are running in Usermode, your network configuration options will be limited; for example, you will not be able to create bridges between network interfaces. One last thing: if your processor does not support hardware virtualization, the Virt Type field will be QEMU and it will not be possible to change it to KVM. Below we will look at the disadvantages of working in emulation mode. Now you can see what typical settings for an Ubuntu virtual machine look like:

Our machine is ready to use.

Setting up a virtual machine

The VM console also has some interesting options. You can send signals to the guest, switch between virtual consoles, reboot or shutdown the guest, clone, move, save the state of the virtual machine, take screenshots, and so on. Again everything is the same as the competitors.

Below are a couple of screenshots showing the options for cloning and moving a virtual machine. In the future we will consider this issue in detail.

Starting a virtual machine

Now comes the fun part. Below are some beautiful screenshots...
Let's start with the boot menu of the 32-bit version of Ubuntu 10.10 Maverick:

The Puppy Linux desktop is great as always:

Now Ubuntu running under NAT. Notice the low CPU usage. We'll talk about this later when we discuss emulation mode.

The console window can be resized to match the guest desktop resolution. The following screenshot shows Puppy and Ubuntu side by side:

Please note that the system load is low. With this emulation mode, you can run multiple virtual machines simultaneously.

If necessary, you can delete the virtual machine along with all its files:

Command line

Well, now let’s take a closer look at the unloved command line. For example, let's list all available virtual machines using virsh.

$virsh "list --all"

Below is a sequence of commands to create and run a virtual machine using virt-install.

The complete command looks like this:

$ virt-install --connect qemu:///system -n puppy -r 512 -f puppy.img -c lupu-520.iso --vnc --noautoconsole --os-type linux --accelerate --network=network:default

--connect qemu:///system specifies the hypervisor type. The system option is used when running a machine on a bare kernel as a superuser. When running as a regular user, use the session option.
-n puppy is the unique name of the virtual machine. It can be changed using virsh.
-r 512 sets the RAM size.
-f specifies the disk image file. In my case it's puppy.img, which I created using the dd command.
-c specifies the CD-ROM, which can be either a physical device or an ISO image.
--vnc creates a guest console and exports it as a VNC server. The --noautoconsole option prevents the console from opening automatically when the virtual machine starts.
--os-type specifies the type of guest operating system.
--accelerate allows KVM to use optimization features that improve guest performance.
--network determines the network type. In our case, the default connection is used.

There are many other functions that set parameters such as the number of processor cores, fixed MAC addresses, etc. All of them are described in detail in the man page. Despite the apparent complexity, control using the command line is mastered quite quickly.

Pure emulation mode

I already said that it is terribly ineffective. Now let's confirm this in practice. For example, in the screenshot below you can see that the system, when operating, consumes all the processor resources available to it, which in this case, with one core, constitutes 25% of the physical processor resources. This means that four virtual machines will completely load the quad-core host.

In addition, the performance of the guest system is below all criticism. If with hardware virtualization enabled, loading the guest Ubuntu took me about 1 minute, then after disabling it it took 20 minutes. It should be noted that without the use of hardware virtualization, the performance of QEMU/KVM is much lower than that of its competitors.