
Virtualization Restrictions in Red Hat Linux with KVM

This article covers virtualization restrictions in Red Hat Enterprise Linux, which are the additional support and product restrictions that apply to the virtualization packages.


The following notes apply to all versions of Red Hat Virtualization:

1. Supported limits reflect the current state of system testing by Red Hat and its partners. Systems exceeding these supported limits may be included in the Hardware Catalog after joint testing between Red Hat and its partners. Entries in the Hardware Catalog are fully supported, even if they exceed the supported limits posted here. In addition to supported limits reflecting hardware capability, there may be additional limits under the Red Hat Enterprise Linux subscription terms. Supported limits are subject to change based on ongoing testing activities.


2. These limits do not apply to Red Hat Enterprise Linux (RHEL) with KVM virtualization, which offers virtualization for low-density environments.


3. Guest operating systems have different minimum memory requirements; virtual machine memory can be allocated as small as the guest operating system requires.

Read More



Guest unable to reach host using macvtap interface - Fix it Now

This article covers how to fix the issue with guests unable to reach the host using macvtap interface.

This issue happens when a guest virtual machine can communicate with other guests, but cannot connect to the host machine after being configured to use a macvtap (also known as type='direct') network interface.


To resolve this error (guests unable to reach the host using macvtap interface), simply create an isolated network with libvirt:

1. Add and save the following XML in the /tmp/isolated.xml file. If the 192.168.254.0/24 network is already in use elsewhere on your network, you can choose a different network.

<network>

  <name>isolated</name>

  <ip address='192.168.254.1' netmask='255.255.255.0'>

    <dhcp>

      <range start='192.168.254.2' end='192.168.254.254' />

    </dhcp>

  </ip>

</network>

2. Create the network with this command: virsh net-define /tmp/isolated.xml

3. Set the network to autostart with the virsh net-autostart isolated command.

4. Start the network with the virsh net-start isolated command.

5. Using virsh edit name_of_guest, edit the configuration of each guest that uses macvtap for its network connection and add a new <interface> in the <devices> section similar to the following (note that the <model type='virtio'/> line is optional):

<interface type='network'>

  <source network='isolated'/>

  <model type='virtio'/>

</interface>

6. Shut down, then restart each of these guests.

Since this new network is isolated to only the host and guests, all other communication from the guests will use the macvtap interface.
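Once the guests are back up, you can confirm the setup from the host; the 192.168.254.1 gateway address below comes from the XML defined in step 1:

# virsh net-list --all

Then, from inside a guest, ping the host over the isolated network:

$ ping -c 3 192.168.254.1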

Read More



Boot a guest using PXE - Do it now

This article covers how to boot a guest using PXE. PXE booting is supported for Guest Operating Systems that are listed in the VMware Guest Operating System Compatibility list and whose operating system vendor supports PXE booting of the operating system.

The virtual machine must meet the following requirements:

1. Have a virtual disk without operating system software and with enough free disk space to store the intended system software.

2. Have a network adapter connected to the network where the PXE server resides.


A virtual machine is not complete until you install the guest operating system and VMware Tools. Installing a guest operating system in your virtual machine is essentially the same as installing it in a physical computer.


To use PXE with Virtual Machines:

You can start a virtual machine from a network device and remotely install a guest operating system using a Preboot Execution Environment (PXE). 

You do not need the operating system installation media. When you turn on the virtual machine, the virtual machine detects the PXE server.


To Install a Guest Operating System from Media:

You can install a guest operating system from a CD-ROM or from an ISO image. Installing from an ISO image is typically faster and more convenient than a CD-ROM installation. 


To Upload ISO Image Installation Media for a Guest Operating System:

You can upload an ISO image file to a datastore from your local computer. You can do this when a virtual machine, host, or cluster does not have access to a datastore or to a shared datastore that has the guest operating system installation media that you require.


How to Use a private libvirt network?

1. Boot a guest virtual machine using libvirt with PXE booting enabled. You can use the virt-install command to create/install a new virtual machine using PXE (a fuller non-interactive example is sketched after these steps):

virt-install --pxe --network network=default --prompt

2. Alternatively, ensure that the guest network is configured to use your private libvirt network, and that the XML guest configuration file has a <boot dev='network'/> element inside the <os> element, as shown in the following example:

<os>

   <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>

   <boot dev='network'/>

   <boot dev='hd'/>

</os>

3. Also ensure that the guest virtual machine is connected to the private network:

<interface type='network'>

   <mac address='52:54:00:66:79:14'/>

   <source network='default'/>

   <target dev='vnet0'/>

   <alias name='net0'/>

   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>

</interface>
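For a complete, non-interactive PXE install, the same idea can be expanded into something like the following sketch. The guest name, memory, disk size, and OS variant are placeholders chosen for illustration; adjust them for your environment:

# virt-install --name pxe-guest --memory 2048 --vcpus 2 \
    --disk size=20 --pxe --network network=default \
    --os-variant generic --graphics none

The --pxe flag makes the first boot go to the network, and --network network=default attaches the guest to the libvirt network that hosts (or relays to) your PXE server.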

Read More



Create CentOS Fedora RHEL VM Template on KVM - How to do it

This article covers how to create CentOS/Fedora/RHEL VM Templates on KVM. VM Templates are more useful when deploying high numbers of similar VMs that require consistency across deployments. If something goes wrong in an instance created from the Template, you can clone a fresh VM from the template with minimal effort.


To install KVM in your Linux system:

The KVM service (libvirtd) should be running and enabled to start at boot.

$ sudo systemctl start libvirtd

$ sudo systemctl enable libvirtd

Enable vhost-net kernel module on Ubuntu/Debian.

$ sudo modprobe vhost_net

$ echo vhost_net | sudo tee -a /etc/modules
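You can verify that the module is loaded with a quick check:

$ lsmod | grep vhost_net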


How to Prepare a CentOS / Fedora / RHEL VM template?

1. Update system

After you finish VM installation, login to the instance and update all system packages to the latest versions.

$ sudo yum -y update

2. Install standard basic packages missing:

$ sudo yum install -y epel-release vim bash-completion wget curl telnet net-tools unzip lvm2 

3. Install acpid and cloud-init packages.

$ sudo yum -y install acpid cloud-init cloud-utils-growpart

$ sudo systemctl enable --now acpid

4. Disable the zeroconf route

$ echo "NOZEROCONF=yes" | sudo tee -a /etc/sysconfig/network

5. Configure GRUB_CMDLINE_LINUX (for OpenStack usage).

If you plan on exporting the template to the OpenStack Glance image service, edit the /etc/default/grub file and configure the GRUB_CMDLINE_LINUX option. Your line should look like the one below: remove rhgb quiet and add console=tty0 console=ttyS0,115200n8.

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap console=tty0 console=ttyS0,115200n8"

Generate grub configuration.

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

6. Install other packages you need on your baseline template.

7. When done, power off the virtual machine.


How to Clean a VM template?

You need the virt-sysprep tool (part of the libguestfs-tools package) for cleaning the instance.

$ sudo virt-sysprep -d centos7
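Here centos7 is the libvirt domain name of the template VM. If you want to see exactly what will be reset before running it, virt-sysprep can list its cleanup operations:

$ sudo virt-sysprep --list-operations

By default it removes host-specific data such as SSH host keys, log files, and the machine ID, so that clones made from the template start clean.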

Read More



Redirect FreeBSD Console To A Serial Port for KVM Virsh - How to do it

This article covers how to redirect the FreeBSD console in KVM to a serial port.

FreeBSD does support a dumb terminal on a serial port as a console.


This is useful for quick logins or for debugging guest system problems without using ssh.

1. First, login as root using ssh to your guest operating systems:

$ ssh ibmimedia@freebsd.ibmimedia.com

su -

2. Edit /boot/loader.conf, enter:

# vi /boot/loader.conf

3. Append the following entry:

console="comconsole"

4. Save and close the file. Edit /etc/ttys, enter:

# vi /etc/ttys

5. Find the line that reads as follows (on modern FreeBSD releases the serial device is named ttyu0 rather than ttyd0):

ttyd0  "/usr/libexec/getty std.9600"   dialup  off secure

6. Update it as follows:

ttyd0   "/usr/libexec/getty std.9600"   vt100   on secure

7. Save and close the file. Reboot the guest, enter:

# reboot

8. After reboot, you can connect to the FreeBSD guest as follows from the host (first get the list of running guest operating systems):

# virsh list

Sample outputs:


 Id Name                 State

----------------------------------

  3 ographics            running

  4 freebsd              running

9. Now, connect to the FreeBSD guest, enter:

virsh console 4

OR

virsh console freebsd
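To leave the serial console and return to the host shell, press Ctrl+] (the default virsh console escape sequence).

Note that virsh console only works if the guest's libvirt definition exposes a serial/console device. Most KVM guests have one by default; it looks similar to this sketch:

<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>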

Read More



PXE Boot or DHCP Failure on Guest - Fix it now

This article covers how to fix PXE Boot (or DHCP) Failure on Guest.

Nature of this error:

A guest virtual machine starts successfully, but is then unable to acquire an IP address from DHCP, to boot using the PXE protocol, or both. There are two common causes of this error: having a long forward delay time set for the bridge, and the iptables package and kernel not supporting checksum mangling rules.


Cause of the PXE boot (or DHCP) failure on the guest:

Long forward delay time on bridge.

This is the most common cause of this error. If the guest network interface is connecting to a bridge device that has STP (Spanning Tree Protocol) enabled, as well as a long forward delay set, the bridge will not forward network packets from the guest virtual machine onto the bridge until at least that number of forward delay seconds have elapsed since the guest connected to the bridge. This delay allows the bridge time to watch traffic from the interface and determine the MAC addresses behind it, and prevent forwarding loops in the network topology. If the forward delay is longer than the timeout of the guest's PXE or DHCP client, then the client's operation will fail, and the guest will either fail to boot (in the case of PXE) or fail to acquire an IP address (in the case of DHCP).


Fix for the PXE boot (or DHCP) failure on the guest:

If this is the case, change the forward delay on the bridge to 0, or disable STP on the bridge.

This solution applies only if the bridge is not used to connect multiple networks, but just to connect multiple endpoints to a single network (the most common use case for bridges used by libvirt).


If the guest has interfaces connecting to a libvirt-managed virtual network, edit the definition for the network, and restart it. 

For example, edit the default network with the following command:

# virsh net-edit default

Add the following attributes to the <bridge> element:

<bridge name='virbr0' delay='0' stp='on'/>

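After restarting the network (virsh net-destroy default followed by virsh net-start default), you can confirm the new settings on the bridge with a quick check, assuming the default virbr0 bridge:

# brctl showstp virbr0 | grep -i 'forward delay'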


If this problem is still not resolved, the issue may be due to a conflict between firewalld and the default libvirt network.

To fix this, stop firewalld with the service firewalld stop command, then restart libvirt with the service libvirtd restart command.

Read More



No guest machines present libvirtd - Fix it now

This article covers how to troubleshoot and fix the "no guest machines present" libvirtd issue for our customers.

The virsh program is the main interface for managing guest domains. The program can be used to create, pause, and shut down domains.

It can also be used to list current domains. Libvirt is a C toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes).

The libvirt daemon is successfully started, but no guest virtual machines appear to be present.


There are various possible causes of this problem.

Performing these tests will help to determine the cause of this situation:

1. Verify KVM kernel modules

Verify that KVM kernel modules are inserted in the kernel:

$ lsmod | grep kvm

On an Intel machine, this should list the kvm and kvm_intel modules. If you are using an AMD machine, verify that the kvm_amd kernel module is inserted instead, using the similar command lsmod | grep kvm_amd in the root shell.

If the modules are not present, insert them using the modprobe <modulename> command.

Note: Although it is uncommon, KVM virtualization support may be compiled into the kernel. In this case, modules are not needed.

2. Verify virtualization extensions

Verify that virtualization extensions are supported and enabled on the host:

# egrep "(vmx|svm)" /proc/cpuinfo

If this command returns no output, enable virtualization extensions in your hardware's firmware configuration within the BIOS setup.

3. Verify client URI configuration

Verify that the URI of the client is configured as desired:

# virsh uri
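For a local KVM host this normally prints qemu:///system. If it prints something else (for example a Xen or remote URI), you can point virsh at the local hypervisor explicitly:

# virsh --connect qemu:///system list --all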


How to fix the "no guest machines present" libvirtd error:

After performing these tests, use the following command to view a list of guest virtual machines:

# virsh list --all
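Note that plain virsh list shows only running domains; guests that are defined but shut off appear only with --all, which by itself explains many "missing guest" reports.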

Read More



Install OpenBSD As Guest Operating System using KVM virt-install

This article covers how to install OpenBSD as a guest operating system using KVM. OpenBSD is well known for its focus on security features such as memory protection, cryptography, and randomization, much of it enabled in the default base installation.
virt-install provides the option of supporting graphics for the guest operating system installation. This is achieved through use of QEMU.

virt-install is a command line tool for creating new KVM, Xen, or Linux container guests using the libvirt hypervisor management library.
The virt-install tool provides a number of options that can be passed on the command line.

To see a complete list of options run the following command:
# virt-install --help
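As a concrete sketch, a minimal OpenBSD guest created from an installation ISO might look like the following; the guest name, resources, and the ISO path are assumptions you should adapt:

# virt-install --name openbsd --memory 1024 --vcpus 1 \
    --disk size=10 \
    --cdrom /var/lib/libvirt/images/install.iso \
    --os-variant generic --graphics vnc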

Read More



Troubleshoot KVM Virtualization Problem

This article covers how to troubleshoot KVM virtualization problem.


Log file locations and tools used to track down KVM problems are:
1. $HOME/.virtinst/virt-install.log – virt-install tool log file.
2. $HOME/.virt-manager/virt-manager.log – virt-manager tool log file.
3. /var/log/libvirt/qemu/ – Log files for each running virtual machine. If centos is the virtual machine name, then the log file is /var/log/libvirt/qemu/centos.log.

You can use grep and other Linux tools to view these files:
# tail -f /var/log/libvirt/qemu/freebsd.log
# grep something $HOME/.virtinst/virt-install.log
$ sudo tail -f /var/log/libvirt/qemu/openbsd.log
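On systemd-based hosts, the libvirt daemon's own messages are also worth checking when a guest fails to start, for example:

$ sudo journalctl -u libvirtd --since today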

Hyper-V backups can fail for any number of reasons, but there are some things to look for when backups don’t work the way that they are supposed to.
When backups fail, the first thing that you should do is to check the backup logs in an effort to learn more about the problem.
Specifically, you need to determine if the problem is confined to a particular host, a particular virtual machine, or perhaps related to the backup target itself.

Read More



KVM live migration to resolve performance issues

This article covers how to use KVM live migration to achieve load balancing which is important in a server virtualization system to maintain server performance.
Migration enables an administrator to move a virtual machine instance from one compute host to another. A typical scenario is planned maintenance on the source host, but migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.

Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux.
Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).

Live migration of virtual machines is necessary when you need to achieve high-availability setups and load distribution.
The KVM hypervisor has been a powerful alternative to Xen and VMware in the Linux world for several years.
To make the virtualization solution suitable for enterprise use, the developers are continually integrating new and useful features.
An example of this is live migration of virtual machines (VMs).

Live migration means that the instance keeps running throughout the migration.
This is useful when it is not possible or desirable to stop the application running on the instance.
Live migrations can be classified further by the way they treat instance storage:
1. Shared storage-based live migration. The instance has ephemeral disks that are located on storage shared between the source and destination hosts.
2. Block live migration, or simply block migration. The instance has ephemeral disks that are not shared between the source and destination hosts. Block migration is incompatible with read-only devices such as CD-ROMs and Configuration Drive (config_drive).
3. Volume-backed live migration. Instances use volumes rather than ephemeral disks.

Block live migration requires copying disks from the source to the destination host.
It takes more time and puts more load on the network. Shared-storage and volume-backed live migration does not copy disks.
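With plain libvirt/KVM (outside OpenStack), a live migration is typically driven with virsh. A minimal sketch, assuming shared storage, a guest named guest1, and SSH access to the destination host dest-host:

# virsh migrate --live --persistent --verbose guest1 qemu+ssh://dest-host/system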

Read More



KVM hypervisor - How it Works

This article will guide you on how the KVM hypervisor works. Basically, KVM is a type-2 hypervisor (installed on top of another OS, in this case some flavor of Linux). 

It runs, however, like a type-1 hypervisor and can provide the power and functionality of even the most complex and powerful type-1 hypervisors, depending on the tools that are used with the KVM package itself.

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V).

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images.
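A quick way to confirm that the host exposes these extensions and that the KVM modules are loaded (commands only, no changes made):

# egrep -c '(vmx|svm)' /proc/cpuinfo
# lsmod | grep kvm
# ls -l /dev/kvm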

1. The main difference between Type 1 vs. Type 2 hypervisors is that Type 1 runs on bare metal and Type 2 runs on top of an operating system. 

2. Each hypervisor type also has its own pros and cons and specific use cases.

3. Xen is better than KVM in terms of virtual storage support, high availability, enhanced security, virtual network support, power management, fault tolerance, real-time support, and virtual CPU scalability.

4. A Type 1 hypervisor takes the place of the host operating system. 

5. Type 1 hypervisors are highly efficient because they have direct access to physical hardware. 

6. This also increases their security, because there is nothing in between them and the CPU that an attacker could compromise.

Read More



Manage KVM guest virtual machines via virsh commands

This article will guide you on how to manage KVM guest virtual machines using virsh commands.

virsh is a command line utility for managing guest domains/virtual machines and the hypervisor.

On Linux, you can list KVM guests using the virsh command.

The main command interface used to control both the hypervisor and guest domains is the virsh command. virsh provides a generic and stable interface for controlling virtualized operating systems.

Many virsh commands act asynchronously. This means that the system prompt can return before the operation has completed.

KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).

To log into a VM with virsh, simply:

1. Open a shell prompt or login using ssh. 

2. Log in to a host server called server1. 

3. Use the virsh console command to log in to a running VM called 'centos7'; type: virsh console centos7.

The virsh destroy command initiates an immediate ungraceful shutdown and stops the specified guest virtual machine. 

Using virsh destroy can corrupt guest virtual machine file systems. 

Use the virsh destroy command only when the guest virtual machine is unresponsive.
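For day-to-day lifecycle management, the most common virsh commands look like this (centos7 is just the example guest name used above):

# virsh list --all
# virsh start centos7
# virsh shutdown centos7
# virsh destroy centos7

virsh shutdown asks the guest to power off gracefully via ACPI, while virsh destroy forces it off, as described above.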

Read More



Cloning existing KVM virtual machine images on Linux

This article will guide you on how to use the virt-clone command, which provides a number of options to clone a KVM VM. You can use virt-sysprep if you need to clone the VM and reset or unconfigure anything inside the guest OS.
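A minimal sketch of the workflow (centos7 and centos7-clone are placeholder domain names; the source guest should be shut down first):

# virsh shutdown centos7
# virt-clone --original centos7 --name centos7-clone --auto-clone
# sudo virt-sysprep -d centos7-clone
# virsh start centos7-clone

The --auto-clone option lets virt-clone pick new paths for the cloned disk images automatically.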

Read More



Selecting the number of vCPUs and Cores for a Virtual Machine

This article will guide you when selecting the number of vCPUs and cores for a virtual machine, which depends on the operating system used and some other factors.

Basically, when selecting the number of vCPUs and cores for a virtual machine, keep in mind that for a guest to be able to use all of the CPU resources allocated to it, it may need to see them as, for example, one 8-core processor, 2 vCPUs with 4 cores each, or 1 vCPU with 4 cores in two threads, rather than 8 single-core vCPUs, because some guest operating systems limit the number of CPU sockets they will use.
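In a libvirt/KVM guest definition, this vCPU-to-socket/core mapping is expressed with the <topology> element. A minimal sketch presenting 8 vCPUs as a single 8-core socket:

<vcpu placement='static'>8</vcpu>
<cpu>
  <topology sockets='1' cores='8' threads='1'/>
</cpu>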

Read More



Failed to initialize a valid firewall backend

This article will guide you on how to fix the error 'failed to initialize a valid firewall backend', which is triggered in the process of creating virtual machines on KVM using libvirt.
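libvirt raises this error when none of its firewall backends (firewalld, or the iptables/ebtables tools) can be found on the host. A common resolution, sketched here under the assumption of a CentOS/RHEL host where the tools are simply missing, is to install them and restart libvirtd:

$ sudo yum -y install ebtables iptables
$ sudo systemctl restart libvirtd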

Read More



KVM Installation on CentOS 7

This article will guide you through the process of installing kernel-based virtual machine (KVM) on your CentOS 7 machine.
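As a quick preview of what the installation involves (a sketch of the usual CentOS 7 steps; package selection can vary with your needs):

$ sudo yum -y install qemu-kvm libvirt virt-install bridge-utils
$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd
$ lsmod | grep kvm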

Read More