nbsvm
=====

Create, start and interact with VMs from the command line without the bullshit.
Requires zfs or lvm, kvm and sudo. Socat and a vnc client are needed
to interact with (and stop) a running VM.



Install
-------

Place nbsvm in your $PATH, e.g. /usr/local/bin.
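
One way to do it, assuming you are in a checkout of this repository:

  $ sudo install -m 0755 nbsvm /usr/local/bin/nbsvm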

Everything else is documentation.



Basic Usage
-----------


nbsvm doesn't manage VMs in any real sense of the word. It is a helper which
starts KVM in a consistent manner, attaching the hard drive images which
already exist in /dev and whose names match that of the VM being started. It
also includes utilities to create devices which will match a named VM, either
blank or as a copy of an existing device or file, and to locate the unix
socket files attached to a running VM.

KVM is started with a fairly conservative set of options (sketched roughly below):
  * 512 MB RAM
  * A VNC display, serial port and qemu monitor port.
  * USB and a tablet pointing device.
  * A network device connected to a tap.
  * Any drive images whose path prefix matches the VM's name.
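
For a VM named "foo" with a single lvm-backed image, the resulting command is
roughly the following. This is only an illustration assembled from the
defaults described under "Magic usage" below, not nbsvm's literal invocation;
the serial.sock and monitor.sock names are placeholders and $drawer is taken
from the $display default:

  kvm -m 512 \
      -display vnc=unix:$drawer/vnc.sock \
      -usb -usbdevice tablet \
      -device virtio-net-pci,netdev=net \
      -netdev tap,id=net,ifname=tap-foo,script=no,downscript=no \
      -drive file=/dev/vg0/nbsvm-foo-0,if=virtio \
      -serial unix:$drawer/serial.sock,server,nowait \
      -monitor unix:$drawer/monitor.sock,server,nowait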

The start command can take arguments which are appended to the kvm command
verbatim. nbsvm commands can be chained, so that:

  $ nbsvm foo newimg newimg newimg start -cdrom install.iso

will create 3 new images for foo, then start it with a cdrom attached. Further
nbsvm commands can be included after the start command by terminating the kvm
arguments with "--":

  $ nbsvm foo newimg start -cdrom installer -- vnc


Create a hard drive image named for the foo VM.

$ nbsvm foo newimg


Start a VM foo:

$ nbsvm foo start


Stop it:

$ nbsvm foo stop


Stop it nicely:

$ nbsvm foo shutdown



Connect to a running VM:

  Serial port:

  $ nbsvm foo console

  Qemu monitor port:

  $ nbsvm foo monitor

  VNC display:

  $ vnc_port=12345 nbsvm foo vnc

    ** vnc_port must be an unused tcp port.



Clones and snapshots:

The cloning/snapshot API is not very good and is liable to change when
I work out what it should be. LVM or ZFS is selected by setting the
$store variable (default: lvm). Each has a different concept of
snapshots and clones, which nbsvm follows:

  LVM (store=lvm):

    Create a new image as a COW snapshot of the existing device "master-image":

    $ gold=master-image nbsvm foo snapshot

    Create a normal new device and copy "slow-master" using dd:

    $ gold=slow-master nbsvm foo clone

    dd reports its progress when sent SIGUSR1, which is handy if you have the
    PID of the dd process nbsvm spawns:

    $ while kill -USR1 $DD_PID; do sleep 2; done


  ZFS (store=zfs):

    Create a snapshot "foo-0@backup" of foo's first image "foo-0":

    $ snapshot_name=backup nbsvm foo snapshot

    Create a new image for foo as a clone of "bar@master":

    $ snapshot_vm=bar snapshot_name=master nbsvm foo clone

    I don't really like the ZFS implementation and it is subject to change at
    my whim. Suggestions welcome.


nbsvm doesn't make any attempt to rename or delete images, snapshots or clones.



Magic usage
-----------

Magic is accessed using environment variables, which override the defaults used
to launch kvm and locate/name devices. There is a full list of these available
in the Variables file. Some common uses are listed here.

The hypervisor binary, amount of memory, display and usb devices are overridden
by the $hypervisor, $mem, $display and $usb variables, respectively (an example
follows this list):

  hypervisor: Default: "kvm".

  mem:        Default: "512".

  display:    Default: "-display vnc=unix:$drawer/vnc.sock".

  usb:        Default: "-usb -usbdevice tablet".
    ** Disable usb entirely with "usb=' '". Leaving it unset, or set to "",
       will not work.
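
For example, to start foo with more memory and usb disabled (values
illustrative):

  $ mem=2048 usb=' ' nbsvm foo start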


Other device defaults are a bit more involved:


Storage:

Variable "drives". Constructed on the fly by looking in /dev/$vg or
/dev/zvol/$zpool for any files matching the pattern nbsvm-$vm-* (or
nbsvm-$class-$vm-*) and including them in a list of "-drive" arguments.

"virtio" can be changed by setting $drive_if.

* Snapshots and clones:

newimage and clone (and with the lvm backend, snapshot) create a new hard drive
image for a named VM. The new device is named nbsvm-$vm-$id (if $class is set,
nbsvm-$class-$vm-$id) where $id is obtained by counting the existing devices,
with the first device numbered 0.

  LVM devices are created in the volume group named by $vg, or "vg0".
  ZFS devices are created under $zpool, or "tank".

newimage, and snapshot on lvm, create drives sized $drive_size (default: 10G).

Mostly, only $gold (default: "gold") should be set to determine which device to
clone.

  LVM will make a clone or snapshot of "nbsvm-$gold" or "nbsvm-$class-$gold".
  ZFS snapshot will create a snapshot named "nbsvm-$vm-0@snapshot" or
  "nbsvm-$class-$vm-0@snapshot").
  ZFS clone will create a clone of "nbsvm@$gold" or "nbsvm-$class@$gold".

ZFS support is rather messy and will hopefully be improved soon.

  The device can be changed with these variables (an example follows):

  source:
    The full VG and LV name, or ZFS volume name, to clone or snapshot.

    Default: $vg/$snapshot_source or $zpool/$snapshot_source

    $snapshot_source depends on the storage backend ($store) and command:

    LVM snapshot default: $vg/nbsvm-$gold or $vg/nbsvm-$class-$gold
    LVM clone uses the same scheme as LVM snapshot.
    ZFS snapshot default: $zpool/nbsvm-$gold or $zpool/nbsvm-$class-$gold
    ZFS clone default:    $zpool/nbsvm@$gold or $zpool/nbsvm-$class@$gold
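
    For example, to clone foo's next image from a specific logical volume
    rather than the default (the LV name is a placeholder):

    $ source=vg0/nbsvm-debian-gold nbsvm foo clone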


Networking:

The default network setup is to create a single virtual ethernet device
attached to a tap device named "tap-$vm" on the host using this concoction:

  -device $net_if,netdev=net -netdev tap,id=net,ifname=tap-$vm,script=no,downscript=no

The default $net_if is "virtio-net-pci". The guest's MAC address can be set
with $mac, which appends ",mac=$mac" to the -device option. The entire thing
can be overridden by setting $network.


If $bridge is set (it has no default) then the "tap-$vm" device on the host is
added to the bridge by running "brctl addif $bridge tap-$vm". If the network
settings are overridden by setting $network, this step is skipped.
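
For example, to attach foo's tap device to an existing bridge and give the
guest a fixed MAC address (the bridge name and address are placeholders):

  $ bridge=br0 mac=52:54:00:12:34:56 nbsvm foo start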


Saved settings and Classes:

Because remembering which options go with which VMs is tedious, nbsvm first
parses files in /etc/nbsvm named after the VM being started. The files are
simply sourced into nbsvm's bash environment. Additional files can be parsed by
setting the $class variable, which also adjusts the names of the images which
are used.

The files are sourced in the following order:

  /etc/nbsvm/default.class
  /etc/nbsvm/$class.class (if $class is set)
  /etc/nbsvm/$vm.vm (or $class-$vm.vm if $class is set)

Finally, $class can be set by creating a link to nbsvm named ${class}vm and
calling that instead of the nbsvm binary directly. If the $class variable is
set, it is used instead of the class name extracted from the command name.
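
For example (install path assumed), calling a "webvm" link sets $class to
"web":

  $ sudo ln -s /usr/local/bin/nbsvm /usr/local/bin/webvm
  $ webvm foo start          # same as: class=web nbsvm foo start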

To set variables in the class or vm files which can still be overridden at
runtime, write them using bash's default-value parameter expansion, like so:

mem=${mem:-2048} # Can be overridden at runtime
net_if=e1000     # Is forced to e1000

The $args variable is also available in the saved settings files. This is an
*array*, not a plain string, of arguments to append to the kvm command. It is
set simply enough:

args=(-cdrom /srv/static-boot.iso -boot d)

A vm file can append to arguments set in the class file[s]:

args+=(-soundhw all)

Arguments passed to the start command are appended to the $args array which is
then used in the kvm command. I don't know how I'd have command-line arguments
replace $args, or even if I'd want to. Patches welcome.
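
Pulling the above together, a hypothetical /etc/nbsvm/web.class and
/etc/nbsvm/web-foo.vm might look like this (all values are examples, not
defaults):

# /etc/nbsvm/web.class
mem=${mem:-2048}              # can still be overridden at runtime
bridge=${bridge:-br0}
args=(-boot order=c)

# /etc/nbsvm/web-foo.vm
mac=52:54:00:00:00:01
args+=(-cdrom /srv/static-boot.iso)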
