[libvirt-users] Issues Connecting to Remote Host Through SSH
by T A
Hello,
I've been unable to connect to a remote host from within my network using
the following connection URI:
virsh/virt-manager -c qemu+ssh://user@host:port/system
I've tried the libssh and libssh2 transports as well. Using ssh just prompts
me for the host password indefinitely; with libssh2 the connection is rejected.
The host computer uses a custom ssh port, which I've added to the URI above.
Neither box has an enabled root user.
Remote box
$ cat /etc/debian_version
9.3
$ ssh -V
OpenSSH_7.4p1 Debian-10+deb9u2, OpenSSL 1.0.2l 25 May 2017
$ virsh --version
3.0.0
Host box
$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
$ ssh -V
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
$ virsh --version
3.2.0
Any help is greatly appreciated.
[libvirt-users] debug kernel
by llilulu
libvirt 3.4.0
centos 7.4
Hi:
I want to use libvirt to create a VM for debugging the Linux kernel. Previously I used the qemu command line (-gdb tcp::1234) directly. Searching the Internet, I found that I can configure the gdb stub (kgdb) through libvirt XML:
<qemu:commandline>
<qemu:arg value='-gdb'/>
<qemu:arg value='tcp::1234'/>
</qemu:commandline>
But the XML-configured setup behaves differently at kernel boot. With the qemu command line, the kernel does not boot until I connect with gdb and issue the continue command; only then does it start booting. With the libvirt XML configuration, the kernel does not stop. Is there a way, using the libvirt XML configuration, to make the kernel wait before booting until I issue continue in gdb?
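One possibility I have not yet verified (a sketch; "mydomain" is a placeholder
name): <qemu:commandline> is only honoured when the qemu XML namespace is
declared on the <domain> element, and starting the domain paused should keep
the guest stopped until gdb connects and issues continue.
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-gdb'/>
    <qemu:arg value='tcp::1234'/>
  </qemu:commandline>
</domain>
# start with the vCPUs paused; the guest should stay stopped until gdb
# connects to :1234 and 'continue' is issued
virsh start mydomain --paused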
Thanks
[libvirt-users] domain xml does not have <target dev='vnetX'/>
by Shashwat shagun
Hi, I'm trying to monitor the bandwidth usage of a KVM VM but couldn't find
<target dev='vnet0'/> in the output of virsh edit domain_name.
All I get is this:
<interface type='network'>
<mac address='52:54:00:a1:05:b6'/>
<source network='vmango'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
How do I determine which vnet device is connected to which domain?
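A couple of commands that look relevant (untested on my side; domain_name is a
placeholder). As far as I understand, the <target dev='vnetX'/> element is only
filled in for a running domain, so it appears in the live XML rather than in
the persistent config that virsh edit shows:
virsh domiflist domain_name                      # interface table incl. target device (vnetX)
virsh dumpxml domain_name | grep "target dev"    # live XML of a running domain includes vnetX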
--
Regards,
Shashwat Shagun
[libvirt-users] Limiting instructions for guest to help with migration to different host
by R
Hello,
I am migrating a suspended x86-64 guest (disk & state) across
different x86-64 hosts that have small differences in the available CPU
instructions. When I try to resume the guest on a different host,
libvirt reports an error like "CPU feature XXX not found" and
fails. My question is: is there a way to limit the instructions that
are used on the "origin" host when creating the guest to avoid this
error? I am pretty sure there is a set of basic instructions that are
available across all hosts which could be used for this purpose.
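What I have in mind is something like the following sketch (the model name
Westmere is only an example; any baseline CPU model that every host can
provide should do, or one computed with virsh cpu-baseline from the <cpu>
elements of each host's virsh capabilities output):
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Westmere</model>   <!-- example model common to all hosts -->
</cpu>
# or let libvirt compute a common denominator from the hosts' CPU descriptions:
virsh cpu-baseline all-host-cpus.xml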
Thank you!
/R
Re: [libvirt-users] VM migration upon shutdown in centos 7
by Michal Privoznik
[Please keep the list CCed]
On 01/10/2018 04:43 PM, Raman Gupta wrote:
>> Does this command alone succeed?
> Yes. I have used this command to migrate VMs successfully, without even
> knowing that spelling has changed.
>
>
>> I don't know enough about systemd but maybe it's not waiting for virsh to
>> finish?
> Yes I also think virsh or libvirtd or related service does not wait
> before /root/vm_migrate.sh
> is called and hence live migration fails.
> If I replace Live Migration with a simple ping to peer node in the
> vPreShutdownHook.service, then ping goes thru successfully thus indicating
> network service was UP at that time.
>
>> Can you try to get any logs to see what is going on actually?
> Jan 8 15:30:58 desktop4 systemd: Stopping Session c1 of user gdm.
> Jan 8 15:30:58 desktop4 gdm: Freeing conversation 'gdm-launch-environment'
> with active job
> Jan 8 15:30:58 desktop4 systemd: Stopped target Sound Card.
> Jan 8 15:30:58 desktop4 systemd: Stopping Sound Card.
> Jan 8 15:30:58 desktop4 systemd: Stopping LVM2 PV scan on device 8:3...
> Jan 8 15:30:58 desktop4 systemd: Removed slice system-getty.slice.
> Jan 8 15:30:58 desktop4 systemd: Stopping system-getty.slice.
> Jan 8 15:30:58 desktop4 systemd: Stopping Authorization Manager...
> Jan 8 15:30:58 desktop4 systemd: Stopping Virtual Machine and Container
> Registration Service...
> Jan 8 15:30:58 desktop4 systemd: Stopping GlusterFS brick processes
> (stopping only)...
> Jan 8 15:30:58 desktop4 systemd: Removed slice
> system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
> Jan 8 15:30:58 desktop4 systemd: Stopping
> system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
> Jan 8 15:30:58 desktop4 systemd: Stopping Availability of block devices...
> Jan 8 15:30:58 desktop4 systemd: Stopping Virtual Machine
> qemu-1-GsmController.
> Jan 8 15:30:58 desktop4 alsactl[837]: alsactl daemon stopped
> Jan 8 15:30:58 desktop4 systemd: Stopping Session 1 of user root.
> Jan 8 15:30:58 desktop4 systemd: Stopping Manage Sound Card State (restore
> and store)...
I don't see your service being called. Anyway, look at the
libvirt-guests.service file. It looks like the following lines cause
systemd to wait for the command to finish:
[Service]
EnvironmentFile=-/etc/sysconfig/libvirt-guests
# Hack just call traditional service until we factor
# out the code
ExecStart=@libexecdir@/libvirt-guests.sh start
ExecStop=@libexecdir@/libvirt-guests.sh stop
Type=oneshot
RemainAfterExit=yes
StandardOutput=journal+console
TimeoutStopSec=0
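Applying the same idea to your unit might look roughly like this (an untested
sketch; the key point is that systemd stops units in the reverse of their start
order, so ordering the hook After= libvirtd and the network should make its
ExecStop run while they are still up):
[Unit]
Description=vPreShutdownHook
DefaultDependencies=no
Requires=libvirtd.service
# ordered after these units, so at shutdown this unit's ExecStop runs before they stop
After=network.target libvirtd.service glusterd.service glusterfsd.service
Before=shutdown.target reboot.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/root/vm_migrate.sh
# do not let systemd kill the migration because of a stop timeout
TimeoutStopSec=0

[Install]
WantedBy=multi-user.target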
>
>
>
> On Wed, Jan 10, 2018 at 8:46 PM, Michal Privoznik <mprivozn@redhat.com>
> wrote:
>
>> On 01/05/2018 12:00 PM, Raman Gupta wrote:
>>> Hi,
>>>
>>> I have CentOS 7, two node system which allows live VM migration between
>>> them. Live migration triggered from virsh is happily happening. I am
>> using
>>> GlusterFS for replicating VM disk files.
>>>
>>> Now I want to automatically do the live migration at the time of
>>> reboot/shutdown/halt of the host node and for this I have written a
>> systemd
>>> service unit [vPreShutdownHook.service] and placed the live migration
>>> command in a migrate script which is invoked from this service unit. The
>>> migrate script is invoked but migration does not happen.
>>> If someone has some idea please help me to migrate VM upon shutdown.
>>>
>>>
>>> ######## vPreShutdownHook.service ###############
>>>
>>> [Unit]
>>> Description=vPreShutdownHook
>>> Requires=network.target
>>> Requires=libvirtd.service
>>> Requires=dbus.service
>>> Requires=glusterd.service
>>> Requires=glusterfsd.service
>>> DefaultDependencies=no
>>> Before=shutdown.target reboot.target
>>>
>>> [Service]
>>> Type=oneshot
>>> RemainAfterExit=true
>>> ExecStart=/bin/true
>>> ExecStop=/root/vm_migrate.sh
>>>
>>> [Install]
>>> WantedBy=multi-user.target
>>>
>>>
>>> ########## Command to migrate ############
>>> /usr/bin/virsh migrate --verbose --p2p --tunneled --live --compressed
>>> --comp-methods "mt" --comp-mt-level 5 --comp-mt-threads 5
>>> --comp-mt-dthreads 5 MY_VM qemu+ssh://root@$node2/system
>>
>> Does this command alone succeed?
>> BTW: unless really needed --live will only make the migration take longer.
>>
>> I don't know enough about systemd but maybe it's not waiting for virsh
>> to finish? Can you try to get any logs to see what is going on actually?
>>
>> Michal
>>
>
Michal
[libvirt-users] Whether libvirt can support a backing chain where all layers are iSCSI network disks
by Meina Li
Hi,
For a backing chain in which every image is an iSCSI network disk, such as
iscsi://ip/iqn../0 (base image) <- iscsi://ip/iqn../1 (active image):
currently the 'qemu-img info --backing-chain' command displays the correct
backing file info, but after starting a guest with the active image of the
chain, the dumpxml output does not include the related <backingStore>
elements.
So, can libvirt support a backing chain in which every layer is an iSCSI
network disk?
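For illustration, the kind of element I would expect to see in the dumpxml is
roughly the following (a hand-written sketch reusing the disks from the test
steps below, not actual libvirt output):
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='iscsi' name='iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.a8d92ebb4ece/0'>
    <host name='10.66.7.27' port='3260'/>
  </source>
  <backingStore type='network' index='1'>
    <format type='qcow2'/>
    <source protocol='iscsi' name='iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.9cba196611e6/0'>
      <host name='10.66.7.27' port='3260'/>
    </source>
    <backingStore/>
  </backingStore>
  <target dev='vdb' bus='virtio'/>
</disk>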
Best Regards
Meina Li
On Wed, Jan 3, 2018 at 6:39 AM, Meina Li <meili@redhat.com> wrote:
> Hi,
>
> I am a libvirt QE. I am testing the new location of the disk auth element
> (as a sub-element of the source element) in backing chain management, and
> I have a question:
>
> When every image in the backing chain is an iSCSI network disk (whether or
> not authentication is used) and the guest is started, the domain XML
> contains only the top-level disk with no backingStore elements in it. The
> test steps are below.
>
> So does this feature actually support a backing chain made entirely of
> network disks, or only a network-authenticated image used as the base of
> a backing chain of file-type disks?
>
> Can you help review it? Thanks very much in advance!
>
> Test step:
> 1. iSCSI server:
> o- iscsi .................................................................. [Targets: 2]
> | o- iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.9cba196611e6 ............... [TPGs: 1]
> | | o- tpg1 ........................................... [gen-acls, tpg-auth, 1-way auth]
> | | o- acls .................................................................... [ACLs: 0]
> | | o- luns .................................................................... [LUNs: 4]
> | | | o- lun0 ........................... [fileio/file1 (/tmp/lun1.img) (default_tg_pt_gp)]
> | | o- portals .............................................................. [Portals: 1]
> | | o- 0.0.0.0:3260 ................................................................. [OK]
> | o- iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.a8d92ebb4ece ............... [TPGs: 1]
> | o- tpg1 ...................................................... [gen-acls, no-auth]
> | o- acls .................................................................... [ACLs: 0]
> | o- luns .................................................................... [LUNs: 3]
> | | o- lun0 ........................... [fileio/file3 (/tmp/lun3.img) (default_tg_pt_gp)]
> | o- portals .............................................................. [Portals: 1]
> | o- 0.0.0.0:3260 ................................................................. [OK]
>
> 2. Set iscsi secret.
> # cat iscsi-secret.xml
> <secret ephemeral='no' private='yes'>
> <description>iSCSI secret</description>
> <usage type='iscsi'>
> <target>libvirtiscsi</target>
> </usage>
> </secret>
> # virsh secret-define iscsi-secret.xml
> Secret 47bd2f3e-023f-44ba-85a3-e8fa7f16ff23 created
> # MYSECRET=`printf %s "redhat" | base64`
> # virsh secret-set-value 47bd2f3e-023f-44ba-85a3-e8fa7f16ff23 $MYSECRET
> Secret value set
>
> 3. Create backing chain.
> # qemu-img create -f qcow2 -b iscsi://redhat:redhat@10.66.7.27:3260/iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.9cba196611e6/0 iscsi://10.66.7.27:3260/iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.a8d92ebb4ece/0 -o backing_fmt=qcow2
>
> 4. Start guest.
> # qemu-img info iscsi://10.66.7.27:3260/iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.a8d92ebb4ece/0 --backing-chain
> image: json:{"driver": "qcow2", "file": {"lun": "0", "portal": "10.66.7.27:3260", "driver": "iscsi", "transport": "tcp", "target": "iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.a8d92ebb4ece"}}
> file format: qcow2
> virtual size: 5.0G (5368709120 bytes)
> disk size: unavailable
> cluster_size: 65536
> backing file: iscsi://redhat:redhat@10.66.7.27:3260/iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.9cba196611e6/0
> backing file format: qcow2
> Format specific information:
>     compat: 1.1
>     lazy refcounts: false
>     refcount bits: 16
>     corrupt: false
>
> image: json:{"driver": "qcow2", "file": {"lun": "0", "portal": "10.66.7.27:3260", "driver": "iscsi", "transport": "tcp", "user": "redhat", "password": "redhat", "target": "iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.9cba196611e6"}}
> file format: qcow2
> virtual size: 5.0G (5368709120 bytes)
> disk size: unavailable
> cluster_size: 65536
> Format specific information:
>     compat: 1.1
>     lazy refcounts: false
>     refcount bits: 16
>     corrupt: false
>
> # virsh dumpxml rhel7 | grep disk -A 9
> ...
> <disk type='network' device='disk'>
>   <driver name='qemu' type='qcow2'/>
>   <source protocol='iscsi' name='iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.a8d92ebb4ece/0'>
>     <host name='10.66.7.27' port='3260'/>
>   </source>
>   <target dev='vdb' bus='virtio'/>
>   <alias name='virtio-disk1'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
> </disk>
> ...
>
>
> Best Regards
> Meina Li
>
[libvirt-users] Proxmox to libvirt conversion?
by Andre Goree
Hello all. I'm wondering if anyone on this list has come across the
need to convert a Proxmox VM to a libvirt VM. Given that they use the
same underlying technology (qemu/KVM), I'm pretty sure it's possible and
just a matter of hashing out the differences in configuration (i.e.
creating the necessary libvirt xml for the Proxmox disk), but figured
I'd ask here first in case anyone has dealt with this (probably will
find and ask on the Proxmox mailing list as well).
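One starting point I am considering (untested, assuming both tools behave as
documented): Proxmox's qm showcmd prints the raw KVM command line for a VM,
and virsh domxml-from-native can turn a qemu command line into a first-draft
libvirt domain XML that can then be cleaned up by hand.
# on the Proxmox node (100 is a placeholder VMID)
qm showcmd 100 > vm100.args
# on the libvirt host: convert the command line to domain XML, then adjust
# disk paths, network bridges, etc. before defining it
virsh domxml-from-native qemu-argv vm100.args > vm100.xml
virsh define vm100.xml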
Thanks in advance!
--
Andre Goree
-=-=-=-=-=-
Email - andre at drenet.net
Website - http://blog.drenet.net
PGP key - http://www.drenet.net/pubkey.html
-=-=-=-=-=-
[libvirt-users] VM migration upon shutdown in centos 7
by Raman Gupta
Hi,
I have a two-node CentOS 7 setup that allows live VM migration between
the nodes. Live migration triggered from virsh works happily. I am using
GlusterFS for replicating the VM disk files.
Now I want to automatically do the live migration at the time of
reboot/shutdown/halt of the host node and for this I have written a systemd
service unit [vPreShutdownHook.service] and placed the live migration
command in a migrate script which is invoked from this service unit. The
migrate script is invoked, but the migration does not happen.
If anyone has any ideas, please help me get the VMs migrated upon shutdown.
######## vPreShutdownHook.service ###############
[Unit]
Description=vPreShutdownHook
Requires=network.target
Requires=libvirtd.service
Requires=dbus.service
Requires=glusterd.service
Requires=glusterfsd.service
DefaultDependencies=no
Before=shutdown.target reboot.target
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/root/vm_migrate.sh
[Install]
WantedBy=multi-user.target
########## Command to migrate ############
/usr/bin/virsh migrate --verbose --p2p --tunneled --live --compressed
--comp-methods "mt" --comp-mt-level 5 --comp-mt-threads 5
--comp-mt-dthreads 5 MY_VM qemu+ssh://root@$node2/system
Thanks,