virt-install iscsi direct - Target not found
by kgore4 une
When I try to use an iscsi direct pool in a "--disk" clause for
virt-install, I get error "iSCSI: Failed to connect to LUN : Failed to log
in to target. Status: Target not found(515)". I've seen that sort of error
before when the initiator name isn't used; the SAN returns different LUNs
depending on the initiator.
I've run out of ideas on what to try next. Any advice welcome. I've
included what I thought was relevant below.
klint.
The --disk parameter to virt-install is (it's part of a script, but the
variables are correct when executed):
[code]
--disk
vol=${poolName}/unit:0:0:${vLun1},xpath.set="./source/initiator/iqn/@name='iqn.2024-11.localdomain.agbu.agbuvh1:${vName}'"
\
[/code]
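With the variables expanded, that clause resolves to roughly the following
(values taken from the debug output further down):
[code]
--disk
vol=agbu-ldap1/unit:0:0:3,xpath.set="./source/initiator/iqn/@name='iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap'"
\
[/code]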
I added the xpath.set because I noticed the initiator wasn't in the debug
output of the disk definition; it didn't work without it either.
The iscsi-direct pool is defined and appears to work: it's active and
vol-list shows the correct LUNs.
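For example:
[code]
virsh vol-list agbu-ldap1
[/code]
lists unit:0:0:3 with the ip-10.1.4.3:3260-iscsi-... path that also appears
in the debug output below.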
Using --debug on virt-install, I can see the drive is detected early in the
process, since the output includes the drive's size.
[code]
[Wed, 19 Feb 2025 16:43:33 virt-install 2174805] DEBUG (cli:3554) Parsed
--disk volume as: pool=agbu-ldap1 vol=unit:0:0:3
[Wed, 19 Feb 2025 16:43:33 virt-install 2174805] DEBUG (disk:648)
disk.set_vol_object: volxml=
<volume type='network'>
  <name>unit:0:0:3</name>
  <key>ip-10.1.4.3:3260-iscsi-iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3</key>
  <capacity unit='bytes'>49996103168</capacity>
  <allocation unit='bytes'>49996103168</allocation>
  <target>
    <path>ip-10.1.4.3:3260-iscsi-iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3</path>
  </target>
</volume>
[Wed, 19 Feb 2025 16:43:33 virt-install 2174805] DEBUG (disk:650)
disk.set_vol_object: poolxml=
<pool type='iscsi-direct'>
  <name>agbu-ldap1</name>
  <uuid>1c4ae810-9bae-433c-a92f-7d3501b6ba80</uuid>
  <capacity unit='bytes'>49996103168</capacity>
  <allocation unit='bytes'>49996103168</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='10.1.4.3'/>
    <device path='iqn.1992-09.com.seagate:01.array.00c0fff6c846'/>
    <initiator>
      <iqn name='iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap'/>
    </initiator>
  </source>
</pool>
[/code]
The generated initial_xml for the disk looks like
[code]
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="iscsi" name="iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3">
    <host name="10.1.4.3"/>
    <initiator>
      <iqn name="iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap"/>
    </initiator>
  </source>
  <target dev="vda" bus="virtio"/>
</disk>
[/code]
The generated final_xml looks like
[code]
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="iscsi" name="iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3">
    <host name="10.1.4.3"/>
    <initiator>
      <iqn name="iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap"/>
    </initiator>
  </source>
  <target dev="vda" bus="virtio"/>
</disk>
[/code]
The full error is
[code]
[Wed, 19 Feb 2025 16:43:35 virt-install 2174805] DEBUG (cli:256) File
"/usr/bin/virt-install", line 8, in <module>
virtinstall.runcli()
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 1233, in
runcli
sys.exit(main())
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 1226, in main
start_install(guest, installer, options)
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 974, in
start_install
fail(e, do_exit=False)
File "/usr/share/virt-manager/virtinst/cli.py", line 256, in fail
log.debug("".join(traceback.format_stack()))
[Wed, 19 Feb 2025 16:43:35 virt-install 2174805] ERROR (cli:257) internal
error: process exited while connecting to monitor:
2025-02-19T05:43:35.075695Z qemu-system-x86_64: -blockdev
{"driver":"iscsi","portal":"10.1.4.3:3260","target":"iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3","lun":0,"transport":"tcp","initiator-name":"iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}:
iSCSI: Failed to connect to LUN : Failed to log in to target. Status:
Target not found(515)
[Wed, 19 Feb 2025 16:43:35 virt-install 2174805] DEBUG (cli:259)
Traceback (most recent call last):
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 954, in
start_install
domain = installer.start_install(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/share/virt-manager/virtinst/install/installer.py", line 695,
in start_install
domain = self._create_guest(
^^^^^^^^^^^^^^^^^^^
File "/usr/share/virt-manager/virtinst/install/installer.py", line 637,
in _create_guest
domain = self.conn.createXML(initial_xml or final_xml, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/libvirt.py", line 4481, in createXML
raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: internal error: process exited while connecting to
monitor: 2025-02-19T05:43:35.075695Z qemu-system-x86_64: -blockdev
{"driver":"iscsi","portal":"10.1.4.3:3260","target":"iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3","lun":0,"transport":"tcp","initiator-name":"iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}:
iSCSI: Failed to connect to LUN : Failed to log in to target. Status:
Target not found(515)
[/code]
Things that could affect the answer:
* What I'm calling a SAN is a Seagate Exos X iSCSI unit
* The virtual host is Debian 12
* virsh version 9.0.0
* iscsiadm version 2.1.8
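For completeness, one host-side sanity check would be a plain sendtargets
discovery (note it uses the host's initiator name from
/etc/iscsi/initiatorname.iscsi rather than the pool's <initiator> IQN, so it
may not show the same LUNs):
[code]
iscsiadm -m discovery -t sendtargets -p 10.1.4.3:3260
[/code]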
libvirt KVM virtual NIC as ens160 for running a virtual appliance
designed for VMware
by Sascha Frey
Hi,
I want to run a virtual appliance which was designed for VMware on libvirt Linux/KVM.
Unfortunately, they hardcoded the kernel NIC name to ens160 inside that image.
I can create a libvirt XML which creates a NIC recognised as ensXX, but not ens160, because libvirt doesn’t allow PCI slots over 0x1f.
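What I mean is something like the following (just a sketch), where the XX part
follows the PCI slot number, so slot 0x04 comes up as ens4:
  <interface type='network'>
    ...
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </interface>
but there is no accepted slot value that would give ens160.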
Is there any possible way to create a virtual NIC ens160 without modifying the image?
Thanks.
Re: Capture pcap for each VM
by Laine Stump
(added users(a)lists.libvirt.org back into the Cc so that anyone else
asking the same question in the future will benefit from the answer)
On 3/28/25 8:12 AM, James Liu wrote:
> Hi Laine,
>
> I tried your first solution but somehow it didn't work (the network is
> NAT). I edited the XML in the virt-manager GUI. Whenever I added the
> following options and clicked "Apply", virt-manager would revert (remove)
> the "alias" tag; consequently, the qemu command line couldn't find the
> device name ('foo' in this example).
Sorry, I had meant to point this out but somehow forgot - user-specified
aliases must begin with "ua-", otherwise they will be ignored. So
instead of using "<alias name='foo'/>", use "<alias name='ua-foo'/>"
(and then in the <qemu:arg value> use "netdev=hostua-foo").
Also note that your guest should be shut down while you're making these
changes.
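Putting that together, the relevant parts would look something like this
(untested sketch, based on your XML below):
  <interface type="network">
    <mac address="52:54:00:3a:bf:52"/>
    <source network="default"/>
    <model type="e1000e"/>
    <alias name="ua-foo"/>
    <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </interface>

  <qemu:commandline>
    <qemu:arg value="-object"/>
    <qemu:arg value="filter-dump,id=f1,netdev=hostua-foo,file=/tmp/xyzzy.pcap"/>
  </qemu:commandline>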
>
> Appreciate your suggestions.
>
> <interface type="network">
> <mac address="52:54:00:3a:bf:52"/>
> <source network="default"/>
> <model type="e1000e"/>
> <alias name="*foo*"/> # virt-manager will always remove
> this tag
> <address type="pci" domain="0x0000" bus="0x01" slot="0x00"
> function="0x0"/>
> </interface>
>
> <qemu:commandline>
> <qemu:arg value="-object"/>
> <qemu:arg value="filter-dump,id=f1,netdev=*foo*,file=/tmp/xyzzy.pcap"/>
> </qemu:commandline>
Clarifying isolated network rules in nftables
by Alexey Kashavkin
Hi,
I’m trying to understand how the firewall filtering works for an isolated network in libvirt v11.1.0. When I start the network I can see the following rules in nftables:
table ip libvirt_network {
        chain forward {
                type filter hook forward priority filter; policy accept;
                counter packets 0 bytes 0 jump guest_cross
                counter packets 0 bytes 0 jump guest_input
                counter packets 0 bytes 0 jump guest_output
        }
        chain guest_output {
                iif "virbr3" counter packets 0 bytes 0 reject
        }
        chain guest_input {
                oif "virbr3" counter packets 0 bytes 0 reject
        }
        chain guest_cross {
                iif "virbr3" oif "virbr3" counter packets 0 bytes 0 accept
        }
        chain guest_nat {
                type nat hook postrouting priority srcnat; policy accept;
        }
}
But when I ping from one VM to another on the same isolated network, I don't see the counters increase in any of the chains.
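(The sort of check I mean is re-running "nft list table ip libvirt_network", or e.g.
  watch -n1 'nft list chain ip libvirt_network guest_cross'
while the ping is running; the counters stay at zero.)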
In the libvirt code, I found a comment in src/network/network_nftables.c:
/**
* nftablesAddForwardAllowCross:
*
* Add a rule to @fw to allow traffic to go across @iface (the virtual
* network's bridge) from one port to another. This allows all traffic
* between guests on the same virtual network.
*/
But it seems that these rules have no effect and are not needed: if I delete the table or some of its chains, nothing changes, and the VMs still have connectivity with each other on this network.
What are these rules for?
Capture pcap for each VM
by icefrog1950@gmail.com
Hi,
Is it possible to capture pcaps for each VM individually?
QEMU supports the command-line option '-object filter-dump,file=test.pcap'. I'm not sure whether libvirt supports this feature, or whether there are better ways to solve this.
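(For context, the full form of that option appears to need an id and a netdev
reference, something like '-object filter-dump,id=f1,netdev=<netdev-id>,file=test.pcap',
where the netdev id has to match whatever is generated for the VM's NIC.)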
Many thanks.
Running DHCP server inside of the Guest VM instead of the host.
by gameplayer2019pl@tutamail.com
Hello,
I've recently tried to run a Kea DHCP server inside a Debian VM, with the following host network configuration attached to that VM:
```
<network>
  <name>netasn-dhcpv6</name>
  <bridge name="netasn-dhcpv6" stp="on" delay="0"/>
  <mtu size="1500"/>
  <mac address="XX:XX:XX:XX:XX:XX"/>
  <dns enable="no"/>
  <ip family="ipv6" address="2a14:7581:feb::1" prefix="64">
  </ip>
</network>
```
But whenever I try to run dhclient on another Debian VM attached to the same network, I can't obtain an IPv6 lease from Kea DHCP.
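(The client side is nothing special, roughly "dhclient -6 -v <iface>" inside the guest, with the interface name depending on the VM.)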
The host is also running Debian 12.
Is there any way to make DHCP work from the guest VM instead of using the built-in libvirt DHCP?
Integrating Open Virtual Network with Libvirt-Provisioned VMs
by odukoyaonline@gmail.com
I am a little confused about integrating OVN-defined networks with the VMs I have provisioned with libvirt (I tried with OVS alone and it worked fine). I want to ask if there are resources around this; if not, I would appreciate input from anyone who has prior experience with it, no matter how little.
General use of zstd instead of zlib compression
by Michael Niehren
Hi,
currently I use qcow2 images with zlib compression on all VMs. When I do a backup, the backup image
is compressed with zstd level 3 to shrink it for transfer over not-so-fast internet connections.
So why not use zstd compression directly on the images? Are there any reasons not to do that?
As I always use virt-manager for administration, I patched qemu (v9.2.2) to create zstd-compressed images
by default (only one change, in line 3525). Newly created images then have compression type zstd, which works (qemu-img info).
I see one unusual thing. If I do a qemu-img convert with compression_type=zstd, the converted image
is much smaller than the original file, while "qemu-img info" shows compression type zstd for both. Do they use
different compression levels, maybe?
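For reference, the kind of convert I mean is along these lines (file names are placeholders):
  qemu-img convert -O qcow2 -c -o compression_type=zstd original.qcow2 converted.qcow2
  qemu-img info converted.qcow2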
If I now do a virsh backup-begin <domain>, the backup image also ends up bigger than the original, while
showing zstd as the compression type (qemu-img info). If I convert it with a similar command as above, both converted images have
nearly the same size. Even if I copy the smaller converted image over the original and boot the VM from the smaller
image, the backup image (after backup-begin) is bigger.
So I am confused. Are there any explanations for the different image sizes, or what's going on here?
best regards
Michael
live migration of SR-IOV vm
by Paul B. Henson
I have a vm using an sr-iov NIC that I'm testing live migration on (Debian
12, OS packages).
Per the documentation, I have the sr-iov link set as transient with a
pointer to the persistent virtio link:
<interface type='network'>
  <mac address='52:54:00:a1:e0:38'/>
  <source network='sr-iov-intel-10G-1'/>
  <vlan>
    <tag id='400'/>
  </vlan>
  <model type='virtio'/>
  <teaming type='transient' persistent='ua-sr-iov-backup'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
and a persistent virtio link, down by default:
<interface type='direct'>
  <mac address='52:54:00:a1:e0:38'/>
  <source dev='eno5np0.400' mode='bridge'/>
  <model type='virtio'/>
  <teaming type='persistent'/>
  <link state='down'/>
  <alias name='ua-sr-iov-backup'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
The failover driver finds this in the vm:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
10: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000
and the network works fine. However, during migration, the sr-iov
interface is removed, but the link on the virtio interface is *not*
brought up:
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
resulting in no network for part of the migration.
Once the box finished migrating, the replacement sr-iov link is plugged
back in, and all is well once again:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
11: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000
My understanding was that virsh migrate was supposed to automatically
bring up the virtio link when the sr-iov link is removed? Or do I need
to explicitly bring it up myself before the migration and take it down
after?
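(By "bring it up myself" I mean something along the lines of running
"virsh domif-setlink <domain> <interface> up" against the persistent interface
before migrating, and setting it back down afterwards; with both interfaces
sharing the same MAC I'm not sure of the cleanest way to address just that one.)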
If I bring up the link manually before the migration, there are a few
packets lost at that time, but then none are lost during the migration
when the sr-iov link is pulled, or after the migration when I shut that
link down again. Ideally no packets would be lost :), but I realize
that's unlikely in practice...
Thanks...
system cache dropped
by matus valentin
Hi,
I have a setup with multiple virtual machines (VMs), each with a saved
state. All VMs share the same parent, which is located on a shared drive.
Whenever I restore any VM using virsh restore, the parent is dropped from
the system cache, causing all data to be downloaded from the shared drive
again. This results in unnecessary network traffic, even though the parent
changes very rarely. However, if I create a child from the parent and
call virsh
create to create a new VM, the parent is never dropped from the system
cache.
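(One way to see this, assuming util-linux's fincore is available: run e.g.
"fincore /path/to/parent.qcow2" before and after the operation; after virsh
restore the resident page count for the parent drops, while after virsh create
it doesn't. The path is a placeholder.)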
Is this expected behavior? Should the parent be retained in the system
cache during a virsh restore operation? Are there any configurations or
settings that can prevent the parent from being dropped from the cache?
thanks