[libvirt-users] guest memory monitor
by frankpersist@gmail.com
Hi, everyone,
My hypervisor is KVM. When I use the libvirt-java API to get guest VM memory information, I found that libvirt cannot get the real memory statistics from within the VM; it only returns information from the domain XML on the host (such as the <currentMemory> label in vmName.xml).
My question is: does libvirt support memory monitoring from within the VM? If so, how do I get it? If not, what is the difficulty?
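For reference, here is the kind of query I would expect to work (just a sketch: it assumes the guest has the virtio balloon driver loaded, that libvirt on the host is new enough to support --period, and "vm01" is a placeholder domain name):
# ask the balloon driver to refresh its statistics every 5 seconds
virsh dommemstat vm01 --period 5 --live
# then read the guest-reported statistics (unused, available, rss, ...)
virsh dommemstat vm01
I believe the same data is exposed in libvirt-java through Domain.memoryStats(), and that the collection period can also be set with a <stats period='...'/> element under <memballoon> in the domain XML.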
Thank you.
frankpersist(a)gmail.com
[libvirt-users] how to setup a watchdog?
by lejeczek
hi everybody
I'm testing Qemu's watchdog.
My understanding was that the hardware (here QEMU's watchdog) would take
action (a cold reboot of the system) if there is no ping from the OS watchdog
daemon, so I thought stopping the watchdog service in the VM would be a quick test.
I have this in the guest:
<watchdog model='i6300esb' action='reset'>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x08' function='0x0'/>
</watchdog>
and I see /dev/watchdog in my guest. Yet nothing happens.
I must be missing something; an expert said it's a config problem?
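For completeness, here is what I plan to try next inside the guest (a sketch; I am assuming the usual Linux watchdog semantics, where a clean stop of the watchdog service writes the magic character 'V' before closing /dev/watchdog and therefore disarms the timer, which would explain why nothing happened):
# arm the watchdog once and close it without the magic character
echo 1 > /dev/watchdog
# or, if a watchdog daemon is running, kill it so it cannot do the clean close
pkill -9 watchdog
# the i6300esb default timeout should be on the order of 30 seconds, so the reset should follow shortly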
many thanks
[libvirt-users] libvirt + ceph rbd will hang
by 王李明
Hi all,
I use OpenStack Icehouse with libvirt 0.10,
and qemu + Ceph RBD to store the VM disks.
When I do several operations, for example migrating or snapshotting a VM,
libvirtd hangs.
I suspect the Ceph RBD backend may be causing this error.
Can anyone help me?
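In case it helps, this is how I can collect more information the next time libvirtd hangs (a sketch; it assumes gdb and the libvirt debuginfo packages are installed on the host):
# dump the stacks of all libvirtd threads while it is hung
gdb -batch -p $(pidof libvirtd) -ex 'thread apply all backtrace' > /tmp/libvirtd-hang.txt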
Wang Liming
[libvirt-users] virsh attach-device : Bus 'pci.0' does not support hotplugging.
by Keyur Bhalerao
Hi,
I am trying to attach a disk to an existing VM using virsh attach-device.
The device XML is:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='native'/>
<source file='/var/lib/libvirt/images/guest.qcow2'/>
<target dev='vdc' bus='virtio'/>
</disk>
After executing the command I get:
[root@cent7 ~]# virsh attach-device lib-virt-man-001 newDisk.xml
error: Failed to attach device from newDisk.xml
error: internal error: unable to execute QEMU command 'device_add': Bus
'pci.0' does not support hotplugging.
[root@cent7 ~]# virsh version
Compiled against library: libvirt 1.2.17
Using library: libvirt 1.2.17
Using API: QEMU 1.2.17
Running hypervisor: QEMU 1.5.3
While searching on this issue, I found that some have suggested loading the
following modules:
modprobe acpiphp
modprobe pci_hotplug
I am not sure whether this is a QEMU issue or a libvirt issue.
Any help with this would be appreciated.
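For reference, this is what I plan to check next (a sketch; the machine type check is only to gather information, and the modprobe lines are just the suggestion mentioned above, run inside the guest):
# on the host: which machine type does the domain use?
virsh dumpxml lib-virt-man-001 | grep '<type'
# inside the guest: load the ACPI PCI hotplug modules that were suggested
modprobe acpiphp
modprobe pci_hotplug
lsmod | grep -E 'acpiphp|pci_hotplug'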
Thanks,
Keyur Bhalerao
[libvirt-users] RBD snapshots
by Emmanuel Lacour
Dear libvirt users, I'm stuck on this subject and would appreciate a
working example, as this looks like a supported feature :)
After many tries and packages backports, here is my current setup:
qemu 2.5
libvirt 3.0
ceph hammer
on Debian jessie.
Here are the relevant parts of the domain XML:
<disk type='network' device='disk' snapshot='internal'>
<driver name='qemu' type='raw'/>
<auth username='libvirt'>
<secret type='ceph' uuid='xxxxxxxxxxxxxxxx'/>
</auth>
<source protocol='rbd' name='libvirt-pool/vm-test'>
<host name='192.168.253.1' port='6789'/>
<host name='192.168.253.3' port='6789'/>
<host name='192.168.253.254' port='6789'/>
<snapshot name='backup'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</disk>
After several tries, I discovered the snapshot option in the source tag, and
that I need to pre-create this snapshot using rbd so the VM can start.
But then I tried:
# virsh snapshot-create-as vm-test
error: unsupported configuration: internal snapshot for disk vda
unsupported for storage type raw
# virsh snapshot-create-as vm-test snap1 --memspec file=/tmp/vm-test.mem
--diskspec vda,snapshot=internal
error: unsupported configuration: internal snapshot for disk vda
unsupported for storage type raw
# virsh snapshot-create-as vm-test snap1 --disk-only --atomic --diskspec
vda,snapshot=internal
error: unsupported configuration: active qemu domains require external
disk snapshots; disk vda requested internal
and some other tests, without any success :(
What I would like to achieve is a consistent disk snapshot of the VM without
shutting it down (suspend is OK), using virsh, as this seems to be supported
(I know I can manually run rbd snap create). And if possible, a whole
mem+disk consistent snapshot without downtime :)
Any hint to make this work? Has anyone out there had success with such a
process?
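For reference, the fallback I am considering is to quiesce the guest through the qemu guest agent and take the snapshot on the Ceph side (a sketch; it assumes a guest agent channel is configured in the domain and the agent is running in the VM):
# freeze guest filesystems, snapshot the RBD image, then thaw
virsh domfsfreeze vm-test
rbd snap create libvirt-pool/vm-test@backup-$(date +%F)
virsh domfsthaw vm-test
That would only give me a filesystem-consistent disk snapshot, though, not the mem+disk snapshot I was hoping virsh snapshot-create-as would provide.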
[libvirt-users] vdsm hook issues
by Jean-Pierre Ribeauville
Hi,
1) Is it enough to add a hook.py in the /usr/libexec/vdsm/hooks/before_vm_start directory, and then shut down and reboot a guest, to
see this hook.py invoked?
2) When running my hook.py manually, I get the following error:
ImportError: No module named hooking
Do I have to install anything to solve this issue?
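For reference, here is what I plan to check (a sketch; paths assume a standard vdsm install, and I am assuming the hooking module is shipped with vdsm and only importable from vdsm's own environment, which would explain why a plain manual run fails):
# the hook script must be executable for vdsm to run it
chmod +x /usr/libexec/vdsm/hooks/before_vm_start/hook.py
# vdsm should log hook execution (and any hook errors) here
grep -i hook /var/log/vdsm/vdsm.log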
Thanks for help.
Regards,
J.P. Ribeauville
P: +33.(0).1.47.17.20.49
Puteaux 3 Etage 5 Bureau 4
jpribeauville(a)axway.com
http://www.axway.com
Please consider the environment before printing.
Re: [libvirt-users] libvirt-users Digest, Vol 73, Issue 12 ] Failure when attaching a device
by Jean-Pierre Ribeauville
Hi,
I think that issue is due to the fact that my Guest is a transient domain.
When I shut it down or migrate it to another host, virsh list --all no longer shows this guest.
How may I make this guest a persistent one?
( i.e. this Guest has been created via RHEV-M GUI)
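For reference, the approach I am considering is (a sketch; GUESTNAME is a placeholder, and I am not sure it is safe to define the domain behind RHEV-M's back, since RHEV/vdsm seems to create its guests as transient on purpose):
# dump the live configuration and define it so libvirt keeps the domain
virsh dumpxml GUESTNAME > /tmp/GUESTNAME.xml
virsh define /tmp/GUESTNAME.xml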
Thx for help.
Regards,
J.P.
-----Original Message-----
From: libvirt-users-bounces(a)redhat.com [mailto:libvirt-users-bounces@redhat.com] On behalf of libvirt-users-request(a)redhat.com
Sent: Monday, January 11, 2016 21:25
To: libvirt-users(a)redhat.com
Subject: libvirt-users Digest, Vol 73, Issue 12
Send libvirt-users mailing list submissions to
libvirt-users(a)redhat.com
To subscribe or unsubscribe via the World Wide Web, visit
https://www.redhat.com/mailman/listinfo/libvirt-users
or, via email, send a message with subject or body 'help' to
libvirt-users-request(a)redhat.com
You can reach the person managing the list at
libvirt-users-owner(a)redhat.com
When replying, please edit your Subject line so it is more specific than "Re: Contents of libvirt-users digest..."
Today's Topics:
1. Networking with qemu/kvm+libvirt (Andre Goree)
2. Failure when attaching a device (Jean-Pierre Ribeauville)
3. Re: Networking with qemu/kvm+libvirt (Laine Stump)
4. Unable to validate doc against .... (Jean-Pierre Ribeauville)
----------------------------------------------------------------------
Message: 1
Date: Mon, 11 Jan 2016 14:25:21 -0500
From: Andre Goree <andre(a)drenet.net>
To: libvirt-users(a)redhat.com
Subject: [libvirt-users] Networking with qemu/kvm+libvirt
Message-ID: <100725aa681e75449b9da623e7a7cf1a(a)drenet.net>
Content-Type: text/plain; charset=UTF-8; format=flowed
I have some questions regarding the way that networking is handled via qemu/kvm+libvirt -- my apologies in advance if this is not the proper mailing list for such a question.
I am trying to determine how exactly I can manipulate traffic from a _guest's_ NIC using iptables on the _host_. On the host, there is a bridged virtual NIC that corresponds to the guest's NIC. That interface does not have an IP setup on it on the host, however within the vm itself the IP is configured and everything works as expected.
During my testing, I've seemingly determined that traffic from the vm does NOT traverse iptables on the host, but I _can_ in fact see the traffic via tcpdump on the host. This seems odd to me, unless the traffic is passed on during interaction with the kernel, and thus never actually reaches iptables. I've gone as far as trying to log via iptables any and all traffic traversing the guest's interface on the host, but to no avail (iptables does not see any traffic from the guest's NIC on the host).
Is this the way it's supposed to work? And if so, is there any way I can do IP/port redirection silently on the _host_?
Thanks in advance for any insight that anyone can share :)
--
Andre Goree
-=-=-=-=-=-
Email - andre at drenet.net
Website - http://www.drenet.net
PGP key - http://www.drenet.net/pubkey.txt
-=-=-=-=-=-
------------------------------
Message: 2
Date: Mon, 11 Jan 2016 19:35:29 +0000
From: Jean-Pierre Ribeauville <jpribeauville(a)axway.com>
To: "libvirt-users(a)redhat.com" <libvirt-users(a)redhat.com>
Subject: [libvirt-users] Failure when attaching a device
Message-ID:
<1051EFB4D3A1704680C38CCAAC5836D292F0218E(a)WPTXMAIL2.ptx.axway.int>
Content-Type: text/plain; charset="iso-8859-1"
Hi,
I'm facing the following issue (or a misunderstanding on my side).
I am trying to attach a device to a running guest; I want to do it persistently and without having to restart the guest.
Using the options "--live --persistent", I got the following error:
[root@ldc01omv01 data]# virsh attach-device VM_RHEL7-1 "../data/channel_omnivision_to_be_used.xml" --live --persistent
Please enter your authentication name: root@ldc01omv01
Please enter your password:
error: Failed to attach device from ../data/channel_omnivision_to_be_used.xml
error: Requested operation is not valid: cannot modify device on transient domain
[root@ldc01omv01 data]# virsh -r list --all
Id Name State
----------------------------------------------------
8 VM_RHEL7-2 running
11 W2008R2-2 running
12 VM_RHEL7-1 running
By using options "-live " , I got following error :
[root@ldc01omv01 data]# virsh attach-device VM_RHEL7-1 "../data/channel_omnivision_to_be_used.xml" --live
Please enter your authentication name: root@ldc01omv01
Please enter your password:
error: Failed to attach device from ../data/channel_omnivision_to_be_used.xml
error: Unable to read from monitor: Connection reset by peer
[root@ldc01omv01 data]# virsh -r list --all
Id Name State
----------------------------------------------------
8 VM_RHEL7-2 running
11 W2008R2-2 running
[root@ldc01omv01 data]#
And then the guest is powered off!!
If I try to attach the device when the guest is off, then:
[root@ldc01omv01 data]# virsh attach-device VM_RHEL7-1 "../data/channel_omnivision_to_be_used_1.xml" --config --persistent
Please enter your authentication name: root@ldc01omv01
Please enter your password:
error: failed to get domain 'VM_RHEL7-1'
error: Domain not found: no domain with matching name 'VM_RHEL7-1'
FYI, the XML file contents are:
<channel type='unix'>
<source mode='bind' path='//var/lib/libvirt/qemu/VM_RHEL7-1_omnivision_1.agent'/>
<target type='virtio' name='omnivision_1.agent'/>
</channel>
I'm using libvirt-1.2.17-13.el7.x86_64
Any help is welcome.
Thanks.
Regards,
J.P. Ribeauville
P: +33.(0).1.47.17.20.49
Puteaux 3 Etage 5 Bureau 4
jpribeauville(a)axway.com
http://www.axway.com
Please consider the environment before printing.
[libvirt-users] Unable to validate doc against ....
by Jean-Pierre Ribeauville
Hi,
While trying to add this device:
<channel type='unix'>
<source mode='bind' path='//var/lib/libvirt/qemu/VM_RHEL7-1_omnivision_1.agent'/>
<target type='virtio' name='omnivision_1.agent'/>
</channel>
within the config file by issuing virsh edit, I got the following error when saving the config file:
Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
I'm using libvirt-1.2.17-13.el7.x86_64
Is this my mistake, or a known issue? (I'm quite sure it was working with a former release.)
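For reference, here is how I could re-check the XML outside of virsh edit (a sketch; /tmp is just where I would dump the file, and I am assuming the error means some element under <devices>, possibly my <channel>, is not where the schema expects it):
virsh dumpxml VM_RHEL7-1 > /tmp/VM_RHEL7-1.xml
virt-xml-validate /tmp/VM_RHEL7-1.xml domain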
Thx for help.
Regards,
J.P. Ribeauville
P: +33.(0).1.47.17.20.49
Puteaux 3 Etage 5 Bureau 4
jpribeauville(a)axway.com
http://www.axway.com
Please consider the environment before printing.
[libvirt-users] Software Offloading Performance Issue on increasing VM's (TSO enabled ) pushing traffic
by Piyush R Srivastava1
Hi,
Problem-
Offloading (in software) for VM-generated packets (TSO enabled in the VMs)
degrades severely as the number of VMs on a host increases.
On increasing the number of VMs (all pushing traffic simultaneously) on a
compute node:
- the % of offloaded packets coming out of the VMs (TSO enabled) on the tap
port / veth pair decreases significantly
- the size of offloaded packets coming out of the VMs (TSO enabled) on the tap
port / veth pair decreases significantly
We are using an OpenStack setup. Throughput for the SNAT test (iperf client in
the VM and server on an external network machine) is SIGNIFICANTLY less than
the DNAT test (server in the VM and client on an external network machine).
For 50 VMs (25 VMs on each compute node in a 2-compute-node setup), SNAT
throughput is 30% less than DNAT throughput.
I was hoping to get community feedback on what controls the software
offloading of VM packets and how we can improve it.
NOTE: This seems to be one of the bottlenecks in SNAT that is limiting
throughput on the TX side of the compute node. Improving it would help improve
SNAT test network performance.
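For reference, this is how we confirm which offloads are actually active on the host-side interfaces (a sketch; the interface name is taken from the tcpdump capture below and will differ per VM):
ethtool -k qvoed7aa38d-22 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'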
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Description-
We have a testbed OpenStack deployment. We boot 1, 10 and 25 VMs on a
single compute node and start iperf traffic (the VMs are the iperf clients).
We then simultaneously run tcpdump on the veth pairs connecting the VMs to the
OVS bridges.
The tcpdump data shows that, as the number of VMs on a host increases, the % of
offloaded packets degrades severely.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Host configuration- 12 cores ( 24 vCPU ), 40 GB RAM
[root rhel7-25 ~]# uname -a
Linux rhel7-25.in.ibm.com 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38
EST 2015 x86_64 x86_64 x86_64 GNU/Linux
VM MTU is set to 1450
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Analysis-
Following is the % of non-offloaded packets observed at the tap ports /
veth pairs (connecting the VMs to the OVS bridge):
|-----------------------|-------------------------|
| VMs on 1 Compute Node | % Non-Offloaded packets |
|-----------------------|-------------------------|
| 1                     | 11.11%                  |
| 10                    | 71.78%                  |
| 25                    | 80.44%                  |
|-----------------------|-------------------------|
Thus we see significant degradation in offloaded packets when 10 or 25 VMs
(TSO enabled) are sending iperf data simultaneously.
"Non-offloaded packets" means Ethernet frames of size 1464 (the VM MTU is 1450).
So the packets coming out of the VMs (TSO enabled) are mostly non-offloaded as
we increase the number of VMs on a host.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Tcpdump details-
Iperf Server IP- 1.1.1.34
For 1 VM, we see mostly offloaded packets, and the offloaded frames are large:
[piyush rhel7-34 25]$ cat qvoed7aa38d-22.log | grep "> 1.1.1.34.5001" |
head -n 30
14:36:26.331073 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 74:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 0
14:36:26.331917 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 66:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 0
14:36:26.331946 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 90:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 24
14:36:26.331977 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 7056:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 6990
14:36:26.332018 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 5658:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 5592
14:36:26.332527 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 7056:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 6990
14:36:26.332560 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 9852:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 9786
14:36:26.333024 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 8454:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 8388
14:36:26.333054 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 7056:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 6990
14:36:26.333076 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 4260:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 4194
14:36:26.333530 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 16842:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 16776
14:36:26.333568 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 4260:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 4194
14:36:26.333886 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 21036:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 20970
14:36:26.333925 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 2862:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 2796
14:36:26.334303 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 21036:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 20970
14:36:26.334349 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 2862:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 2796
14:36:26.334741 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 22434:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 22368
14:36:26.335118 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 25230:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 25164
14:36:26.335566 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 25230:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 25164
14:36:26.336007 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 23832:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 23766
For 10 VMs, we see fewer offloaded packets, and the size of the offloaded
packets is also reduced. Tcpdump for one of the 10 VMs (similar
characterization for all 10 VMs):
[piyush rhel7-34 25]$ cat qvo255d8cdd-90.log | grep "> 1.1.1.34.5001" |
head -n 30
15:09:25.024790 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 74:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 0
15:09:25.026834 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 66:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 0
15:09:25.026870 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 90:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 24
15:09:25.027186 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.027213 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 5658:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 5592
15:09:25.032500 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 5658:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 5592
15:09:25.032539 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 1464:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 1398
15:09:25.032567 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.035122 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.035631 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.035661 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.038508 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.038904 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.039300 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
For 25 VMs, we see very few offloaded packets, and the size of the offloaded
packets is also reduced. Tcpdump for one of the 25 VMs (similar
characterization for all 25 VMs):
15:52:31.544316 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.544340 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545034 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545066 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 5658:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 5592
15:52:31.545474 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545501 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 2862:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 2796
15:52:31.545539 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 2862:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 2796
15:52:31.545572 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 7056:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 6990
15:52:31.545736 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545807 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545813 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545934 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545956 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545974 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.546012 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
Thanks and regards,
Piyush Raman
Mail: pirsriva(a)in.ibm.com
[libvirt-users] Question regarding networking and qemu/kvm+libvirt
by Andre Goree
I have a question concerning the workings of networking in
qemu/kvm+libvirt -- my apologies in advance if this is the wrong mailing
list for such a question.
I have a host machine on which I'm trying to redirect network traffic
coming from a guest's NIC to a different IP. There is a bridged adapter
on the host (without an IP configured on it) that is used by my guest's
NIC -- the IP, etc. is configured within the guest. From what I can
tell, the traffic is not traversing iptables on the host, BUT I can see
traffic leaving the guest's NIC (on the host) using tcpdump. I've gone
as far as logging all traffic on the vm's NIC (on the host) using
iptables just to confirm that the host's iptables is not seeing the
traffic.
I'm wondering, is this the expected behavior? And if so, how then can I
redirect specific traffic from the guest (transparently) to a different
IP?
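For reference, this is what I plan to try next (a sketch; it assumes the bridge is a plain Linux bridge rather than an OVS bridge, and that the kernel is recent enough to have a separate br_netfilter module):
# make bridged traffic traverse the host's iptables (FORWARD chain)
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1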
--
Andre Goree
-=-=-=-=-=-
Email - andre at drenet.net
Website - http://www.drenet.net
PGP key - http://www.drenet.net/pubkey.txt
-=-=-=-=-=-