[libvirt-users] Failed to reboot VM via virsh
by xuanmao_001
Hi, Eric:
I want to reboot a VM using the libvirt reboot API virDomainReboot(), but it fails with the following error:
"error: Failed to reboot domain abc"
"error: Requested operation is not valid: Reboot is not supported without the JSON monitor".
I updated the qemu version (to qemu-kvm-1.2.0, for example), but the error is the same.
Can you give me any ideas?
My libvirt version is 0.9.8.
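In the meantime, I assume a full shutdown/start cycle would work around it (untested sketch; "abc" is the domain from the error message):
virsh shutdown abc
# wait until the guest has actually powered off before starting it again
while virsh domstate abc | grep -q running; do sleep 1; done
virsh start abc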
Thanks.
xuanmao_001
[libvirt-users] network command line to xml
by basti
Hello,
I start a KVM guest with the following command line:
kvm -hda myimage.img -m 1024 -smp 2 \
-net nic -net tap,ifname=tap0,script=no
Now I plan to start it via libvirt.
I have tried to set up NAT via virt-manager, but the guest doesn't get connectivity.
So I tried to use a bridged network via the GUI and got a "not supported"
error when I used tap0 as the "bridge device".
How can I "convert" "-net nic -net tap,ifname=tap0,script=no" into a
valid libvirt XML file?
The following example is also buggy:
<interface type='network'>
<mac address='52:54:00:46:3e:70'/>
<source network='default'/>
<target dev='tap0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
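From the domain XML documentation I would guess that the tap setup corresponds to a generic ethernet interface, something like the sketch below (untested; it assumes tap0 is created and configured outside of libvirt, matching script=no, and that omitting <script> means no ifup script is run):
<interface type='ethernet'>
  <!-- pre-existing tap device, managed outside of libvirt -->
  <target dev='tap0'/>
</interface>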
Thanks
[libvirt-users] How to disable dnsmasq from starting automatically with libvirtd
by Marwan Tanager
Hi.
I have a machine with a local DHCP server and a couple of virtual networks and
I've configured the server for each virtual interface, so that I would be able
to install VMs on the corresponding subnets using PXE.
The problem is that the two DHCP servers (my local server and dnsmasq) are
conflicting with each other, causing the boot process to either fail or take
ages until a VM can catch the PXE parameters.
Note this output upon starting two VMs on two different subnets:
-----------------------------------------------------------------------------
tail -n1 -f </var/log/syslog | egrep -i "dhcpd|dnsmasq-dhcp"
Sep 13 05:11:25 host dhcpd: DHCPDISCOVER from 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:25 host dhcpd: DHCPDISCOVER from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:11:26 host dhcpd: DHCPOFFER on 192.168.122.194 to 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:26 host dhcpd: DHCPOFFER on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:11:27 host dhcpd: DHCPREQUEST for 192.168.122.194 (192.168.122.1) from 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:27 host dhcpd: DHCPACK on 192.168.122.194 to 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:28 host dnsmasq-dhcp[1882]: DHCPDISCOVER(virbr1) 52:54:00:2a:e0:a6
Sep 13 05:11:28 host dnsmasq-dhcp[1882]: DHCPOFFER(virbr1) 192.168.100.251 52:54:00:2a:e0:a6
Sep 13 05:11:28 host dhcpd: DHCPREQUEST for 192.168.100.251 (192.168.100.1) from 52:54:00:2a:e0:a6 via virbr1: unknown lease 192.168.100.251.
Sep 13 05:11:28 host dnsmasq-dhcp[1882]: DHCPREQUEST(virbr1) 192.168.100.251 52:54:00:2a:e0:a6
Sep 13 05:11:28 host dnsmasq-dhcp[1882]: DHCPACK(virbr1) 192.168.100.251 52:54:00:2a:e0:a6
Sep 13 05:11:36 host dnsmasq-dhcp[1882]: DHCPDISCOVER(virbr1) 52:54:00:2a:e0:a6
Sep 13 05:11:36 host dnsmasq-dhcp[1882]: DHCPOFFER(virbr1) 192.168.100.251 52:54:00:2a:e0:a6
Sep 13 05:11:36 host dhcpd: DHCPDISCOVER from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:11:36 host dhcpd: DHCPOFFER on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:11:38 host dnsmasq-dhcp[1882]: DHCPREQUEST(virbr1) 192.168.100.107 52:54:00:2a:e0:a6
Sep 13 05:11:38 host dnsmasq-dhcp[1882]: DHCPNAK(virbr1) 192.168.100.107 52:54:00:2a:e0:a6 address not available
Sep 13 05:11:38 host dhcpd: DHCPREQUEST for 192.168.100.107 (192.168.100.1) from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:11:38 host dhcpd: DHCPACK on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:11:53 host dhcpd: DHCPDISCOVER from 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:53 host dhcpd: DHCPOFFER on 192.168.122.194 to 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:53 host dhcpd: DHCPREQUEST for 192.168.122.194 (192.168.122.1) from 52:54:00:72:f0:e2 via virbr0
Sep 13 05:11:53 host dhcpd: DHCPACK on 192.168.122.194 to 52:54:00:72:f0:e2 via virbr0
Sep 13 05:12:00 host dhcpd: DHCPDISCOVER from 52:54:00:72:f0:e2 via virbr0
Sep 13 05:12:00 host dhcpd: DHCPOFFER on 192.168.122.194 to 52:54:00:72:f0:e2 via virbr0
Sep 13 05:12:00 host dhcpd: DHCPREQUEST for 192.168.122.194 (192.168.122.1) from 52:54:00:72:f0:e2 via virbr0
Sep 13 05:12:00 host dhcpd: DHCPACK on 192.168.122.194 to 52:54:00:72:f0:e2 via virbr0
Sep 13 05:12:03 host dhcpd: DHCPDISCOVER from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:03 host dhcpd: DHCPOFFER on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:03 host dhcpd: DHCPREQUEST for 192.168.100.107 (192.168.100.1) from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:03 host dhcpd: DHCPACK on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:06 host dnsmasq-dhcp[1882]: DHCPDISCOVER(virbr1) 52:54:00:2a:e0:a6
Sep 13 05:12:06 host dnsmasq-dhcp[1882]: DHCPOFFER(virbr1) 192.168.100.251 52:54:00:2a:e0:a6
Sep 13 05:12:06 host dnsmasq-dhcp[1882]: DHCPREQUEST(virbr1) 192.168.100.107 52:54:00:2a:e0:a6
Sep 13 05:12:06 host dnsmasq-dhcp[1882]: DHCPNAK(virbr1) 192.168.100.107 52:54:00:2a:e0:a6 address not available
Sep 13 05:12:08 host dhcpd: DHCPDISCOVER from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:08 host dnsmasq-dhcp[1882]: DHCPDISCOVER(virbr1) 52:54:00:2a:e0:a6
Sep 13 05:12:08 host dnsmasq-dhcp[1882]: DHCPOFFER(virbr1) 192.168.100.251 52:54:00:2a:e0:a6
Sep 13 05:12:08 host dhcpd: DHCPOFFER on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:08 host dnsmasq-dhcp[1882]: DHCPREQUEST(virbr1) 192.168.100.107 52:54:00:2a:e0:a6
Sep 13 05:12:08 host dnsmasq-dhcp[1882]: DHCPNAK(virbr1) 192.168.100.107 52:54:00:2a:e0:a6 address not available
Sep 13 05:12:08 host dhcpd: DHCPREQUEST for 192.168.100.107 (192.168.100.1) from 52:54:00:2a:e0:a6 via virbr1
Sep 13 05:12:08 host dhcpd: DHCPACK on 192.168.100.107 to 52:54:00:2a:e0:a6 via virbr1
-----------------------------------------------------------------------------
So, is there a way to stop dnsmasq from starting automatically when starting
libvirtd? I haven't been able to see any trace of it in the init scripts, and my
guess is that it's started not as a service, but in an ad-hoc manner by
libvirt-bin.
Checking the dependencies of the libvirt-bin package, I noticed that
dnsmasq is among them. So my question is whether starting dnsmasq
automatically is hard-coded in libvirt-bin, or whether it is started in some
other way that I am not aware of.
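If there is no global switch, I suppose the next best thing would be to keep the virtual networks but drop their DHCP ranges, so that dnsmasq has nothing to serve. Something like this is what I have in mind (assuming net-edit/net-destroy/net-start is the supported way to change a network definition):
virsh net-edit default
# ...then delete the <dhcp> element inside <ip>, e.g.:
#   <dhcp>
#     <range start='192.168.122.2' end='192.168.122.254'/>
#   </dhcp>
virsh net-destroy default
virsh net-start default
(and the same for the second virtual network)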
Any suggestions would be really appreciated.
Thanks,
Marwan
[libvirt-users] After a 'virsh blockpull', 'virsh snapshot-list --tree' o/p does not reflect reality
by Kashyap Chamarthy
Hi (Eric?),
A couple of questions while using 'virsh blockpull'.
Summary:
1] Created snapshots this way: base<-snap1<-snap2<-snap3 (online, external snapshots with
--disk-only)
2] I did a 'virsh blockpull' from snap2 into snap3
3] Next, did another 'virsh blockpull' from snap1 into snap3
- Here, 'qemu-img info /path/to/snap3' correctly shows its backing file as snap1, but
'virsh snapshot-list $domain --tree' does not reflect this. Any hints?
Detail:
#=========================================#
[root@moon ~]# virsh domblklist daisy --details
Type       Device     Target     Source
------------------------------------------------
file       disk       vda        /export/vmimgs/daisy.qcow2
[root@moon ~]#
#=========================================#
[root@moon ~]# virsh snapshot-create-as daisy snap1-daisy "snap1 description" --diskspec
vda,file=/export/vmimgs/snap1-daisy.qcow2 --disk-only --atomic
Domain snapshot snap1-daisy created
[root@moon ~]#
#=========================================#
[root@moon ~]# virsh snapshot-create-as daisy snap2-daisy "snap2 description" --diskspec
vda,file=/export/vmimgs/snap2-daisy.qcow2 --disk-only --atomic
Domain snapshot snap2-daisy created
[root@moon ~]#
#=========================================#
[root@moon ~]# virsh snapshot-create-as daisy snap3-daisy "snap3 description" --diskspec
vda,file=/export/vmimgs/snap3-daisy.qcow2 --disk-only --atomic
Domain snapshot snap3-daisy created
#=========================================#
[root@moon libvirt-0.10.1-3]# virsh snapshot-list daisy
Name                 Creation Time              State
------------------------------------------------------------
clean-rawhide-f17    2011-12-08 14:34:55 +0530  shutoff
snap1-daisy          2012-09-12 14:58:18 +0530  disk-snapshot
snap2-daisy          2012-09-12 14:59:30 +0530  disk-snapshot
snap3-daisy          2012-09-12 15:00:36 +0530  disk-snapshot
#=========================================#
[root@moon libvirt-0.10.1-3]# virsh snapshot-list daisy --tree
clean-rawhide-f17

snap1-daisy
  |
  +- snap2-daisy
      |
      +- snap3-daisy
[root@moon libvirt-0.10.1-3]#
#=========================================#
=> For clarity, listing out the backing files of each image.
#=========================================#
[root@moon libvirt-0.10.1-3]# qemu-img info /export/vmimgs/snap3-daisy.qcow2
image: /export/vmimgs/snap3-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 129M
cluster_size: 65536
backing file: /export/vmimgs/snap2-daisy.qcow2
#=========================================#
[root@moon libvirt-0.10.1-3]# qemu-img info /export/vmimgs/snap2-daisy.qcow2
image: /export/vmimgs/snap2-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 3.6M
cluster_size: 65536
backing file: /export/vmimgs/snap1-daisy.qcow2
#=========================================#
[root@moon libvirt-0.10.1-3]# qemu-img info /export/vmimgs/snap1-daisy.qcow2
image: /export/vmimgs/snap1-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 2.5M
cluster_size: 65536
backing file: /export/vmimgs/daisy.qcow2
[root@moon libvirt-0.10.1-3]#
#=========================================#
=> NOTE: we're pulling snap2 data into snap3, by doing a 'virsh blockpull' <=
#=========================================#
[root@moon libvirt-0.10.1-3]# virsh blockpull --domain daisy --path
/export/vmimgs/snap3-daisy.qcow2 --base /export/vmimgs/snap2-daisy.qcow2 --wait --verbose
Block Pull: [100 %]
Pull complete
[root@moon libvirt-0.10.1-3]#
#=========================================#
[root@moon libvirt-0.10.1-3]# qemu-img info /export/vmimgs/snap3-daisy.qcow2
image: /export/vmimgs/snap3-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 143M
cluster_size: 65536
backing file: /export/vmimgs/snap2-daisy.qcow2
[root@moon libvirt-0.10.1-3]#
#=========================================#
[root@moon libvirt-0.10.1-3]# virsh snapshot-list daisy --tree
clean-rawhide-f17

snap1-daisy
  |
  +- snap2-daisy
      |
      +- snap3-daisy
[root@moon libvirt-0.10.1-3]#
#=========================================#
=> Now, let's pull the data from 'snap1' into 'snap3' (so that we can make 'snap2'
redundant and reduce the snapshot chain).
#=========================================#
[root@moon libvirt-0.10.1-3]# virsh blockpull --domain daisy --path
/export/vmimgs/snap3-daisy.qcow2 --base /export/vmimgs/snap1-daisy.qcow2 --wait --verbose
Block Pull: [100 %]
Pull complete
#=========================================#
NOTE: now the snapshot tree should be flattened (as we pulled the data from snap1 into snap3).
Let's check by running 'qemu-img'. (The backing file now points to snap1, as
expected.)
#=========================================#
[root@moon libvirt-0.10.1-3]# qemu-img info /export/vmimgs/snap3-daisy.qcow2
image: /export/vmimgs/snap3-daisy.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 145M
cluster_size: 65536
backing file: /export/vmimgs/snap1-daisy.qcow2
[root@moon libvirt-0.10.1-3]#
#=========================================#
Here, shouldn't 'virsh snapshot-list --tree' be updated as well (since 'snap2' is no
longer the backing file for 'snap3')?
#=========================================#
[root@moon libvirt-0.10.1-3]# virsh snapshot-list daisy --tree
clean-rawhide-f17

snap1-daisy
  |
  +- snap2-daisy
      |
      +- snap3-daisy
[root@moon libvirt-0.10.1-3]#
#=========================================#
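(As a possible workaround, I suppose I could drop the stale snapshot metadata by hand with something like:
virsh snapshot-delete daisy snap2-daisy --metadata
which should keep the image file intact, but I'd have expected the blockpull to update the tree itself.)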
Version Details:
#=========================================#
[root@moon ~]# rpm -q qemu-kvm libvirt; uname -r
qemu-kvm-1.2-0.2.20120806git3e430569.fc17.x86_64
libvirt-0.10.1-3.fc17.x86_64
3.5.2-3.fc17.x86_64
[root@moon ~]#
#=========================================#
Any ideas why the 'tree' isn't updated yet?
--
/kashyap
[libvirt-users] libvirtd on MacOS X
by Waldemar Brodkorb
Hi,
I am trying to run libvirtd on Mac OS X, but so far without success. My hypervisor is VirtualBox.
Libvirt is installed from the ports system. Without libvirtd I can see my virtual machines:
LIBVIRT_LOG_FILTERS=1:vbox virsh -c vbox:///session
setlocale: No such file or directory
2012-09-13 08:56:43.937+0000: -1: info : libvirt version: 0.9.10
2012-09-13 08:56:43.937+0000: -1: debug : tryLoadOne:163 : Found VBoxXPCOMC.dylib in '/Applications/VirtualBox.app/Contents/MacOS'
2012-09-13 08:56:43.937+0000: -1: debug : vboxRegister:97 : VBoxCGlueInit found API version: 4.1.22 (4001022)
2012-09-13 08:56:43.937+0000: -1: debug : vboxRegister:129 : VirtualBox API version: 4.1
2012-09-13 08:56:43.953+0000: -1: debug : vboxOpen:1048 : in vboxOpen
2012-09-13 08:56:43.962+0000: -1: debug : vboxNetworkOpen:7319 : network initialized
2012-09-13 08:56:43.965+0000: -1: debug : vboxStorageOpen:8137 : vbox storage initialized
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
Id Name State
----------------------------------------------------
4 xen01 running
5 xen02 running
After starting libvirtd as root, I can connect, but no machines are displayed:
LIBVIRT_LOG_FILTERS=1:vbox virsh -c vbox+tcp://192.168.100.83/session
setlocale: No such file or directory
2012-09-13 09:02:19.399+0000: -1: info : libvirt version: 0.9.10
2012-09-13 09:02:19.399+0000: -1: debug : tryLoadOne:163 : Found VBoxXPCOMC.dylib in '/Applications/VirtualBox.app/Contents/MacOS'
2012-09-13 09:02:19.399+0000: -1: debug : vboxRegister:97 : VBoxCGlueInit found API version: 4.1.22 (4001022)
2012-09-13 09:02:19.399+0000: -1: debug : vboxRegister:129 : VirtualBox API version: 4.1
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
Id Name State
----------------------------------------------------
virsh #
When trying to start libvirtd as the user who started the virtual machines, I get:
1|wbx@neon:~ $ libvirtd -v -l
libvirtd: initialization failed
Any idea what is wrong here? Do I have to start libvirtd in the context of the user to see the virtual machines?
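(I can try to gather more output if needed; I assume something like this would turn on debug logging for the daemon:
LIBVIRT_DEBUG=1 libvirtd -v -l 2>libvirtd.log
)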
Thanks in advance for any advice,
Waldemar
[libvirt-users] how to make the pool inactive?
by zhijun liu
Hi all,
I want to delete a storage pool, but libvirt tells me:
> error: Failed to delete pool virtimages
> error: Requested operation is not valid: storage pool is still active
The question is: how do I make the pool's status inactive?
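From the virsh help output I would guess the sequence is something like this, though I'm not sure about the exact difference between destroy and delete:
virsh pool-destroy virtimages    # deactivate (stop) the pool
virsh pool-delete virtimages     # remove the underlying storage resources
virsh pool-undefine virtimages   # remove the persistent libvirt definition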
Thanks,
liuzhijun
[libvirt-users] Network inoperable with QEMU arm example image
by Larry Brown
I am running Fedora 16 64-bit and installed libvirt. I have the VM
running with ARM emulation, with one issue I can't figure out. I
used Virtual Machine Manager to manage the VM and can access its console
there. The Ethernet device appears as eth1 and the guest can set an IP on
it, etc. However, I cannot see any traffic from the host when dumping
any of the interfaces. I've tried several combinations of network
setups using the GUI and none appear to work. Ideally I'd like to
bridge to my primary interface (em1), pull an address, and
talk directly to my network so I can download packages etc. while in the
VM. I tried setting up networking in that fashion with:
Source Device : em1 with macvtap
Device Model: Hypervisor Default
Source Mode: Bridge
but alas I cannot pull a DHCP lease, nor can I set a static address and ping
other interfaces on the network.
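(I believe those GUI settings translate to something like this in the domain XML, though I'm not sure about the right model for an ARM guest:
<interface type='direct'>
  <!-- macvtap attached directly to em1, in 'bridge' mode as selected in the GUI -->
  <source dev='em1' mode='bridge'/>
</interface>
One thing I have read is that macvtap in bridge mode does not pass traffic between the guest and the host itself, which might explain why I see nothing from the host side.)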
It also appears that every time I start the VM it creates another vnetX
interface.
One of the troubleshooting pages I came across listed all the tools but
nothing about what to look for:
1) virsh net-list --all
Name State Autostart
-----------------------------------------
default active yes
2) brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.52540029e6c7 yes virbr0-nic
vnet0
vnet1
vnet2
vnet3
vnet4
3) sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
4) iptables -L -v -n
Chain INPUT (policy ACCEPT 767K packets, 189M bytes)
 pkts bytes target prot opt in     out    source            destination
    0     0 ACCEPT udp  --  virbr0 *      0.0.0.0/0         0.0.0.0/0         udp dpt:53
    0     0 ACCEPT tcp  --  virbr0 *      0.0.0.0/0         0.0.0.0/0         tcp dpt:53
    0     0 ACCEPT udp  --  virbr0 *      0.0.0.0/0         0.0.0.0/0         udp dpt:67
    0     0 ACCEPT tcp  --  virbr0 *      0.0.0.0/0         0.0.0.0/0         tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in     out    source            destination
    0     0 ACCEPT all  --  *      virbr0 0.0.0.0/0         192.168.122.0/24  state RELATED,ESTABLISHED
    0     0 ACCEPT all  --  virbr0 *      192.168.122.0/24  0.0.0.0/0
    0     0 ACCEPT all  --  virbr0 virbr0 0.0.0.0/0         0.0.0.0/0
    0     0 REJECT all  --  *      virbr0 0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT all  --  virbr0 *      0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 532K packets, 79M bytes)
 pkts bytes target prot opt in     out    source            destination
5) ps -ef | grep dnsmasq
nobody 12382 1 0 Sep11 ? 00:00:00 /usr/sbin/dnsmasq
--strict-order --bind-interfaces
--pid-file=/var/run/libvirt/network/default.pid --conf-file=
--except-interface lo --listen-address 192.168.122.1 --dhcp-range
192.168.122.2,192.168.122.254
--dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases
--dhcp-lease-max=253 --dhcp-no-override
6) ifconfig -a
em1 Link encap:Ethernet HWaddr 00:19:B9:48:2B:BA
inet addr:10.45.212.46 Bcast:10.45.212.255
Mask:255.255.255.0
inet6 addr: fe80::219:b9ff:fe48:2bba/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:19048593 errors:95686 dropped:154 overruns:0
frame:98437
TX packets:10619346 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12559170813 (11.6 GiB) TX bytes:1700214519 (1.5 GiB)
Interrupt:16
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2978074 errors:0 dropped:0 overruns:0 frame:0
TX packets:2978074 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:847201395 (807.9 MiB) TX bytes:847201395 (807.9 MiB)
macvtap0 Link encap:Ethernet HWaddr 52:54:00:AC:7F:0C
inet6 addr: fe80::5054:ff:feac:7f0c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:294814 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:42804350 (40.8 MiB) TX bytes:468 (468.0 b)
virbr0 Link encap:Ethernet HWaddr 52:54:00:29:E6:C7
inet addr:192.168.122.1 Bcast:192.168.122.255
Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:621 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:43217 (42.2 KiB)
virbr0-nic Link encap:Ethernet HWaddr 52:54:00:29:E6:C7
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
vnet0 Link encap:Ethernet HWaddr FE:54:00:AC:7F:0C
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:38159 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:1995337 (1.9 MiB)
vnet1 Link encap:Ethernet HWaddr FE:54:00:AC:7F:0C
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:37299 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:1949439 (1.8 MiB)
vnet2 Link encap:Ethernet HWaddr FE:54:00:AC:7F:0C
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:36154 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:1888745 (1.8 MiB)
vnet3 Link encap:Ethernet HWaddr FE:54:00:AC:7F:0C
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:35068 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:1832375 (1.7 MiB)
vnet4 Link encap:Ethernet HWaddr FE:54:00:AC:7F:0C
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:207 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:10896 (10.6 KiB)
7) cat /proc/sys/net/ipv4/ip_forward
1
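(If macvtap is the wrong tool here, I suspect what I actually want is an ordinary host bridge, i.e. enslaving em1 to a bridge and attaching the guest like this; br0 is hypothetical, I haven't created it yet:
<interface type='bridge'>
  <!-- br0 would be a host bridge with em1 enslaved to it -->
  <source bridge='br0'/>
</interface>
)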
Any help I can get to resolve this would be greatly appreciated. Very
frustrating...
Larry
[libvirt-users] virtual networking - virbr0-nic interface
by Shantanu Pavgi
I need some help in understanding libvirt's virtual networking. I have configured bridged
networking (shared physical device) on a libvirt+KVM system, and it is working fine. I am
also using the default NAT network with the virbr0 bridge device and virbr0-nic. I would
like to get a better understanding of how virbr0-nic works in this virtual network
configuration. I understand that traffic from the virbr0 bridge is forwarded through the
host system's physical interface eth0 using iptables rules, but I am not following what
virbr0-nic does here. I do see it attached to the virbr0 bridge device, though.
{{{
$ brctl show
bridge name bridge id STP enabled interfaces
br0 8000.14feb5dc4f06 no eth0
vnet1
virbr0 8000.525400f5a4ed yes virbr0-nic
vnet2
}}}
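(I assume the forwarding I mentioned corresponds to the MASQUERADE rule that a command like this should list:
# show the NAT rules libvirt installs for the default network
iptables -t nat -L POSTROUTING -n -v
but that still doesn't tell me what virbr0-nic itself does.)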
Any pointers on how virbr0 and virbr0-nic work would be really helpful.
--
Thanks,
Shantanu
[libvirt-users] problem starting virt-manager
by Lentes, Bernd
Hi,
I am trying to run virt-manager on a SLES 11 SP1 box, using kernel 2.6.32.12 and virt-manager 0.9.4-106.1.x86_64.
The system is a 64-bit box.
Here is the output:
=========================
pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/virt_manager/sles_11_sp1 # virt-manager &
[1] 9659
pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/virt_manager/sles_11_sp1 # Traceback (most recent call last):
File "/usr/share/virt-manager/virt-manager.py", line 386, in <module>
main()
File "/usr/share/virt-manager/virt-manager.py", line 247, in main
from virtManager import cli
File "/usr/share/virt-manager/virtManager/cli.py", line 29, in <module>
import libvirt
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 25, in <module>
raise lib_e
ImportError: /usr/lib64/libvirt.so.0: undefined symbol: selinux_virtual_domain_context_path
[1]+ Exit 1 virt-manager
=========================
As you see, virt-manager does not start.
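(In case it's relevant, I would guess that checking what libvirt.so.0 actually links against could narrow it down, e.g.:
ldd /usr/lib64/libvirt.so.0 | grep -i selinux
since the undefined symbol looks like a mismatch with the installed libselinux.)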
Thanks for any hints.
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg