[libvirt-users] some problems with snapshots via libvirt
by xingxing gao
Hi all, I am using libvirt to manage my VMs. These days I have been testing
libvirt snapshots, but I have run into a problem.
The snapshot was created with this command:
snapshot-create-as win7 --disk-only --diskspec
vda,snapshot=external --diskspec hda,snapshot=no
but when I tried to revert to the snapshot created by the above
command, I got the error below:
virsh # snapshot-revert win7 1338041515 --force
error: unsupported configuration: revert to external disk snapshot not
supported yet
version:
virsh # version
Compiled against library: libvir 0.9.4
Using library: libvir 0.9.4
Using API: QEMU 0.9.4
Running hypervisor: QEMU 1.0.93
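For reference, a minimal sketch of creating the same disk-only external
snapshot through the Python binding (assuming a domain named "win7" as
above):

    # Sketch only: creates an external snapshot of vda and skips hda,
    # mirroring the snapshot-create-as invocation above.
    import libvirt

    snapshot_xml = """
    <domainsnapshot>
      <disks>
        <disk name='vda' snapshot='external'/>
        <disk name='hda' snapshot='no'/>
      </disks>
    </domainsnapshot>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("win7")
    snap = dom.snapshotCreateXML(snapshot_xml,
                                 libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY)
    print("created snapshot:", snap.getName())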
10 years, 1 month
[libvirt-users] virsh list not working with xen 4
by Rogério Vinhal Nunes
Hi, I'm having some trouble getting libvirt to show the correct power state
of my virtual machines. I'm using Ubuntu 10.04 + Xen 4.1.1 + libvirt 0.8.8.
virsh list --all only shows turned-off machines registered in xend. If I
turn them on, they just "disappear", and when I start machines directly from
XML, they don't appear at all.
libvirt is connecting to Xen correctly, as I can use the other commands fine;
just the list option doesn't seem to work at all. What can I do to change
that?
# virsh version
Compiled against library: libvir 0.8.8
Using library: libvir 0.8.8
Using API: Xen 3.0.1
Running hypervisor: Xen 4.1.0
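For comparison, roughly what virsh list --all does through the API; a
minimal sketch using the Python binding (both calls exist in libvirt 0.8.8):

    import libvirt

    conn = libvirt.open("xen:///")

    # Active domains are enumerated by numeric ID...
    for dom_id in conn.listDomainsID():
        print(conn.lookupByID(dom_id).name(), "running")

    # ...and inactive (defined but shut off) domains by name.
    for name in conn.listDefinedDomains():
        print(name, "shut off")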
12 years, 3 months
[libvirt-users] qemu-kvm fails on RHEL6
by sumit sengupta
Hi,
When I try to run a qemu-kvm command on RHEL6 (Linux kernel 2.6.32), I get the following errors, which I think are related to the tap devices in my setup. Any idea why that is?
bash$ LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-00000027 -uuid a93aeed9-15f7-4ded-b6b3-34c8d2c101a8 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000027.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -kernel /home/sumitsen/openstack/nova/instances/instance-00000027/kernel -initrd /home/sumitsen/openstack/nova/instances/instance-00000027/ramdisk -append root=/dev/vda console=ttyS0 -drive file=/home/sumitsen/openstack/nova/instances/instance-00000027/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:15:84:3e,bus=pci.0,addr=0x3 -chardev
file,id=charserial0,path=/home/sumitsen/openstack/nova/instances/instance-00000027/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/1
qemu-kvm: -netdev tap,fd=26,id=hostnet0: TUNGETIFF ioctl() failed: Bad file descriptor
TUNSETOFFLOAD ioctl() failed: Bad file descriptor
qemu-kvm: -append root=/dev/vda: could not open disk image console=ttyS0: No such file or directory
[sumitsen@sorrygate-dr ~]$ rpm -qa qemu-kvm
qemu-kvm-0.12.1.2-2.209.el6.x86_64
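For context, the fd=26 in the -netdev option refers to a tap file descriptor
that libvirt opened itself and passed down to qemu-kvm; re-running the logged
command by hand leaves that descriptor dangling, which is consistent with the
"Bad file descriptor" errors above. A hedged sketch of that fd-passing (the
"vnet0" name is an assumption; requires root):

    # Constants from <linux/if_tun.h>.
    import fcntl, os, struct

    TUNSETIFF = 0x400454ca
    IFF_TAP, IFF_NO_PI = 0x0002, 0x1000

    tap = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(tap, TUNSETIFF,
                struct.pack("16sH", b"vnet0", IFF_TAP | IFF_NO_PI))
    os.set_inheritable(tap, True)

    # A qemu-kvm child spawned from this process could then be given:
    print(f"-netdev tap,fd={tap},id=hostnet0")

Note also that the "could not open disk image console=ttyS0" error suggests
the -append argument was split by the shell on the re-run; quoting it as a
single argument (-append "root=/dev/vda console=ttyS0") avoids qemu-kvm
treating console=ttyS0 as a disk image.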
Let me know if you need any other info.
Thanks,
Sumit
12 years, 5 months
[libvirt-users] How does libvirt interact with KVM to create a VM?
by Dennis Chen
All,
These days I have been trying to understand the interaction between
libvirt and the KVM kernel module, e.g. kvm_intel.ko.
We know that the KVM kernel module exposes an entry point in the form of the
device file "/dev/kvm", which can be accessed by user-space applications for
control; for example, creating a VM with the KVM_CREATE_VM ioctl.
Now take the tool virsh, built on top of libvirt; we can create a guest
domain with a command like:
#virsh create guest.xml
Obviously, the above command creates a VM. But when I investigate the
libvirt code, I can't find any code that works with "/dev/kvm" to send the
KVM_CREATE_VM ioctl to the KVM kernel module. However, I did find that the
reference count of kvm_intel.ko differs before and after the virsh create
command is launched.
So my question is: how does libvirt interact with KVM to create a VM? Can
anybody give me some tips about that, e.g. the corresponding code in
libvirt?
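For concreteness, a minimal sketch of the KVM_CREATE_VM sequence described
above, issued directly against /dev/kvm (ioctl values from <linux/kvm.h>;
requires access to /dev/kvm):

    import fcntl, os

    KVM_GET_API_VERSION = 0xAE00  # _IO(KVMIO, 0x00)
    KVM_CREATE_VM       = 0xAE01  # _IO(KVMIO, 0x01)

    kvm = os.open("/dev/kvm", os.O_RDWR)
    print("KVM API version:", fcntl.ioctl(kvm, KVM_GET_API_VERSION, 0))

    # Returns a new file descriptor representing the created VM.
    vm_fd = fcntl.ioctl(kvm, KVM_CREATE_VM, 0)
    print("created VM fd:", vm_fd)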
BRs.
Dennis
12 years, 5 months
[libvirt-users] Query:Creating a Guest that requires memory and vCPUs across socket boundary
by sanjay
Hi! If a KVM-QEMU guest (spawned using libvirt) requires memory and CPU
cores that span socket boundaries, and it wants to avoid memory access
across NUMA nodes, what is the best way to proceed?
I came across the following statement during my search: “If a guest
requires eight virtual CPUs, as each NUMA node only has four physical CPUs,
a better utilization may be obtained by running a pair of four virtual CPU
guests and splitting the work between them, rather than using a single 8
CPU guest.”
Is this statement still valid? Can one use the ‘numatune’ XML tag with
‘auto’ placement to create a guest where vCPUs and memory are allocated in
an efficient manner across sockets, based on available memory?
If one allocates memory across sockets and passes the topology information
to the guest using the 'numa' tag, will it help in avoiding the NUMA penalty
(e.g. <numa><cell cpus='0' memory='256000'/><cell cpus='1' memory='512000'/></numa>)?
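For illustration, a hedged sketch of the two mechanisms mentioned above as a
domain XML fragment (memory is in KiB; the sizes and CPU ranges are purely
illustrative, and the fragment would be embedded in a full <domain>
definition before being passed to the Python binding's defineXML()):

    # <numatune> requests automatic placement of the guest's memory;
    # the <numa> cells expose an explicit topology to the guest.
    numa_fragment = """
    <numatune>
      <memory mode='strict' placement='auto'/>
    </numatune>
    <cpu>
      <numa>
        <cell cpus='0-3' memory='256000'/>
        <cell cpus='4-7' memory='512000'/>
      </numa>
    </cpu>
    """
    print(numa_fragment)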
Any advice on this subject will be appreciated.
--
Regards,
Sanjay
12 years, 5 months
[libvirt-users] How to start a storage pool using libvirt?
by Ananth
Hi, what is the libvirt API call to start an inactive storage pool created
using the Python binding storagePoolDefineXML()?
There seems to be a virsh command for this, pool-start <pool-name>, but I am
not sure what the corresponding API call is. Or is there a way to start the
pool while it is being defined?
Does the autostart flag only take effect when the libvirtd service is
restarted?
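A minimal sketch, assuming a simple directory pool (name and path are
illustrative):

    import libvirt

    pool_xml = """
    <pool type='dir'>
      <name>examplepool</name>
      <target>
        <path>/var/lib/libvirt/images/examplepool</path>
      </target>
    </pool>
    """

    conn = libvirt.open("qemu:///system")

    # storagePoolDefineXML() returns a virStoragePool object; create() on
    # that object is the API counterpart of `virsh pool-start`.
    pool = conn.storagePoolDefineXML(pool_xml, 0)
    pool.create()

    # Mark the pool for automatic startup; autostarted pools are brought up
    # when libvirtd starts, so no restart is needed for the pool started above.
    pool.setAutostart(1)

(storagePoolCreateXML(), by contrast, creates and starts a transient pool in
a single step.)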
Thank you
--
Regards
Ananth
12 years, 5 months
[libvirt-users] Storage Pools & nodedev-create specific scsi_host#?
by mdecheser@comcast.net
Hello everyone,
Current host build is RHEL 6.2, soon to be upgrading.
I'm in the process of mapping out a KVM/RHEV topology. I have questions about the landscape of storage pools and the instantiation of specific scsi_host IDs using virsh nodedev-create and some magic XML definitions. I'm grouping these questions together because the answer to one may impact the other.
High-level goals:
- fence off I/O by providing NPIV paths to VMs individually
- provide each VM with a pair of paths to the same storage for redundancy
Problem 1 - Storage Pools:
I've successfully created a storage pool by starting with an initial SCSI host based off one of my virtual HBAs. I see my LUNs. I performed a virsh pool-dumpxml and captured the configuration of the storage pool using just one SCSI host. I then modified the XML file to include a 2nd SCSI host, stopped & deleted the original storage pool, then created & started a new storage pool. In the GUI, I am able to see the same LUNs, listed twice (once for each path).
Here's my XML file:
<pool type='scsi'>
  <name>VM1POOL</name>
  <uuid>f0465f3f-3c9c-766c-3c96-678a99ec81cb</uuid>
  <capacity>144813588480</capacity>
  <allocation>144813588480</allocation>
  <available>0</available>
  <source>
    <adapter name='host5'/>
  </source>
  <source>
    <adapter name='host6'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
Question 1 - Is this the correct approach for presenting a VM with SAN storage in a redundant fashion? Or should I instead use the mpath approach and find some way with the storage pool XML file to restrict which mpath devices are utilized (thereby restricting which LUNs the storage pool sees?).
It seems my current approach presents each LUN twice; each LUN gets two different volume identifiers, one per path:
UNIT:0:0:13
UNIT:0:1:13
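For reference, the stop/delete/recreate cycle described above can also be
driven through the Python binding; a minimal sketch (the pool name matches
the example, the XML file name is hypothetical):

    import libvirt

    conn = libvirt.open("qemu:///system")

    old = conn.storagePoolLookupByName("VM1POOL")
    old.destroy()    # stop the running pool (virsh pool-destroy)
    old.undefine()   # remove its definition (virsh pool-undefine)

    with open("vm1pool.xml") as f:   # the edited XML shown above
        pool = conn.storagePoolDefineXML(f.read(), 0)
    pool.create()    # start it again (virsh pool-start)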
Problem 2 - Specifying scsi_host# in a device XML file
As I mentioned, I've successfully created a virtual HBA using virsh nodedev-create and an XML config. Here's a sample config:
[root@host ~]# cat vm1vhba1.xml
<device>
  <parent>scsi_host3</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>a97a3b9220010002</wwnn>
      <wwpn>a97a3b9220010001</wwpn>
    </capability>
    <capability type='vport_ops'/>
  </capability>
</device>
Pretty straightforward. The parent is physical SCSI host 3. The resulting SCSI host is 5. This is one of the SCSI hosts used in the storage pool above.
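(Aside: the same vHBA creation can be driven from the Python binding; a
minimal sketch, reusing the XML file above:)

    import libvirt

    conn = libvirt.open("qemu:///system")

    with open("vm1vhba1.xml") as f:
        vhba = conn.nodeDeviceCreateXML(f.read(), 0)

    # The resulting host number (scsi_host5 in the example above) is
    # reported back by name().
    print("created node device:", vhba.name())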
So I wanted to test what would happen to my VM if I ran virsh nodedev-destroy scsi_host5, essentially testing the redundancy of the paths to the VM. The device was successfully deleted; however, a few other things happened, some expected and some unexpected.
- Since I only associated one of the two LUN units from my storage pool with the VM, I expected this to cause the VM to fail. It did fail, but in a headless fashion. Unexpected. The VM is still up and running in memory. Any effort to write to the disks results in failures like "read-only filesystem". Expected.
- The storage pool entered an inactive state. Unexpected. Efforts to restart the storage pool using virsh pool-start failed because one of the scsi_host devices was not present. Expected.
I tried to recreate the device using virsh nodedev-create vm1vhba1.xml, and the device was created successfully but as scsi_host9 (the next available scsi_host beyond the original scsi_host5).
So, unless I can recreate this scsi_host5, I won't be able to restart my storage pool and restore I/O to my VM.
Question 2: Is there a definition I can use in the XML config to specify the resulting scsi_host #?
I've been told that RHEL 6.3 is supposed to provide some methodology for auto-instantiating my virtual HBAs upon the reboot of a host, so that the VMs on the host will see their storage and can start automatically on boot. I have a feeling that the answer to my question may be brushing up against this functionality.
I am also assuming that if it's possible to specify the scsi_host # in my XML config, I'll be able to restart my storage pool successfully, which will enable my VM to reestablish the connection to its storage.
Thanks in advance,
MD
12 years, 5 months
[libvirt-users] Speed of physical NIC?
by Marcel Müller
Hello everyone,
I'm trying to use libvirt to calculate some statistics about current
guest utilization, but I have some problems with regard to networking.
I can sample network throughput by using the interfaceStats API over a
predefined period of time. But to calculate the percentage of the bandwidth
used, I need to know what kind of network the interface is connected to
(100 Mbit/s, 1000 Mbit/s, ...). Is there any way to get this information
through libvirt, or is there any addition planned for that?
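For reference, a minimal sketch of the sampling approach described above,
via the Python binding (the domain and interface names are assumptions):

    import time
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")   # hypothetical domain name
    iface = "vnet0"                     # hypothetical guest tap device

    # interfaceStats() returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    # tx_bytes, tx_packets, tx_errs, tx_drop).
    rx0 = dom.interfaceStats(iface)[0]
    time.sleep(5)
    rx1 = dom.interfaceStats(iface)[0]

    mbit_per_s = (rx1 - rx0) * 8 / 5 / 1e6
    print(f"~{mbit_per_s:.2f} Mbit/s inbound over the sample window")

Turning this into a utilization percentage still requires the link speed,
which is the missing piece asked about above.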
Thank you very much in advance,
Best Regards,
Marcel
12 years, 6 months
[libvirt-users] Trouble connecting to XenServer HyperVisor with Java bindings
by Nick Mathews
Hello,
I am trying to use the libvirt Java bindings (version 0.4.7) with libvirt
version 0.9.12 to connect to a XenServer hypervisor. virsh is able to
connect to my XenServer, but when I try to do the same thing in Java, it
won't connect.
Here is the debug output:
2012-06-26 19:48:52.259+0000: 26051: info : libvirt version: 0.9.12,
package: 1.fc16 (Unknown, 2012-06-26-11:43:53, flynx)
2012-06-26 19:48:52.259+0000: 26051: warning : virLogParseOutputs:993 :
Ignoring invalid log output setting.
WARNING: no socket to connect to
2012-06-26 19:48:52.275+0000: 26051: debug : virInitialize:414 : register
drivers
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4dad5a0 name=Test
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterDriver:799 :
registering Test as driver 0
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterNetworkDriver:592 :
registering Test as network driver 0
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterInterfaceDriver:625
: registering Test as interface driver 0
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterStorageDriver:658 :
registering Test as storage driver 0
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterDeviceMonitor:691 :
registering Test as device driver 0
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterSecretDriver:724 :
registering Test as secret driver 0
2012-06-26 19:48:52.277+0000: 26051: debug : virRegisterNWFilterDriver:757
: registering Test as network filter driver 0
2012-06-26 19:48:52.279+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4dae100 name=Xen
2012-06-26 19:48:52.280+0000: 26051: debug : virRegisterDriver:799 :
registering Xen as driver 1
2012-06-26 19:48:52.283+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4dae960 name=OPENVZ
2012-06-26 19:48:52.283+0000: 26051: debug : virRegisterDriver:799 :
registering OPENVZ as driver 2
2012-06-26 19:48:52.283+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4daec00 name=VMWARE
2012-06-26 19:48:52.283+0000: 26051: debug : virRegisterDriver:799 :
registering VMWARE as driver 3
2012-06-26 19:48:52.283+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4dae5a0 name=PHYP
2012-06-26 19:48:52.284+0000: 26051: debug : virRegisterDriver:799 :
registering PHYP as driver 4
2012-06-26 19:48:52.284+0000: 26051: debug : virRegisterStorageDriver:658 :
registering PHYP as storage driver 1
2012-06-26 19:48:52.285+0000: 26051: debug : virRegisterInterfaceDriver:625
: registering PHYP as interface driver 1
2012-06-26 19:48:52.286+0000: 26051: debug : vboxRegister:137 :
VBoxCGlueInit failed, using dummy driver
2012-06-26 19:48:52.286+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4daeea0 name=VBOX
2012-06-26 19:48:52.287+0000: 26051: debug : virRegisterDriver:799 :
registering VBOX as driver 5
2012-06-26 19:48:52.287+0000: 26051: debug : virRegisterNetworkDriver:592 :
registering VBOX as network driver 1
2012-06-26 19:48:52.287+0000: 26051: debug : virRegisterStorageDriver:658 :
registering VBOX as storage driver 2
2012-06-26 19:48:52.290+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4db0a60 name=ESX
2012-06-26 19:48:52.290+0000: 26051: debug : virRegisterDriver:799 :
registering ESX as driver 6
2012-06-26 19:48:52.291+0000: 26051: debug : virRegisterInterfaceDriver:625
: registering ESX as interface driver 2
2012-06-26 19:48:52.292+0000: 26051: debug : virRegisterNetworkDriver:592 :
registering ESX as network driver 2
2012-06-26 19:48:52.293+0000: 26051: debug : virRegisterStorageDriver:658 :
registering ESX as storage driver 3
2012-06-26 19:48:52.294+0000: 26051: debug : virRegisterDeviceMonitor:691 :
registering ESX as device driver 1
2012-06-26 19:48:52.294+0000: 26051: debug : virRegisterSecretDriver:724 :
registering ESX as secret driver 1
2012-06-26 19:48:52.294+0000: 26051: debug : virRegisterNWFilterDriver:757
: registering ESX as network filter driver 1
2012-06-26 19:48:52.296+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4db0f40 name=Hyper-V
2012-06-26 19:48:52.297+0000: 26051: debug : virRegisterDriver:799 :
registering Hyper-V as driver 7
2012-06-26 19:48:52.297+0000: 26051: debug : virRegisterInterfaceDriver:625
: registering Hyper-V as interface driver 3
2012-06-26 19:48:52.298+0000: 26051: debug : virRegisterNetworkDriver:592 :
registering Hyper-V as network driver 3
2012-06-26 19:48:52.299+0000: 26051: debug : virRegisterStorageDriver:658 :
registering Hyper-V as storage driver 4
2012-06-26 19:48:52.299+0000: 26051: debug : virRegisterDeviceMonitor:691 :
registering Hyper-V as device driver 2
2012-06-26 19:48:52.299+0000: 26051: debug : virRegisterSecretDriver:724 :
registering Hyper-V as secret driver 2
2012-06-26 19:48:52.300+0000: 26051: debug : virRegisterNWFilterDriver:757
: registering Hyper-V as network filter driver 2
2012-06-26 19:48:52.300+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4db07c0 name=XenAPI
2012-06-26 19:48:52.301+0000: 26051: debug : virRegisterDriver:799 :
registering XenAPI as driver 8
2012-06-26 19:48:52.305+0000: 26051: debug : virRegisterDriver:775 :
driver=0x4dada80 name=remote
2012-06-26 19:48:52.305+0000: 26051: debug : virRegisterDriver:799 :
registering remote as driver 9
2012-06-26 19:48:52.305+0000: 26051: debug : virRegisterNetworkDriver:592 :
registering remote as network driver 4
2012-06-26 19:48:52.306+0000: 26051: debug : virRegisterInterfaceDriver:625
: registering remote as interface driver 4
2012-06-26 19:48:52.306+0000: 26051: debug : virRegisterStorageDriver:658 :
registering remote as storage driver 5
2012-06-26 19:48:52.306+0000: 26051: debug : virRegisterDeviceMonitor:691 :
registering remote as device driver 3
2012-06-26 19:48:52.306+0000: 26051: debug : virRegisterSecretDriver:724 :
registering remote as secret driver 3
2012-06-26 19:48:52.306+0000: 26051: debug : virRegisterNWFilterDriver:757
: registering remote as network filter driver 3
2012-06-26 19:48:52.395+0000: 26051: debug : virConnectOpenAuth:1455 :
name=xenapi://root@192.168.1.6?no_verify=1, auth=0x8beed7c0, flags=0
2012-06-26 19:48:52.643+0000: 26051: debug : virConnectGetConfigFile:1008 :
Loading config file '/etc/libvirt/libvirt.conf'
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1182 : name "xenapi://
root@192.168.1.6?no_verify=1" to URI components:
scheme xenapi
server 192.168.1.6
user root
port 0
path (null)
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 0
(Test) ...
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1232 : driver 0 Test
returned DECLINED
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 1
(Xen) ...
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1232 : driver 1 Xen
returned DECLINED
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 2
(OPENVZ) ...
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1232 : driver 2 OPENVZ
returned DECLINED
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 3
(VMWARE) ...
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1232 : driver 3 VMWARE
returned DECLINED
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 4
(PHYP) ...
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1232 : driver 4 PHYP
returned DECLINED
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 5
(VBOX) ...
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1232 : driver 5 VBOX
returned DECLINED
2012-06-26 19:48:52.647+0000: 26051: debug : do_open:1226 : trying driver 6
(ESX) ...
2012-06-26 19:48:52.648+0000: 26051: debug : do_open:1232 : driver 6 ESX
returned DECLINED
2012-06-26 19:48:52.648+0000: 26051: debug : do_open:1226 : trying driver 7
(Hyper-V) ...
2012-06-26 19:48:52.648+0000: 26051: debug : do_open:1232 : driver 7
Hyper-V returned DECLINED
2012-06-26 19:48:52.649+0000: 26051: debug : do_open:1226 : trying driver 8
(XenAPI) ...
2012-06-26 19:48:52.649+0000: 26051: debug : virAuthGetConfigFilePath:48 :
Determining auth config file path
2012-06-26 19:48:52.651+0000: 26051: debug : virAuthGetConfigFilePath:74 :
Checking for readability of '/root/.libvirt/auth.conf'
2012-06-26 19:48:52.652+0000: 26051: debug : virAuthGetConfigFilePath:83 :
Checking for readability of '/etc/libvirt/auth.conf'
2012-06-26 19:48:52.653+0000: 26051: debug : virAuthGetConfigFilePath:92 :
Using auth file '(null)'
2012-06-26 19:48:53.279+0000: 26051: debug : do_open:1232 : driver 8 XenAPI
returned SUCCESS
2012-06-26 19:48:53.279+0000: 26051: debug : do_open:1254 : network driver
0 Test returned DECLINED
2012-06-26 19:48:53.279+0000: 26051: debug : do_open:1254 : network driver
1 VBOX returned DECLINED
2012-06-26 19:48:53.279+0000: 26051: debug : do_open:1254 : network driver
2 ESX returned DECLINED
2012-06-26 19:48:53.279+0000: 26051: debug : do_open:1254 : network driver
3 Hyper-V returned DECLINED
2012-06-26 19:48:53.279+0000: 26051: debug : doRemoteOpen:542 : proceeding
with name = xenapi://
2012-06-26 19:48:53.280+0000: 26051: debug : doRemoteOpen:552 : Connecting
with transport 0
2012-06-26 19:48:53.280+0000: 26051: debug :
virNetTLSContextLocateCredentials:753 : pkipath=(null) isServer=0
tryUserPkiPath=0
2012-06-26 19:48:53.280+0000: 26051: debug :
virNetTLSContextLocateCredentials:825 : Using default TLS CA certificate
path
2012-06-26 19:48:53.280+0000: 26051: debug :
virNetTLSContextLocateCredentials:831 : Using default TLS CA revocation
list path
2012-06-26 19:48:53.280+0000: 26051: debug :
virNetTLSContextLocateCredentials:837 : Using default TLS key/certificate
path
2012-06-26 19:48:53.306+0000: 26051: debug : virNetClientClose:521 :
client=(nil)
2012-06-26 19:48:53.306+0000: 26051: debug : do_open:1254 : network driver
4 remote returned ERROR
2012-06-26 19:48:53.306+0000: 26051: debug : do_open:1269 : interface
driver 0 Test returned DECLINED
2012-06-26 19:48:53.306+0000: 26051: debug : do_open:1269 : interface
driver 1 PHYP returned DECLINED
2012-06-26 19:48:53.307+0000: 26051: debug : do_open:1269 : interface
driver 2 ESX returned DECLINED
2012-06-26 19:48:53.307+0000: 26051: debug : do_open:1269 : interface
driver 3 Hyper-V returned DECLINED
2012-06-26 19:48:53.307+0000: 26051: debug : doRemoteOpen:542 : proceeding
with name = xenapi://
2012-06-26 19:48:53.307+0000: 26051: debug : doRemoteOpen:552 : Connecting
with transport 0
2012-06-26 19:48:53.307+0000: 26051: debug :
virNetTLSContextLocateCredentials:753 : pkipath=(null) isServer=0
tryUserPkiPath=0
2012-06-26 19:48:53.307+0000: 26051: debug :
virNetTLSContextLocateCredentials:825 : Using default TLS CA certificate
path
2012-06-26 19:48:53.307+0000: 26051: debug :
virNetTLSContextLocateCredentials:831 : Using default TLS CA revocation
list path
2012-06-26 19:48:53.307+0000: 26051: debug :
virNetTLSContextLocateCredentials:837 : Using default TLS key/certificate
path
2012-06-26 19:48:53.308+0000: 26051: debug : virNetClientClose:521 :
client=(nil)
2012-06-26 19:48:53.308+0000: 26051: debug : do_open:1269 : interface
driver 4 remote returned ERROR
2012-06-26 19:48:53.308+0000: 26051: debug : do_open:1285 : storage driver
0 Test returned DECLINED
2012-06-26 19:48:53.308+0000: 26051: debug : do_open:1285 : storage driver
1 PHYP returned DECLINED
2012-06-26 19:48:53.308+0000: 26051: debug : do_open:1285 : storage driver
2 VBOX returned DECLINED
2012-06-26 19:48:53.308+0000: 26051: debug : do_open:1285 : storage driver
3 ESX returned DECLINED
2012-06-26 19:48:53.308+0000: 26051: debug : do_open:1285 : storage driver
4 Hyper-V returned DECLINED
2012-06-26 19:48:53.308+0000: 26051: debug : doRemoteOpen:542 : proceeding
with name = xenapi://
2012-06-26 19:48:53.308+0000: 26051: debug : doRemoteOpen:552 : Connecting
with transport 0
2012-06-26 19:48:53.308+0000: 26051: debug :
virNetTLSContextLocateCredentials:753 : pkipath=(null) isServer=0
tryUserPkiPath=0
2012-06-26 19:48:53.308+0000: 26051: debug :
virNetTLSContextLocateCredentials:825 : Using default TLS CA certificate
path
2012-06-26 19:48:53.308+0000: 26051: debug :
virNetTLSContextLocateCredentials:831 : Using default TLS CA revocation
list path
2012-06-26 19:48:53.308+0000: 26051: debug :
virNetTLSContextLocateCredentials:837 : Using default TLS key/certificate
path
2012-06-26 19:48:53.309+0000: 26051: debug : virNetClientClose:521 :
client=(nil)
2012-06-26 19:48:53.309+0000: 26051: debug : do_open:1285 : storage driver
5 remote returned ERROR
2012-06-26 19:48:53.309+0000: 26051: debug : do_open:1301 : node driver 0
Test returned DECLINED
2012-06-26 19:48:53.309+0000: 26051: debug : do_open:1301 : node driver 1
ESX returned DECLINED
2012-06-26 19:48:53.309+0000: 26051: debug : do_open:1301 : node driver 2
Hyper-V returned DECLINED
2012-06-26 19:48:53.309+0000: 26051: debug : doRemoteOpen:542 : proceeding
with name = xenapi://
2012-06-26 19:48:53.309+0000: 26051: debug : doRemoteOpen:552 : Connecting
with transport 0
2012-06-26 19:48:53.309+0000: 26051: debug :
virNetTLSContextLocateCredentials:753 : pkipath=(null) isServer=0
tryUserPkiPath=0
2012-06-26 19:48:53.309+0000: 26051: debug :
virNetTLSContextLocateCredentials:825 : Using default TLS CA certificate
path
2012-06-26 19:48:53.309+0000: 26051: debug :
virNetTLSContextLocateCredentials:831 : Using default TLS CA revocation
list path
2012-06-26 19:48:53.309+0000: 26051: debug :
virNetTLSContextLocateCredentials:837 : Using default TLS key/certificate
path
2012-06-26 19:48:53.309+0000: 26051: debug : virNetClientClose:521 :
client=(nil)
2012-06-26 19:48:53.310+0000: 26051: debug : do_open:1301 : node driver 3
remote returned ERROR
2012-06-26 19:48:53.310+0000: 26051: debug : do_open:1317 : secret driver 0
Test returned DECLINED
2012-06-26 19:48:53.310+0000: 26051: debug : do_open:1317 : secret driver 1
ESX returned DECLINED
2012-06-26 19:48:53.310+0000: 26051: debug : do_open:1317 : secret driver 2
Hyper-V returned DECLINED
2012-06-26 19:48:53.310+0000: 26051: debug : doRemoteOpen:542 : proceeding
with name = xenapi://
2012-06-26 19:48:53.310+0000: 26051: debug : doRemoteOpen:552 : Connecting
with transport 0
2012-06-26 19:48:53.310+0000: 26051: debug :
virNetTLSContextLocateCredentials:753 : pkipath=(null) isServer=0
tryUserPkiPath=0
2012-06-26 19:48:53.310+0000: 26051: debug :
virNetTLSContextLocateCredentials:825 : Using default TLS CA certificate
path
2012-06-26 19:48:53.310+0000: 26051: debug :
virNetTLSContextLocateCredentials:831 : Using default TLS CA revocation
list path
2012-06-26 19:48:53.311+0000: 26051: debug :
virNetTLSContextLocateCredentials:837 : Using default TLS key/certificate
path
2012-06-26 19:48:53.311+0000: 26051: debug : virNetClientClose:521 :
client=(nil)
2012-06-26 19:48:53.311+0000: 26051: debug : do_open:1317 : secret driver 3
remote returned ERROR
2012-06-26 19:48:53.311+0000: 26051: debug : do_open:1333 : nwfilter driver
0 Test returned DECLINED
2012-06-26 19:48:53.311+0000: 26051: debug : do_open:1333 : nwfilter driver
1 ESX returned DECLINED
2012-06-26 19:48:53.311+0000: 26051: debug : do_open:1333 : nwfilter driver
2 Hyper-V returned DECLINED
2012-06-26 19:48:53.311+0000: 26051: debug : doRemoteOpen:542 : proceeding
with name = xenapi://
2012-06-26 19:48:53.311+0000: 26051: debug : doRemoteOpen:552 : Connecting
with transport 0
2012-06-26 19:48:53.311+0000: 26051: debug :
virNetTLSContextLocateCredentials:753 : pkipath=(null) isServer=0
tryUserPkiPath=0
2012-06-26 19:48:53.311+0000: 26051: debug :
virNetTLSContextLocateCredentials:825 : Using default TLS CA certificate
path
2012-06-26 19:48:53.312+0000: 26051: debug :
virNetTLSContextLocateCredentials:831 : Using default TLS CA revocation
list path
2012-06-26 19:48:53.312+0000: 26051: debug :
virNetTLSContextLocateCredentials:837 : Using default TLS key/certificate
path
2012-06-26 19:48:53.313+0000: 26051: debug : virNetClientClose:521 :
client=(nil)
2012-06-26 19:48:53.313+0000: 26051: debug : do_open:1333 : nwfilter driver
3 remote returned ERROR
Unable to connect: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No
such file or directory
Even though the output mentions not finding a CA certificate, that's not
the problem.
I have a custom ConnectAuth class so the user can provide the password
before trying to connect and isn't prompted for it. The problem occurs when
trying to create the Connect object.
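(For comparison, the equivalent authenticated open through the Python
binding; the Java ConnectAuth subclass plays the role of this callback. The
URI and username are taken from the debug log; the password is a
placeholder:)

    import libvirt

    def request_cred(credentials, user_data):
        for cred in credentials:
            if cred[0] == libvirt.VIR_CRED_AUTHNAME:
                cred[4] = "root"
            elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
                cred[4] = "password"  # placeholder
        return 0

    auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
            request_cred, None]
    conn = libvirt.openAuth("xenapi://root@192.168.1.6?no_verify=1", auth, 0)
    print("connected:", conn.getURI())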
--
Thanks,
Nick Mathews
12 years, 6 months
[libvirt-users] manipulating extra features which hypervisors support
by Rashid Zamani
Hello everybody,
I am wondering whether the libvirt community implements features specific
to only one supported hypervisor.
I am trying to set the host name and DNS from within the libvirt XML file
when using the user-mode network option (SLiRP).
Right now I am using <qemu:commandline>, along the lines of the sketch below.
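(A hedged sketch of that pass-through; the id, host name, and DNS values are
illustrative, and hostname=/dns= are QEMU user-mode (SLiRP) options:)

    # A domain XML fragment using the qemu: namespace; the xmlns declaration
    # on <domain> is required for <qemu:commandline> to be accepted.
    qemu_passthrough = """
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <!-- ... rest of the domain definition ... -->
      <qemu:commandline>
        <qemu:arg value='-netdev'/>
        <qemu:arg value='user,id=hostnet0,hostname=myguest,dns=10.0.2.3'/>
      </qemu:commandline>
    </domain>
    """
    print(qemu_passthrough)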
My concern is: if I want to use Xen or another hypervisor, how should I do
the same?
To be honest, I have not used any hypervisor other than QEMU yet, but I am
trying to write code which can be used with any hypervisor libvirt supports.
And I am not even sure whether it is possible to set the DNS address and so
on the way they are settable in QEMU.
Therefore, I was wondering whether there is any standard XML element in
<interface> which I could use to solve my problem (setting DNS, host name,
and so on).
On the IRC channel I received a reply that this feature has not been
implemented yet and that you are going to support it soon.
I am interested in contributing and joining the community to implement this
feature, if possible.
I would appreciate being pointed in the right direction; thank you in
advance.
Zamani
12 years, 6 months