[libvirt] [PATCH 2/2] RPM spec file updated with glusterfs dependency
by Harshavardhana
Add a new dependency on the glusterfs RPM.
---
libvirt.spec.in | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index 1e8a8ef..ccef3a5 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -86,6 +86,8 @@ BuildRequires: util-linux
# For showmount in FS driver (netfs discovery)
BuildRequires: nfs-utils
Requires: nfs-utils
+# For glusterfs
+Requires: glusterfs-client >= 2.0.2
%endif
%if %{with_qemu}
# From QEMU RPMs
--
1.6.0.6
[libvirt] [RFC/Experimental]: Tunnelled migration
by Chris Lalancette
All,
Attached is the current version of the tunnelled migration patch, based
upon danpb's generic datastream work. In order to use this work, you must first
grab danpb's data-streams git branch here:
http://gitorious.org/~berrange/libvirt/staging
and then apply this patch on top.
In some basic testing, this seems to work fine for me, although I have not given
it a difficult scenario nor measured CPU utilization with these patches in place.
DanB, these patches take a slightly different approach than the one you and I
discussed yesterday on IRC. Just to recap, you suggested a new version of
virMigratePrepare (called virMigratePrepareTunnel) that would take a datastream
as one of its arguments and properly set up the datastream during the prepare
step. Unless I'm missing something (which is entirely possible), this
would also require passing that same datastream into the perform and finish
stages, meaning that I'd essentially have an all-new migration protocol, version 3.
To try to avoid that, during the prepare step I store the port that we used to
start the listening qemu in a new field in the virDomainObj structure. Then
during the perform step, I create a datastream on the destination and run a new
RPC function called virDomainMigratePrepareTunnel. This looks up that port,
associates it with the current stream, and returns to the caller. Then
the source side just does virStreamSend for all the data, and we have tunnelled
migration.
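For illustration, the source-side perform step then boils down to a send loop
like this (a minimal sketch against the datastream API from danpb's branch;
the PrepareTunnel arguments beyond the stream are elided since this patch
defines them, and error/cleanup paths are trimmed):

    /* dconn: connection to the destination libvirtd
     * qemufd: local socket carrying migration data from the source qemu */
    virStreamPtr st = virStreamNew(dconn, 0);
    char buf[64 * 1024];
    ssize_t nread;

    if (virDomainMigratePrepareTunnel(dconn, st /* , ... per this patch */) < 0)
        goto abort;

    /* pump the migration data over the existing libvirtd connection */
    while ((nread = read(qemufd, buf, sizeof(buf))) > 0) {
        if (virStreamSend(st, buf, nread) < 0)
            goto abort;
    }

    if (virStreamFinish(st) < 0)
        goto abort;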
TODO:
- More testing, especially under worst-case scenarios (a VM constantly
changing its memory during migration)
- CPU utilization testing to make sure that we aren't using a lot of CPU
time doing this
- Wall-clock testing
- Switch over to using Unix domain sockets instead of localhost TCP
migration. With a patch I put into upstream qemu (which is now in F-12), we can
totally get rid of scanning localhost ports to find a free one and just
use Unix domain sockets, as sketched below. That should make the whole thing more robust.
--
Chris Lalancette
[libvirt] test driver and virConnectOpenAuth()
by Bryan Kearney
I am trying to test a call into virConnectOpenAuth(). How can I configure
the test driver to accept auth calls on
"test+tcp://localhost/default"?
Thanks!
-- bk
[libvirt] [RFC] secrets API, was Re: [PATCH 0/9] Add support for (qcow*) volume encryption
by Miloslav Trmac
Hello,
based on your comments, here is a proposal for a "secret management" API.
Rather than adding explicit accessors for attributes of secrets and hard-coding the "secrets are related to storage volumes" association in the API, the proposed API uses XML to manipulate the association as well as other attributes, as is done in other areas of libvirt.
The user would allocate an ID for the secret using virSecretAllocateID(), then set attributes of the secret using XML, e.g.
<secret ephemeral='no' private='yes'>
<volume>/var/lib/libvirt/images/mail.img</volume>
<description>LUKS passphrase for the main hard drive of our mail server</description>
</secret>
Then, the secret value can be either generated and stored by libvirt, or supplied by the user using virSecretSetValue().
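A hypothetical client flow under this proposal might look as follows (a sketch
only: virSecretAllocateID() and virSecretSetValue() are the proposed calls;
virSecretSetXML() is my placeholder name for the XML-setting call, which this
proposal describes but does not name):

    char *id = virSecretAllocateID(conn);           /* proposed call */
    virSecretSetXML(conn, id, secret_xml);          /* placeholder: set the XML above */
    virSecretSetValue(conn, id, value, value_len);  /* proposed call */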
A simple API is provided for enumeration of all secrets. Very large deployments manage secret IDs automatically, so it probably is not necessary to provide specialized lookup functions (e.g. allowing the volume key -> secret ID lookup in less than O(number of secrets)).
The <encryption> element used in volume and domain specifications remains, but it never contains any secrets directly, only something like
<secret type='passphrase' secret_id='c1f11a6d-8c5d-4a3e-ac7a-4e171c5e0d4a' />
More detailed documentation is in the patch.
Does that look OK?
Thank you,
Mirek
Re: [libvirt] [PATCH 0/9] Add support for (qcow*) volume encryption
by Miloslav Trmac
Hello,
----- "Daniel P. Berrange" <berrange(a)redhat.com> wrote:
> On Fri, Jul 24, 2009 at 07:25:54AM -0400, Miloslav Trmac wrote:
> > A client in this case is the central, fully trusted, management
> > system (e.g. oVirt), there is no need to protect against it.
> > A more likely flow is
> >
> > MGMT client (no knowledge of secrets)
> > |
> > v
> > MGMT server + key server (integrated or separate but cooperating)
> > |
> > v
> > libvirt daemon
> > |
> > v
> > qemu
> >
> > > What I am suggesting is that libvirt daemon should communicate
> > > with the key server directly in all cases, and take the client
> > > out of the loop. The client should merely indicate whether it
> > > wants encryption or not, and never be able to directly access
> > > any key material itself. With a direct trust relationship
> > > between the key server and each libvirtd daemon, you do now
> > > have a guarantee that keys are only ever used on the node for
> > > which they are intended. You also have the additional guarantee
> > > that no libvirt client can ever see any key secrets or passphrases
> > > since it has been taken completely out of the loop.
> >
> > As far as I understand it, the whole point of virtual machine
> > encryption is that the nodes are _not_ trusted, and different
> > encryption keys protect data on different nodes.
>
> I did not mean to imply that libvirtd on a node should have
> access to *all* secrets. Each libvirtd daemon has its own
> identity, and when talking to a keys server it would authenticate
> and the key server would only allow it access to some sub-set
> of keys. ie, only the keys necessary for VMs it needs to run.
<snip>
> If you have each libvirtd requesting secrets directly from the
> keystore, at the time it starts a guest, then should an admin
> issue a migrate command manually, the destination libvirtd
> would still be unable to access the secrets of the incoming
> VM, since the keystore will not have been configured to allow
> it access.
>
> We also have to bear in mind how the MGMT server communicates
> with the libvirt daemon. One likely option is using a messaging
> service, which offers authenticity but not secrecy. ie, libvirt
> receiving a request off the message bus can be sure the request
> came from the mgmt server, but cannot be sure it wasn't seen
> by other users during transport. Thus by including secrets in
> the XML, you could be exposing them to the message bus administrator.
Is this likely to happen? No such transport is currently implemented AFAICS, and just using TLS is much simpler - it is already implemented, and gives you authentication of both sides almost for free. Why would such a messaging service be necessary for the client<->libvirtd connection, but not the libvirtd<->key server connection?
> Taking your diagram, I think I would generalize it still further
> to allow for mgmt server to be optional separate from key server,
> and to introduce a "message bus" between MGMT server & libvirt.
> Really pushing the limits of ASCII art...
>
> MGMT client
> |
> V
> MGMT server <---> Key server
> | ^
> V |
> message bus |
> | |
> V |
> libvirt daemon <----/
> |
> V
> QEMU
This makes sense if you want to avoid storing any secrets locally on the node at all costs, but the strong coupling of the libvirtd daemon and the key server is a significant disadvantage: not only would the key server and libvirtd both have to implement the same protocol for transferring secrets, but the MGMT server, key server and libvirtd would all have to share the same concept of "identity" and the same method of authentication - and the MGMT server would have to manipulate the libvirt accounts on the key server.
There are a few products that offer a company-wide key server, but AFAIK there is no standardized protocol for transferring the secrets yet (the KMIP committee was formed only a few months ago), and not even a proposal for account management on the key server. We don't know what the industry standard will look like - or whether there will be one; right now, any key server used in connection with libvirt-managed nodes would have to implement a libvirt-specified interface (in addition to any interface it may provide to other clients).
It is much simpler to use the already implemented libvirt remote interface, and give the MGMT server implementors a free hand in deciding which key server, if any, they will use, and to avoid the non-trivial effort of specifying the specialized libvirtd<->key server protocol and implementing both ends of it.
If secrets related to auto-start domains are stored locally on the node, all secrets needed by libvirtd can be anticipated by the MGMT server, which can provide them before starting the operation. This makes the libvirtd->key server connection unnecessary.
The API proposal that will follow adds a secret management interface to libvirtd. The API allows libvirtd to use a persistent local secret store, as well as an external key server as described above. In addition, it also allows a "fully managed" mode, where the storage and provision of secrets is the responsibility of the MGMT server. This is accomplished by giving each secret two attributes: "private", which prohibits libvirtd from divulging the secret to any client (this was done in the previous implementation in XML, but now that secrets are out-of-band, making secrets private does not break any round-trip XML editing), and "ephemeral", which prohibits libvirtd from storing the secret persistently. The MGMT server would send the necessary secrets before any operation that needs them, and delete them from the node when they are no longer necessary.
Looking at your use cases:
> - A node can only see secrets for VMs it is configured to run
Same in "fully managed" mode.
> - MGMT server does not need to ever see the secrets itself. It
> merely controls access for which nodes can see them, and can
> request generation of new secrets
The MGMT server needs to see the secrets in "fully managed" mode. The MGMT server can actually see them even if libvirtd talks to the key server directly, because the key server is allowed to manipulate permissions of the secrets and it can give itself the permission to read them. The MGMT server can also delete any interface, or boot a "rescue image" that copies the plaintext out of the virtual machine. The MGMT server must be trusted in any case.
> - Messages between the MGMT server & libvirtd do not need to
> be encrypted, since they don't include secrets
I can't see why this is useful - the connection will use TLS anyway.
> - Other users who authenticate to libvirt on a node cannot
> move a VM to an unauthorized node, since it can't see secrets
Same in "fully managed" mode.
> - VM save memory images (which include the fully XML) do not ever
> expose the secrets. So VM save image cannot be restored on an
> unauthorized node.
Same in "fully managed" mode.
Mirek
[libvirt] PATCH: Implement vCPU cpuTime + cpu fields for QEMU
by Daniel P. Berrange
The qemudDomainGetVcpus() method in the QEMU driver has the following
long-standing TODO item:
/* XXX cpu time, current pCPU mapping */
This has caused confusion for users, because they set affinity and then
wonder why 'virsh vcpuinfo' constantly reports their guest as running
on pCPU 0. This patch implements the missing bits, pulling the data out of
/proc/$PID/task/$TID/stat
i.e., the per-vCPU thread status file.
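For reference, that stat file is a single line of space-separated fields
(documented in 'man proc'); the patch skips everything except utime (field 14),
stime (field 15) and processor (field 39), schematically:

    pid (comm) state ppid pgrp ... utime stime ... processor ...
     1    2      3    4    5        14    15        39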
Daniel
diff --git a/src/qemu_driver.c b/src/qemu_driver.c
index c594495..388843f 100644
--- a/src/qemu_driver.c
+++ b/src/qemu_driver.c
@@ -2465,24 +2465,44 @@ cleanup:
}
-static int qemudGetProcessInfo(unsigned long long *cpuTime, int pid) {
+static int qemudGetProcessInfo(unsigned long long *cpuTime, int *lastCpu, int pid, int tid) {
char proc[PATH_MAX];
FILE *pidinfo;
unsigned long long usertime, systime;
+ int cpu;
+ int ret;
- if (snprintf(proc, sizeof(proc), "/proc/%d/stat", pid) >= (int)sizeof(proc)) {
+ if (tid)
+ ret = snprintf(proc, sizeof(proc), "/proc/%d/task/%d/stat", pid, tid);
+ else
+ ret = snprintf(proc, sizeof(proc), "/proc/%d/stat", pid);
+ if (ret >= (int)sizeof(proc)) {
+ errno = E2BIG;
return -1;
}
if (!(pidinfo = fopen(proc, "r"))) {
/*printf("cannot read pid info");*/
/* VM probably shut down, so fake 0 */
- *cpuTime = 0;
+ if (cpuTime)
+ *cpuTime = 0;
+ if (lastCpu)
+ *lastCpu = 0;
return 0;
}
- if (fscanf(pidinfo, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu", &usertime, &systime) != 2) {
- qemudDebug("not enough arg");
+ /* See 'man proc' for information about what all these fields are. We're
+ * only interested in a very few of them */
+ if (fscanf(pidinfo,
+ /* pid -> stime */
+ "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu"
+ /* cutime -> endcode */
+ "%*d %*d %*d %*d %*d %*u %*u %*d %*u %*u %*u %*u"
+ /* startstack -> processor */
+ "%*u %*u %*u %*u %*u %*u %*u %*u %*u %*u %*d %d",
+ &usertime, &systime, &cpu) != 3) {
+ VIR_WARN0("cannot parse process status data");
+ errno = EINVAL;
return -1;
}
@@ -2491,9 +2511,14 @@ static int qemudGetProcessInfo(unsigned long long *cpuTime, int pid) {
* _SC_CLK_TCK is jiffies per second
* So calculate thus....
*/
- *cpuTime = 1000ull * 1000ull * 1000ull * (usertime + systime) / (unsigned long long)sysconf(_SC_CLK_TCK);
+ if (cpuTime)
+ *cpuTime = 1000ull * 1000ull * 1000ull * (usertime + systime) / (unsigned long long)sysconf(_SC_CLK_TCK);
+ if (lastCpu)
+ *lastCpu = cpu;
+
- qemudDebug("Got %llu %llu %llu", usertime, systime, *cpuTime);
+ VIR_DEBUG("Got status for %d/%d user=%llu sys=%llu cpu=%d",
+ pid, tid, usertime, systime, cpu);
fclose(pidinfo);
@@ -3133,7 +3158,7 @@ static int qemudDomainGetInfo(virDomainPtr dom,
if (!virDomainIsActive(vm)) {
info->cpuTime = 0;
} else {
- if (qemudGetProcessInfo(&(info->cpuTime), vm->pid) < 0) {
+ if (qemudGetProcessInfo(&(info->cpuTime), NULL, vm->pid, 0) < 0) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED, _("cannot read cputime for domain"));
goto cleanup;
}
@@ -3676,7 +3701,16 @@ qemudDomainGetVcpus(virDomainPtr dom,
for (i = 0 ; i < maxinfo ; i++) {
info[i].number = i;
info[i].state = VIR_VCPU_RUNNING;
- /* XXX cpu time, current pCPU mapping */
+
+ if (vm->vcpupids != NULL &&
+ qemudGetProcessInfo(&(info[i].cpuTime),
+ &(info[i].cpu),
+ vm->pid,
+ vm->vcpupids[i]) < 0) {
+ virReportSystemError(dom->conn, errno, "%s",
+ _("cannot get vCPU placement & pCPU time"));
+ goto cleanup;
+ }
}
}
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] [PATCH] Canonicalize qemu machine types
by Mark McLoughlin
In qemu-0.11 there is a 'pc-0.10' machine type which allows you to run
guests with a machine which is compatible with the pc machine in
qemu-0.10 - e.g. using the original PCI class for virtio-blk and
virtio-console and disabling MSI support in virtio-net. The idea here
is that we don't want to surprise guests by changing the hardware when
qemu is updated.
I've just posted some patches for qemu-0.11 which allow libvirt to
canonicalize the 'pc' machine alias to the latest machine version.
This patch makes us use that, so that when a guest is configured to
use the 'pc' machine type, we resolve it to the 'pc-0.11' machine and
save that in the guest XML.
See also:
https://fedoraproject.org/wiki/Features/KVM_Stable_Guest_ABI
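For reference, the 'qemu -M ?' output being parsed looks roughly like this
(illustrative; the exact machine list and descriptions vary by build):

    Supported machines are:
    pc         Standard PC (alias of pc-0.11)
    pc-0.11    Standard PC, qemu 0.11
    pc-0.10    Standard PC, qemu 0.10
    isapc      ISA-only PC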
* src/qemu_conf.c: add qemudCanonicalizeMachine() to parse the output
of 'qemu -M ?'
* src/qemu_driver.c: canonicalize the machine type in qemudDomainDefine()
---
src/qemu_conf.c | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++++
src/qemu_conf.h | 3 +
src/qemu_driver.c | 3 +
3 files changed, 120 insertions(+), 0 deletions(-)
diff --git a/src/qemu_conf.c b/src/qemu_conf.c
index 4043d70..3f4edfa 100644
--- a/src/qemu_conf.c
+++ b/src/qemu_conf.c
@@ -470,6 +470,120 @@ virCapsPtr qemudCapsInit(void) {
return NULL;
}
+/* Format is:
+ * <machine> <desc> [(alias of <machine>)]
+ */
+static int
+qemudParseMachineTypesStr(const char *machines,
+ const char *machine,
+ char **canonical)
+{
+ const char *p = machines;
+
+ *canonical = NULL;
+
+ do {
+ const char *eol;
+ char *s;
+
+ if (!(eol = strchr(p, '\n')))
+ return -1; /* eof file without finding @machine */
+
+ if (!STRPREFIX(p, machine)) {
+ p = eol + 1;
+ continue; /* doesn't match @machine */
+ }
+
+ p += strlen(machine);
+
+ if (*p != ' ') {
+ p = eol + 1;
+ continue; /* not a complete match of @machine */
+ }
+
+ do {
+ p++;
+ } while (*p == ' ');
+
+ p = strstr(p, "(alias of ");
+ if (!p || p > eol)
+ return 0; /* not an alias, name is canonical */
+
+ *canonical = strndup(p + strlen("(alias of "), eol - p);
+
+ s = strchr(*canonical, ')');
+ if (!s) {
+ VIR_FREE(*canonical);
+ *canonical = NULL;
+ return -1; /* output is screwed up */
+ }
+
+ *s = '\0';
+ break;
+ } while (1);
+
+ return 0;
+}
+
+int
+qemudCanonicalizeMachine(virConnectPtr conn ATTRIBUTE_UNUSED,
+ virDomainDefPtr def)
+{
+ const char *const qemuarg[] = { def->emulator, "-M", "?", NULL };
+ const char *const qemuenv[] = { "LC_ALL=C", NULL };
+ char *machines, *canonical;
+ enum { MAX_MACHINES_OUTPUT_SIZE = 1024*4 };
+ pid_t child;
+ int newstdout = -1, len;
+ int ret = -1, status;
+
+ if (virExec(NULL, qemuarg, qemuenv, NULL,
+ &child, -1, &newstdout, NULL, VIR_EXEC_CLEAR_CAPS) < 0)
+ return -1;
+
+ len = virFileReadLimFD(newstdout, MAX_MACHINES_OUTPUT_SIZE, &machines);
+ if (len < 0) {
+ virReportSystemError(NULL, errno, "%s",
+ _("Unable to read 'qemu -M ?' output"));
+ goto cleanup;
+ }
+
+ if (qemudParseMachineTypesStr(machines, def->os.machine, &canonical) < 0)
+ goto cleanup2;
+
+ if (canonical) {
+ VIR_FREE(def->os.machine);
+ def->os.machine = canonical;
+ }
+
+ ret = 0;
+
+cleanup2:
+ VIR_FREE(machines);
+cleanup:
+ if (close(newstdout) < 0)
+ ret = -1;
+
+rewait:
+ if (waitpid(child, &status, 0) != child) {
+ if (errno == EINTR)
+ goto rewait;
+
+ VIR_ERROR(_("Unexpected exit status from qemu %d pid %lu"),
+ WEXITSTATUS(status), (unsigned long)child);
+ ret = -1;
+ }
+ /* Check & log unexpected exit status, but don't fail,
+ * as there's really no need to throw an error if we did
+ * actually read a valid version number above */
+ if (WEXITSTATUS(status) != 0) {
+ VIR_WARN(_("Unexpected exit status '%d', qemu probably failed"),
+ WEXITSTATUS(status));
+ }
+
+ return ret;
+}
+
static unsigned int qemudComputeCmdFlags(const char *help,
unsigned int version,
unsigned int is_kvm,
diff --git a/src/qemu_conf.h b/src/qemu_conf.h
index fbf2ab9..b668669 100644
--- a/src/qemu_conf.h
+++ b/src/qemu_conf.h
@@ -123,6 +123,9 @@ int qemudExtractVersionInfo (const char *qemu,
unsigned int *version,
unsigned int *flags);
+int qemudCanonicalizeMachine (virConnectPtr conn,
+ virDomainDefPtr def);
+
int qemudParseHelpStr (const char *str,
unsigned int *flags,
unsigned int *version,
diff --git a/src/qemu_driver.c b/src/qemu_driver.c
index d2db1a2..98f8e95 100644
--- a/src/qemu_driver.c
+++ b/src/qemu_driver.c
@@ -4102,6 +4102,9 @@ static virDomainPtr qemudDomainDefine(virConnectPtr conn, const char *xml) {
}
}
+ if (qemudCanonicalizeMachine(conn, def) < 0)
+ goto cleanup;
+
if (!(vm = virDomainAssignDef(conn,
&driver->domains,
def))) {
--
1.6.2.5
[libvirt] Unable to get libvirt working with KVM using direct kernel boot.
by Tiit Kaeeli
Hi,
I am trying to get libvirt to manage a KVM virtual machine on Debian squeeze
with multiple relevant parts included from unstable (because of the need for GPT
partition tables, an Intel 82576 NIC, etc.).
Unfortunately the VM does not seem to boot. I see a qemu prompt on
/dev/pts/1. (Tried pressing c on it, but no difference.) Nothing appears on
/dev/pts/2 or /dev/pts/3, or when I type
virsh console vm1_storage.
If I grab the kvm command line generated by libvirt using ps and run it
manually, it works fine. (Starts booting after I press c on /dev/pts/1 or
remove the -S option from the command line)
Libvirt configuration file:
<domain type='kvm'>
<name>vm1_storage</name>
<uuid>29bfac01-b24d-e4ab-e741-f33f7e880d9d</uuid>
<memory>4096000</memory>
<currentMemory>4096000</currentMemory>
<vcpu>6</vcpu>
<os>
<type>hvm</type>
<kernel>/boot/vmlinuz-2.6.30-1-amd64</kernel>
<initrd>/boot/initrd.img-2.6.30-1-amd64</initrd>
<cmdline>"root=UUID=98d6d3d7-3782-4f6b-a94f-bc0272c0289d ro console=tty0
console=ttyS0,115200n8"</cmdline>
</os>
<features>
<acpi/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='block' device='disk'>
<source dev='/dev/vm_lvm/vm01_storage_root'/>
<target dev='vda' bus='virtio'/>
</disk>
<interface type='ethernet'>
<mac address='52:54:00:12:34:56'/>
<target dev='tap0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target port='0'/>
</console>
</devices>
</domain>
KVM command line that I see in ps after running
virsh define /etc/libvirt/qemu/vm1_storage.xml
virsh start vm1_storage
(Works fine after I remove the -S option or press 'c' on the qemu monitor on
/dev/pts/1):
/usr/bin/kvm -S -M pc -m 4000 -smp 6 -name vm1_storage -uuid
29bfac01-b24d-e4ab-e741-f33f7e880d9d -nographic -monitor pty -boot c -kernel
/boot/vmlinuz-2.6.30-1-amd64 -initrd /boot/initrd.img-2.6.30-1-amd64 -append
"root=UUID=98d6d3d7-3782-4f6b-a94f-bc0272c0289d ro console=tty0
console=ttyS0,115200n8" -drive
file=/dev/vm_lvm/vm01_storage_root,if=virtio,index=0,boot=on -net
nic,macaddr=52:54:00:12:34:56,vlan=0 -net tap,ifname=tap0,vlan=0 -serial pty
-parallel none -usb
/var/log/libvirt/qemu/vm1_storage.log shows:
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/ /usr/bin/kvm -S -M pc -m 4000 -smp 6 -name vm1_storage -uuid
29bfac01-b24d-e4ab-e741-f33f7e880d9d -nographic -monitor pty -boot c -kernel
/boot/vmlinuz-2.6.30-1-amd64 -initrd /boot/initrd.img-2.6.30-1-amd64 -append
"root=UUID=98d6d3d7-3782-4f6b-a94f-bc0272c0289d ro console=tty0
console=ttyS0,115200n8" -drive
file=/dev/vm_lvm/vm01_storage_root,if=virtio,index=0,boot=on -net
nic,macaddr=52:54:00:12:34:56,vlan=0 -net tap,ifname=tap0,vlan=0 -serial pty
-parallel none -usb
char device redirected to /dev/pts/1
char device redirected to /dev/pts/2
qemu: loading initrd (0x76055b bytes) at 0x000000007f89f000
Versions in use:
kvm 85+dfsg-4
libvirt0 0.6.5-2
libvirt-bin 0.6.5-2
linux-image-2.6.30-1-amd64
Thanks for any ideas.