Re: [Libvir] [patch 1/5] iptables: fix invalid free
by Daniel Veillard
On Wed, Mar 21, 2007 at 12:47:58PM +0000, Mark McLoughlin wrote:
> In iptablesContextNew(), make sure we don't try and free an invalid
> pointer if one of the iptRulesNew() fails.
>
> Signed-off-by: Mark McLoughlin <markmc(a)redhat.com>
>
> Index: libvirt/qemud/iptables.c
> ===================================================================
> --- libvirt.orig/qemud/iptables.c
> +++ libvirt/qemud/iptables.c
> @@ -496,7 +496,7 @@ iptablesContextNew(void)
> {
> iptablesContext *ctx;
>
> - if (!(ctx = (iptablesContext *) malloc(sizeof (iptablesContext))))
> + if (!(ctx = (iptablesContext *) calloc(1, sizeof (iptablesContext))))
> return NULL;
>
> if (!(ctx->input_filter = iptRulesNew("filter", IPTABLES_PREFIX "INPUT")))
I usually prefer malloc + memset( , 0, ), but that probably comes from
libxml2, where I replaced malloc calls with specific wrappers (I still
have a TODO for this in libvirt, though some parts of libvirt are not
linked to libxml2, I guess, so that may make things a bit harder).
What's the policy w.r.t. error reporting in qemud and libvirt-related daemons
in general? I guess a failure to malloc or this kind of problem should be
logged somewhere, right?
> @@ -518,9 +518,12 @@ iptablesContextNew(void)
> void
> iptablesContextFree(iptablesContext *ctx)
> {
> - iptRulesFree(ctx->input_filter);
> - iptRulesFree(ctx->forward_filter);
> - iptRulesFree(ctx->nat_postrouting);
> + if (ctx->input_filter)
> + iptRulesFree(ctx->input_filter);
> + if (ctx->forward_filter)
> + iptRulesFree(ctx->forward_filter);
> + if (ctx->nat_postrouting)
> + iptRulesFree(ctx->nat_postrouting);
> free(ctx);
> }
The patch does the right thing, sounds good to me :-)
Daniel
--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard(a)redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/
[Libvir] Fix buffer overflow in dumping XML
by Daniel P. Berrange
The new bufferContentAndFree() method used for the QEMU daemon reallocs the
buffer size down to release memory held by the buffer which was never used
for any data. Unfortunately it reallocs it 1 byte too small, so later uses
of strlen()/strcpy() either magically work, or randomly append garbage or
crash the daemon depending on the phase of the moon :-) Re-allocing the
buffer to release a few bytes of memory isn't really an optimization, since
the caller is going to free the entire block a very short while later, so
this patch simply removes the realloc call.
As an aside, the virBuffer functions in src/xml.c and the buffer functions
in qemud/buf.c are both flawed wrt to the way they call the Grow method.
The method expects the len parameter to be extra bytes needed, but several
of the callers pass in the total desired length, so it allocates too much
memory. There are various other non-fatal flaws which need to be cleaned
up in this code, but the attached patch just focuses on the current fatal
buffer overflow for now.
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] [PATCH] virsh vcpupin guards CPU not existing.
by Masayuki Sunou
Hi
When a non-existent CPU is passed to virsh vcpupin, CPU affinity is set to all CPUs.
--------------------------------------------------------------------------------
# virsh vcpuinfo 0
VCPU: 0
CPU: 0
State: blocked
CPU time: 428.8s
CPU Affinity: y-
# virsh vcpupin 0 0 100
libvir: Xen error : failed Xen syscall ioctl -1208339792
# virsh vcpuinfo 0
VCPU: 0
CPU: 0
State: blocked
CPU time: 429.6s
CPU Affinity: yy
--------------------------------------------------------------------------------
This patch makes virsh vcpupin return an error when a non-existent CPU is specified.
--------------------------------------------------------------------------------
# virsh vcpupin 0 0 100
error: Physical CPU 100 doesn't exist.
--------------------------------------------------------------------------------
Signed-off-by: Masayuki Sunou <fj1826dm(a)aa.jp.fujitsu.com>
Thanks,
Masayuki Sunou.
--------------------------------------------------------------------------------
Index: src/virsh.c
===================================================================
RCS file: /data/cvs/libvirt/src/virsh.c,v
retrieving revision 1.67
diff -u -p -r1.67 virsh.c
--- src/virsh.c 20 Mar 2007 15:31:46 -0000 1.67
+++ src/virsh.c 22 Mar 2007 06:05:14 -0000
@@ -1358,6 +1358,11 @@ cmdVcpupin(vshControl * ctl, vshCmd * cm
if (cpu < VIR_NODEINFO_MAXCPUS(nodeinfo)) {
VIR_USE_CPU(cpumap, cpu);
+ } else {
+ vshError(ctl, FALSE, _("Physical CPU %d doesn't exist."), cpu);
+ free(cpumap);
+ virDomainFree(dom);
+ return FALSE;
}
cpulist = index(cpulist, ',');
if (cpulist)
--------------------------------------------------------------------------------
[Libvir] [PATCH] Add Xen Interface Version Check
by Atsushi SAKAI
Add Xen Interface version Check
XEN_SYSCTL_INTERFACE_VERSION
XEN_DOMCTL_INTERFACE_VERSION
Currently the following distributions are considered:
              sys  dom
RHEL5.0        2    3
Fedora 7       2    4
Xen-unstable   3    5
Please check whether the error message for virVcpuInfoPtr ipt is correct or not:
+ fprintf(stderr, "Memory allocation failed at xenHypervisorIniti()\n");
Signed-off-by: Atsushi SAKAI <sakaia(a)jp.fujitsu.com>
Thanks
Atsushi SAKAI
--- libvirt-0.2.1/src/xen_internal.c 2007-03-16 06:36:41.000000000 +0900
+++ libvirt-0.2.1.hyper/src/xen_internal.c 2007-03-19 19:09:08.000000000 +0900
@@ -655,7 +655,6 @@
return (0);
}
-#ifndef PROXY
/**
* xenHypervisorDoV2Dom:
* @handle: the handle to the Xen hypervisor
@@ -697,7 +696,6 @@
return (0);
}
-#endif /* PROXY */
/**
* virXen_getdomaininfolist:
@@ -1057,6 +1055,8 @@
}
return(ret);
}
+#endif /* !PROXY*/
+
/**
* virXen_getvcpusinfo:
* @handle: the hypervisor handle
@@ -1169,7 +1169,6 @@
}
return(ret);
}
-#endif /* !PROXY*/
/**
* xenHypervisorInit:
@@ -1184,6 +1183,7 @@
hypercall_t hc;
v0_hypercall_t v0_hc;
xen_getdomaininfo info;
+ virVcpuInfoPtr ipt;
if (initialized) {
if (hypervisor_version == -1)
@@ -1291,15 +1291,47 @@
* or the old ones
*/
hypervisor_version = 2;
- /* TODO: one probably will need to autodetect thse subversions too */
+
+ ipt = malloc(sizeof(virVcpuInfo));
+ if (ipt == NULL){
+#ifdef DEBUG
+ fprintf(stderr, "Memory allocation failed at xenHypervisorIniti()\n");
+#endif
+ return(-1);
+ }
+ /* Currently consider RHEL5.0 Fedora7 and xen-unstable */
sys_interface_version = 2; /* XEN_SYSCTL_INTERFACE_VERSION */
- dom_interface_version = 3; /* XEN_DOMCTL_INTERFACE_VERSION */
if (virXen_getdomaininfo(fd, 0, &info) == 1) {
+ /* RHEL 5.0 */
+ dom_interface_version = 3; /* XEN_DOMCTL_INTERFACE_VERSION */
+ if (virXen_getvcpusinfo(fd, 0, 0, ipt, NULL, 0) == 0){
#ifdef DEBUG
- fprintf(stderr, "Using hypervisor call v2, sys version 2\n");
+ fprintf(stderr, "Using hypervisor call v2, sys ver2 dom ver3\n");
#endif
- goto done;
+ goto done;
+ }
+ /* Fedora 7 */
+ dom_interface_version = 4; /* XEN_DOMCTL_INTERFACE_VERSION */
+ if (virXen_getvcpusinfo(fd, 0, 0, ipt, NULL, 0) == 0){
+#ifdef DEBUG
+ fprintf(stderr, "Using hypervisor call v2, sys ver2 dom ver4\n");
+#endif
+ goto done;
+ }
}
+
+ sys_interface_version = 3; /* XEN_SYSCTL_INTERFACE_VERSION */
+ if (virXen_getdomaininfo(fd, 0, &info) == 1) {
+ /* xen-unstable */
+ dom_interface_version = 5; /* XEN_DOMCTL_INTERFACE_VERSION */
+ if (virXen_getvcpusinfo(fd, 0, 0, ipt, NULL, 0) == 0){
+#ifdef DEBUG
+ fprintf(stderr, "Using hypervisor call v2, sys ver3 dom ver5\n");
+#endif
+ goto done;
+ }
+ }
+
hypervisor_version = 1;
sys_interface_version = -1;
if (virXen_getdomaininfo(fd, 0, &info) == 1) {
[Libvir] QEMU networking XML docs summary
by Daniel P. Berrange
What follows is a summary of the ways in which you can setup QEMU networking
1. Userspace SLIRP stack
Provides a virtual LAN with NAT to the outside world. The virtual network
has DHCP & DNS services and will give the guest VM addresses starting
from 10.0.2.15. The default router will be 10.0.2.2 and the DNS server
will be 10.0.2.3. This networking is the only option for unprivileged
users who need their VMs to have outgoing access. Example configs are:
<interface type='user'/>
<interface type='user'>
<mac address="11:22:33:44:55:66"/>
</interface>
2. Virtual network
Provides a virtual network using a bridge device in the host. Depending
on the virtual network configuration, the network may be totally isolated,
NAT'ing to an explicit network device, or NAT'ing to the default route.
DHCP and DNS are provided on the virtual network in all cases and the
IP range can be determined by examining the virtual network config with
'virsh net-dumpxml <network name>'. There is one virtual network called
'default' setup out of the box which does NAT'ing to the default route
and has an IP range of 192.168.22.0/255.255.255.0. Each guest will
have an associated tun device created with a name of vnetN, which can
also be overridden with the <target> element. Example configs are:
<interface type='network'>
<source network='default'/>
</interface>
<interface type='network'>
<source network='default'/>
<target dev='vnet7'/>
<mac address="11:22:33:44:55:66"/>
</interface>
3. Bridge to LAN
Provides a bridge from the VM directly onto the LAN. This assumes there
is a bridge device on the host which has one or more of the hosts
physical NICs enslaved. The guest VM will have an associated tun device
created with a name of vnetN, which can also be overriden with the
<target> element. The tun device will be enslaved to the bridge. The
IP range / network configuration is whatever is used on the LAN. This
provides the guest VM full incoming & outgoing net access just like
a physical machine. Examples include
<interface type='bridge'>
<source dev='br0'/>
</interface>
<interface type='bridge'>
<source dev='br0'/>
<target dev='vnet7'/>
<mac address="11:22:33:44:55:66"/>
</interface>
4. Generic connection to LAN
Provides a means for the administrator to execute an arbitrary script
to connect the guest's network to the LAN. The guest will have a tun
device created with a name of vnetN, which can also be overridden with the
<target> element. After creating the tun device a shell script will be
run which is expected to do whatever host network integration is
required. By default this script is called /etc/qemu-ifup but can be
overridden.
<interface type='ethernet'/>
<interface type='ethernet'>
<target dev='vnet7'/>
<script path='/etc/qemu-ifup-mynet'/>
</interface>
5. Multicast tunnel
A multicast group is setup to represent a virtual network. Any VMs whose
network devices are in the same multicast group can talk to each other
even across hosts. This mode is also available to unprivileged users.
There is no default DNS or DHCP support and no outgoing network access.
To provide outgoing network access, one of the VMs should have a 2nd
NIC which is connected to one of the first 4 network types and do the
appropriate routing. The multicast protocol is compatible with that
used by User Mode Linux guests too. The source address used must be
from the multicast address block.
<interface type='mcast'>
<source address='230.0.0.1' port='5558'/>
</interface>
6. TCP tunnel
A TCP client/server architecture provides a virtual network. One VM
provides the server end of the network; all other VMs are configured
as clients. All network traffic is routed between the VMs via the
server. This mode is also available to unprivileged users.
There is no default DNS or DHCP support and no outgoing network access.
To provide outgoing network access, one of the VMs should have a 2nd
NIC which is connected to one of the first 4 network types and do the
appropriate routing.
Example server config
<interface type='server'>
<source address='192.168.0.1' port='5558'/>
</interface>
Example client config
<interface type='client'>
<source address='192.168.0.1' port='5558'/>
</interface>
NB, options 2 -> 4 are also supported by Xen VMs, so it is possible to
use these configs to have networking with both Xen & QEMU/KVMs connected
to each other.
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] Persistence / re-instate of iptables rules
by Daniel P. Berrange
With the virtual networking capability we have to add various rules to the
iptables chains to ensure that outgoing connections are forwarded + NATed
to the physical LAN. Now if the user does 'service iptables restart' these
rules are lost until you restart the VM. This obviously sucks.
We've been exploring the possibility of adapting the Fedora / RHEL iptables
scripts to allow user-defined chains which are automatically restored from
a 'safe' config file during a restart. This is not present in FC6 / RHEL5
or even F6 yet, nor does it help non-Fedora users.
We already have ability to add / remove rules from iptables, so I was
wondering how hard it would be to list existing rules. From whence we can
look at existing rules to see if our virtual network forwarding/NAT rules
were missing. The idea being that a simple 'killall -SIGHUP libvirt_qemud'
could trigger libvirt to check & re-add the iptables rules if missing.
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
Re: [Libvir] [patch 2/5] iptables: use calloc() instead of malloc()/memset()
by Daniel Veillard
On Wed, Mar 21, 2007 at 12:47:59PM +0000, Mark McLoughlin wrote:
> Replace a few instances of malloc() followed by memset(0) with
> calloc()
Humpf, that's just the opposite of the other parts of libvirt I wrote.
Could you justify it?
It's hard to tell at a glance whether it's a calloc or a malloc call, and
hence to notice whether the memory was initialized or not. The extra memset
line makes it clear, plus it will allow me to more easily plug in the
libxml2 memory wrappers.
I doubt the patch is related to the other parts needed for the restart code;
it's just coding policy, and IMHO different from the rest of libvirt.
Daniel
--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard(a)redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/
[Libvir] [patch 5/5] iptables: reload rules on SIGHUP
by Mark McLoughlin
As suggested by danpb, make libvirt_qemud handle SIGHUP by re-loading
the iptables rules.
Signed-off-by: Mark McLoughlin <markmc(a)redhat.com>
Index: libvirt/qemud/iptables.c
===================================================================
--- libvirt.orig/qemud/iptables.c
+++ libvirt/qemud/iptables.c
@@ -36,6 +36,8 @@
#include <sys/stat.h>
#include <sys/wait.h>
+#include "internal.h"
+
enum {
ADD = 0,
REMOVE
@@ -48,11 +50,18 @@ enum {
typedef struct
{
+ char *rule;
+ char **argv;
+ int flipflop;
+} iptRule;
+
+typedef struct
+{
char *table;
char *chain;
- int nrules;
- char **rules;
+ int nrules;
+ iptRule *rules;
#ifdef IPTABLES_DIR
@@ -73,7 +82,7 @@ struct _iptablesContext
#ifdef IPTABLES_DIR
static int
writeRules(const char *path,
- char * const *rules,
+ const iptRules *rules,
int nrules)
{
char tmp[PATH_MAX];
@@ -96,7 +105,7 @@ writeRules(const char *path,
}
for (i = 0; i < nrules; i++) {
- if (fputs(rules[i], f) == EOF ||
+ if (fputs(rules[i].rule, f) == EOF ||
fputc('\n', f) == EOF) {
fclose(f);
if (istmp)
@@ -173,19 +182,43 @@ buildPath(const char *table,
}
#endif /* IPTABLES_DIR */
-static int
-iptRulesAppend(iptRules *rules,
- const char *rule)
+static void
+iptRuleFree(iptRule *rule)
{
- char **r;
+ if (rule->rule)
+ free(rule->rule);
+ rule->rule = NULL;
+
+ if (rule->argv) {
+ int i = 0;
+ while (rule->argv[i])
+ free(rule->argv[i++]);
+ free(rule->argv);
+ rule->argv = NULL;
+ }
+}
- if (!(r = (char **)realloc(rules->rules, sizeof(char *) * (rules->nrules+1))))
+static int
+iptRulesAppend(iptRules *rules,
+ char *rule,
+ char **argv,
+ int flipflop)
+{
+ iptRule *r;
+
+ if (!(r = (iptRule *)realloc(rules->rules, sizeof(iptRule) * (rules->nrules+1)))) {
+ int i = 0;
+ while (argv[i])
+ free(argv[i++]);
+ free(argv);
return ENOMEM;
+ }
rules->rules = r;
- if (!(rules->rules[rules->nrules] = strdup(rule)))
- return ENOMEM;
+ rules->rules[rules->nrules].rule = rule;
+ rules->rules[rules->nrules].argv = argv;
+ rules->rules[rules->nrules].flipflop = flipflop;
rules->nrules++;
@@ -211,17 +244,17 @@ iptRulesRemove(iptRules *rules,
int i;
for (i = 0; i < rules->nrules; i++)
- if (!strcmp(rules->rules[i], strdup(rule)))
+ if (!strcmp(rules->rules[i].rule, strdup(rule)))
break;
if (i >= rules->nrules)
return EINVAL;
- free(rules->rules[i]);
+ iptRuleFree(&rules->rules[i]);
memmove(&rules->rules[i],
&rules->rules[i+1],
- (rules->nrules - i - 1) * sizeof (char *));
+ (rules->nrules - i - 1) * sizeof (iptRule));
rules->nrules--;
@@ -253,16 +286,14 @@ iptRulesFree(iptRules *rules)
}
- for (i = 0; i < rules->nrules; i++) {
- free(rules->rules[i]);
- rules->rules[i] = NULL;
- }
-
- rules->nrules = 0;
-
if (rules->rules) {
+ for (i = 0; i < rules->nrules; i++)
+ iptRuleFree(&rules->rules[i]);
+
free(rules->rules);
rules->rules = NULL;
+
+ rules->nrules = 0;
}
#ifdef IPTABLES_DIR
@@ -480,10 +511,13 @@ iptablesAddRemoveRule(iptRules *rules, i
(retval = iptablesAddRemoveChain(rules, action)))
goto error;
- if (action == ADD)
- retval = iptRulesAppend(rules, rule);
- else
+ if (action == ADD) {
+ retval = iptRulesAppend(rules, rule, argv, flipflop);
+ rule = NULL;
+ argv = NULL;
+ } else {
retval = iptRulesRemove(rules, rule);
+ }
error:
if (rule)
@@ -535,6 +569,45 @@ iptablesContextFree(iptablesContext *ctx
free(ctx);
}
+static void
+iptRulesReload(iptRules *rules)
+{
+ int i;
+ int retval;
+
+ for (i = 0; i < rules->nrules; i++) {
+ iptRule *rule = &rules->rules[i];
+ char *orig;
+
+ orig = rule->argv[rule->flipflop];
+ rule->argv[rule->flipflop] = (char *) "--delete";
+
+ if ((retval = iptablesSpawn(WITH_ERRORS, rule->argv)))
+ qemudLog(QEMUD_WARN, "Failed to remove iptables rule '%s' from chain '%s' in table '%s': %s",
+ rule->rule, rules->chain, rules->table, strerror(errno));
+
+ rule->argv[rule->flipflop] = orig;
+ }
+
+ if ((retval = iptablesAddRemoveChain(rules, REMOVE)) ||
+ (retval = iptablesAddRemoveChain(rules, ADD)))
+ qemudLog(QEMUD_WARN, "Failed to re-create chain '%s' in table '%s': %s",
+ rules->chain, rules->table, strerror(retval));
+
+ for (i = 0; i < rules->nrules; i++)
+ if ((retval = iptablesSpawn(WITH_ERRORS, rules->rules[i].argv)))
+ qemudLog(QEMUD_WARN, "Failed to add iptables rule '%s' to chain '%s' in table '%s': %s",
+ rules->rules[i].rule, rules->chain, rules->table, strerror(retval));
+}
+
+void
+iptablesReloadRules(iptablesContext *ctx)
+{
+ iptRulesReload(ctx->input_filter);
+ iptRulesReload(ctx->forward_filter);
+ iptRulesReload(ctx->nat_postrouting);
+}
+
static int
iptablesInput(iptablesContext *ctx,
const char *iface,
Index: libvirt/qemud/iptables.h
===================================================================
--- libvirt.orig/qemud/iptables.h
+++ libvirt/qemud/iptables.h
@@ -27,6 +27,8 @@ typedef struct _iptablesContext iptables
iptablesContext *iptablesContextNew (void);
void iptablesContextFree (iptablesContext *ctx);
+void iptablesReloadRules (iptablesContext *ctx);
+
int iptablesAddTcpInput (iptablesContext *ctx,
const char *iface,
int port);
Index: libvirt/qemud/qemud.c
===================================================================
--- libvirt.orig/qemud/qemud.c
+++ libvirt/qemud/qemud.c
@@ -88,6 +88,11 @@ static int qemudDispatchSignal(struct qe
case SIGHUP:
qemudLog(QEMUD_INFO, "Reloading configuration on SIGHUP");
ret = qemudScanConfigs(server);
+
+ if (server->iptables) {
+ qemudLog(QEMUD_INFO, "Reloading iptables rules");
+ iptablesReloadRules(server->iptables);
+ }
break;
case SIGINT:
--