[Libvir] OpenVZ XML format and VPS properties get/set interface [long]

Hi,

I started a discussion on the OpenVZ XML format a while ago. Let me do it again, with more explanation about OpenVZ this time, so that others can understand how it is different and how it can best fit into the libvirt model of doing things.

Terminology: Virtual Private Server (VPS), Virtual Environment (VE) and Domain all mean the same thing here.

OpenVZ is largely about providing QoS to its users. About 20 carefully chosen parameters, covering various resources such as memory, CPU, disk and network, are used to provide minimum guarantees on any system running OpenVZ. Most of the time, these are limits that can be set per Virtual Private Server (VPS).

In Xen or QEMU, if a disk image is available (Xen needs an additional kernel), it is possible to run the domain and then forget all about it after the domain is shut off. This is not possible in OpenVZ. When a new VPS/VE/Domain needs to be created, it needs a file system, which must be created along with its related configuration files in specific locations. Only after this can it be started. There is a "destroy" command available in OpenVZ, which is different from the destroy in libvirt: it completely erases the file system and removes the related config file as well.

Since there are many configurable parameters, the OpenVZ tools provide two sample templates or profiles on which newly created Virtual Environments (VEs) can be based. So, during VPS creation, rather than taking a million parameters, the name of the profile is taken as an argument and the variables in that file are used to create the VE. These values can later be overridden and optionally stored in the VE's private config file to ensure persistence across reboots.

Since there are many parameters needed during VE creation, using the profile name is practical. So, in the proposed XML file, I'm using the profile name.

OpenVZ has its own config file format. We are storing the UUID there in a comment, since UUIDs are not used by OpenVZ. When a VE is created, the easiest way to do it is by using a so-called template cache. This is just a tar file of a Linux distro filesystem that is used to create a new file system for a VE. There are no disk images: the VE root fs resides on the host file system as a bunch of files and directories. A few template caches are usually available, say one based on Debian, one based on Fedora Core and another based on SUSE. The user can choose which one to use while creating a new VE. However, the name of the template cache is not stored anywhere once the VE filesystem is created. I think one more comment is needed in the per-VE config file for this, just as we are storing the UUID.

Here is a sample template.
This one is called vps.basic and comes with the OpenVZ tools:
-----------------------------------------------------------------
ONBOOT="no"

# UBC parameters (in form of barrier:limit)
# Primary parameters
AVNUMPROC="40:40"
NUMPROC="65:65"
NUMTCPSOCK="80:80"
NUMOTHERSOCK="80:80"
VMGUARPAGES="6144:2147483647"
# Secondary parameters
KMEMSIZE="2752512:2936012"
TCPSNDBUF="319488:524288"
TCPRCVBUF="319488:524288"
OTHERSOCKBUF="132096:336896"
DGRAMRCVBUF="132096:132096"
OOMGUARPAGES="6144:2147483647"
# Auxiliary parameters
LOCKEDPAGES="32:32"
SHMPAGES="8192:8192"
PRIVVMPAGES="49152:53575"
NUMFILE="2048:2048"
NUMFLOCK="100:110"
NUMPTY="16:16"
NUMSIGINFO="256:256"
DCACHESIZE="1048576:1097728"
PHYSPAGES="0:2147483647"
NUMIPTENT="128:128"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153434"
DISKINODES="200000:220000"
QUOTATIME="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
------------------------------------------------------------

Here is the proposed XML format:

<domain type='openvz'>
  <name>105</name>
  <uuid>8509a1d4-1569-4467-8b37-4e433a1ac7b1</uuid>
  <filesystem>
    <template>gentoo-20060317-i686-stage3</template>
    <quota level='first'>10737418240</quota>
    <quota level='second' uid='500'>5368709120</quota>
  </filesystem>
  <profile>vps.basic</profile>
  <devices>
    <interface>
      <ipaddress>192.168.1.105</ipaddress>
    </interface>
  </devices>
  <nameserver>192.168.1.1</nameserver>
  <hostname>fedora105</hostname>
</domain>

I don't think the "filesystem" tag can fit logically into "devices", since it has quota and other information. The "template" is the name of the template cache used to create the VE.

One of the main reasons many people (especially hosting providers) use OpenVZ is that it can be used to provide service level agreements. There must be a way to set/get various VPS parameters from libvirt. I understand the concerns about driver-specific code in libvirt-based clients like virt-manager. The capabilities paradigm will not fit here, since this is simply about various properties of the VE/domain, not the hardware or the VM capabilities. Please correct me if I am wrong. So, how do we do it?

Thanks and Regards,

--
Shuveb Hussain
Unix is very user friendly. It is just a little choosy about who its friends are
http://www.binarykarma.com

Shuveb Hussain wrote:
One of the main reasons many people (especially hosting providers) use OpenVZ is that it can be used to provide service level agreements. There must be a way to set/get various VPS parameters from libvirt. I understand the concerns about driver-specific code in libvirt-based clients like virt-manager. The capabilities paradigm will not fit here, since this is simply about various properties of the VE/domain, not the hardware or the VM capabilities. Please correct me if I am wrong. So, how do we do it?
Is the current virDomainGetSchedulerType / virDomainGetSchedulerParameters / virDomainSetSchedulerParameters API sufficient to express all (or at least most) of what OpenVZ requires? If not, could it be plausibly extended, or is another API needed? Rich.

Hi Richard,
Is the current virDomainGetSchedulerType / virDomainGetSchedulerParameters / virDomainSetSchedulerParameters API sufficient to express all (or at least most) of what OpenVZ requires? If not, could it be plausibly extended, or is another API needed?
I looked into these calls. It looks like they will cover the OpenVZ-specific sets and gets. And since the types are carried in the data structures, any libvirt client should be able to build a GUI around them. Cool. Thanks!

--
Shuveb Hussain
Unix is very user friendly. It is just a little choosy about who its friends are
http://www.binarykarma.com
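For illustration, here is a minimal sketch of how a client could enumerate a domain's scheduler parameters through this API. The "openvz:///system" connection URI and the idea that the OpenVZ driver reports its UBC values through this call are assumptions, and error handling is kept minimal.

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(int argc, char **argv)
{
    /* Assumed URI for the OpenVZ driver; adjust for your setup. */
    virConnectPtr conn = virConnectOpen("openvz:///system");
    if (conn == NULL)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, argc > 1 ? argv[1] : "105");
    if (dom == NULL) {
        virConnectClose(conn);
        return 1;
    }

    int nparams = 0;
    char *type = virDomainGetSchedulerType(dom, &nparams);
    printf("scheduler: %s, %d parameters\n", type ? type : "(unknown)", nparams);
    free(type);

    virSchedParameter *params = calloc(nparams, sizeof(*params));
    if (params && virDomainGetSchedulerParameters(dom, params, &nparams) == 0) {
        for (int i = 0; i < nparams; i++) {
            switch (params[i].type) {
            case VIR_DOMAIN_SCHED_FIELD_UINT:
                printf("%s = %u\n", params[i].field, params[i].value.ui);
                break;
            case VIR_DOMAIN_SCHED_FIELD_ULLONG:
                printf("%s = %llu\n", params[i].field, params[i].value.ul);
                break;
            default:
                printf("%s (type %d)\n", params[i].field, params[i].type);
            }
        }
    }

    free(params);
    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}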

On Mon, Jul 23, 2007 at 11:55:19AM +0100, Richard W.M. Jones wrote:
Shuveb Hussain wrote:
One of the main reasons many people (especially hosting providers) use OpenVZ is that it can be used to provide service level agreements. There must be a way to set/get various VPS parameters from libvirt. I understand the concerns about driver-specific code in libvirt-based clients like virt-manager. The capabilities paradigm will not fit here, since this is simply about various properties of the VE/domain, not the hardware or the VM capabilities. Please correct me if I am wrong. So, how do we do it?
Is the current virDomainGetSchedulerType / virDomainGetSchedulerParameters / virDomainSetSchedulerParameters API sufficient to express all (or at least most) of what OpenVZ requires? If not, could it be plausibly extended, or is another API needed?
I don't think we should use the virDomain*Scheduler APIs for anything other than scheduler parameters. If we want to provide a means of controlling the other various resource utilization tuning parameters, we should have a set of APIs explicitly for resource tuning parameters. Even if they turn out to look nearly identical to the scheduler params, I think it's important to keep things semantically separated. We'll eventually get the ability to control various resource tunables on Xen and QEMU/KVM too: controlling throughput / QoS on virtual NICs, or the I/O elevator on disk backends, etc. So a formal 'resource tunables' API would be useful to more than just OpenVZ in the longer term.

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
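To make that idea concrete, here is a purely hypothetical sketch of what a separate resource-tunables API could look like, mirroring the existing virSchedParameter style. None of these names exist in libvirt; they are invented here only to illustrate keeping tunables apart from the scheduler calls.

#include <libvirt/libvirt.h>

/* Hypothetical type, modelled on virSchedParameter; not a real libvirt struct. */
typedef struct _virResourceParameter {
    char field[80];       /* e.g. "kmemsize", "numtcpsock", "diskspace" */
    int type;             /* same kind of type discriminator as virSchedParameter */
    union {
        unsigned int ui;
        unsigned long long ul;
    } value;
} virResourceParameter;

/* Hypothetical entry points: fetch the tunables a domain exposes,
 * and set a subset of them. */
int virDomainGetResourceParameters(virDomainPtr domain,
                                   virResourceParameter *params,
                                   int *nparams);
int virDomainSetResourceParameters(virDomainPtr domain,
                                   virResourceParameter *params,
                                   int nparams);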

Shuveb Hussain wrote:
In Xen or QEMU, if a disk image is available (Xen needs an additional kernel), it is possible to run the domain and then forget all about it after the domain is shut off. This is not possible in OpenVZ. When a new VPS/VE/Domain needs to be created, it needs a file system, which must be created along with its related configuration files in specific locations. Only after this can it be started. There is a "destroy" command available in OpenVZ, which is different from the destroy in libvirt: it completely erases the file system and removes the related config file as well.
It sounds like OpenVZ "destroy" goes beyond "virsh undefine". The latter just removes the config file, and that can be recreated fairly easily. Is it common to destroy OpenVZ domains? What happens to all the guest's local configuration (changes to /etc, /home)? [...]
Here is a sample template. This one is called vps.basic and comes with the OpenVZ tools:
-----------------------------------------------------------------
ONBOOT="no"

# UBC parameters (in form of barrier:limit)
# Primary parameters
AVNUMPROC="40:40"
NUMPROC="65:65"
NUMTCPSOCK="80:80"
NUMOTHERSOCK="80:80"
VMGUARPAGES="6144:2147483647"
# Secondary parameters
KMEMSIZE="2752512:2936012"
TCPSNDBUF="319488:524288"
TCPRCVBUF="319488:524288"
OTHERSOCKBUF="132096:336896"
DGRAMRCVBUF="132096:132096"
OOMGUARPAGES="6144:2147483647"
# Auxiliary parameters
LOCKEDPAGES="32:32"
SHMPAGES="8192:8192"
PRIVVMPAGES="49152:53575"
NUMFILE="2048:2048"
NUMFLOCK="100:110"
NUMPTY="16:16"
NUMSIGINFO="256:256"
DCACHESIZE="1048576:1097728"
PHYSPAGES="0:2147483647"
NUMIPTENT="128:128"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153434"
DISKINODES="200000:220000"
QUOTATIME="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
So my understanding is this: the debate is about whether we want these parameters to be visible in the XML file, or whether it is better to have them hidden in an OpenVZ-specific file. Also, how do these parameters relate to other systems (e.g. Xen scheduler parameters)?

I don't have good answers to these ...

I did notice that OpenVZ uses a replacement / enhancement of BSD resource limits called CONFIG_BEANCOUNTER. What's the status of just this patch w.r.t. the upstream Linux kernel?
Here is the proposed XML format:
<domain type='openvz'>
  <name>105</name>
  <uuid>8509a1d4-1569-4467-8b37-4e433a1ac7b1</uuid>
  <filesystem>
    <template>gentoo-20060317-i686-stage3</template>
    <quota level='first'>10737418240</quota>
    <quota level='second' uid='500'>5368709120</quota>
  </filesystem>
  <profile>vps.basic</profile>
  <devices>
    <interface>
      <ipaddress>192.168.1.105</ipaddress>
    </interface>
  </devices>
  <nameserver>192.168.1.1</nameserver>
  <hostname>fedora105</hostname>
</domain>
I don't think the "filesystem" tag can fit logically into "devices", since it has quota and other information. The "template" is the name of the template cache used to create the VE.
This is the first virtualisation system I've seen that uses direct access to a chrooted filesystem on the host. All the ones we've considered before have used disk images or partitions. I'm guessing, however, that things like BSD jails & OpenSolaris containers are similar to OpenVZ? So it's worth considering in some detail how this is going to work.

Should we simply specify in the XML file the location of the filesystem, and assume that something else creates it? (I'm sure that will be complicated in the OpenVZ case, but it may allow admins to use, for example, debootstrap to set up root filesystems by hand.) Something like:

<filesystems>
  <filesystem root="/mnt/guest1" />
</filesystems>

Where we want guest creation to create the filesystem as well:

<filesystems>
  <filesystem root="/mnt/guest1" template="/templates/gentoo-xyz-stage3" />
</filesystems>

(I notice that the OpenVZ description doesn't say either (1) where the filesystem will be created, nor (2) where the template file is located.)

Just thoughts ... sorry that I don't have good answers here :-(

Rich.

On Mon, 2007-07-23 at 12:33 +0100, Richard W.M. Jones wrote:
Shuveb Hussain wrote:
In Xen or QEMU, if a disk image is available (Xen needs an additional kernel), it is possible to run the domain and then forget all about it after the domain is shut off. This is not possible in OpenVZ. When a new VPS/VE/Domain needs to be created, it needs a file system, which must be created along with its related configuration files in specific locations. Only after this can it be started. There is a "destroy" command available in OpenVZ, which is different from the destroy in libvirt: it completely erases the file system and removes the related config file as well.
It sounds like OpenVZ "destroy" goes beyond "virsh undefine". The latter just removes the config file, and that can be recreated fairly easily.
Is it common to destroy OpenVZ domains? What happens to all the guest's local configuration (changes to /etc, /home)?
The whole guest FS is destroyed, nothing is backed up! [...]
So my understanding is this: the debate is about whether we want these parameters to be visible in the XML file, or whether it is better to have them hidden in an OpenVZ-specific file. Also, how do these parameters relate to other systems (e.g. Xen scheduler parameters)?
Yeah Richard, this is what I am wondering about.
I don't have good answers to these ...
:-(
I did notice that OpenVZ uses a replacement / enhancement of BSD resource limits called CONFIG_BEANCOUNTER. What's the status of just this patch w.r.t. upstream Linux kernel?
User beancounters have been proposed by SWsoft for the Linux kernel, but AFAIK nothing is in mainline yet. In the meantime a lot of activity is going on in the containers area, especially by Rohit Seth and Paul Menage of Google. It is difficult to predict what will get merged. OpenVZ as such definitely won't get merged, due to its style, size and treatment. The OpenVZ hackers are OK with this, I guess. [...]
This is the first virtualisation system I've seen that uses direct access to a chrooted filesystem on the host. All the ones we've considered before have used disk images or partitions. I'm guessing however that things like BSD jails & OpenSolaris containers are similar to OpenVZ? So it's worth considering in some detail how this is going to work.
Maybe. I haven't looked into BSD jails and OpenSolaris containers. I wonder what management interfaces they provide. It depends on where OpenVZ drew its inspiration from ;-)
Should we simply specify in the XML file the location of the filesystem, and assume that something else creates it? (I'm sure that will be complicated in the OpenVZ case, but it may allow admins to use, for example, debootstrap to set up root filesystems by hand).
Something like:

<filesystems>
  <filesystem root="/mnt/guest1" />
</filesystems>
Where we want guest creation to create the filesystem as well:
<filesystems>
  <filesystem root="/mnt/guest1" template="/templates/gentoo-xyz-stage3" />
</filesystems>
(I notice that the OpenVZ description doesn't say either (1) where the filesystem will be created, nor (2) where the template file is located.)
There is no way we can specify where we want the new root file system to get created. There is a specific location where all VE file systems get created, for example:

/vz/private/101 -> root fs base for VPS 101
/vz/private/102 -> root fs base for VPS 102
/vz/private/103 -> root fs base for VPS 103
...

The template caches are in /vz/template/cache. The base /vz itself may be in other locations on some distros. But when you specify the template name to the OpenVZ tools to create a VM, vzctl will pick the template cache archive file from the correct location. For example, this command creates a new VPS with ID 105:

# vzctl create 105 --ostemplate fedora-core-4 --config vps.basic

So, I guess we'll need to keep the "filesystem" tag out of the "devices" section. Or are there other thoughts?

Thanks,
--
Shuveb Hussain
Unix is very user friendly. It is just a little choosy about who its friends are
http://www.binarykarma.com
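As a rough sketch of how a driver could turn the <template> and <profile> elements into that vzctl invocation: the helper name and the use of system() are illustrative assumptions only, and real code would fork/exec with proper argument handling.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: build and run the vzctl command that creates a VE
 * from a template cache and a configuration profile. */
int openvz_create_ve(int veid, const char *ostemplate, const char *profile)
{
    char cmd[1024];
    snprintf(cmd, sizeof(cmd),
             "vzctl create %d --ostemplate %s --config %s",
             veid, ostemplate, profile);
    return system(cmd) == 0 ? 0 : -1;
}

/* e.g. openvz_create_ve(105, "fedora-core-4", "vps.basic"); */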

On Mon, Jul 23, 2007 at 04:06:11PM +0530, Shuveb Hussain wrote:
Hi,
I started a discussion on OpenVZ XML format a while ago. But let me do it again with more explanation about OpenVZ this time, so that others can understand how it is different and how this can best fit into the libvirt model of doing things.
Terminology: Virtual Private Server (VPS), Virtual Environment (VE) and Domain are all the same.
OpenVZ is largely about providing QoS to its users. About 20 carefully chosen parameters, covering various resources such as memory, CPU, disk and network, are used to provide minimum guarantees on any system running OpenVZ. Most of the time, these are limits that can be set per Virtual Private Server (VPS).
In Xen or QEMU, if a disk image is available (Xen needs an additional kernel), it is possible to run the domain and then forget all about it after the domain is shut off. This is not possible in OpenVZ. When a new VPS/VE/Domain needs to be created, it needs a file system, which must be created along with its related configuration files in specific locations. Only after this can it be started. There is a "destroy" command available in OpenVZ, which is different from the destroy in libvirt: it completely erases the file system and removes the related config file as well.
Since there are many configurable parameters, the OpenVZ tools provide two sample templates or profiles on which newly created Virtual Environments (VEs) can be based. So, during VPS creation, rather than taking a million parameters, the name of the profile is taken as an argument and the variables in that file are used to create the VE. These values can later be overridden and optionally stored in the VE's private config file to ensure persistence across reboots.
Since there are many parameters needed during VE creation, using the profile name is practical. So, in the proposed XML file, I'm using the profile name.
Yes, that sounds adequate to me. I think that with both the template and the profile we have something sufficient to get this to work. If needed we may later expand the format; as long as the parsing code is well done it would be backward and forward compatible, just that the new elements may not be used by older tools.
OpenVZ has its own config file format. We are storing the UUID there in a comment, since UUIDs are not used by OpenVZ. When a VE is created, the easiest way to do it is by using a so-called template cache. This is just a tar file of a Linux distro filesystem that is used to create a new file system for a VE. There are no disk images: the VE root fs resides on the host file system as a bunch of files and directories. A few template caches are usually available, say one based on Debian, one based on Fedora Core and another based on SUSE. The user can choose which one to use while creating a new VE. However, the name of the template cache is not stored anywhere once the VE filesystem is created. I think one more comment is needed in the per-VE config file for this, just as we are storing the UUID. [...] Here is the proposed XML format:
<domain type='openvz'>
  <name>105</name>
  <uuid>8509a1d4-1569-4467-8b37-4e433a1ac7b1</uuid>
  <filesystem>
    <template>gentoo-20060317-i686-stage3</template>
    <quota level='first'>10737418240</quota>
    <quota level='second' uid='500'>5368709120</quota>
  </filesystem>
  <profile>vps.basic</profile>
  <devices>
    <interface>
      <ipaddress>192.168.1.105</ipaddress>
    </interface>
  </devices>
  <nameserver>192.168.1.1</nameserver>
  <hostname>fedora105</hostname>
</domain>
I don't think the "filesystem" tag can fit logically into "devices", since it has quota and other information. The "template" is the name of the template cache used to create the VE.
Hum, yes, that is different from all other implementations so far. But nameserver and hostname feel a bit misplaced. To me, nameserver should go somewhere else; it's kind of a duplicate of the networking stuff. And what would happen if you also have IPv6? Suddenly the nameserver structure breaks. I don't know yet how best to fix this, but those two are problematic as-is.
One of the main reasons many people (especially hosting providers) use OpenVZ is that it can be used to provide service level agreements. There must be a way to set/get various VPS parameters from libvirt. I understand the concerns about driver-specific code in libvirt-based clients like virt-manager. The capabilities paradigm will not fit here, since this is simply about various properties of the VE/domain, not the hardware or the VM capabilities. Please correct me if I am wrong. So, how do we do it?
Piggy-back on virDomainGetSchedulerParameters / virDomainSetSchedulerParameters; that looks like the API that is flexible enough and closest in spirit.

Daniel

--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

Hi Daniel, [...]
Hum, yes that is different from all other implementations so far.
But nameserver and hostname feel a bit misplaced. To me, nameserver should go somewhere else; it's kind of a duplicate of the networking stuff. And what would happen if you also have IPv6? Suddenly the nameserver structure breaks. I don't know yet how best to fix this, but those two are problematic as-is.

OpenVZ doesn't deal with any kind of devices anyway, and since it is a container system, I don't think it will in the future either. There is only one kernel shared by the host and the guests, and thus no device-based interfaces between them. Why not do away with the "devices" tag for OpenVZ and instead do something like this:

<network>
  <ipaddress>192.168.1.101</ipaddress>
  <hostname>fc7-openvz</hostname>
  <gateway>192.168.1.1</gateway>
</network>

What do you think?
One of the main reasons many people (especially hosting providers) use OpenVZ is that it can be used to provide service level agreements. There must be a way to set/get various VPS parameters from libvirt. I understand the concerns about driver-specific code in libvirt-based clients like virt-manager. The capabilities paradigm will not fit here, since this is simply about various properties of the VE/domain, not the hardware or the VM capabilities. Please correct me if I am wrong. So, how do we do it?
Piggy-back on virDomainGetSchedulerParameters / virDomainSetSchedulerParameters; that looks like the API that is flexible enough and closest in spirit.
Yeah, I will do this.

--
Shuveb Hussain
Unix is very user friendly. It is just a little choosy about who its friends are
http://www.binarykarma.com
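For example, a minimal sketch of setting the CPUUNITS value through this call; the "cpuunits" field name is an assumption about what the OpenVZ driver would expose, so check the driver before relying on it.

#include <string.h>
#include <libvirt/libvirt.h>

/* Bump the CPU weight of a running VE via the scheduler parameter API. */
int set_cpuunits(virDomainPtr dom, unsigned int units)
{
    virSchedParameter p;
    memset(&p, 0, sizeof(p));
    strncpy(p.field, "cpuunits", VIR_DOMAIN_SCHED_FIELD_LENGTH - 1);
    p.type = VIR_DOMAIN_SCHED_FIELD_UINT;
    p.value.ui = units;
    return virDomainSetSchedulerParameters(dom, &p, 1);
}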

On Mon, Jul 23, 2007 at 06:47:32PM +0530, Shuveb Hussain wrote:
Hi Daniel,
[...]
Hum, yes that is different from all other implementations so far.
But nameserver and hostname feel a bit misplaced. To me, nameserver should go somewhere else; it's kind of a duplicate of the networking stuff. And what would happen if you also have IPv6? Suddenly the nameserver structure breaks. I don't know yet how best to fix this, but those two are problematic as-is.

OpenVZ doesn't deal with any kind of devices anyway, and since it is a container system, I don't think it will in the future either. There is only one kernel shared by the host and the guests, and thus no device-based interfaces between them. Why not do away with the "devices" tag for OpenVZ and instead do something like this:

<network>
  <ipaddress>192.168.1.101</ipaddress>
  <hostname>fc7-openvz</hostname>
  <gateway>192.168.1.1</gateway>
</network>

What do you think?
Right, there are no devices, so forget about reusing that structure block. But then we should define another block, similar in spirit, something like:

<domain>
  ... common stuff ...
  <container>
    <filesystem>
      ...
    </filesystem>
    <network>
      <ipaddress>192.168.1.101</ipaddress>
      <hostname>fc7-openvz</hostname>
      <gateway>192.168.1.1</gateway>
    </network>
  </container>
</domain>

The kind of descriptions is going to be different from a device-oriented set, since it's a user view and not an OS view anymore. One could argue about the 'container' term, but I guess it's adequate; it would fit chroot'ed kinds of setups, zones, VZ, basically virtualization technologies where the Node (in libvirt terminology) exports the user's resources to the domain, and not (just) devices. That would not prevent mixed approaches either.

Daniel

--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

On Mon, Jul 23, 2007 at 09:40:30AM -0400, Daniel Veillard wrote:
On Mon, Jul 23, 2007 at 06:47:32PM +0530, Shuveb Hussain wrote:
Hi Daniel,
[...]
Hum, yes that is different from all other implementations so far.
But nameserver and hostname feel a bit misplaced. To me, nameserver should go somewhere else; it's kind of a duplicate of the networking stuff. And what would happen if you also have IPv6? Suddenly the nameserver structure breaks. I don't know yet how best to fix this, but those two are problematic as-is.

OpenVZ doesn't deal with any kind of devices anyway, and since it is a container system, I don't think it will in the future either. There is only one kernel shared by the host and the guests, and thus no device-based interfaces between them. Why not do away with the "devices" tag for OpenVZ and instead do something like this:

<network>
  <ipaddress>192.168.1.101</ipaddress>
  <hostname>fc7-openvz</hostname>
  <gateway>192.168.1.1</gateway>
</network>

What do you think?
Right, there are no devices, so forget about reusing that structure block. But then we should define another block, similar in spirit, something like:

<domain>
  ... common stuff ...
  <container>
    <filesystem>
      ...
    </filesystem>
    <network>
      <ipaddress>192.168.1.101</ipaddress>
      <hostname>fc7-openvz</hostname>
      <gateway>192.168.1.1</gateway>
    </network>
  </container>
</domain>
The distinction of a 'container' element makes sense, since container-based virtualization does have very different metadata than that used for hypervisor-based virt. For the data within it, though, can we stick to the same style & syntax used elsewhere in the XML? e.g. something closer to:

<network hostname='fc7-openvz'>
  <ip address='192.168.1.101'/>
  <gateway address='192.168.1.1'/>
</network>

Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

On Mon, Jul 23, 2007 at 02:49:30PM +0100, Daniel P. Berrange wrote:
On Mon, Jul 23, 2007 at 09:40:30AM -0400, Daniel Veillard wrote:
On Mon, Jul 23, 2007 at 06:47:32PM +0530, Shuveb Hussain wrote:
Hi Daniel,
[...]
Hum, yes that is different from all other implementations so far.
But nameserver and hostname feel a bit misplaced. To me, nameserver should go somewhere else; it's kind of a duplicate of the networking stuff. And what would happen if you also have IPv6? Suddenly the nameserver structure breaks. I don't know yet how best to fix this, but those two are problematic as-is.

OpenVZ doesn't deal with any kind of devices anyway, and since it is a container system, I don't think it will in the future either. There is only one kernel shared by the host and the guests, and thus no device-based interfaces between them. Why not do away with the "devices" tag for OpenVZ and instead do something like this:

<network>
  <ipaddress>192.168.1.101</ipaddress>
  <hostname>fc7-openvz</hostname>
  <gateway>192.168.1.1</gateway>
</network>

What do you think?
Right, there are no devices, so forget about reusing that structure block. But then we should define another block, similar in spirit, something like:

<domain>
  ... common stuff ...
  <container>
    <filesystem>
      ...
    </filesystem>
    <network>
      <ipaddress>192.168.1.101</ipaddress>
      <hostname>fc7-openvz</hostname>
      <gateway>192.168.1.1</gateway>
    </network>
  </container>
</domain>
The distinction of a 'container' element makes sense, since container-based virtualization does have very different metadata than that used for hypervisor-based virt.

For the data within it, though, can we stick to the same style & syntax used elsewhere in the XML? e.g. something closer to:

<network hostname='fc7-openvz'>
  <ip address='192.168.1.101'/>
  <gateway address='192.168.1.1'/>
</network>
Right, but for some reason I could not find the place where we describe the network syntax when writing my mail. This describes the domain side, http://libvirt.org/format.html#Net1, but I can't find the network XML description ... there is no description of gateway in the format page, and ip is described once but there is no example. But yes, we should follow this.

Daniel

--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

Right, but for some reason I could not find the place where we describe the network syntax when writing my mail. This describes the domain side, http://libvirt.org/format.html#Net1, but I can't find the network XML description ... there is no description of gateway in the format page, and ip is described once but there is no example. But yes, we should follow this.
Yeah, I was looking for it in the format page but couldn't find anything relevant. I'll implement the <container> tag, and also the <network> tag as described by Daniel B.

Thanks,
--
Shuveb Hussain
Unix is very user friendly. It is just a little choosy about who its friends are
http://www.binarykarma.com

On Mon, Jul 23, 2007 at 10:11:12AM -0400, Daniel Veillard wrote:
On Mon, Jul 23, 2007 at 02:49:30PM +0100, Daniel P. Berrange wrote:
On Mon, Jul 23, 2007 at 09:40:30AM -0400, Daniel Veillard wrote:
The distinction of a 'container' element makes sense, since container-based virtualization does have very different metadata than that used for hypervisor-based virt.

For the data within it, though, can we stick to the same style & syntax used elsewhere in the XML? e.g. something closer to:

<network hostname='fc7-openvz'>
  <ip address='192.168.1.101'/>
  <gateway address='192.168.1.1'/>
</network>
Right, but for some reason I could not find the place where we describe the network syntax when writing my mail. This describes the domain side, http://libvirt.org/format.html#Net1, but I can't find the network XML description ... there is no description of gateway in the format page, and ip is described once but there is no example. But yes, we should follow this.
Yes, the networking XML is a missing bit of the website. The following snippet illustrates all the important constructs:

<network>
  <name>default</name>
  <bridge name="virbr0" />
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254" />
    </dhcp>
  </ip>
</network>

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

On Tue, 2007-07-24 at 01:10 +0100, Daniel P. Berrange wrote:
Yes, the networking XML is a missing bit of the websites. The following snippet illustrates all the important constructs:
<network>
  <name>default</name>
  <bridge name="virbr0" />
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254" />
    </dhcp>
  </ip>
</network>
A while ago, I wrote a very basic, but commented, relax-ng schema [1] for the network XML; it seems that somehow fell through the cracks.

David

[1] https://www.redhat.com/archives/libvir-list/2007-April/msg00181.html

On Tue, Jul 24, 2007 at 01:48:37AM +0000, David Lutterkort wrote:
On Tue, 2007-07-24 at 01:10 +0100, Daniel P. Berrange wrote:
Yes, the networking XML is a missing bit of the websites. The following snippet illustrates all the important constructs:
<network>
  <name>default</name>
  <bridge name="virbr0" />
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254" />
    </dhcp>
  </ip>
</network>
A while ago, I wrote a very basic, but commented, relax-ng schema [1] for the network XML; it seems that somehow fell through the cracks.
David
[1] https://www.redhat.com/archives/libvir-list/2007-April/msg00181.html
Oops, okay, sorry. I have added it to the docs in CVS. It needs an update, though, when used against the instance shown before:

paphio:~/libvirt/docs -> xmllint --noout --relaxng network.rng network.xml
network.xml:5: element ip: Relax-NG validity error : Did not expect element ip there
network.xml fails to validate
paphio:~/libvirt/docs ->

Some of the types defined in libvirt.rng could be copied, for example to verify UUIDs, MAC or IP addresses. At least it's in now!

thanks,

Daniel

--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/