Hi,
>> The only thing I haven't done from above is to use ionice on mail
>> processes. I'm using RAID5 across three 1TB SATA3 disks,
> RAID 5 is another major bottleneck there. The commonly cited write
> penalty for RAID 5 is 4 I/O operations per logical write, while RAID 1/10
> incurs only 2. Given the typical load on an email server, with more
> writes than reads, this becomes rather punishing.
>
> So you might see a major improvement simply by adding another 1TB disk
> and going for RAID 10.
I thought RAID10 still involved RAID1 on all disks, so really the only
improvement would be the lack of the parity write, correct? The
Wikipedia entry seems to indicate it's not all that much faster:
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
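
If I follow the write-penalty argument, the back-of-the-envelope numbers
come out something like this (assuming ~100 random IOPS per SATA disk and
a 70/30 write/read mix, which are just guesses on my part):

  RAID 5  (3 disks):  300 / (0.3 + 0.7 * 4) = ~97 effective IOPS
  RAID 10 (4 disks):  400 / (0.3 + 0.7 * 2) = ~235 effective IOPS

so on paper the gap is bigger than that page suggests, at least for a
write-heavy workload, if those assumptions hold.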
Am I just expecting more from kvm/qemu than is realistic at this
point? Are there no high-volume mail servers that use libvirt?
>> <disk type='file' device='disk'>
>>   <driver name='qemu' type='raw'/>
>>   <source file='/var/lib/libvirt/images/mail02.img'/>
>>   <target dev='vda' bus='virtio'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
>> </disk>
> I meant raw disk devices rather than files, e.g.
>
>   <disk type='block' device='disk'>
>     <driver name='qemu' type='raw' cache='none'/>
>     <source dev='/dev/server02_vg1_2tb/lv_vm03_swap'/>
>     <target dev='vdb' bus='virtio'/>
>   </disk>
>
> This eliminates one layer of filesystem overhead.
Can this type change be made without modifying the image itself, or does
the image need to be converted first?
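
Since the image is already raw, I'm guessing the conversion is really just
a matter of copying it block-for-block onto a logical volume and then
pointing the XML at that device, e.g. something like (VG/LV names and size
made up, and I assume the guest has to be shut down first):

  lvcreate -L 20G -n lv_mail02 vg1
  dd if=/var/lib/libvirt/images/mail02.img of=/dev/vg1/lv_mail02 bs=4M

or the qemu-img equivalent:

  qemu-img convert -f raw -O raw /var/lib/libvirt/images/mail02.img /dev/vg1/lv_mail02

Does that sound about right, or am I missing something?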
I also found this cool doc on monitoring and improving performance:
http://www.ufsdump.org/papers/io-tuning.pdf
Thanks again,
Alex