On 4/4/2012 4:45 AM, Richard W.M. Jones wrote:
> Then I created my blank 'disk' file and tried to
> run virt-rescue on it. It crashed out with an
> error from febootstrap.
First of all, debug this properly:
(1) What is the full error message?
(2) What is the complete, unedited output of 'libguestfs-test-tool'?
(3) What versions of libguestfs & febootstrap are you using, and where
did you get them from?
You can post the details on our mailing list libguestfs@redhat.com
(no need to subscribe if you don't want to).
> After finding nothing terribly
> useful or current on this in searching, I tried
> guestfish instead. After some fiddling I got it
> to attach my blank disk. However I cannot find a
> reasonable way to partition it with the part-add which
> seems to want me to count sectors. All I want is
> a 9G linux and a 2G swap.
Assuming the filesystem was in /tmp/root.tar.gz, the following code
will do this:
guestfish <<EOF
sparse /tmp/test.img 11G
run
part-init /dev/sda mbr
# 9GB sda1
part-add /dev/sda p 64 $(( 9*1024*1024*2 ))
# remainder in sda2
part-add /dev/sda p $(( 9*1024*1024*2 + 1 )) -64
mkfs ext4 /dev/sda1
mkswap /dev/sda2
mount /dev/sda1 /
tgz-in /tmp/root.tar.gz /
EOF
Example:
$ sh test.sh
$ virt-df -a test.img -h
Filesystem Size Used Available Use%
test.img:/dev/sda1 9.0G 276M 8.3G 4%
$ ll -h test.img
-rw-rw-r--. 1 rjones rjones 11G Apr 4 09:39 test.img
Whether this would actually boot is another question: you may also
need to add some grub commands to set up the bootloader, *or* (better
and easier IMHO) set up libvirt so that it boots from an external
kernel + initrd.
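For the external kernel + initrd route, the relevant pieces are the
<kernel>, <initrd> and <cmdline> elements inside <os> in the libvirt
domain XML (the paths and kernel version below are hypothetical
examples, not something from the script above):

```xml
<!-- hypothetical fragment of a libvirt domain definition -->
<os>
  <type arch='x86_64'>hvm</type>
  <kernel>/boot/vmlinuz-3.3.0</kernel>
  <initrd>/boot/initramfs-3.3.0.img</initrd>
  <cmdline>root=/dev/sda1 ro</cmdline>
</os>
```

With this, the guest boots the host-supplied kernel directly and no
bootloader needs to be installed in the image at all.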
> It also lacks access to rsync.
It's not the first time that someone has asked for rsync, and it
wouldn't be too hard to add. However note that rsync really gives you
no benefit when you're creating a filesystem from scratch, because
there's no original to rsync against. If you are updating a
filesystem image then rsync makes sense.
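For the image-update case, one way to get rsync today, without it being
added to the appliance, is to mount the image on the host with
guestmount and run rsync there. A sketch, assuming the image and source
tree paths (/tmp/test.img, /srv/newroot, /mnt/guest) are hypothetical
examples:

```shell
# mount the first partition of the image via FUSE on the host
guestmount -a /tmp/test.img -m /dev/sda1 /mnt/guest
# update the image's root filesystem from a local tree
rsync -a --delete /srv/newroot/ /mnt/guest/
# unmount the FUSE filesystem when done
fusermount -u /mnt/guest
```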
Rich.
Also, rsync does not do anything magical/differential when dealing with
local copies. Even an update is still a full normal copy, because there
is no benefit to the differential processing (the same disk(s) have to
be read and written just as much, and it all has to pass through the
same cpu and busses). You can force the differential behaviour by doing
a network transfer over localhost, but "ya canna change the laws o'
physics". And not only is there no differential magic benefit, there is
still a major speed loss: rsync is usually a lot slower than the same
tar/cpio/cp doing full copies, even over networks, even using -W and
--inplace.
The point of using rsync locally is definitely not the speed of the
data transfer itself, but the speed of the user.
The main benefit is just familiarity and convenience with rsync's
command-line arguments and behavior. You spend a lot of time learning
how to get various common tasks done with rsync, and it really is quite
a swiss army knife; you develop trust and confidence in it, and you
know exactly what will happen when you run a given command because you
do the same thing routinely. Those features are handy enough to be
worth taking the speed hit, in return for not having to develop
complicated, entirely different find/cp/cpio commands that you use less
often and are less sure about what they'll actually do when you press
enter.
You could argue that find/cpio is better for writing into scripts and
apps, since they are a more universal dependency, as well as for the
performance reason; but you could also argue that it's an advantage to
use a command that works equally well between any two points, be they
local or remote, making your script/app more useful in more unpredicted
situations for free.
--
bkw