guest-fsfreeze-freeze freezes all mounted block devices

I wondered if anyone here can confirm whether

  virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'

freezes the filesystems of all mounted block devices. So if I use 4 block devices, are they all frozen for snapshotting, or just the root fs?

On Fri, Feb 14, 2020 at 22:14:55 +0100, Marc Roos wrote:
I wondered if anyone here can confirm that
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'
Note that libvirt implements this directly via 'virsh domfsfreeze'. This is the corresponding man page entry:

  domfsfreeze domain [[--mountpoint] mountpoint...]

  Freeze mounted filesystems within a running domain to prepare for
  consistent snapshots.

  The --mountpoint option takes a parameter mountpoint, which is a mount
  point path of the filesystem to be frozen. This option can occur
  multiple times. If this is not specified, every mounted filesystem is
  frozen.
freezes the filesystems of all mounted block devices. So if I use 4 block devices, are they all frozen for snapshotting, or just the root fs?
Since you are using agent passthrough, where libvirt doesn't do any interpretation of what happens, please refer to the appropriate QEMU agent documentation for the semantics of the command you've used.
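For illustration, the domfsfreeze/domfsthaw round trip looks roughly like this (a sketch: 'testdom' and the mount points are hypothetical, and the output shown is illustrative):

  $ virsh domfsfreeze testdom
  Froze 4 filesystem(s)

  $ virsh domfsthaw testdom
  Thawed 4 filesystem(s)

  # Freeze only selected mount points instead of everything:
  $ virsh domfsfreeze testdom --mountpoint /data1 --mountpoint /data2
  Froze 2 filesystem(s)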

Hi Peter,

Should I assume that virsh domfsfreeze does not require the qemu-agent service in the guest?

PS. I couldn't find the result. Afaik it looks like it is returning the number of frozen/thawed filesystems.

On Mon, Feb 17, 2020 at 10:03:27 +0100, Marc Roos wrote:
Hi Peter,
Should I assume that virsh domfsfreeze does not require the qemu-agent service in the guest?
No. That's the official way to drive the "guest-fsfreeze-freeze" agent command via libvirt, so you must have the guest agent set up the same way as you used it before. Using qemu-agent-command is a backdoor for testing arbitrary commands, and you can break things with it. You are on your own if stuff breaks using that approach.
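A quick sanity check that the agent is actually reachable, using the standard guest-ping command (a sketch; 'testdom' is a hypothetical domain name):

  $ virsh qemu-agent-command testdom '{"execute":"guest-ping"}'
  {"return":{}}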
PS. I couldn't find the result. Afaik it looks like it is returning the number of frozen/thawed filesystems.
qemu.git/qga/qapi-schema.json says:

  ##
  # @guest-fsfreeze-freeze:
  #
  # Sync and freeze all freezable, local guest filesystems. If this
  # command succeeded, you may call @guest-fsfreeze-thaw later to
  # unfreeze.
  #
  # Note: On Windows, the command is implemented with the help of a
  # Volume Shadow-copy Service DLL helper. The frozen state is limited
  # for up to 10 seconds by VSS.
  #
  # Returns: Number of file systems currently frozen. On error, all
  # filesystems will be thawed. If no filesystems are frozen as a result
  # of this call, then @guest-fsfreeze-status will remain "thawed" and
  # calling @guest-fsfreeze-thaw is not necessary.
  #
  # Since: 0.15.0

You might also want to have a look at 'guest-fsfreeze-freeze-list'.
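For reference, the raw agent round trip and its return values look like this (illustrative output; the count depends on how many filesystems the guest has mounted):

  $ virsh qemu-agent-command testdom '{"execute":"guest-fsfreeze-freeze"}'
  {"return":4}

  $ virsh qemu-agent-command testdom '{"execute":"guest-fsfreeze-status"}'
  {"return":"frozen"}

  $ virsh qemu-agent-command testdom '{"execute":"guest-fsfreeze-thaw"}'
  {"return":4}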
As a side note: for snapshotting via virsh snapshot-create, use the --quiesce option, which does what you want, or use domfsfreeze with mountpoint arguments if you don't want to freeze everything.
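A sketch of such a quiesced, disk-only snapshot (domain and snapshot names are hypothetical):

  $ virsh snapshot-create-as testdom snap1 --disk-only --quiesce --atomic

The --quiesce flag makes libvirt drive guest-fsfreeze-freeze/thaw around the snapshot via the guest agent, so it too requires the agent to be running in the guest.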

Hmmm, using 'virsh domfsinfo testdom' gives me a crash in win2008r2 (using software from virtio-win-0.1.171.iso):

  Fault bucket , type 0
  Event Name: APPCRASH
  Response: Not available
  Cab Id: 0

  Problem signature:
  P1: qemu-ga.exe
  P2: 100.0.0.0
  P3: 5c473543
  P4: KERNELBASE.dll
  P5: 6.1.7601.24545
  P6: 5e0eb6bd
  P7: c0000005
  P8: 000000000000c4d2
  P9:
  P10:

  Attached files:

  These files may be available here:
  C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_qemu-ga.exe_bd2e6535bdb93328680e0285e89e08f2866db83_0b0deada

  Analysis symbol:
  Rechecking for solution: 0
  Report Id: 3d82596e-517c-11ea-b213-525400e83365
  Report Status: 0

On Mon, Feb 17, 2020 at 01:52:02PM +0100, Marc Roos wrote:
Hmmm, using 'virsh domfsinfo testdom' gives me a crash in win2008r2 (using software from virtio-win-0.1.171.iso)
That's not good! Could you report this problem to the QEMU bug tracker?

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

Link?

On Fri, Feb 21, 2020 at 04:35:46PM +0100, Marc Roos wrote:
Link?
That's not good! Could you report this problem to the QEMU bug tracker?
https://www.qemu.org/support/

Regards,
Daniel

snapshotting disk images with exchange db / volumes

I have been asking at technet whether I could snapshot an exchange db volume[1]. But it looks like windows server backup is sort of doing the same: creating a VSS image and backing that one up. However, it truncates the exchange db log files after a successful procedure. I was wondering if someone is doing this already.

How does the guest freeze work exactly in windows? (In linux it is quite clear: no io is done.) I assume that if the guest is invoking the vss service, and you then snapshot in the host environment, you have a snapshot containing a vss image? So if you want to recover, you first need to 'roll back' to that internal vss image on the snapshot?

[1] https://social.technet.microsoft.com/Forums/office/en-US/7818e7a5-3c09-4f36-...
https://social.technet.microsoft.com/Forums/office/en-US/922ff22a-73d6-4b86-...

Does anyone know how the agent is using the vss service? Where is this process documented?

virsh attach-interface auto up

I am doing a virsh detach-interface and an attach-interface. Is it possible to automatically bring the interface up after attaching it?

On 8/8/20 9:42 AM, Marc Roos wrote:
I am doing a virsh detach-interface and an attach-interface. Is it possible to automatically bring the interface up after attaching it?
By coincidence, I was just playing with this with the python API. The interface being brought up automatically, if I understand your question correctly, depends on the OS of the guest having hotplug support for the NIC you have selected for it.

It takes a couple seconds, but I can generally detach and re-attach an interface in CentOS Linux, for instance, with only about a 2-5 second hiccup in network traffic.

--
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info
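For reference, a minimal detach/re-attach round trip (a sketch; the domain name, bridge, and MAC address are hypothetical):

  # Look up the interface's type, source, and MAC first:
  $ virsh domiflist testdom

  $ virsh detach-interface testdom bridge --mac 52:54:00:12:34:56 --live
  $ virsh attach-interface testdom bridge virbr0 --mac 52:54:00:12:34:56 --model virtio --live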

On 8/8/20 11:26 PM, brent s. wrote:
On 8/8/20 9:42 AM, Marc Roos wrote:
I am doing a virsh detach-interface and an attach-interface. Is it possible to automatically bring the interface up after attaching it?
By coincidence, I was just playing with this with the python API.
The interface being brought up automatically, if I understand your question correctly, depends on the OS of the guest having hotplug support for the NIC you have selected for it.
It takes a couple seconds, but I can generally detach and re-attach an interface in CentOS Linux, for instance, with only about a 2-5 second hiccup in network traffic.
Yes, by default any hotplugged interface will be online when it's attached. You need to modify the XML of the interface to have it plugged in with an offline status. If it's not coming up in the guest, then that's something in the guest OS, not the emulated interface's online status, and will need to be taken care of in the guest OS.
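A sketch of attaching an interface with its link administratively down, via <link state='down'/> in the interface XML, and bringing it up later (domain name, bridge, and MAC are hypothetical):

  $ cat > iface.xml <<'EOF'
  <interface type='bridge'>
    <source bridge='virbr0'/>
    <mac address='52:54:00:12:34:56'/>
    <model type='virtio'/>
    <link state='down'/>
  </interface>
  EOF

  $ virsh attach-device testdom iface.xml --live

  # Later, bring the link up; the guest sees the carrier appear:
  $ virsh domif-setlink testdom 52:54:00:12:34:56 up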

Hi Brent, you are right! This has been removed from centos7[1], pfff. I have managed to get this working by copying these two files from CentOS6:

  /etc/sysconfig/network-scripts/net.hotplug
  /lib/udev/rules.d/60-net.rules

And running:

  udevadm control --reload-rules && udevadm trigger

[1] https://access.redhat.com/solutions/429653

On 8/9/20 5:07 AM, Marc Roos wrote:
Hi Brent, you are right! This has been removed from centos7[1] pfff
I have managed to get this working by copying these two files from CentOS6
/etc/sysconfig/network-scripts/net.hotplug
/lib/udev/rules.d/60-net.rules
And running udevadm control --reload-rules && udevadm trigger
Huh! It seemed to work fine for my CentOS 7 guests, too. Most of my testing was done in CentOS 8, but it seemed to work fine in both.

What network driver are you using for the virtualized NIC? I'm using virtio for both the 7.x and 8.x guests.

I think it used to work also; maybe in one of the latest releases of 7 it has been removed in favour of using NetworkManager (which I do not have).
participants (5)
- brent s.
- Daniel P. Berrangé
- Laine Stump
- Marc Roos
- Peter Krempa