Re: [PATCH] migration: support file: uri for source migration
On Mon, Sep 12, 2022 at 07:30:50PM +0300, Nikolay Borisov wrote:
> 
> 
> On 12.09.22 г. 18:41 ч., Daniel P. Berrangé wrote:
> > On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
> > > This is a prototype of supporting a 'file:' based uri protocol for
> > > writing out the migration stream of qemu. Currently the code always
> > > opens the file in DIO mode and adheres to an alignment of 64k to be
> > > generic enough. However this comes with a problem - it requires copying
> > > all data that we are writing (qemu metadata + guest ram pages) to a
> > > bounce buffer so that we adhere to this alignment.
> > 
> > The adhoc device metadata clearly needs bounce buffers since it
> > is splattered all over RAM with no concern for alignment. The use
> > of bounce buffers for this shouldn't be a performance issue though,
> > as metadata is small relative to the size of the snapshot as a whole.
> 
> Bounce buffers can be eliminated altogether so long as we simply switch
> between buffered/DIO mode via fcntl.
> 
> > The guest RAM pages should not need bounce buffers at all when using
> > huge pages, as alignment will already be way larger than we require.
> > Guests with huge pages are the ones which are likely to have huge
> > RAM sizes and thus need the DIO mode, so we should be sorted for that.
> > 
> > When using small pages for guest RAM, if it is not already allocated
> > with suitable alignment, I feel like we should be able to make it
> > so that we allocate the RAM block with good alignment to avoid the
> > need for bounce buffers. This would address the less common case of
> > a guest with huge RAM size but not huge pages.
> 
> Ram blocks are generally allocated with good alignment due to them being
> mmaped(). However, as I was toying with eliminating bounce buffers for ram
> I hit an issue where the page headers being written (8 bytes each) aren't
> aligned (naturally). Imo the on-disk format can be changed the following
> way: the ramblock header is written at an aligned offset, and each
> subsequent page is then written at an offset from the base address of the
> ramblock, that is its offset would be:
> 
> page_offset = page_addr - ramblock_base. Then the page is written at
> ramblock_base (in the file) + page_offset. This would eliminate the page
> headers altogether. That leaves only the initial ramblock header to be
> aligned. However, this would lead to us potentially having to issue one
> lseek per written page - to adjust the file position - which might not
> be a problem in itself, but who knows. How does that sound to you?

Yes, definitely. We don't want the headers in front of each page, just
one single large block.

Looking forward to multi-fd, we don't want to be using lseek at all,
because that changes the file offset for all threads using the FD.
Instead we need to be able to use pread/pwrite for writing the RAM
blocks.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
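[Editorial sketch] To make the pread/pwrite idea above concrete, here is a minimal sketch in plain POSIX C - not QEMU code; RamBlockLayout, its fields and the helper name are invented for illustration - of writing one guest page at a fixed file offset derived from page_offset = page_addr - ramblock_base, so that concurrent multifd threads never have to move the shared file offset with lseek:

/*
 * Sketch only: write one RAM page at a fixed location in the output
 * file with pwrite(), instead of lseek()+write().  pwrite() takes an
 * explicit offset, so writer threads sharing the fd never disturb
 * each other's file position.
 */
#include <errno.h>
#include <stdint.h>
#include <unistd.h>

typedef struct RamBlockLayout {
    uint8_t *host_base;   /* start of the RAM block in host memory    */
    off_t    file_base;   /* aligned offset of this block in the file */
} RamBlockLayout;

/* Returns 0 on success, -errno on failure. */
static int write_page_at_fixed_offset(int fd, const RamBlockLayout *rb,
                                      const uint8_t *page_addr,
                                      size_t page_size)
{
    /* Offset of the page inside its RAM block ...       */
    off_t page_offset = page_addr - rb->host_base;
    /* ... maps 1:1 onto an offset inside the file.      */
    off_t file_offset = rb->file_base + page_offset;
    size_t done = 0;

    while (done < page_size) {
        ssize_t n = pwrite(fd, page_addr + done, page_size - done,
                           file_offset + done);
        if (n < 0) {
            if (errno == EINTR) {
                continue;
            }
            return -errno;
        }
        done += n;
    }
    return 0;
}

The same arithmetic works on the restore side with pread(), and because a page's location in the file never changes, a page that is dirtied and sent again simply overwrites its earlier copy.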
Re: [PATCH] migration: support file: uri for source migration
On 12.09.22 г. 18:41 ч., Daniel P. Berrangé wrote:
> On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
> > This is a prototype of supporting a 'file:' based uri protocol for
> > writing out the migration stream of qemu. Currently the code always
> > opens the file in DIO mode and adheres to an alignment of 64k to be
> > generic enough. However this comes with a problem - it requires copying
> > all data that we are writing (qemu metadata + guest ram pages) to a
> > bounce buffer so that we adhere to this alignment.
> 
> The adhoc device metadata clearly needs bounce buffers since it
> is splattered all over RAM with no concern for alignment. The use
> of bounce buffers for this shouldn't be a performance issue though,
> as metadata is small relative to the size of the snapshot as a whole.

Bounce buffers can be eliminated altogether so long as we simply switch
between buffered/DIO mode via fcntl.

> The guest RAM pages should not need bounce buffers at all when using
> huge pages, as alignment will already be way larger than we require.
> Guests with huge pages are the ones which are likely to have huge
> RAM sizes and thus need the DIO mode, so we should be sorted for that.
> 
> When using small pages for guest RAM, if it is not already allocated
> with suitable alignment, I feel like we should be able to make it
> so that we allocate the RAM block with good alignment to avoid the
> need for bounce buffers. This would address the less common case of
> a guest with huge RAM size but not huge pages.

Ram blocks are generally allocated with good alignment due to them being
mmaped(). However, as I was toying with eliminating bounce buffers for ram
I hit an issue where the page headers being written (8 bytes each) aren't
aligned (naturally). Imo the on-disk format can be changed the following
way: the ramblock header is written at an aligned offset, and each
subsequent page is then written at an offset from the base address of the
ramblock, that is its offset would be:

page_offset = page_addr - ramblock_base. Then the page is written at
ramblock_base (in the file) + page_offset. This would eliminate the page
headers altogether. That leaves only the initial ramblock header to be
aligned. However, this would lead to us potentially having to issue one
lseek per written page - to adjust the file position - which might not
be a problem in itself, but who knows. How does that sound to you?

> Thus if we assume guest RAM is suitably aligned, then we can avoid
> bounce buffers for RAM pages, while still using bounce buffers for
> the metadata.
> 
> > With this code I get the following performance results:
> > 
> > DIO     exec: cat > file    virsh --bypass-cache
> > 82      77                  81
> > 82      78                  80
> > 80      80                  82
> > 82      82                  77
> > 77      79                  77
> > 
> > AVG:    80.6    79.2        79.4
> > stddev: 1.959   1.720       2.05
> > 
> > All numbers are in seconds.
> > 
> > Those results are somewhat surprising to me as I'd expected doing the
> > writeout directly within qemu and avoiding copying between qemu and
> > virsh's iohelper process would result in a speed up. Clearly that's not
> > the case; I attribute this to the fact that all memory pages have to be
> > copied into the bounce buffer. There is more measurement/profiling
> > work that I'd have to do in order to (dis)prove this hypothesis and will
> > report back when I have the data.
> 
> When using the libvirt iohelper we have multiple CPUs involved. IOW the
> bounce buffer copy is taking place on a separate CPU from the QEMU
> migration loop. This ability to use multiple CPUs may well have balanced
> out any benefit from doing DIO on the QEMU side.
> 
> If you eliminate bounce buffers for guest RAM and write it directly to
> the fixed location on disk, then we should see the benefit - and if not,
> then something is really wrong in our thoughts.
> 
> > However I'm sending the code now as I'd like to facilitate a discussion
> > as to whether this is an approach that would be acceptable for upstream
> > merging. Any ideas/comments would be much appreciated.
> 
> AFAICT this impl is still using the existing on-disk format, where RAM
> pages are just written inline to the stream. For the DIO benefit to be
> maximised we need the on-disk format to be changed, so that the guest
> RAM regions can be directly associated with fixed locations on disk.
> This also means that if the guest dirties RAM while it's saving, then we
> overwrite the existing content on disk, such that restore only ever
> needs to restore each RAM page once, instead of restoring every dirtied
> version.
> 
> With regards,
> Daniel
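[Editorial sketch] For reference, the buffered/DIO switching mentioned above boils down to an fcntl() round trip on the already-open fd. A simplified sketch follows (generic C, not the patch's qio_channel_file_disable_dio helper; error handling is reduced to perror):

/*
 * Sketch only: toggle O_DIRECT on an already-open fd so that the
 * unaligned metadata can be written through the page cache while the
 * aligned bulk RAM data uses direct I/O.
 */
#define _GNU_SOURCE             /* O_DIRECT */
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>

static int set_direct_io(int fd, bool enable)
{
    int flags = fcntl(fd, F_GETFL);

    if (flags < 0) {
        perror("F_GETFL");
        return -1;
    }

    flags = enable ? (flags | O_DIRECT) : (flags & ~O_DIRECT);

    if (fcntl(fd, F_SETFL, flags) < 0) {
        perror("F_SETFL");
        return -1;
    }
    return 0;
}

The intent would be to keep O_DIRECT clear while the unaligned device metadata goes through the page cache, and set it only for the bulk, properly aligned RAM phase; anything written while the flag is set still has to honour the alignment constraints of the underlying file system.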
Re: [PATCH] migration: support file: uri for source migration
On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
> This is a prototype of supporting a 'file:' based uri protocol for
> writing out the migration stream of qemu. Currently the code always
> opens the file in DIO mode and adheres to an alignment of 64k to be
> generic enough. However this comes with a problem - it requires copying
> all data that we are writing (qemu metadata + guest ram pages) to a
> bounce buffer so that we adhere to this alignment.

The adhoc device metadata clearly needs bounce buffers since it
is splattered all over RAM with no concern for alignment. The use
of bounce buffers for this shouldn't be a performance issue though,
as metadata is small relative to the size of the snapshot as a whole.

The guest RAM pages should not need bounce buffers at all when using
huge pages, as alignment will already be way larger than we require.
Guests with huge pages are the ones which are likely to have huge
RAM sizes and thus need the DIO mode, so we should be sorted for that.

When using small pages for guest RAM, if it is not already allocated
with suitable alignment, I feel like we should be able to make it
so that we allocate the RAM block with good alignment to avoid the
need for bounce buffers. This would address the less common case of
a guest with huge RAM size but not huge pages.

Thus if we assume guest RAM is suitably aligned, then we can avoid
bounce buffers for RAM pages, while still using bounce buffers for
the metadata.

> With this code I get the following performance results:
> 
> DIO     exec: cat > file    virsh --bypass-cache
> 82      77                  81
> 82      78                  80
> 80      80                  82
> 82      82                  77
> 77      79                  77
> 
> AVG:    80.6    79.2        79.4
> stddev: 1.959   1.720       2.05
> 
> All numbers are in seconds.
> 
> Those results are somewhat surprising to me as I'd expected doing the
> writeout directly within qemu and avoiding copying between qemu and
> virsh's iohelper process would result in a speed up. Clearly that's not
> the case; I attribute this to the fact that all memory pages have to be
> copied into the bounce buffer. There is more measurement/profiling
> work that I'd have to do in order to (dis)prove this hypothesis and will
> report back when I have the data.

When using the libvirt iohelper we have multiple CPUs involved. IOW
the bounce buffer copy is taking place on a separate CPU from the
QEMU migration loop. This ability to use multiple CPUs may well have
balanced out any benefit from doing DIO on the QEMU side.

If you eliminate bounce buffers for guest RAM and write it directly to
the fixed location on disk, then we should see the benefit - and if not,
then something is really wrong in our thoughts.

> However I'm sending the code now as I'd like to facilitate a discussion
> as to whether this is an approach that would be acceptable for upstream
> merging. Any ideas/comments would be much appreciated.

AFAICT this impl is still using the existing on-disk format, where RAM
pages are just written inline to the stream. For the DIO benefit to be
maximised we need the on-disk format to be changed, so that the guest
RAM regions can be directly associated with fixed locations on disk.
This also means that if the guest dirties RAM while it's saving, then we
overwrite the existing content on disk, such that restore only ever
needs to restore each RAM page once, instead of restoring every dirtied
version.
With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
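[Editorial sketch] As an illustration of the "allocate the RAM block with good alignment" point, below is a sketch in plain POSIX C (not the QEMU allocator; RAMBlock/qemu_memalign handling is deliberately out of scope) that over-allocates an anonymous mapping and trims it so the block starts on, say, a 64k boundary. It assumes align is a power of two and that both size and align are multiples of the host page size:

/*
 * Sketch only: allocate 'size' bytes of anonymous memory whose start
 * address is aligned to 'align' (e.g. 64k) by over-allocating and
 * trimming the excess.  The partial munmap() calls stay page-aligned
 * under the stated assumptions.
 */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

static void *alloc_aligned_ram(size_t size, size_t align)
{
    size_t total = size + align;
    uint8_t *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) {
        return NULL;
    }

    uintptr_t addr = (uintptr_t)base;
    uintptr_t aligned = (addr + align - 1) & ~(uintptr_t)(align - 1);
    size_t head = aligned - addr;
    size_t tail = total - head - size;

    /* Give back the unaligned head and the unused tail. */
    if (head) {
        munmap(base, head);
    }
    if (tail) {
        munmap((void *)(aligned + size), tail);
    }
    return (void *)aligned;
}

With the block itself aligned like this and pages written at block-relative offsets, the buffer address handed to an O_DIRECT write is aligned by construction, leaving only the file offset and length to take care of.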
[PATCH] migration: support file: uri for source migration
This is a prototype of supporting a 'file:' based uri protocol for
writing out the migration stream of qemu. Currently the code always
opens the file in DIO mode and adheres to an alignment of 64k to be
generic enough. However this comes with a problem - it requires copying
all data that we are writing (qemu metadata + guest ram pages) to a
bounce buffer so that we adhere to this alignment. With this code I get
the following performance results:

DIO     exec: cat > file    virsh --bypass-cache
82      77                  81
82      78                  80
80      80                  82
82      82                  77
77      79                  77

AVG:    80.6    79.2        79.4
stddev: 1.959   1.720       2.05

All numbers are in seconds.

Those results are somewhat surprising to me as I'd expected doing the
writeout directly within qemu and avoiding copying between qemu and
virsh's iohelper process would result in a speed up. Clearly that's not
the case; I attribute this to the fact that all memory pages have to be
copied into the bounce buffer. There is more measurement/profiling
work that I'd have to do in order to (dis)prove this hypothesis and will
report back when I have the data.

However I'm sending the code now as I'd like to facilitate a discussion
as to whether this is an approach that would be acceptable for upstream
merging. Any ideas/comments would be much appreciated.

Signed-off-by: Nikolay Borisov
---
 include/io/channel-file.h |   1 +
 include/io/channel.h      |   1 +
 io/channel-file.c         |  17 +++
 migration/meson.build     |   1 +
 migration/migration.c     |   4 ++
 migration/migration.h     |   2 +
 migration/qemu-file.c     | 104 +-
 7 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/include/io/channel-file.h b/include/io/channel-file.h
index 50e8eb113868..6cb0b698c62c 100644
--- a/include/io/channel-file.h
+++ b/include/io/channel-file.h
@@ -89,4 +89,5 @@ qio_channel_file_new_path(const char *path,
                           mode_t mode,
                           Error **errp);
 
+void qio_channel_file_disable_dio(QIOChannelFile *ioc);
 #endif /* QIO_CHANNEL_FILE_H */
diff --git a/include/io/channel.h b/include/io/channel.h
index c680ee748021..6127ff6c0626 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -41,6 +41,7 @@ enum QIOChannelFeature {
     QIO_CHANNEL_FEATURE_SHUTDOWN,
     QIO_CHANNEL_FEATURE_LISTEN,
     QIO_CHANNEL_FEATURE_WRITE_ZERO_COPY,
+    QIO_CHANNEL_FEATURE_DIO,
 };
 
diff --git a/io/channel-file.c b/io/channel-file.c
index b67687c2aa64..5c7211b128f1 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -59,6 +59,10 @@ qio_channel_file_new_path(const char *path,
         return NULL;
     }
 
+    if (flags & O_DIRECT) {
+        qio_channel_set_feature(QIO_CHANNEL(ioc), QIO_CHANNEL_FEATURE_DIO);
+    }
+
     trace_qio_channel_file_new_path(ioc, path, flags, mode, ioc->fd);
 
     return ioc;
@@ -109,6 +113,19 @@ static ssize_t qio_channel_file_readv(QIOChannel *ioc,
     return ret;
 }
 
+
+void qio_channel_file_disable_dio(QIOChannelFile *ioc)
+{
+    int flags = fcntl(ioc->fd, F_GETFL);
+    if (flags == -1) {
+        //handle failure
+    }
+
+    if (fcntl(ioc->fd, F_SETFL, (flags & ~O_DIRECT)) == -1) {
+        error_report("Can't disable O_DIRECT");
+    }
+}
+
 static ssize_t qio_channel_file_writev(QIOChannel *ioc,
                                        const struct iovec *iov,
                                        size_t niov,
diff --git a/migration/meson.build b/migration/meson.build
index 690487cf1a81..30a8392701c3 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -17,6 +17,7 @@ softmmu_ss.add(files(
   'colo.c',
   'exec.c',
   'fd.c',
+  'file.c',
   'global_state.c',
   'migration.c',
   'multifd.c',
diff --git a/migration/migration.c b/migration/migration.c
index bb8bbddfe467..e7e84ae12066 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -20,6 +20,7 @@
 #include "migration/blocker.h"
 #include "exec.h"
 #include "fd.h"
+#include "file.h"
 #include "socket.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
@@ -2414,6 +2415,8 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         exec_start_outgoing_migration(s, p, &local_err);
     } else if (strstart(uri, "fd:", &p)) {
         fd_start_outgoing_migration(s, p, &local_err);
+    } else if (strstart(uri, "file:", &p)) {
+        file_start_outgoing_migration(s, p, &local_err);
     } else {