Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-08-05 Thread Laszlo Ersek
On 08/01/13 16:31, Luiz Capitulino wrote:
 On Thu, 1 Aug 2013 09:41:07 -0400
 Luiz Capitulino lcapitul...@redhat.com wrote:
 
 Applied to the qmp branch, thanks.
 
 Hmm, it breaks the build. Dropping it from the queue for now:
 
 /home/lcapitulino/work/src/upstream/qmp-unstable/target-s390x/arch_dump.c:179:5:
  error: conflicting types for ‘cpu_get_dump_info’
  int cpu_get_dump_info(ArchDumpInfo *info)
  ^
 In file included from 
 /home/lcapitulino/work/src/upstream/qmp-unstable/target-s390x/arch_dump.c:17:0:
 /home/lcapitulino/work/src/upstream/qmp-unstable/include/sysemu/dump.h:24:5: 
 note: previous declaration of ‘cpu_get_dump_info’ was here
  int cpu_get_dump_info(ArchDumpInfo *info,
  ^
 make[1]: *** [target-s390x/arch_dump.o] Error 1
 make: *** [subdir-s390x-softmmu] Error 2
 make: *** Waiting for unfinished jobs....
 

My series was based on

  Author: Aurelien Jarno aurel...@aurel32.net  2013-07-29 09:03:23
  Committer: Aurelien Jarno aurel...@aurel32.net  2013-07-29 09:03:23

  Merge branch 'trivial-patches' of git://git.corpit.ru/qemu

and it compiled then just fine.

target-s390x/arch_dump.c was added in

  Author: Ekaterina Tumanova tuman...@linux.vnet.ibm.com  2013-07-10 15:26:46
  Committer: Christian Borntraeger borntrae...@de.ibm.com  2013-07-30 16:12:25

  s390: Implement dump-guest-memory support for target s390x

See the commit date: 2013-07-30.
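
For reference, the conflict is between the two-parameter prototype this
series introduces in include/sysemu/dump.h and the one-parameter
definition the freshly merged s390x file still carries -- in short
(a sketch, not verbatim from either tree):

    /* include/sysemu/dump.h, as changed by this series (sketch): */
    int cpu_get_dump_info(ArchDumpInfo *info,
                          const GuestPhysBlockList *guest_phys_blocks);

    /* target-s390x/arch_dump.c, merged 2013-07-30, still defines the
     * pre-series shape, hence the "conflicting types" error: */
    int cpu_get_dump_info(ArchDumpInfo *info)
    {
        /* ... */
    }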

I'll refresh the series.

Laszlo



Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-08-01 Thread Luiz Capitulino
On Thu, 1 Aug 2013 09:41:07 -0400
Luiz Capitulino lcapitul...@redhat.com wrote:

 Applied to the qmp branch, thanks.

Hmm, it breaks the build. Dropping it from the queue for now:

/home/lcapitulino/work/src/upstream/qmp-unstable/target-s390x/arch_dump.c:179:5:
 error: conflicting types for ‘cpu_get_dump_info’
 int cpu_get_dump_info(ArchDumpInfo *info)
 ^
In file included from 
/home/lcapitulino/work/src/upstream/qmp-unstable/target-s390x/arch_dump.c:17:0:
/home/lcapitulino/work/src/upstream/qmp-unstable/include/sysemu/dump.h:24:5: 
note: previous declaration of ‘cpu_get_dump_info’ was here
 int cpu_get_dump_info(ArchDumpInfo *info,
 ^
make[1]: *** [target-s390x/arch_dump.o] Error 1
make: *** [subdir-s390x-softmmu] Error 2
make: *** Waiting for unfinished jobs....



Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-08-01 Thread Luiz Capitulino
On Mon, 29 Jul 2013 16:37:12 +0200
Laszlo Ersek ler...@redhat.com wrote:

 (Apologies for the long To: list, I'm including everyone who
 participated in
 https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg02607.html).
 
 Conceptually, the dump-guest-memory command works as follows:
 (a) pause the guest,
 (b) get a snapshot of the guest's physical memory map, as provided by
 qemu,
 (c) retrieve the guest's virtual mappings, as seen by the guest (this is
 where paging=true vs. paging=false makes a difference),
 (d) filter (c) as requested by the QMP caller,
 (e) write ELF headers, keying off (b) -- the guest's physmap -- and (d)
 -- the filtered guest mappings.
 (f) dump RAM contents, keying off the same (b) and (d),
 (g) unpause the guest (if necessary).
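
 In code terms, the flow above is roughly the following. This is an
 illustrative sketch only: apart from vm_stop()/vm_start(), the helper
 names for steps (b)-(f) are invented, not the actual dump.c call
 sequence.

     /* Outline of dump-guest-memory; the (b)-(f) helpers are
      * hypothetical names for the steps listed above. */
     static void dump_guest_memory_outline(DumpState *s, bool paging)
     {
         vm_stop(RUN_STATE_SAVE_VM);                /* (a) pause        */
         snapshot_guest_phys_map(s);                /* (b) physical map */
         collect_guest_virtual_mappings(s, paging); /* (c) guest view   */
         filter_guest_mappings(s);                  /* (d) filter       */
         write_elf_headers(s);                      /* (e) ELF headers  */
         write_ram_contents(s);                     /* (f) RAM contents */
         vm_start();                                /* (g) unpause      */
     }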
 
 Patch #1 affects step (e); specifically, how (d) is matched against (b),
 when paging is true, and the guest kernel maps more guest-physical
 RAM than it actually has.
 
 This can be done by non-malicious, clean-state guests (eg. a pristine
 RHEL-6.4 guest), and may cause libbfd errors due to PT_LOAD entries
 (coming directly from the guest page tables) exceeding the vmcore file's
 size.
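
 The fix in patch #1 is, in essence, to clamp each guest-provided
 mapping to the size of the RAM block actually backing it. A minimal
 sketch of the idea, with illustrative names -- not the patch itself:

     #include <stdint.h>

     /* Trim a guest page-table mapping so the resulting PT_LOAD entry
      * never extends past the end of the backing RAM block. Assumes
      * map_start falls inside the block. */
     static uint64_t clamp_mapping_length(uint64_t map_start,
                                          uint64_t map_length,
                                          uint64_t block_start,
                                          uint64_t block_length)
     {
         uint64_t block_end = block_start + block_length;
         uint64_t map_end = map_start + map_length;

         if (map_end > block_end) {
             map_end = block_end; /* drop what qemu cannot back */
         }
         return map_end - map_start;
     }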
 
 Patches #2 to #4 are independent of the paging option (or, more
 precisely, affect them equally); they affect (b). Currently input
 parameter (b), that is, the guest's physical memory map as provided by
 qemu, is implicitly represented by ram_list.blocks. As a result, steps
 and outputs dependent on (b) will refer to qemu-internal offsets.
 
 Unfortunately, this breaks when the guest-visible physical addresses
 diverge from the qemu-internal, RAMBlock based representation. This can
 happen eg. for guests > 3.5 GB, due to the 32-bit PCI hole; see patch #4
 for a diagram.
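
 To illustrate the divergence with a hypothetical layout (not the
 diagram from patch #4): take 4 GB of guest RAM on a machine whose
 32-bit PCI hole starts at 3.5 GB. One contiguous qemu-internal range
 of offsets then corresponds to two guest-physical ranges:

     #include <stdint.h>

     #define HOLE_START    0xe0000000ULL  /* 3.5 GB; assumed hole base */
     #define ABOVE_4G_BASE 0x100000000ULL /* RAM resumes at 4 GB       */

     /* Map a qemu-internal RAM offset to the guest-physical address
      * it represents on this hypothetical machine. */
     static uint64_t guest_phys_from_offset(uint64_t offset)
     {
         if (offset < HOLE_START) {
             return offset;               /* identity below the hole   */
         }
         return ABOVE_4G_BASE + (offset - HOLE_START);
     }

 A vmcore keyed off raw offsets mislabels everything past 3.5 GB.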
 
 Patch #2 introduces input parameter (b) explicitly, as a reasonably
 minimal map of guest-physical address ranges. (Minimality is not a hard
 requirement here, it just decreases the number of PT_LOAD entries
 written to the vmcore header.) Patch #3 populates this map. Patch #4
 rebases the dump-guest-memory command to it, so that steps (e) and (f)
 work with guest-phys addresses.
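
 The map itself is a simple list of contiguous guest-physical ranges,
 each paired with the host memory backing it -- roughly the following
 shape (a sketch; see patch #2 for the real definition):

     typedef struct GuestPhysBlock {
         hwaddr target_start;    /* guest-physical start address    */
         hwaddr target_end;      /* guest-physical end (exclusive)  */
         uint8_t *host_addr;     /* host pointer backing the range  */
         QTAILQ_ENTRY(GuestPhysBlock) next;
     } GuestPhysBlock;

     typedef struct GuestPhysBlockList {
         unsigned num;
         QTAILQ_HEAD(GuestPhysBlockHead, GuestPhysBlock) head;
     } GuestPhysBlockList;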
 
 As a result, the crash utility can parse vmcores dumped for big x86_64
 guests (paging=false).

Applied to the qmp branch, thanks.

 
 Please refer to Red Hat Bugzilla 981582
 https://bugzilla.redhat.com/show_bug.cgi?id=981582.
 
 Disclaimer: as you can tell from my progress in the RHBZ, I'm new to the
 memory API. The way I'm using it might be retarded.
 
 Laszlo Ersek (4):
   dump: clamp guest-provided mapping lengths to ramblock sizes
   dump: introduce GuestPhysBlockList
   dump: populate guest_phys_blocks
   dump: rebase from host-private RAMBlock offsets to guest-physical
 addresses
 
  include/sysemu/dump.h           |    4 +-
  include/sysemu/memory_mapping.h |   30 ++-
  dump.c                          |  171 +-
  memory_mapping.c                |  174 +--
  stubs/dump.c                    |    3 +-
  target-i386/arch_dump.c         |   10 ++-
  6 files changed, 300 insertions(+), 92 deletions(-)
 




Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-07-30 Thread Luiz Capitulino
On Mon, 29 Jul 2013 16:37:12 +0200
Laszlo Ersek ler...@redhat.com wrote:

 (Apologies for the long To: list, I'm including everyone who
 participated in
 https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg02607.html).
 
 Conceptually, the dump-guest-memory command works as follows:
 (a) pause the guest,
 (b) get a snapshot of the guest's physical memory map, as provided by
 qemu,
 (c) retrieve the guest's virtual mappings, as seen by the guest (this is
 where paging=true vs. paging=false makes a difference),
 (d) filter (c) as requested by the QMP caller,
 (e) write ELF headers, keying off (b) -- the guest's physmap -- and (d)
 -- the filtered guest mappings.
 (f) dump RAM contents, keying off the same (b) and (d),
 (g) unpause the guest (if necessary).
 
 Patch #1 affects step (e); specifically, how (d) is matched against (b),
 when paging is true, and the guest kernel maps more guest-physical
 RAM than it actually has.
 
 This can be done by non-malicious, clean-state guests (eg. a pristine
 RHEL-6.4 guest), and may cause libbfd errors due to PT_LOAD entries
 (coming directly from the guest page tables) exceeding the vmcore file's
 size.
 
 Patches #2 to #4 are independent of the paging option (or, more
 precisely, affect them equally); they affect (b). Currently input
 parameter (b), that is, the guest's physical memory map as provided by
 qemu, is implicitly represented by ram_list.blocks. As a result, steps
 and outputs dependent on (b) will refer to qemu-internal offsets.
 
 Unfortunately, this breaks when the guest-visible physical addresses
 diverge from the qemu-internal, RAMBlock based representation. This can
 happen eg. for guests > 3.5 GB, due to the 32-bit PCI hole; see patch #4
 for a diagram.
 
 Patch #2 introduces input parameter (b) explicitly, as a reasonably
 minimal map of guest-physical address ranges. (Minimality is not a hard
 requirement here, it just decreases the number of PT_LOAD entries
 written to the vmcore header.) Patch #3 populates this map. Patch #4
 rebases the dump-guest-memory command to it, so that steps (e) and (f)
 work with guest-phys addresses.
 
 As a result, the crash utility can parse vmcores dumped for big x86_64
 guests (paging=false).
 
 Please refer to Red Hat Bugzilla 981582
 https://bugzilla.redhat.com/show_bug.cgi?id=981582.
 
 Disclaimer: as you can tell from my progress in the RHBZ, I'm new to the
 memory API. The way I'm using it might be retarded.

Series looks sane to me, but the important details go beyond my background
in this area, so I'd like an additional Reviewed-by before applying this
to the qmp-for-1.6 tree.



Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-07-29 Thread Luiz Capitulino
On Mon, 29 Jul 2013 16:37:12 +0200
Laszlo Ersek ler...@redhat.com wrote:

 (Apologies for the long To: list, I'm including everyone who
 participated in
 https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg02607.html).
 
 Conceptually, the dump-guest-memory command works as follows:
 (a) pause the guest,
 (b) get a snapshot of the guest's physical memory map, as provided by
 qemu,
 (c) retrieve the guest's virtual mappings, as seen by the guest (this is
 where paging=true vs. paging=false makes a difference),
 (d) filter (c) as requested by the QMP caller,
 (e) write ELF headers, keying off (b) -- the guest's physmap -- and (d)
 -- the filtered guest mappings.
 (f) dump RAM contents, keying off the same (b) and (d),
 (g) unpause the guest (if necessary).
 
 Patch #1 affects step (e); specifically, how (d) is matched against (b),
 when paging is true, and the guest kernel maps more guest-physical
 RAM than it actually has.
 
 This can be done by non-malicious, clean-state guests (eg. a pristine
 RHEL-6.4 guest), and may cause libbfd errors due to PT_LOAD entries
 (coming directly from the guest page tables) exceeding the vmcore file's
 size.
 
 Patches #2 to #4 are independent of the paging option (or, more
 precisely, affect them equally); they affect (b). Currently input
 parameter (b), that is, the guest's physical memory map as provided by
 qemu, is implicitly represented by ram_list.blocks. As a result, steps
 and outputs dependent on (b) will refer to qemu-internal offsets.
 
 Unfortunately, this breaks when the guest-visible physical addresses
 diverge from the qemu-internal, RAMBlock based representation. This can
 happen eg. for guests > 3.5 GB, due to the 32-bit PCI hole; see patch #4
 for a diagram.
 
 Patch #2 introduces input parameter (b) explicitly, as a reasonably
 minimal map of guest-physical address ranges. (Minimality is not a hard
 requirement here, it just decreases the number of PT_LOAD entries
 written to the vmcore header.) Patch #3 populates this map. Patch #4
 rebases the dump-guest-memory command to it, so that steps (e) and (f)
 work with guest-phys addresses.
 
 As a result, the crash utility can parse vmcores dumped for big x86_64
 guests (paging=false).
 
 Please refer to Red Hat Bugzilla 981582
 https://bugzilla.redhat.com/show_bug.cgi?id=981582.
 
 Disclaimer: as you can tell from my progress in the RHBZ, I'm new to the
 memory API. The way I'm using it might be retarded.

Is this for 1.6?



Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-07-29 Thread Laszlo Ersek
On 07/29/13 23:53, Laszlo Ersek wrote:
 On 07/29/13 23:08, Luiz Capitulino wrote:

 Is this for 1.6?
 
 It's for whichever release the reviewers and maintainers accept it into! :)
 
 On a more serious note, if someone makes an exception out of this, I
 won't object, but I'm not pushing for it. My posting close to the hard
 freeze was a coincidence.

Hmm. I've just caught up on http://wiki.qemu.org/Planning/1.6.
Apparently the hard freeze blocks features only, and this is a bugfix.
So yeah, why not.

Thanks,
Laszlo




Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

2013-07-29 Thread Laszlo Ersek
On 07/29/13 23:08, Luiz Capitulino wrote:
 On Mon, 29 Jul 2013 16:37:12 +0200
 Laszlo Ersek ler...@redhat.com wrote:
 
 (Apologies for the long To: list, I'm including everyone who
 participated in
 https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg02607.html).

 Conceptually, the dump-guest-memory command works as follows:
 (a) pause the guest,
 (b) get a snapshot of the guest's physical memory map, as provided by
 qemu,
 (c) retrieve the guest's virtual mappings, as seen by the guest (this is
 where paging=true vs. paging=false makes a difference),
 (d) filter (c) as requested by the QMP caller,
 (e) write ELF headers, keying off (b) -- the guest's physmap -- and (d)
 -- the filtered guest mappings.
 (f) dump RAM contents, keying off the same (b) and (d),
 (g) unpause the guest (if necessary).

 Patch #1 affects step (e); specifically, how (d) is matched against (b),
 when paging is true, and the guest kernel maps more guest-physical
 RAM than it actually has.

 This can be done by non-malicious, clean-state guests (eg. a pristine
 RHEL-6.4 guest), and may cause libbfd errors due to PT_LOAD entries
 (coming directly from the guest page tables) exceeding the vmcore file's
 size.

 Patches #2 to #4 are independent of the paging option (or, more
 precisely, affect them equally); they affect (b). Currently input
 parameter (b), that is, the guest's physical memory map as provided by
 qemu, is implicitly represented by ram_list.blocks. As a result, steps
 and outputs dependent on (b) will refer to qemu-internal offsets.

 Unfortunately, this breaks when the guest-visible physical addresses
 diverge from the qemu-internal, RAMBlock based representation. This can
 happen eg. for guests > 3.5 GB, due to the 32-bit PCI hole; see patch #4
 for a diagram.

 Patch #2 introduces input parameter (b) explicitly, as a reasonably
 minimal map of guest-physical address ranges. (Minimality is not a hard
 requirement here, it just decreases the number of PT_LOAD entries
 written to the vmcore header.) Patch #3 populates this map. Patch #4
 rebases the dump-guest-memory command to it, so that steps (e) and (f)
 work with guest-phys addresses.

 As a result, the crash utility can parse vmcores dumped for big x86_64
 guests (paging=false).

 Please refer to Red Hat Bugzilla 981582
 https://bugzilla.redhat.com/show_bug.cgi?id=981582.

 Disclaimer: as you can tell from my progress in the RHBZ, I'm new to the
 memory API. The way I'm using it might be retarded.
 
 Is this for 1.6?

It's for whichever release the reviewers and maintainers accept it into! :)

On a more serious note, if someone makes an exception out of this, I
won't object, but I'm not pushing for it. My posting close to the hard
freeze was a coincidence.

Thanks,
Laszlo