Re: [Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
On 07/19/2016 05:11 AM, Jan Beulich wrote:
>>>> Boris Ostrovsky 07/08/16 6:20 PM >>>
>> On 07/08/2016 11:35 AM, Jan Beulich wrote:
>>> On 08.07.16 at 17:23, wrote:
>>>> Is it up to the builder to decide which tables are important and
>>>> which are not?
>>> I'm afraid that's not so easy to tell. If for example we can't fit the
>>> HPET table, the guest could be run without HPET unless a HPET
>>> was specifically requested (rather than just defaulted to).
>>
>> But again --- how will the caller know that it was only the HPET table
>> that was not built?
>
> Why would the caller care? I guess examples could be found where it is
> necessary for the caller to know, but for the specific example (and at
> least some others) failure is of no interest to the caller - it's only
> the guest which is affected.

HPET was just an example; the same question could be asked for (almost)
any other table. But I can see that we can defer to the guest to deal
with ACPI brokenness, although some tables not being built will almost
certainly lead to guest failure.

(We probably will not get to use this new free() op anyway, since
failure to allocate memory is currently the only possible error and
there is one allocation per table.)

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
>>> Boris Ostrovsky 07/08/16 6:20 PM >>>
> On 07/08/2016 11:35 AM, Jan Beulich wrote:
>> On 08.07.16 at 17:23, wrote:
>>> Is it up to the builder to decide which tables are important and
>>> which are not?
>> I'm afraid that's not so easy to tell. If for example we can't fit the
>> HPET table, the guest could be run without HPET unless a HPET
>> was specifically requested (rather than just defaulted to).
>
> But again --- how will the caller know that it was only the HPET table
> that was not built?

Why would the caller care? I guess examples could be found where it is
necessary for the caller to know, but for the specific example (and at
least some others) failure is of no interest to the caller - it's only
the guest which is affected.

Jan
Re: [Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
On 07/08/2016 11:35 AM, Jan Beulich wrote:
>>> On 08.07.16 at 17:23, wrote:
>> On 07/08/2016 09:58 AM, Jan Beulich wrote:
>>> On 05.07.16 at 21:05, wrote:
>>>> Components that wish to use the ACPI builder will need to provide
>>>> their own mem_alloc() and virt_to_phys() routines. Pointers to these
>>>> routines will be passed to the builder as memory ops.
>>>>
>>>> Signed-off-by: Boris Ostrovsky
>>>> ---
>>>>
>>>> Changes in v1:
>>>> * Keep memory ops separate from acpi_config, in struct acpi_context.
>>>>
>>>> Jan requested adding a free() op to struct acpi_mem_ops. I couldn't
>>>> see who might want to use those. The builder uses (should use)
>>>> mem_alloc() to allocate memory for tables, not as a general-purpose
>>>> allocator.
>>> In addition to what I said back then, did you think of error cleanup
>>> paths here? Not all errors mean the guest has to die.
>> If there is an error and the builder decides to free up memory needed
>> for a table, how do we communicate to the caller which table has
>> failed?
> I don't understand - that's what the proposed free hook would be for.

The hook will free the memory for the failed table. What I meant was:
how will the caller know, upon return from the builder, which tables
have not been built? Freeing a chunk of memory doesn't really buy us
much on its own.

>> Is it up to the builder to decide which tables are important and which
>> are not?
> I'm afraid that's not so easy to tell. If for example we can't fit the
> HPET table, the guest could be run without HPET unless a HPET
> was specifically requested (rather than just defaulted to).

But again --- how will the caller know that it was only the HPET table
that was not built? I suppose we can return a bitmask of successes, but
that would be somewhat awkward, don't you think?

-boris
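The "bitmask of successes" idea floated above can be sketched concretely. Everything here is hypothetical illustration (the bit names, `table_desc`, and `build_tables` do not exist in the patch): each constructor returns NULL when its allocation fails, and the builder reports exactly which tables it managed to produce rather than a single pass/fail code.

```c
#include <stdint.h>
#include <stddef.h>

struct table_desc {
    uint32_t bit;             /* flag reported to the caller on success */
    void *(*build)(void);     /* returns NULL when its allocation fails */
};

/* Build every table it can; the returned mask tells the caller which
 * tables were produced instead of collapsing everything to one rc. */
static uint32_t build_tables(const struct table_desc *t, size_t n)
{
    uint32_t built = 0;
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( t[i].build() != NULL )
            built |= t[i].bit;
    return built;
}

/* Sample constructors standing in for construct_madt()/construct_hpet(). */
static void *ok_table(void)   { static char buf[8]; return buf; }
static void *fail_table(void) { return NULL; }
```

This also shows the awkwardness Boris mentions: the caller now has to know the bit assignments, which couples it to the builder's table inventory.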
Re: [Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
>>> On 08.07.16 at 17:23, wrote:
> On 07/08/2016 09:58 AM, Jan Beulich wrote:
>> On 05.07.16 at 21:05, wrote:
>>> Components that wish to use the ACPI builder will need to provide
>>> their own mem_alloc() and virt_to_phys() routines. Pointers to these
>>> routines will be passed to the builder as memory ops.
>>>
>>> Signed-off-by: Boris Ostrovsky
>>> ---
>>>
>>> Changes in v1:
>>> * Keep memory ops separate from acpi_config, in struct acpi_context.
>>>
>>> Jan requested adding a free() op to struct acpi_mem_ops. I couldn't
>>> see who might want to use those. The builder uses (should use)
>>> mem_alloc() to allocate memory for tables, not as a general-purpose
>>> allocator.
>> In addition to what I said back then, did you think of error cleanup
>> paths here? Not all errors mean the guest has to die.
>
> If there is an error and the builder decides to free up memory needed
> for a table, how do we communicate to the caller which table has
> failed?

I don't understand - that's what the proposed free hook would be for.

> Is it up to the builder to decide which tables are important and which
> are not?

I'm afraid that's not so easy to tell. If for example we can't fit the
HPET table, the guest could be run without HPET unless a HPET
was specifically requested (rather than just defaulted to).

Jan
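For context, the free() hook Jan is asking for would serve an error-cleanup path like the one below. This is a hypothetical sketch: the v1 patch's acpi_mem_ops has no free member, and `acpi_ctxt2`, the malloc-backed ops, and `construct_with_cleanup` are all invented here for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical extension of the patch's ops with a free hook. */
struct acpi_ctxt2 {
    struct {
        void *(*alloc)(struct acpi_ctxt2 *c, uint32_t size, uint32_t align);
        void (*free)(struct acpi_ctxt2 *c, void *p, uint32_t size);
    } mem_ops;
};

/* malloc-backed stand-ins for a caller's real allocator. */
static void *heap_alloc(struct acpi_ctxt2 *c, uint32_t size, uint32_t align)
{ (void)c; (void)align; return malloc(size); }

static void heap_free(struct acpi_ctxt2 *c, void *p, uint32_t size)
{ (void)c; (void)size; free(p); }

/* Error path: a table needing two allocations. If the second one fails,
 * the first is handed back via the hook, so a half-built table does not
 * pin down allocator space - without the guest having to die. */
static void *construct_with_cleanup(struct acpi_ctxt2 *c, uint32_t sz,
                                    int fail_second)
{
    void *table = c->mem_ops.alloc(c, sz, 16);
    void *extra;

    if ( !table )
        return NULL;
    extra = fail_second ? NULL : c->mem_ops.alloc(c, sz, 16);
    if ( !extra )
    {
        c->mem_ops.free(c, table, sz);  /* the cleanup the hook enables */
        return NULL;
    }
    c->mem_ops.free(c, extra, sz);      /* keep this sketch leak-free */
    return table;
}
```

Boris's counterpoint still stands: with exactly one allocation per table, there is nothing partially built to clean up, so the hook has no caller today.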
Re: [Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
On 07/08/2016 09:58 AM, Jan Beulich wrote:
>>> On 05.07.16 at 21:05, wrote:
>> Components that wish to use the ACPI builder will need to provide
>> their own mem_alloc() and virt_to_phys() routines. Pointers to these
>> routines will be passed to the builder as memory ops.
>>
>> Signed-off-by: Boris Ostrovsky
>> ---
>>
>> Changes in v1:
>> * Keep memory ops separate from acpi_config, in struct acpi_context.
>>
>> Jan requested adding a free() op to struct acpi_mem_ops. I couldn't
>> see who might want to use those. The builder uses (should use)
>> mem_alloc() to allocate memory for tables, not as a general-purpose
>> allocator.
> In addition to what I said back then, did you think of error cleanup
> paths here? Not all errors mean the guest has to die.

If there is an error and the builder decides to free up memory needed
for a table, how do we communicate to the caller which table has failed?

Is it up to the builder to decide which tables are important and which
are not?

>> +struct acpi_ctxt {
>> +    struct acpi_mem_ops {
>> +        void *(*alloc)(struct acpi_ctxt *ctxt, uint32_t size,
>> +                       uint32_t align);
>> +        unsigned long (*v2p)(struct acpi_ctxt *ctxt, void *v);
>> +    } mem_ops;
>> +};
>> +
>>  struct acpi_config {
>>      const unsigned char *dsdt_anycpu;
>>      unsigned int dsdt_anycpu_len;
> While you mention this in the revision log, I don't see the reason
> for keeping this fully separate. Quite a few of the changes you
> do here could be avoided if the new structure got pointed to by a
> field in struct acpi_config.

I kept them separate here because acpi_config is intended to pass data
about table content, while acpi_ctxt is needed for storing information
used during building (the ops and, as will be seen in patch 20, certain
allocator information).

-boris
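To make the split above concrete: struct acpi_ctxt (quoted from the patch) carries only the memory ops, and the caller wires its own allocator into it before invoking the builder. The bump allocator and `make_ctxt` below are hypothetical caller-side code, not part of the patch; only the acpi_ctxt/acpi_mem_ops shape comes from the quoted hunk.

```c
#include <stdint.h>
#include <stddef.h>

/* The structure proposed by the patch. */
struct acpi_ctxt {
    struct acpi_mem_ops {
        void *(*alloc)(struct acpi_ctxt *ctxt, uint32_t size, uint32_t align);
        unsigned long (*v2p)(struct acpi_ctxt *ctxt, void *v);
    } mem_ops;
};

/* Hypothetical caller state: a fixed arena carved out by a bump pointer. */
static unsigned char arena[4096];
static size_t arena_off;

static void *arena_alloc(struct acpi_ctxt *ctxt, uint32_t size, uint32_t align)
{
    uintptr_t base = (uintptr_t)arena;
    uintptr_t cur  = (base + arena_off + align - 1) & ~((uintptr_t)align - 1);

    (void)ctxt;
    if ( cur + size > base + sizeof(arena) )
        return NULL;              /* builder treats NULL as "table not built" */
    arena_off = (size_t)(cur + size - base);
    return (void *)cur;
}

/* Identity mapping stands in for a real virt_to_phys(). */
static unsigned long arena_v2p(struct acpi_ctxt *ctxt, void *v)
{
    (void)ctxt;
    return (unsigned long)(uintptr_t)v;
}

static struct acpi_ctxt *make_ctxt(struct acpi_ctxt *c)
{
    c->mem_ops.alloc = arena_alloc;
    c->mem_ops.v2p   = arena_v2p;
    return c;
}
```

Nothing table-specific lives in the context, which is Boris's point: table content stays in acpi_config, build-time machinery in acpi_ctxt.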
Re: [Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
>>> On 05.07.16 at 21:05, wrote:
> Components that wish to use the ACPI builder will need to provide their
> own mem_alloc() and virt_to_phys() routines. Pointers to these routines
> will be passed to the builder as memory ops.
>
> Signed-off-by: Boris Ostrovsky
> ---
>
> Changes in v1:
> * Keep memory ops separate from acpi_config, in struct acpi_context.
>
> Jan requested adding a free() op to struct acpi_mem_ops. I couldn't see
> who might want to use those. The builder uses (should use) mem_alloc()
> to allocate memory for tables, not as a general-purpose allocator.

In addition to what I said back then, did you think of error cleanup
paths here? Not all errors mean the guest has to die.

> @@ -453,18 +462,18 @@ static int new_vm_gid(struct acpi_config *config)
>          return 1;
>
>      /* copy to allocate BIOS memory */
> -    buf = (uint64_t *) mem_alloc(sizeof(config->vm_gid), 8);
> +    buf = (uint64_t *) ctxt->mem_ops.alloc(ctxt, sizeof(config->vm_gid), 8);

Once again this appears to be a cast that already before was
unnecessary. So it (and in any event the stray blank) should perhaps
already get deleted when you touch this line the first time (in one of
the earlier patches).

> --- a/tools/firmware/hvmloader/acpi/libacpi.h
> +++ b/tools/firmware/hvmloader/acpi/libacpi.h
> @@ -64,6 +64,13 @@ struct acpi_numa {
>      xen_vmemrange_t *vmemrange;
>  };
>
> +struct acpi_ctxt {
> +    struct acpi_mem_ops {
> +        void *(*alloc)(struct acpi_ctxt *ctxt, uint32_t size,
> +                       uint32_t align);
> +        unsigned long (*v2p)(struct acpi_ctxt *ctxt, void *v);
> +    } mem_ops;
> +};
> +
>  struct acpi_config {
>      const unsigned char *dsdt_anycpu;
>      unsigned int dsdt_anycpu_len;

While you mention this in the revision log, I don't see the reason
for keeping this fully separate. Quite a few of the changes you
do here could be avoided if the new structure got pointed to by a
field in struct acpi_config.

Jan
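Jan's alternative can be sketched too: instead of threading an extra `ctxt` parameter through every construct_*() helper, acpi_config could carry a pointer to the context, leaving the helpers' signatures alone. This is purely illustrative of the suggestion; `acpi_config_alt`, `construct_hpet_alt`, and the pool allocator are invented names, and the v1 patch does not do this.

```c
#include <stdint.h>
#include <stddef.h>

struct acpi_ctxt {
    struct {
        void *(*alloc)(struct acpi_ctxt *c, uint32_t size, uint32_t align);
    } mem_ops;
};

/* Hypothetical variant of acpi_config per Jan's suggestion. */
struct acpi_config_alt {
    const unsigned char *dsdt_anycpu;   /* table content, as before */
    unsigned int dsdt_anycpu_len;
    struct acpi_ctxt *ctxt;             /* one new field instead of N
                                           signature changes */
};

static unsigned char pool[256];

static void *pool_alloc(struct acpi_ctxt *c, uint32_t size, uint32_t align)
{
    (void)c; (void)align;
    return size <= sizeof(pool) ? pool : NULL;
}

/* construct_hpet() could then keep its one-argument shape; 56 is just a
 * stand-in for sizeof(struct acpi_20_hpet). */
static void *construct_hpet_alt(struct acpi_config_alt *config)
{
    struct acpi_ctxt *c = config->ctxt;

    return c->mem_ops.alloc(c, 56, 16);
}
```

The trade-off matches Boris's reply above: this shrinks the diff, at the cost of mixing build-time machinery into a structure meant to describe table content.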
[Xen-devel] [PATCH v1 10/20] acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
Components that wish to use the ACPI builder will need to provide their
own mem_alloc() and virt_to_phys() routines. Pointers to these routines
will be passed to the builder as memory ops.

Signed-off-by: Boris Ostrovsky
---

Changes in v1:
* Keep memory ops separate from acpi_config, in struct acpi_context.

Jan requested adding a free() op to struct acpi_mem_ops. I couldn't see
who might want to use those. The builder uses (should use) mem_alloc()
to allocate memory for tables, not as a general-purpose allocator.

 tools/firmware/hvmloader/acpi/build.c   | 87 +++++++++++++-------------
 tools/firmware/hvmloader/acpi/libacpi.h |  9 +++-
 tools/firmware/hvmloader/util.c         | 16 +-
 3 files changed, 71 insertions(+), 41 deletions(-)

diff --git a/tools/firmware/hvmloader/acpi/build.c b/tools/firmware/hvmloader/acpi/build.c
index 570d26f..0adb3d6 100644
--- a/tools/firmware/hvmloader/acpi/build.c
+++ b/tools/firmware/hvmloader/acpi/build.c
@@ -54,7 +54,8 @@ static void set_checksum(
     p[checksum_offset] = -sum;
 }
 
-static struct acpi_20_madt *construct_madt(struct acpi_config *config)
+static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
+                                           struct acpi_config *config)
 {
     struct acpi_20_madt           *madt;
     struct acpi_20_madt_intsrcovr *intsrcovr;
@@ -67,7 +68,7 @@ static struct acpi_20_madt *construct_madt(struct acpi_config *config)
     sz += sizeof(struct acpi_20_madt_ioapic);
     sz += sizeof(struct acpi_20_madt_lapic) * config->hvminfo->nr_vcpus;
 
-    madt = mem_alloc(sz, 16);
+    madt = ctxt->mem_ops.alloc(ctxt, sz, 16);
     if (!madt) return NULL;
     memset(madt, 0, sizeof(*madt));
@@ -147,11 +148,12 @@ static struct acpi_20_madt *construct_madt(struct acpi_config *config)
     return madt;
 }
 
-static struct acpi_20_hpet *construct_hpet(void)
+static struct acpi_20_hpet *construct_hpet(struct acpi_ctxt *ctxt,
+                                           struct acpi_config *config)
 {
     struct acpi_20_hpet *hpet;
 
-    hpet = mem_alloc(sizeof(*hpet), 16);
+    hpet = ctxt->mem_ops.alloc(ctxt, sizeof(*hpet), 16);
     if (!hpet) return NULL;
     memset(hpet, 0, sizeof(*hpet));
@@ -170,11 +172,12 @@ static struct acpi_20_hpet *construct_hpet(void)
     return hpet;
 }
 
-static struct acpi_20_waet *construct_waet(void)
+static struct acpi_20_waet *construct_waet(struct acpi_ctxt *ctxt,
+                                           struct acpi_config *config)
 {
     struct acpi_20_waet *waet;
 
-    waet = mem_alloc(sizeof(*waet), 16);
+    waet = ctxt->mem_ops.alloc(ctxt, sizeof(*waet), 16);
     if (!waet) return NULL;
     memcpy(waet, , sizeof(*waet));
@@ -185,7 +188,8 @@ static struct acpi_20_waet *construct_waet(void)
 
-static struct acpi_20_srat *construct_srat(struct acpi_config *config)
+static struct acpi_20_srat *construct_srat(struct acpi_ctxt *ctxt,
+                                           struct acpi_config *config)
 {
     struct acpi_20_srat *srat;
     struct acpi_20_srat_processor *processor;
@@ -197,7 +201,7 @@ static struct acpi_20_srat *construct_srat(struct acpi_config *config)
     size = sizeof(*srat) + sizeof(*processor) * config->hvminfo->nr_vcpus +
            sizeof(*memory) * config->numa.nr_vmemranges;
 
-    p = mem_alloc(size, 16);
+    p = ctxt->mem_ops.alloc(ctxt, size, 16);
     if ( !p )
         return NULL;
@@ -243,7 +247,8 @@ static struct acpi_20_srat *construct_srat(struct acpi_config *config)
     return srat;
 }
 
-static struct acpi_20_slit *construct_slit(struct acpi_config *config)
+static struct acpi_20_slit *construct_slit(struct acpi_ctxt *ctxt,
+                                           struct acpi_config *config)
 {
     struct acpi_20_slit *slit;
     unsigned int i, num, size;
@@ -251,7 +256,7 @@ static struct acpi_20_slit *construct_slit(struct acpi_config *config)
     num = config->numa.nr_vnodes * config->numa.nr_vnodes;
     size = sizeof(*slit) + num * sizeof(uint8_t);
 
-    slit = mem_alloc(size, 16);
+    slit = ctxt->mem_ops.alloc(ctxt, size, 16);
     if ( !slit )
         return NULL;
@@ -275,7 +280,8 @@ static struct acpi_20_slit *construct_slit(struct acpi_config *config)
     return slit;
 }
 
-static int construct_passthrough_tables(unsigned long *table_ptrs,
+static int construct_passthrough_tables(struct acpi_ctxt *ctxt,
+                                        unsigned long *table_ptrs,
                                         int nr_tables,
                                         struct acpi_config *config)
 {
@@ -300,7 +306,7 @@ static int construct_passthrough_tables(unsigned long *table_ptrs,
     header = (struct acpi_header*)acpi_pt_addr;
 
-    buffer = mem_alloc(header->length, 16);
+    buffer = ctxt->mem_ops.alloc(ctxt, header->length, 16);
     if ( buffer == NULL )
         break;
     memcpy(buffer, header, header->length);
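The util.c side of the patch (its hunks are not quoted in this archive) has to wrap hvmloader's existing mem_alloc()/virt_to_phys() in the new ops. A hedged sketch of what such glue might look like; hvmloader's real functions are replaced here by trivial stand-ins (`malloc`, identity mapping) so the shape is compilable in isolation, and `hvmloader_ctxt_init` is an invented name.

```c
#include <stdint.h>
#include <stdlib.h>

struct acpi_ctxt {
    struct acpi_mem_ops {
        void *(*alloc)(struct acpi_ctxt *ctxt, uint32_t size, uint32_t align);
        unsigned long (*v2p)(struct acpi_ctxt *ctxt, void *v);
    } mem_ops;
};

/* Stand-ins for hvmloader's own routines. */
static void *mem_alloc(uint32_t size, uint32_t align)
{ (void)align; return malloc(size); }

static unsigned long virt_to_phys(void *v)
{ return (unsigned long)(uintptr_t)v; }  /* hvmloader runs identity-mapped */

/* The adapters only need to drop the (unused) ctxt argument. */
static void *acpi_mem_alloc(struct acpi_ctxt *ctxt, uint32_t size,
                            uint32_t align)
{ (void)ctxt; return mem_alloc(size, align); }

static unsigned long acpi_v2p(struct acpi_ctxt *ctxt, void *v)
{ (void)ctxt; return virt_to_phys(v); }

static struct acpi_ctxt *hvmloader_ctxt_init(struct acpi_ctxt *c)
{
    c->mem_ops.alloc = acpi_mem_alloc;
    c->mem_ops.v2p   = acpi_v2p;
    return c;
}
```

Other builder users (the libxl side that patch 20 apparently targets) would supply their own pair of routines through the same two-member interface.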