On Fri, 17 Sep 2021, Jan Beulich wrote:
> While the hypervisor hasn't been enforcing this, we would still better
> avoid issuing requests with GFNs not aligned to the requested order.
> Instead of altering the value also in the call to panic(), drop it
> there for being static and hence easy to determine without being part
> of the panic message.
The pull request you sent on Fri, 17 Sep 2021 18:38:52 +0200:
> git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-5.15-1
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/b9b11b133b4a0b4f8dc36ec04d81d630f763eaa6
Thank you!
--
Deet-doot-dot, I am a bot.
The following changes since commit c1dec343d7abdf8e71aab2a289ab45ce8b1afb7e:
hexagon: use the generic global coherent pool (2021-08-19 09:02:40 +0200)
are available in the Git repository at:
git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-5.15-1
for you to fetch changes
The pull request you sent on Fri, 17 Sep 2021 13:22:42 +0200:
> git://git.infradead.org/nvme.git tags/nvme-5.15-2021-09-15
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/65ed1e692f2b996292a5bd973200442816dd0ec1
Thank you!
--
Deet-doot-dot, I am a bot.
s/only/Only/ in subject
On Fri, Sep 17, 2021 at 12:48:03PM +0200, Jan Beulich wrote:
> The driver's module init function, pcifront_init(), invokes
> xen_pv_domain() first thing. That construct produces constant "false"
> when !CONFIG_XEN_PV. Hence there's no point building the driver in
> non-PV configurations.
On Fri, Sep 17, 2021 at 4:23 AM Christoph Hellwig wrote:
>
> nvme fixes for Linux 5.15
This presumably went to the wrong person for the same reason the
subject line was bogus.
I got these fixes through Jens; if you had an _actual_ dma-mapping
branch you wanted me to pull, you did the wrong pull
> -----Original Message-----
> From: Jon Nettleton [mailto:j...@solid-run.com]
> Sent: 16 September 2021 12:17
> To: Shameerali Kolothum Thodi
> Cc: Robin Murphy; Lorenzo Pieralisi; Laurentiu Tudor;
> linux-arm-kernel; ACPI Devel Mailing List; Linux IOMMU;
> Joerg Roedel; Will
>
The following changes since commit 67f3b2f822b7e71cfc9b42dbd9f3144fa2933e0b:
blk-mq: avoid to iterate over stale request (2021-09-12 19:32:43 -0600)
are available in the Git repository at:
git://git.infradead.org/nvme.git tags/nvme-5.15-2021-09-15
for you to fetch changes up to
Hi Jan,
>>> In order to be sure to catch all uses like this one (including ones
>>> which make it upstream in parallel to yours), I think you will want
>>> to rename the original IO_TLB_SEGSIZE to e.g. IO_TLB_DEFAULT_SEGSIZE.
>>
>> I don't understand your point. Can you clarify this?
The code is unreachable for HVM or PVH, and it also makes little sense
in auto-translated environments. On Arm, with
xen_{create,destroy}_contiguous_region() both being stubs, I have a hard
time seeing what good the Xen-specific variant does - the generic one
ought to be fine for all purposes.
xen_swiotlb and pci_xen_swiotlb_init() are only used within the file
defining them, so make them static and remove the stubs. OTOH
pci_xen_swiotlb_detect() has a use (as a function pointer) from the main
pci-swiotlb.c file - convert its stub to a #define of NULL.
Signed-off-by: Jan Beulich
The driver's module init function, pcifront_init(), invokes
xen_pv_domain() first thing. That construct produces constant "false"
when !CONFIG_XEN_PV. Hence there's no point building the driver in
non-PV configurations.
Drop the (now implicit and generally wrong) X86 dependency: At present,
While the hypervisor hasn't been enforcing this, we would still better
avoid issuing requests with GFNs not aligned to the requested order.
Instead of altering the value also in the call to panic(), drop it
there for being static and hence easy to determine without being part
of the panic message.
The primary intention really was the last patch; there you go (on top
of what is already in xen/tip.git for-linus-5.15) ...
1: swiotlb-xen: ensure to issue well-formed XENMEM_exchange requests
2: PCI: only build xen-pcifront in PV-enabled environments
3: xen/pci-swiotlb: reduce visibility of
On 17.09.2021 11:36, Roman Skakun wrote:
> I use Xen PV display. In my case, the PV display backend (Dom0) allocates
> a contiguous buffer via the DMA API to implement zero-copy between Dom0
> and DomU.
Why does the buffer need to be allocated by Dom0? If it was allocated
by DomU, it could use grants to
On 2021-09-17 10:36, Roman Skakun wrote:
Hi, Christoph
I use Xen PV display. In my case, the PV display backend (Dom0) allocates
a contiguous buffer via the DMA API to implement zero-copy between Dom0
and DomU.
Well, something's gone badly wrong there - if you have to shadow the
entire thing in a
Hi, Christoph
I use Xen PV display. In my case, the PV display backend (Dom0) allocates
a contiguous buffer via the DMA API to implement zero-copy between Dom0
and DomU.
When I start Weston under DomU, I got the next log in Dom0:
```
[ 112.554471] CPU: 0 PID: 367 Comm: weston Tainted: G O
```