On Sat, 1 Aug 2020 at 23:09, Christoph Hellwig wrote:
>
> On Sat, Aug 01, 2020 at 05:27:04PM +0530, Amit Pundir wrote:
> > Hi, I found the problematic memory region. It was a memory
> > chunk reserved/removed in the downstream tree but was
> > seemingly reserved upstream for different drivers. I
On Sat, 1 Aug 2020 at 23:58, Linus Torvalds wrote:
>
> On Sat, Aug 1, 2020 at 4:57 AM Amit Pundir wrote:
> >
> > Hi, I found the problematic memory region. It was a memory
> > chunk reserved/removed in the downstream tree but was
> > seemingly reserved upstream for different drivers.
>
> Is this
Hi Bjorn,
On Tue, Jul 14, 2020 at 1:24 PM Rajat Jain wrote:
>
> On Tue, Jul 14, 2020 at 1:15 PM Rajat Jain wrote:
> >
> > The ACS "Translation Blocking" bit blocks the translated addresses from
> > the devices. We don't expect such traffic from devices unless ATS is
> > enabled on them. A
Hi Bjorn,
On Tue, Jul 14, 2020 at 1:15 PM Rajat Jain wrote:
> The ACS "Translation Blocking" bit blocks the translated addresses from
> the devices. We don't expect such traffic from devices unless ATS is
> enabled on them. A device sending such traffic without ATS enabled,
> indicates
On Sat, Aug 1, 2020 at 4:57 AM Amit Pundir wrote:
>
> Hi, I found the problematic memory region. It was a memory
> chunk reserved/removed in the downstream tree but was
> seemingly reserved upstream for different drivers.
Is this happening with a clean tree, or are there external drivers
On Sat, Aug 01, 2020 at 05:27:04PM +0530, Amit Pundir wrote:
> Hi, I found the problematic memory region. It was a memory
> chunk reserved/removed in the downstream tree but was
> seemingly reserved upstream for different drivers. I failed to
> calculate the length of the total region reserved
Hi Jim, here are some comments after testing your series against the RPi4.
On Fri, 2020-07-24 at 16:33 -0400, Jim Quinlan wrote:
> The new field 'dma_range_map' in struct device is used to facilitate the
> use of single or multiple offsets between mapping regions of cpu addrs and
> dma addrs. It
On Sat, 2020-08-01 at 17:27 +0530, Amit Pundir wrote:
[...]
> > I'm between a rock and a hard place here. If we simply want to revert
> > commits as-is to make sure both the Raspberry Pi 4 and the phone do
> > not regress, we'll have to go all the way back and revert the whole SEV
> > pool
The return value of pci_read_config_*() may not indicate a device error.
However, the value read by these functions is more likely to indicate
this kind of error. This presents two overlapping ways of reporting
errors and complicates error checking.
It is possible to move to one single way of
On 01/08/20 3:48 pm, Mike Rapoport wrote:
> On Thu, Jul 30, 2020 at 10:15:13PM +1000, Michael Ellerman wrote:
> > Mike Rapoport writes:
> > > From: Mike Rapoport
> > >
> > > fadump_reserve_crash_area() reserves memory from a specified base address
> > > till the end of the RAM.
> > >
> > > Replace iteration through the
On Sat, Aug 01, 2020 at 01:24:29PM +0200, Saheed O. Bolarinwa wrote:
> The return value of pci_read_config_*() may not indicate a device error.
> However, the value read by these functions is more likely to indicate
> this kind of error. This presents two overlapping ways of reporting
> errors and
On Sat, 1 Aug 2020 at 14:27, Christoph Hellwig wrote:
>
> On Sat, Aug 01, 2020 at 01:20:07AM -0700, David Rientjes wrote:
> > To follow-up on this, the introduction of the DMA atomic pools in 5.8
> > fixes an issue for any AMD SEV enabled guest that has a driver that
> > requires atomic DMA
On Thu, Jul 30, 2020 at 10:15:13PM +1000, Michael Ellerman wrote:
> Mike Rapoport writes:
> > From: Mike Rapoport
> >
> > fadump_reserve_crash_area() reserves memory from a specified base address
> > till the end of the RAM.
> >
> > Replace iteration through the memblock.memory with a single
On Sat, Aug 01, 2020 at 01:20:07AM -0700, David Rientjes wrote:
> To follow-up on this, the introduction of the DMA atomic pools in 5.8
> fixes an issue for any AMD SEV enabled guest that has a driver that
> requires atomic DMA allocations (for us, nvme) because runtime decryption
> of memory
On Fri, Jul 31, 2020 at 12:04:28PM -0700, David Rientjes wrote:
> On Fri, 31 Jul 2020, Christoph Hellwig wrote:
>
> > > Hi Nicolas, Christoph,
> > >
> > > Just out of curiosity, I'm wondering if we can restore the earlier
> > > behaviour and make DMA atomic allocation configured thru platform
>
On Fri, 31 Jul 2020, David Rientjes wrote:
> > > Hi Nicolas, Christoph,
> > >
> > > Just out of curiosity, I'm wondering if we can restore the earlier
> > > behaviour and make DMA atomic allocation configured thru platform
> > > specific device tree instead?
> > >
> > > Or if you can allow a
Polling by MSI isn't necessarily faster than polling by SEV. Tests on
hi1620 show hns3 100G NIC network throughput can improve from 25G to
27G if we disable MSI polling while running 16 netperf threads sending
32KB UDP packets.
This patch provides a command line option so that users can
On Fri, 31 Jul 2020 at 19:50, Amit Pundir wrote:
>
> On Fri, 31 Jul 2020 at 19:45, Nicolas Saenz Julienne wrote:
> >
> > On Fri, 2020-07-31 at 16:47 +0530, Amit Pundir wrote:
> > > On Fri, 31 Jul 2020 at 16:17, Nicolas Saenz Julienne wrote:
> >
> > [...]
> >
> > > > Ok, so lets see who's doing what