On Wed, 2019-08-21 at 07:45 -0700, h...@infradead.org wrote:
> Btw, for the next version it also might make sense to do one
> optimization at a time. E.g. the empty cpumask one as the
> first patch, the local cpu directly one next, and the threshold
> based full flush as a third.
Ok, sure. I will.
On Tue, Aug 20, 2019 at 08:29:47PM +, Atish Patra wrote:
> Sounds good to me. Christoph has already mm/tlbflush.c in his mmu
> series. I will rebase on top of it.
It wasn't really intended for the nommu series but for the native
clint prototype. But the nommu series grew so many cleanups and
On Wed, Aug 21, 2019 at 09:22:48AM +0530, Anup Patel wrote:
> I agree that the IPI mechanism should be standardized for RISC-V, but I
> don't support the idea of mandating CLINT as part of the UNIX
> platform spec. For example, the AndesTech SoC does not use a CLINT;
> instead it has a PLMT for per-HART
On Wed, Aug 21, 2019 at 7:10 AM h...@infradead.org wrote:
>
> On Wed, Aug 21, 2019 at 09:29:22AM +0800, Alan Kao wrote:
> > IMHO, this approach should be avoided because CLINT is compatible with,
> > but not mandated by, the privileged spec. In other words, it is possible
> > that a Linux-capable
On Wed, Aug 21, 2019 at 09:29:22AM +0800, Alan Kao wrote:
> IMHO, this approach should be avoided because CLINT is compatible with,
> but not mandated by, the privileged spec. In other words, it is possible
> that a Linux-capable RISC-V platform does not contain a CLINT component
> but relies on some
On Tue, Aug 20, 2019 at 08:28:36PM +, Atish Patra wrote:
> On Tue, 2019-08-20 at 02:22 -0700, h...@infradead.org wrote:
> > On Tue, Aug 20, 2019 at 08:42:19AM +, Atish Patra wrote:
> > > A NULL cmask is a pretty common case, and we would be unnecessarily
> > > executing a bunch of instructions
On Tue, 2019-08-20 at 15:18 -0700, h...@infradead.org wrote:
> On Tue, Aug 20, 2019 at 08:28:36PM +, Atish
On Tue, Aug 20, 2019 at 08:28:36PM +, Atish Patra wrote:
> > http://git.infradead.org/users/hch/riscv.git/commitdiff/ea4067ae61e20fcfcf46a6f6bd1cc25710ce3afe
>
> This does seem a lot cleaner to me. We can reuse some of the code for
> this patch as well. Based on NATIVE_CLINT configuration, it
On Tue, 2019-08-20 at 14:21 +0530, Anup Patel wrote:
> On Tue, Aug 20, 2019 at 6:17 AM Atish Patra
> wrote:
> > In RISC-V, TLB flush happens via SBI, which is expensive.
> > If the target cpumask contains a local hartid, some cost
> > can be saved by issuing a local TLB flush, as we do that
> > in
On Tue, 2019-08-20 at 02:22 -0700, h...@infradead.org wrote:
> On Tue, Aug 20, 2019 at 08:42:19AM +, Atish Patra wrote:
> > A NULL cmask is a pretty common case, and we would be unnecessarily
> > executing a bunch of instructions every time while not saving much.
> > The kernel
> > still has to make an
On Tue, Aug 20, 2019 at 08:42:19AM +, Atish Patra wrote:
> A NULL cmask is a pretty common case, and we would be unnecessarily
> executing a bunch of instructions every time while not saving much. The
> kernel still has to make an SBI call, and OpenSBI is doing a local flush
> anyway.
>
> Looking at
On Tue, Aug 20, 2019 at 6:17 AM Atish Patra wrote:
>
> In RISC-V, TLB flush happens via SBI, which is expensive.
> If the target cpumask contains a local hartid, some cost
> can be saved by issuing a local TLB flush, as we do that
> in OpenSBI anyway. There is also no need for an SBI call if
> the cpumask
On Aug 20 2019, Atish Patra wrote:
> +
> +	cpuid = get_cpu();
> +	if (!cmask) {
> +		riscv_cpuid_to_hartid_mask(cpu_online_mask, &hmask);
> +		goto issue_sfence;
> +	}
> +
> +	if (cpumask_test_cpu(cpuid, cmask) && cpumask_weight(cmask) == 1)
On Tue, 2019-08-20 at 09:46 +0200, Andreas Schwab wrote:
> On Aug 19 2019, Atish Patra wrote:
>
> > @@ -42,20 +43,44 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
> >
> > #include
> >
> > -static inline void remote_sfence_vma(struct cpumask *cmask, unsigned long
On Aug 19 2019, Atish Patra wrote:
> @@ -42,20 +43,44 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
>
> #include
>
> -static inline void remote_sfence_vma(struct cpumask *cmask, unsigned long start,
> -				     unsigned long size)
>
On Tue, Aug 20, 2019 at 09:14:58AM +0200, Andreas Schwab wrote:
> On Aug 19 2019, "h...@infradead.org" wrote:
>
> > This looks a little odd to me and assumes we never pass a size smaller
> > than PAGE_SIZE. While that is probably true, why not something like:
> >
> > if (size < PAGE_SIZE &&
On Aug 19 2019, "h...@infradead.org" wrote:
> This looks a little odd to me and assumes we never pass a size smaller
> than PAGE_SIZE. While that is probably true, why not something like:
>
> if (size < PAGE_SIZE && size != -1)
ITYM size <= PAGE_SIZE. And since size is unsigned it cannot
On Mon, Aug 19, 2019 at 05:47:35PM -0700, Atish Patra wrote:
> In RISC-V, TLB flush happens via SBI, which is expensive.
> If the target cpumask contains a local hartid, some cost
> can be saved by issuing a local TLB flush, as we do that
> in OpenSBI anyway. There is also no need for an SBI call if
>
In RISC-V, TLB flush happens via SBI, which is expensive.
If the target cpumask contains a local hartid, some cost
can be saved by issuing a local TLB flush, as we do that
in OpenSBI anyway. There is also no need for an SBI call if
the cpumask is empty.
Do a local flush first if the current cpu is present in