On Tue, 3 May 2016 15:46:33 Alistair Popple wrote:
> There's one call to pr_warn() in pnv_npu_disable_bypass() that could arguably
> be converted to pe_warn(), but we can clean that up later as the patch looks
> fine and I'm assuming subsequent patches make use of these.
And inevitably the
On (05/03/16 14:40), Minchan Kim wrote:
[..]
> > At least, we need sanity check code, still?
> > Otherwise, user can echo "garbage" > /sys/xxx/max_comp_stream" and then
> > cat /sys/xxx/max_comp_stream returns num_online_cpus.
>
> One more thing,
>
> User:
> echo 4 > /sys/xxx/max_comp_stream"
>
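The sanity check being discussed could look roughly like this (an illustrative userspace-style sketch with made-up names; the kernel side would use kstrtoint, and this is not the actual zram code):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Illustrative only: reject anything that isn't a positive integer,
 * so `echo "garbage" > max_comp_streams` fails instead of being
 * silently accepted. */
static int parse_max_streams(const char *buf, int *num)
{
	char *end;
	long v;

	errno = 0;
	v = strtol(buf, &end, 10);
	if (errno || end == buf || v < 1 || v > INT_MAX)
		return -EINVAL;

	*num = (int)v;
	return 0;
}
```

With a check like this the store handler can return -EINVAL on garbage, so a later `cat` reads back a value the user actually set.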
On (05/03/16 14:40), Minchan Kim wrote:
[..]
> > At least, we need sanity check code, still?
> > Otherwise, user can echo "garbage" > /sys/xxx/max_comp_stream" and then
> > cat /sys/xxx/max_comp_stream returns num_online_cpus.
>
> One more thing,
>
> User:
> echo 4 > /sys/xxx/max_comp_stream"
>
Hi,
This patch series should have no perceivable changes to load
and util except that load's range is increased by 1024.
My initial tests suggest that. See attached figures. The workload
is running 100us out of every 200us, and 2000us out of every 8000us.
Again fixed workload, fixed CPU, and
Adding DT binding doc for the extcon gpios properties.
Signed-off-by: Venkat Reddy Talla
---
.../devicetree/bindings/extcon/extcon-gpio.txt| 19 +++
1 file changed, 19 insertions(+)
create mode 100644 Documentation/devicetree/bindings/extcon/extcon-gpio.txt
diff --git
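A binding document like the one added here usually carries an example node; a hypothetical fragment (compatible string and property names are invented placeholders, not taken from the actual patch) might look like:

```dts
extcon_usb: extcon-gpio {
	compatible = "extcon-gpio";          /* assumed compatible string */
	gpios = <&gpio0 5 GPIO_ACTIVE_HIGH>; /* assumed cable-detect GPIO */
};
```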
Adding device tree support for extcon-gpio driver.
Signed-off-by: Venkat Reddy Talla
---
drivers/extcon/extcon-gpio.c | 80 +++---
include/linux/extcon/extcon-gpio.h | 2 +
2 files changed, 76 insertions(+), 6 deletions(-)
diff --git
There's one call to pr_warn() in pnv_npu_disable_bypass() that could arguably
be converted to pe_warn(), but we can clean that up later as the patch looks
fine and I'm assuming subsequent patches make use of these.
Reviewed-By: Alistair Popple
On Fri, 29 Apr 2016 18:55:21 Alexey Kardashevskiy
On (05/03/16 14:23), Minchan Kim wrote:
[..]
> > - zram->max_comp_streams = num;
> > - ret = len;
> > -out:
> > - up_write(>init_lock);
> > - return ret;
>
> At least, we need sanity check code, still?
> Otherwise, user can echo "garbage" > /sys/xxx/max_comp_stream" and then
> cat
On Mon, Feb 15, 2016 at 3:58 PM, Enric Balletbo i Serra
wrote:
> From: Olof Johansson
>
> Accidentally specified a smaller record size, bring it back
> to the same size as we had when we used the config file.
>
> Signed-off-by: Olof Johansson
> Signed-off-by: Enric Balletbo i Serra
>
On Tue, May 03, 2016 at 02:23:24PM +0900, Minchan Kim wrote:
> On Mon, May 02, 2016 at 05:06:00PM +0900, Sergey Senozhatsky wrote:
> > On (05/02/16 16:25), Sergey Senozhatsky wrote:
> > [..]
> > > > Trivial:
> > > > We could remove max_strm now and change description.
> > >
> > > oh, yes.
> >
>
__update_sched_avg() has these steps:
1. add the left of the last incomplete period
2. decay old sum
3. accumulate new sum since last_update_time
4. add the current incomplete period
5. update averages
Previously, we computed steps 1, 3, and 4 separately, which made
each of them ugly in the code.
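The five steps can be sketched as one combined accumulation. This is a rough, self-contained illustration with simplified constants (and it assumes the entity was runnable for the whole delta); it is not the kernel's actual code:

```c
#include <stdint.h>

#define PERIOD 1024u      /* one period, ~1ms (in us) */
#define DECAY_NUM 1002u   /* ~1024*y with y^32 = 1/2 (approximate) */

/* decay a running sum by n full periods */
static uint64_t decay_sum(uint64_t sum, unsigned int n)
{
	while (n--)
		sum = sum * DECAY_NUM / 1024;
	return sum;
}

/*
 * Steps 1-4 in one pass over the time 'delta' elapsed since the last
 * update; '*rem' is the partial period left over from last time.
 */
static uint64_t update_sum(uint64_t sum, uint64_t delta, unsigned int *rem)
{
	uint64_t total = *rem + delta;
	unsigned int periods = (unsigned int)(total / PERIOD), i;

	if (periods) {
		sum += PERIOD - *rem;            /* 1: close the old partial period */
		sum = decay_sum(sum, periods);   /* 2: decay the old sum */
		for (i = 1; i < periods; i++)    /* 3: the full periods in between */
			sum += decay_sum(PERIOD, i);
		*rem = (unsigned int)(total % PERIOD);
		sum += *rem;                     /* 4: the new partial period */
	} else {
		sum += delta;                    /* still inside the same period */
		*rem = (unsigned int)total;
	}
	return sum;                              /* 5: caller derives the average */
}
```

The point of the patch is exactly this shape: one accumulation pass instead of three separately coded partial-period cases.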
After cleaning up the sched metrics, these two definitions that cause
ambiguity are not needed any more. Use NICE_0_LOAD_SHIFT and NICE_0_LOAD
instead (the names suggest clearly who they are).
Suggested-by: Ben Segall
Signed-off-by: Yuyang Du
---
kernel/sched/fair.c |4 ++--
Hi Enric,
On Mon, Feb 15, 2016 at 3:58 PM, Enric Balletbo i Serra
wrote:
> From: Gene Chen
>
> Add support for Leon touch devices, which is the same as
> slippy/falco/peppy/wolf on the same buses using the LynxPoint-LP I2C via
> the i2c-designware-pci driver.
>
> Based on the following patch:
>
The increased scale or precision for kernel load has been disabled
since the commit e4c2fb0d5776 ("sched: Disable (revert) SCHED_LOAD_SCALE
increase"). But we do need it when we have task groups, especially on
bigger machines. Otherwise, we probably will run out of precision for
load distribution.
Currently, load_avg = scale_load_down(load) * runnable%. The extra scaling
down of load does not make much sense, because load_avg is primarily THE
load and on top of that, we take runnable time into account.
We therefore remove scale_load_down() for load_avg. But we need to
carefully consider
Everybody has it. If code-size is not the problem, __accumulate_sum()
should have it too.
Signed-off-by: Yuyang Du
---
kernel/sched/fair.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17bc721..1655280 100644
---
These sched metrics have become complex enough. We introduce them
at their definitions.
Signed-off-by: Yuyang Du
---
include/linux/sched.h | 60 -
1 file changed, 49 insertions(+), 11 deletions(-)
diff --git a/include/linux/sched.h
Integer metric needs fixed point arithmetic. In sched/fair, a few
metrics, including weight, load, load_avg, util_avg, freq, and capacity,
may have different fixed point ranges.
In order to avoid errors relating to the fixed point range of these
metrics, we define a basic fixed point range, and
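The idea of a shared basic fixed-point range can be sketched as follows (the shift value and helper name here are illustrative, not the patch's actual definitions):

```c
/* One basic fixed-point base shared by the metrics (sketch). */
#define SCHED_FIXEDPOINT_SHIFT 10
#define SCHED_FIXEDPOINT_SCALE (1UL << SCHED_FIXEDPOINT_SHIFT)  /* "1.0" */

/* multiply two fixed-point values without changing the base */
static inline unsigned long fp_mul(unsigned long a, unsigned long b)
{
	return (a * b) >> SCHED_FIXEDPOINT_SHIFT;
}
```

For example, fp_mul(SCHED_FIXEDPOINT_SCALE, x) == x, so a product like load * runnable% stays in the same range as its inputs instead of silently shifting ranges.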
Rename scale_load() and scale_load_down() to user_to_kernel_load()
and kernel_to_user_load() respectively. This helps us tag them
clearly and avoid confusion.
[update calculate_imbalance]
Signed-off-by: Vincent Guittot
Signed-off-by: Yuyang Du
---
kernel/sched/core.c |8
This doc file has the programs to generate the constants to compute
sched averages.
Signed-off-by: Yuyang Du
---
Documentation/scheduler/sched-avg.txt | 137 +
1 file changed, 137 insertions(+)
create mode 100644 Documentation/scheduler/sched-avg.txt
diff
__compute_runnable_contrib() uses a loop to compute sum, whereas a
table lookup can do it faster in a constant time.
The program to generate the constants is located at:
Documentation/scheduler/sched-avg.txt
Signed-off-by: Yuyang Du
Reviewed-by: Morten Rasmussen
Acked-by: Vincent Guittot
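The loop-to-lookup change can be sketched like this. The table here is generated at init with an approximate decay constant rather than copied from the kernel (whose table comes from the generator in Documentation/scheduler/sched-avg.txt), and the clamp at the end is a simplification:

```c
#include <stdint.h>

#define LOAD_AVG_PERIOD 32
#define DECAY_NUM 1002u   /* ~1024*y with y^32 = 1/2 (approximate) */

/* y_sum[n] = sum over i = 1..n of 1024 * y^i */
static uint32_t y_sum[LOAD_AVG_PERIOD + 1];

static void build_table(void)
{
	uint32_t contrib = 1024;
	int i;

	for (i = 1; i <= LOAD_AVG_PERIOD; i++) {
		contrib = contrib * DECAY_NUM / 1024;
		y_sum[i] = y_sum[i - 1] + contrib;
	}
}

/* constant-time lookup replacing the per-call summation loop */
static uint32_t runnable_contrib(unsigned int n)
{
	return y_sum[n < LOAD_AVG_PERIOD ? n : LOAD_AVG_PERIOD];
}
```

After build_table() runs once, every call is an array index instead of an O(n) loop, which is the optimization the patch describes.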
From: Roman Pen
This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.
===
commit 346c09f80459a3ad97df1816d6d606169a51001a upstream.
The bug in a workqueue leads to a stalled IO request in MQ ctx->rq_list
with the following backtrace:
[
In sched average update, a period is about 1ms, so a 32-bit unsigned
integer can approximately hold a maximum of 49 (=2^32/1000/3600/24)
days; that is big enough, so a 64-bit counter is unnecessary.
Signed-off-by: Yuyang Du
---
kernel/sched/fair.c | 27 +--
1 file
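The 49-day figure in the changelog is straightforward to verify:

```c
#include <stdint.h>

/* 2^32 periods of ~1ms each, expressed in whole days:
 * ms -> seconds -> hours -> days */
static uint64_t max_days(void)
{
	return (1ULL << 32) / 1000 / 3600 / 24;
}
```

So a u32 period counter wraps only after roughly 49 days of continuous accumulation, which is why the patch argues 64 bits are unnecessary here.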
The names of the sched averages (including load_avg and util_avg) have
been changed and extended over the past couple of years, and some of
the names are a bit confusing, especially to people reading them for
the first time. This patch attempts to make the names more
self-explanatory, and some comments are updated too.
Hi Peter,
This patch series combines the previous cleanup and optimization
series. And as you and Ingo suggested, the increased kernel load
scale is reinstated when on 64BIT and FAIR_GROUP_SCHED. In addition
to that, the changes include Vincent's fix, typos fixes, changelog
and comment reword.
> From: Yongji Xie
> Sent: Wednesday, April 27, 2016 8:43 PM
>
> This patch enables mmapping MSI-X tables if hardware supports
> interrupt remapping which can ensure that a given pci device
> can only shoot the MSIs assigned for it.
>
> With MSI-X table mmapped, we also need to expose the
>
> diff --git a/arch/powerpc/include/asm/cputable.h
> b/arch/powerpc/include/asm/cputable.h
> index df4fb5f..a4739a1 100644
> --- a/arch/powerpc/include/asm/cputable.h
> +++ b/arch/powerpc/include/asm/cputable.h
> @@ -205,6 +205,7 @@ enum {
> #define CPU_FTR_DABRX
>
From: Joonsoo Kim
split_page() calls set_page_owner() to set up page_owner to each pages.
But, it has a drawback: the head page and the other pages get different
stacktraces because the callsite of set_page_owner() is slightly different.
To avoid this problem, this patch copies head page's page_owner to
On Mon, May 02, 2016 at 05:06:00PM +0900, Sergey Senozhatsky wrote:
> On (05/02/16 16:25), Sergey Senozhatsky wrote:
> [..]
> > > Trivial:
> > > We could remove max_strm now and change description.
> >
> > oh, yes.
>
> how about something like this? remove max_comp_streams entirely, but
> leave
From: Joonsoo Kim
Currently, we store each page's allocation stacktrace on corresponding
page_ext structure and it requires a lot of memory. This causes the problem
that memory tight system doesn't work well if page_owner is enabled.
Moreover, even with this large memory consumption, we cannot
From: Joonsoo Kim
It's not necessary to initialize page_owner while holding the zone lock.
It would cause more contention on the zone lock although it's not
a big problem since it is just debug feature. But, it is better
than before so do it. This is also preparation step to use stackdepot
in
From: Joonsoo Kim
Currently, copy_page_owner() doesn't copy all the owner information.
It skips last_migrate_reason because copy_page_owner() is used for
migration and it will be properly set soon. But, following patch
will use copy_page_owner() and this skip will cause the problem that
From: Joonsoo Kim
Page owner will be changed to store more deep stacktrace
so current temporary buffer size isn't enough. Increase it.
Signed-off-by: Joonsoo Kim
---
tools/vm/page_owner_sort.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git
From: Joonsoo Kim
We don't need to split freepages while holding the zone lock. It will cause
more contention on zone lock so not desirable.
Signed-off-by: Joonsoo Kim
---
include/linux/mm.h | 1 -
mm/compaction.c| 42 ++
mm/page_alloc.c| 27
From: Joonsoo Kim
This patchset changes a way to store stacktrace in page_owner in order to
reduce memory usage. Below is the motivation of this patchset, copied
from the patch 6.
Currently, we store each page's allocation stacktrace on corresponding
page_ext structure and it requires a lot of
Quoting Andrew G. Morgan (mor...@kernel.org):
> On 2 May 2016 6:04 p.m., "Eric W. Biederman" wrote:
> >
> > "Serge E. Hallyn" writes:
> >
> > > On Tue, Apr 26, 2016 at 03:39:54PM -0700, Kees Cook wrote:
> > >> On Tue, Apr 26, 2016 at 3:26 PM, Serge E. Hallyn
> wrote:
> > >> > Quoting Kees Cook
From: Joonsoo Kim
Recently, we allowed saving a stacktrace whose hashed value is 0.
This causes a problem: stackdepot can return 0 even on success, so a
user of stackdepot cannot distinguish success from failure. We
need to solve this problem. In this patch, 1 bit is added to
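One way to read "a bit is added": reserve a bit in the handle so that a successfully saved stacktrace never encodes to 0. A sketch with invented names and layout (not the actual stackdepot handle format):

```c
#include <stdint.h>

typedef uint32_t depot_handle_t;

#define DEPOT_VALID_BIT (1u << 31)   /* hypothetical reserved bit */

/* A valid handle always has the reserved bit set, so even a payload
 * of 0 cannot collide with the "failure" value 0. */
static depot_handle_t encode_handle(uint32_t payload)
{
	return payload | DEPOT_VALID_BIT;
}

static int handle_is_valid(depot_handle_t h)
{
	return h != 0;
}
```

With this, 0 can unambiguously mean "save failed" at every call site.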
Currently, in an ACPI based system, the processor driver registers
one cooling device per processor. However, the cooling device type
is the same for each processor. For example, on a system with four
processors, the sysfs reading of each cooling device would look like:
ebv@besouro ~ $ cat
On Tue, May 03, 2016 at 01:29:02PM +0900, Sergey Senozhatsky wrote:
> On (05/03/16 11:30), Sergey Senozhatsky wrote:
> > > We are concerned about returning to no per-cpu option but actually,
> > > I don't want. If duplicate compression is really problem(But It's really
> > > unlikely), we
"Andrew G. Morgan" writes:
> On 2 May 2016 6:04 p.m., "Eric W. Biederman"
> wrote:
>>
>> "Serge E. Hallyn" writes:
>>
>> > On Tue, Apr 26, 2016 at 03:39:54PM -0700, Kees Cook wrote:
>> >> On Tue, Apr 26, 2016 at 3:26 PM, Serge E. Hallyn
> wrote:
>> >> > Quoting Kees Cook
In testing with HiKey, we found that since commit 3f30b158eba5c60
(asix: On RX avoid creating bad Ethernet frames), we're seeing lots of
noise during network transfers:
[ 239.027993] asix 1-1.1:1.0 eth0: asix_rx_fixup() Data Header
synchronisation was lost, remaining 988
[ 239.037310] asix
The KVM_MAX_VCPUS define provides the maximum number of vCPUs per guest, and
also the upper limit for vCPU ids. This is okay for all archs except PowerPC
which can have higher ids, depending on the cpu/core/thread topology. In the
worst case (single threaded guest, host with 8 threads per core),
From: Wanpeng Li
max_idle_balance_cost and avg_idle, which are used to capture short idle,
are not associated with schedstats; however, the information from these
two factors isn't printed out without CONFIG_SCHEDSTATS.
This patch fixes it by moving the max_idle_balance_cost and avg_idle print
out of
On (05/03/16 11:30), Sergey Senozhatsky wrote:
> > We are concerned about returning to no per-cpu option but actually,
> > I don't want. If duplicate compression is really problem(But It's really
> > unlikely), we should try to solve the problem itself with different way
> > rather than
Hi Jens,
Today's linux-next merge of the block tree got a conflict in:
drivers/nvme/host/pci.c
between commit:
9bf2b972afea ("NVMe: Fix reset/remove race")
from Linus' tree and commit:
bb8d261e0888 ("nvme: introduce a controller state machine")
from the block tree.
I fixed it up (I
On Monday 02 May 2016 10:42 PM, J.D. Schroeder wrote:
> This series of patches fixes several discrepancies between the
> AM57/DRA7 clock tree description and the actual hardware behavior and
> frequencies. With these changes a more complete picture of the clock
> tree is represented for a few of
Hi John,
On 4/30/2016 15:00, John Keeping wrote:
Hi Enric,
On Fri, Apr 29, 2016 at 04:59:27PM +0200, Enric Balletbo Serra wrote:
2015-12-09 11:32 GMT+01:00 John Keeping :
If we only clear the tx/rx state when both are disabled it is not
possible to start/stop one multiple times while the
On Thu, 2016-04-28 at 00:34 +0200, Arnd Bergmann wrote:
> The rtc-generic driver provides an architecture specific
> wrapper on top of the generic rtc_class_ops abstraction,
> and powerpc has another abstraction on top, which is a bit
> silly.
>
> This changes the powerpc rtc-generic device to
On Fri, 29 Apr 2016, Nicolas Morey Chaisemartin wrote:
> Hi everyone,
>
> This is a repost from a different address as it seems the previous one ended
> in Gmail junk due to a domain error..
linux-kernel is a very high volume list which few are reading:
that also will account for your lack of
On Mon, May 02, 2016 at 12:20:08PM +0200, Thomas Renninger wrote:
> On Sunday, May 01, 2016 01:11:33 PM Sean Fu wrote:
> > Hi guys:
> > I encountered a build error when running "make V=1 tools/all".
> > Shall we write a patch to fix it?
>
> This is not a bug.
>
> > The