Add hisilicon spi-nor flash controller driver
Signed-off-by: Binquan Peng
Signed-off-by: Jiancheng Xue
Acked-by: Rob Herring
Reviewed-by: Ezequiel Garcia
---
change log
v8:
Fixed issues pointed out by Ezequiel Garcia and Brian Norris.
Moved dts binding file to mtd directory.
Changed the
On Thu 10-03-16 15:50:13, Johannes Weiner wrote:
> When setting memory.high below usage, nothing happens until the next
> charge comes along, and then it will only reclaim its own charge and
> not the now potentially huge excess of the new memory.high. This can
> cause groups to stay in excess of
Hi, Jianyu
On 03/11/16 at 03:19pm, Jianyu Zhan wrote:
> On Fri, Mar 11, 2016 at 2:21 PM, wrote:
> > A useful use case for min_t and max_t is comparing two values with larger
> > type. For example comparing an u64 and an u32, usually we do not want to
> > truncate the u64, so we need to use min_t
At the end of the function we expect "status" to be zero, but it's
either -EINVAL or uninitialized.
Fixes: 788bf83db301 ('drm/amdkfd: Add wave control operation to debugger')
Signed-off-by: Dan Carpenter
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
Driver enabled runtime PM but did not revert this on removal. Re-binding
of a device triggered warning:
exynos-rng 10830400.rng: Unbalanced pm_runtime_enable!
Fixes: b329669ea0b5 ("hwrng: exynos - Add support for Exynos random number generator")
Signed-off-by: Krzysztof Kozlowski
---
The driver uses pm_runtime_put_noidle() after initialization, so the
device might remain in an active state if the core does not read from it
(the read callback contains a regular runtime put). The put_noidle() was
probably chosen to avoid an unneeded suspend/resume cycle after
initialization.
Add a proper error path (disabling runtime PM) for when hwrng
registration fails.
Fixes: b329669ea0b5 ("hwrng: exynos - Add support for Exynos random number generator")
Signed-off-by: Krzysztof Kozlowski
---
drivers/char/hw_random/exynos-rng.c | 7 ++-
1 file changed, 6 insertions(+), 1
Replace the ifdef with __maybe_unused to silence a compiler warning when
SUSPEND=n and PM=y:
drivers/char/hw_random/exynos-rng.c:166:12: warning: ‘exynos_rng_suspend’
defined but not used [-Wunused-function]
static int exynos_rng_suspend(struct device *dev)
^
In case of a timeout during the read operation, the exit path lacked a PM
runtime put. This could lead to an unbalanced runtime PM usage counter,
leaving the device in an active state.
Fixes: d7fd6075a205 ("hwrng: exynos - Add timeout for waiting on init done")
Cc: # v4.4+
Signed-off-by: Krzysztof
Hi, Minfei
On 03/11/16 at 03:19pm, Minfei Huang wrote:
> On 03/11/16 at 02:21pm, dyo...@redhat.com wrote:
> > @@ -231,7 +231,8 @@ static ssize_t __read_vmcore(char *buffe
> >
> > list_for_each_entry(m, &vmcore_list, list) {
> > if (*fpos < m->offset + m->size) {
> > -
On Tue, Mar 08, 2016 at 03:50:41PM +, Jon Medhurst (Tixy) wrote:
> > The residue is requested for "a descriptor". For example if you have prepared
> > two descriptors A and B and submitted them, then you can request status and
> > residue for A and you need to calculate that for A only and not
On Thu, Mar 10, 2016 at 09:55:25PM -0700, Jonathan Corbet wrote:
> Not on kernel.org. From MAINTAINERS:
>
> T: git git://git.lwn.net/linux.git docs-next
All right, thanks :)
--
Philippe Loctaux
Now, VM has a feature to migrate non-lru movable pages so
balloon doesn't need custom migration hooks in migrate.c
and compact.c. Instead, this patch implements page->mapping
->{isolate|migrate|putback} functions.
With that, we could remove hooks for ballooning in general
migration functions and
Currently, we rely on class->lock to prevent zspage destruction.
It was okay until now because the critical section is short, but
with run-time migration it could be long, so class->lock is no
longer a good approach.
So, this patch introduces [un]freeze_zspage functions which
freeze allocated
Let's remove unused pool param in obj_free
Signed-off-by: Minchan Kim
---
mm/zsmalloc.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 156edf909046..b4fb11831acb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1435,8 +1435,7 @@
For supporting migration from VM, we need to have address_space
on every page so zsmalloc shouldn't use page->mapping. So,
this patch moves zs_meta from mapping to freelist.
Signed-off-by: Minchan Kim
---
mm/zsmalloc.c | 23 ---
1 file changed, 12 insertions(+), 11
This patch introduces a run-time migration feature for zspage.
To begin with, it supports only head page migration for
easy review (later patches will support tail page migration).
For migration, it supports three functions
* zs_page_isolate
It isolates a zspage which includes a subpage the VM wants to
Currently, putback_zspage frees the zspage under class->lock
if fullness becomes ZS_EMPTY, but that makes it hard to implement
a locking scheme for the new zspage migration.
So, this patch separates free_zspage from putback_zspage
and frees the zspage outside class->lock, which is preparation for
zspage
For tail page migration, we shouldn't use page->lru, which
was used for page chaining, because the VM will use it for its own
purposes, so we need another field for chaining.
For chaining, a singly linked list is enough, and page->index
of a tail page, which points to the first object offset in the page, could
be replaced
This patch cleans up function parameter ordering to order
higher data structure first.
Signed-off-by: Minchan Kim
---
mm/zsmalloc.c | 50 ++
1 file changed, 26 insertions(+), 24 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index
This patch enables tail page migration of zspage.
At this point, I tested for zsmalloc regressions with a micro-benchmark
which does zs_malloc/map/unmap/zs_free for all size classes
on every CPU (my system has 12) for 20 seconds.
It shows a 1% regression, which is really small when we consider
the benefit of
Currently, we store class:fullness into page->mapping.
The number of classes we can support is 255 and fullness is 4, so
10 bits (8 + 2) are enough to represent them.
Meanwhile, 11 bits are enough to store the number of in-use
objects in a zspage.
For example, if we assume a 64K PAGE_SIZE,
For migration, we need to create the sub-page chain of a zspage
dynamically, so this patch factors it out from alloc_zspage.
As a minor refactoring, it makes the OBJ_ALLOCATED_TAG assignment
clearer in obj_malloc (it could be another patch, but it's
trivial so I wanted to put it together in this patch).
Every zspage in a size_class has the same maximum number of objects,
so we can move it into size_class.
Signed-off-by: Minchan Kim
---
mm/zsmalloc.c | 29 ++---
1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index
Zsmalloc stores the first free object's position in first_page->freelist
in each zspage. If we change it to an object index from first_page
instead of a location, we can squeeze it into page->mapping, because
the number of bits we need to store the offset is at most 11.
Signed-off-by: Minchan Kim
---
There are many BUG_ONs in zsmalloc.c, which is not recommended, so
replace them with alternatives.
The normal rule is as follows:
1. avoid BUG_ON if possible. Instead, use VM_BUG_ON or VM_BUG_ON_PAGE
2. use VM_BUG_ON_PAGE if we need to see struct page's fields
3. use those assertions in primitive functions
Procedure of page migration is as follows:
First of all, it should isolate a page from LRU and try to
migrate the page. If it is successful, it releases the page
for freeing. Otherwise, it should put the page back to LRU
list.
For LRU pages, we have used putback_lru_page for both freeing
and
We have allowed migration only for LRU pages until now, and it was
enough to make high-order pages. But recently, embedded systems (e.g.,
webOS, Android) use lots of non-movable pages (e.g., zram, GPU memory),
so we have seen several reports about troubles with small high-order
allocations. For fixing
From: Gioh Kim
anon_inodes already has complete interfaces to create and manage
many anonymous inodes, but it doesn't have an interface to get a
new inode. Other sub-modules can create anonymous inodes
without creating and mounting their own pseudo filesystems.
Acked-by: Rafael Aquini
Signed-off-by: Gioh
Zsmalloc is ready for page migration, so zram can use __GFP_MOVABLE
from now on.
I did a test to see how it helps to make higher-order pages.
The test scenario is as follows.
KVM guest, 1G memory, ext4-formatted zram block device,
for i in `seq 1 8`;
do
dd if=/dev/vda1 of=mnt/test$i.txt bs=128M
> On Mar 11, 2016, at 15:23, Lu Bing wrote:
>
> From: l00215322
>
> Many Android devices have zram, so we should add "MM_SWAPENTS" in tasksize.
> Referring to oom_kill.c, we add pte as well.
>
> Reviewed-by: Chen Feng
> Reviewed-by: Fu Jun
> Reviewed-by: Xu YiPing
> Reviewed-by: Yu DongBin
>
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs, and so on)
and processes failing to fork easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer
This patch cleans up the "struct page" function parameter.
Many zsmalloc functions expect that the page parameter is "first_page",
so use "first_page" rather than "page" for code readability.
Signed-off-by: Minchan Kim
---
mm/zsmalloc.c | 62 ++-
Hi Shawn,
> -----Original Message-----
> From: Shawn Lin [mailto:shawn@kernel-upstream.org]
> Sent: Friday, March 11, 2016 2:34 PM
> To: Chao Yu; 'Shawn Lin'; 'Jaegeuk Kim'
> Cc: shawn@kernel-upstream.org; linux-kernel@vger.kernel.org;
> linux-f2fs-de...@lists.sourceforge.net
> Subject:
From: l00215322
Many Android devices have zram, so we should add "MM_SWAPENTS" in tasksize.
Referring to oom_kill.c, we add pte as well.
Reviewed-by: Chen Feng
Reviewed-by: Fu Jun
Reviewed-by: Xu YiPing
Reviewed-by: Yu DongBin
Signed-off-by: Lu Bing
---
drivers/staging/android/lowmemorykiller.c | 4
On Fri, Mar 11, 2016 at 2:21 PM, wrote:
> A useful use case for min_t and max_t is comparing two values with larger
> type. For example comparing an u64 and an u32, usually we do not want to
> truncate the u64, so we need to use min_t or max_t with u64.
>
> To simplify the usage introducing two
On 03/11/16 at 02:21pm, dyo...@redhat.com wrote:
> @@ -231,7 +231,8 @@ static ssize_t __read_vmcore(char *buffe
>
> list_for_each_entry(m, &vmcore_list, list) {
> if (*fpos < m->offset + m->size) {
> - tsz = min_t(size_t, m->offset + m->size - *fpos,
> buflen);
Hi,
Could be related (the same?) with [0].
I have a driver (hwrng/exynos-rng) which in probe does:
pm_runtime_set_autosuspend_delay(&pdev->dev, EXYNOS_AUTOSUSPEND_DELAY);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_enable(&pdev->dev);
and in remove:
pm_runtime_disable(&pdev->dev)
Just before unbinding in
Rajesh Bhagat writes:
> [ text/plain ]
> The 0.95 xHCI spec says that non-control endpoints will be halted if a
> babble is detected on a transfer. The 0.96 xHCI spec says all types of
> endpoints will be halted when a babble is detected. Some hardware that
> claims to be 0.95 compliant halts
Function skb_splice_bits can return negative values; its result should
be assigned to a signed variable to allow correct error checking.
The problem has been detected using patch
scripts/coccinelle/tests/unsigned_lesser_than_zero.cocci.
Signed-off-by: Andrzej Hajda
---
net/kcm/kcmsock.c | 2 +-
On Mon, Mar 7, 2016 at 9:03 AM, Toshi Kani wrote:
> Let me try to summarize...
>
> The original issue Luis brought up was that drivers written to work with
> MTRR may create a single ioremap range covering multiple cache attributes
> since MTRR can overwrite cache attribute of a certain range.
On Thu, Mar 10, 2016 at 8:05 AM, Niklas Söderlund
wrote:
> Hi Christoph,
>
> On 2016-03-07 23:38:47 -0800, Christoph Hellwig wrote:
>> Please add some documentation on where/how this should be used. It's
>> not a very obvious interface.
>
> Good idea, I have added the following to
Version 2 of the Server Base System Architecture (SBSAv2) describes the
Generic UART registers as 32 bits wide. At least one implementation, found
on the Qualcomm Technologies QDF2432, only supports 32 bit accesses.
SBSAv3, which describes supported access sizes in greater detail,
explicitly
Hi Chao Yu,
On 2016/3/11 13:29, Chao Yu wrote:
Hi Shawn,
-----Original Message-----
From: Shawn Lin [mailto:shawn@rock-chips.com]
Sent: Friday, March 11, 2016 11:28 AM
To: Jaegeuk Kim
Cc: Shawn Lin; linux-kernel@vger.kernel.org;
linux-f2fs-de...@lists.sourceforge.net
Subject: [f2fs-dev]
A useful use case for min_t and max_t is comparing two values where one has a
larger type. For example, comparing a u64 and a u32, we usually do not want to
truncate the u64, so we need to use min_t or max_t with u64.
To simplify the usage, introduce two more macros, min_lt and max_lt;
'lt' means larger type.
Hi,
We found a min_t type-casting issue in fs/proc/vmcore.c: it uses the smaller
type in min_t, so during i386 PAE testing a 64-bit value was truncated,
which caused vmcore saving to fail and a BUG() in the mmap case.
I introduced new macros, min_lt and max_lt, to select the larger data type
of x and y.
On an i686 PAE-enabled machine the contiguous physical area can be large,
and it can cause trimming down of variables in the calculation below in
read_vmcore() and mmap_vmcore():
tsz = min_t(size_t, m->offset + m->size - *fpos, buflen);
Then the real size passed down is no longer correct.
Hi Emese,
2016-03-07 8:05 GMT+09:00 Emese Revfy :
> Add a very simple plugin to demonstrate the GCC plugin infrastructure. This
> GCC
> plugin computes the cyclomatic complexity of each function.
>
> The complexity M of a function's control flow graph is defined as:
> M = E - N + 2P
> where
>
On 02/29/2016 15:29, Yong, Jonathan wrote:
Hello LKML,
This is a preliminary implementation of the PTM[1] support driver, the code
is obviously hacked together and in need of refactoring. This driver has
only been tested against a virtual PCI bus.
The driver's job is to get to every PTM capable