[...]
> > > >
> > > > From the architecture spec:
> > > >
> > > >
> > > > 11.1.3 GIC memory-mapped register access
> > > >
> > > > In any system, access to the following registers must be supported:
> > > >
> > > > [...]
> > > > * Byte accesses to:
> > > > - GICD_IPRIORITYR.
> > > >
t; OKAY or SLVERR response that is based on the GICT_ERR0CTLR.UE bit.".
Thus the register needs to be written by a double-word operation, and
the steps will be: read 32 bits, set the byte, and write it back.
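The read-modify-write sequence described above can be sketched as follows. This is an illustrative user-space mock, not the actual workaround code: the `regs` array stands in for the GICD_IPRIORITYR MMIO register file, and the function name is made up; only the four-priority-fields-per-32-bit-word layout follows the GIC spec.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: emulate a byte write to GICD_IPRIORITYR<n>
 * as a 32-bit read-modify-write. `regs` is a mock register file,
 * not real GIC MMIO. */
static void gicd_write_priority_byte(volatile uint32_t *regs,
                                     unsigned int intid, uint8_t prio)
{
    unsigned int shift = (intid % 4) * 8;   /* byte lane within the word */
    uint32_t word = regs[intid / 4];        /* 1. read 32 bits           */
    word &= ~(UINT32_C(0xff) << shift);     /* 2. clear the target byte  */
    word |= (uint32_t)prio << shift;        /*    ...and set the new one */
    regs[intid / 4] = word;                 /* 3. write the word back    */
}
```

The other three priority fields in the word are preserved, which is the point of the read-modify-write.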
[1] https://services.arm.com/support/s/case/5003t1L4Pba
Signed-off-by: Lecopzer Chen
---
> > Hi Will, Mark,
> >
> > On Fri, 15 Jan 2021 at 17:32, Sumit Garg wrote:
> > >
> > > With the recent feature added to enable perf events to use pseudo NMIs
> > > > as interrupts on platforms which support GICv3 or later, it's now been
> > > possible to enable hard lockup detector (or NMI watchdog)
> On Wed, Mar 24, 2021 at 12:05:22PM +0800, Lecopzer Chen wrote:
> > Before this patch, someone who wants to use VMAP_STACK with
> > KASAN_GENERIC enabled must explicitly select KASAN_VMALLOC.
> >
> > From Will's suggestion [1]:
> > > I would _really_ l
> Hi Will, Mark,
>
> On Fri, 15 Jan 2021 at 17:32, Sumit Garg wrote:
> >
> > With the recent feature added to enable perf events to use pseudo NMIs
> > as interrupts on platforms which support GICv3 or later, it's now been
> > possible to enable hard lockup detector (or NMI watchdog) on arm64
> >
module_alloc_base/end ffdcf4bed000 ffc01000
And loading some modules with insmod works fine.
Suggested-by: Ard Biesheuvel
Signed-off-by: Lecopzer Chen
---
arch/arm64/kernel/kaslr.c | 18 ++
arch/arm64/kernel/module.c | 16 +---
2 files changed, 19
[... KASAN shadow memory layout diagram, truncated; ends at KASAN_SHADOW_END ...]
Signed-off-by: Lecopzer Chen
Acked-by: Andrey Konovalov
Tested-by: Andrey Konovalov
Tested-by: Ard Biesheuvel
---
arch/arm64/mm/kasan_init.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/
t step to make VMAP_STACK selected unconditionally.
Binding KASAN_GENERIC and KASAN_VMALLOC together is supposed to cost more
memory at runtime; the alternative is to use SW_TAGS KASAN instead.
[1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/
Suggested-by: Will Deacon
Signed-of
ttps://lkml.org/lkml/2021/1/9/49
v1:
https://lore.kernel.org/lkml/20210103171137.153834-1-lecop...@gmail.com/
---
Lecopzer Chen (5):
arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
arm64: kasan: abstract _text and _end to KERNEL_START/END
arm64: Kconfig: support CONFIG_
Arm64 provides defined macros for KERNEL_START and KERNEL_END,
so use these abstractions instead of _text and _end.
Signed-off-by: Lecopzer Chen
Acked-by: Andrey Konovalov
Tested-by: Andrey Konovalov
Tested-by: Ard Biesheuvel
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file
We can back shadow memory in the vmalloc area now that the vmalloc area
isn't populated at kasan_init(), thus make KASAN_VMALLOC selectable.
Signed-off-by: Lecopzer Chen
Acked-by: Andrey Konovalov
Tested-by: Andrey Konovalov
Tested-by: Ard Biesheuvel
---
arch/arm64/Kconfig | 1 +
1 file chang
On Sat, Mar 20, 2021 at 1:38 AM Catalin Marinas wrote:
>
> On Sat, Feb 06, 2021 at 04:35:48PM +0800, Lecopzer Chen wrote:
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
>
On Sat, Mar 20, 2021 at 1:41 AM Catalin Marinas wrote:
>
> Hi Lecopzer,
>
> On Sat, Feb 06, 2021 at 04:35:47PM +0800, Lecopzer Chen wrote:
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow mem
Signed-off-by: Lecopzer Chen
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a8f5a9171a85..9be6a57f6447 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -190,6 +190,7 @@ config ARM64
select IOMMU_DMA if
st_kasan_module.ko
It works with KASLR and CONFIG_RANDOMIZE_MODULE_REGION_FULL,
randomizing the module region inside the vmalloc area.
It also works with VMAP_STACK; thanks to Ard for testing it.
[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
Signed-off-by: Lecopzer Chen
Acked-by: And
Now we can back shadow memory in the vmalloc area,
thus make KASAN_VMALLOC selectable.
Signed-off-by: Lecopzer Chen
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b28ec1..a8f5a9171a85 100644
--- a/arch/arm64/Kconfig
module_alloc_base/end ffdcf4bed000 ffc01000
And loading some modules with insmod works fine.
Suggested-by: Ard Biesheuvel
Signed-off-by: Lecopzer Chen
---
arch/arm64/kernel/kaslr.c | 18 ++
arch/arm64/kernel/module.c | 16 +---
2 files changed, 19
Arm64 provides defined macros for KERNEL_START and KERNEL_END,
so use these abstractions instead of _text and _end.
Signed-off-by: Lecopzer Chen
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c b
[... KASAN shadow memory layout diagram, truncated; ends at KASAN_SHADOW_END ...]
Signed-off-by: Lecopzer Chen
---
arch/arm64/mm/kasan_init.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..20d06008785f 1006
Will Deacon wrote on Saturday, February 6, 2021 at 1:19 AM:
>
> On Fri, Feb 05, 2021 at 12:37:21AM +0800, Lecopzer Chen wrote:
> >
> > > On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> >
> On Thu, Feb 04, 2021 at 11:53:46PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > ("kasan: support back
> On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > ("kasan: support backing vmalloc space with
Thu, Feb 04, 2021 at 10:51:27PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:50PM +0800, Lecopzer Chen wrote:
> > > > Arm64 provides defined macros for KERNEL_START and KERNEL_END,
> > > > thus replace them by the abstraction instead of using _
> On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > According to how x86 ported it [1], they early allo
> On Sat, Jan 09, 2021 at 06:32:50PM +0800, Lecopzer Chen wrote:
> > Arm64 provides defined macros for KERNEL_START and KERNEL_END,
> > thus replace them by the abstraction instead of using _text and _end.
> >
> > Signed-off-by: Lecopzer Chen
> > ---
>
> On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Like how the MODULES_VADDR does now, just not to
> On Sat, 9 Jan 2021 at 11:33, Lecopzer Chen wrote:
> >
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > Like how the MODULES_VADDR does now, just not to e
Hi,
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c93e801a45e9..3f17c73ad582 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3807,16 +3807,13 @@ alloc_flags_nofragment(struct zone *zone, gfp_t
> gfp_mask)
> return alloc_flags;
> }
>
> -static inline unsign
> On Sat, Jan 09, 2021 at 06:32:52PM +0800, Lecopzer Chen wrote:
> > After KASAN_VMALLOC works in arm64, we can randomize module region
> > into vmalloc area now.
> >
> > Test:
> > VMALLOC area ffc01000 fffdf
Hi all,
I don't see any fix for this issue yet (maybe I missed it?).
Could we fix it if there is a better solution?
This issue has existed for almost two years.
Thanks!
BRs,
Lecopzer
> On Tue, Jan 26, 2021 at 11:01:50PM +0800, Lecopzer Chen wrote:
> > > On 2021-01-26 10:59:32 [+], Russell King - ARM Linux admin wrote:
> > > > On Tue, Jan 26, 2021 at 05:17:08PM +0800, Lecopzer Chen wrote:
> > > > > Hi all,
> > > > >
>
> On 2021-01-26 10:59:32 [+], Russell King - ARM Linux admin wrote:
> > On Tue, Jan 26, 2021 at 05:17:08PM +0800, Lecopzer Chen wrote:
> > > Hi all,
> > >
> > > I don't see any fix for this issue now(maybe I missed it..?),
> > > could we fix
Dear reviewers and maintainers,
Could we have a chance to upstream this in 5.12-rc?
Then if these patches have any problem, I can fix it as soon as possible
before the next -rc comes.
thanks!
BRs,
Lecopzer
kasan_remove_zero_shadow() shall use the original virtual address (start)
and size, instead of the shadow address.
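For context, generic KASAN maps each 8-byte granule of memory to one shadow byte, which is why mixing up the original and shadow addresses changes the covered range. A minimal sketch of the mapping follows; the offset value is purely illustrative, not arm64's real KASAN_SHADOW_OFFSET:

```c
#include <assert.h>
#include <stdint.h>

/* Generic KASAN maps each 8-byte granule to one shadow byte:
 * shadow = (addr >> 3) + offset. The offset below is illustrative.
 * Passing an already-translated shadow address into a function that
 * expects the original address shrinks the covered range by 8x and
 * lands in the wrong place, which is the bug fixed above. */
#define SHADOW_SCALE_SHIFT 3
#define SHADOW_OFFSET      UINT64_C(0x100000000)

static uint64_t mem_to_shadow(uint64_t addr)
{
    return (addr >> SHADOW_SCALE_SHIFT) + SHADOW_OFFSET;
}
```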
Fixes: 0207df4fa1a86 ("kernel/memremap, kasan: make ZONE_DEVICE with work with
KASAN")
Signed-off-by: Lecopzer Chen
Reviewed-by: Andrey Konovalov
Cc: Andrey Ryabinin
Cc: Dan Wi
C area and
should keep these areas populated.
Signed-off-by: Lecopzer Chen
---
arch/arm64/mm/kasan_init.c | 23 ++-
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..39b218a64279 100644
--- a/
module_alloc_base/end ffdcf4bed000 ffc01000
And loading some modules with insmod works fine.
Suggested-by: Ard Biesheuvel
Signed-off-by: Lecopzer Chen
---
arch/arm64/kernel/kaslr.c | 18 ++
arch/arm64/kernel/module.c | 16 +---
2 files changed, 19
Arm64 provides defined macros for KERNEL_START and KERNEL_END,
so use these abstractions instead of _text and _end.
Signed-off-by: Lecopzer Chen
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c b
de vmalloc area.
[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
Signed-off-by: Lecopzer Chen
Acked-by: Andrey Konovalov
Tested-by: Andrey Konovalov
v2 -> v1
1. kasan_init.c tweak indent
2. change Kconfig depends only on HAVE_ARCH_KASAN
3. su
Now we can back shadow memory in the vmalloc area,
thus support KASAN_VMALLOC in KASAN_GENERIC mode.
Signed-off-by: Lecopzer Chen
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e17351e4f3..ba03820402ee 100644
--- a/arch
Hi Ard,
> On Fri, 8 Jan 2021 at 19:31, Andrey Konovalov wrote:
> >
> > On Sun, Jan 3, 2021 at 6:12 PM Lecopzer Chen wrote:
> > >
> > > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > > ("kasan: support backing vmalloc space with r
Hi Andrey,
>
> On Sun, Jan 3, 2021 at 6:12 PM Lecopzer Chen wrote:
> >
> > Linux supports KAsan for VMALLOC since commit 3c5c3cfb9ef4da9
> > ("kasan: support backing vmalloc space with real shadow memory")
> >
> > According to how x86 ported it [1]
Hi Andrey,
> On Sun, Jan 3, 2021 at 6:13 PM Lecopzer Chen wrote:
> >
> > Now I have no device to test for HW_TAG, so keep it not selected
> > until someone can test this.
> >
> > Signed-off-by: Lecopzer Chen
> > ---
> > arch/arm64/Kconfig | 1 +
>
Hi Sumit,
Thanks for your reply.
> On Mon, 21 Dec 2020 at 21:53, Lecopzer Chen
> wrote:
> >
> > commit 367c820ef08082 ("arm64: Enable perf events based hard lockup
> > detector")
> > reinitializes lockup detector after arm64 PMU is initial
For now I have no device to test HW_TAG, so keep it unselected
until someone can test this.
Signed-off-by: Lecopzer Chen
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e17351e4f3..29ab35aab59e 100644
--- a/arch/arm64
Arm64 provides defined macros for KERNEL_START and KERNEL_END,
so use these abstractions instead of _text and _end directly.
Signed-off-by: Lecopzer Chen
---
arch/arm64/mm/kasan_init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c
C area and
should keep these areas populated.
Signed-off-by: Lecopzer Chen
---
arch/arm64/mm/kasan_init.c | 23 ++-
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..d7ad3f1e9c4d 100644
--- a/
no proper device), thus keep
HW_TAG and KASAN_VMALLOC mutually exclusive until the functionality
is confirmed.
[1]: commit 0609ae011deb41c ("x86/kasan: support KASAN_VMALLOC")
Signed-off-by: Lecopzer Chen
Lecopzer Chen (3):
arm64: kasan: don't populate vmalloc area for CONFIG_KAS
map, kasan: make ZONE_DEVICE with work with
KASAN")
Signed-off-by: Lecopzer Chen
---
mm/kasan/init.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index 67051cfae41c..ae9158f7501f 100644
--- a/mm/kasan/init.c
+++ b/
kasan_remove_zero_shadow() shall use the original virtual address (start)
and size, instead of the shadow address.
Fixes: 0207df4fa1a86 ("kernel/memremap, kasan: make ZONE_DEVICE with work with
KASAN")
Signed-off-by: Lecopzer Chen
---
mm/kasan/init.c | 3 +--
1 file changed, 1 insertion(+), 2
+0x18/0x24
lockup_detector_init+0x44/0xa8
armv8_pmu_driver_init+0x54/0x78
do_one_initcall+0x184/0x43c
kernel_init_freeable+0x368/0x380
kernel_init+0x1c/0x1cc
ret_from_fork+0x10/0x30
Fixes: 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
Sig
dog_hld.c: Fix access percpu in preemptible context")
> url:
> https://github.com/0day-ci/linux/commits/Lecopzer-Chen/kernel-watchdog_hld-c-Fix-access-percpu-in-preemptible-context/20201217-211549
> base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
> acc
BRs,
Lecopzer
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-9):
>
> commit: 6e37d53a67753bcb12a0b9102cac85d98f8a0453 ("[PATCH]
> kernel/watchdog_hld.c: Fix access percpu in preemptible context")
> url:
> https://github.com/0day-ci/
Hi Catalin,
Thanks for your explanation.
> > so there is two points
> > 1. out-of-tree function can't be approved
> > I totally agree with this :) so we may have a driver upstream in the
> > future.
>
> It may not be upstreamable if it relies on the old APM interface ;).
>
> > 2. APM not
+0x18/0x24
lockup_detector_init+0x44/0xa8
armv8_pmu_driver_init+0x54/0x78
do_one_initcall+0x184/0x43c
kernel_init_freeable+0x368/0x380
kernel_init+0x1c/0x1cc
ret_from_fork+0x10/0x30
Fixes: 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
Signed-off-b
Does that make sense?
thanks!
BRs,
Lecopzer
> On Wed, Nov 25, 2020 at 07:41:30PM +0800, Lecopzer Chen wrote:
> > >> In order to select CONFIG_APM_EMULATION, make SYS_SUPPORTS_APM_EMULATION
> > >> default is y if ACPI isn't configured.
> > >
> > >I
sn't configured.
>
> Signed-off-by: Lecopzer Chen
> Suggested-by: YJ Chiang
> ---
> arch/arm64/Kconfig | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 1515f6f153a0..5e9e3694015a 100644
> --- a/arch/ar
Bernd Edlinger
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Lecopzer Chen
Cc: YJ Chiang
---
arch/arm/mm/fault.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index efa402025031..f1b57b7d5a0c 100644
>> From: "Lecopzer Chen"
>>
>> Although most of modern devices use ACPI, there still has combination
>> of APM + ARM64.
>>
>> In order to select CONFIG_APM_EMULATION, make SYS_SUPPORTS_APM_EMULATION
>> default is y if ACPI isn't configu
From: "Lecopzer Chen"
Although most modern devices use ACPI, there are still combinations
of APM + ARM64.
In order to select CONFIG_APM_EMULATION, make SYS_SUPPORTS_APM_EMULATION
default to y if ACPI isn't configured.
Signed-off-by: Lecopzer Chen
Suggested-by: YJ Chiang
"redundant" in cma_alloc() now.
Signed-off-by: Lecopzer Chen
---
mm/cma.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7cda9f..3692a34e2353 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -38,7 +38,6 @@
struct cma cma_areas[MAX_CMA_AREAS];
this work :)
Hi Julien,
Can any Arm maintainer proceed with this action?
This is really useful for debugging.
Thank you!!
[1] https://lkml.org/lkml/2020/4/24/328
Lecopzer
Sumit Garg wrote on Monday, May 18, 2020 at 1:46 PM:
>
> + Julien
>
> Hi Lecopzer,
>
> On Sat, 16 May 2020 at
ee_percpu_irq(irq, &cpu_armpmu);
per_cpu(cpu_irq, cpu) = 0;
}
Thanks,
Lecopzer
Lecopzer Chen wrote on Saturday, May 16, 2020 at 8:50 PM:
>
> Register perf interrupts by request_nmi()/percpu_nmi() when both
> ARM64_PSEUDO_NMI and ARM64_PSEUDO_NMI_PERF are enabled and nmi
> cpufreature is ac
Register perf interrupts via request_nmi()/request_percpu_nmi() when both
ARM64_PSEUDO_NMI and ARM64_PSEUDO_NMI_PERF are enabled and the NMI
cpufeature is active.
Signed-off-by: Lecopzer Chen
---
drivers/perf/arm_pmu.c | 51 +++-
include/linux/perf/arm_pmu.h | 6
The perf ISR doesn't support NMI context, thus add the necessary
conditions to handle NMI context:
- We should not hold pmu_lock, since it may have already been acquired
before the NMI triggered.
- irq_work should not run in NMI context.
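The two conditions above can be sketched as a user-space mock of the control flow. All names here are stand-ins, not the kernel API: in NMI context the handler defers its work instead of running irq_work inline or taking a lock the interrupted context may already hold.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock of the NMI-safe pattern; none of these names are the real
 * kernel API. In NMI context we must not take pmu_lock (the
 * interrupted code may hold it) and must not run irq_work inline. */
static bool in_nmi_ctx;        /* stand-in for the kernel's in_nmi() */
static int pending_work;       /* work queued for later processing   */
static int inline_work;        /* work handled directly in IRQ mode  */

static void pmu_overflow_handler(void)
{
    if (in_nmi_ctx) {
        pending_work++;        /* defer: runs once we leave NMI      */
    } else {
        /* IRQ context: safe to take locks and do the work inline */
        inline_work++;
    }
}
```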
Signed-off-by: Lecopzer Chen
---
arch/arm64/k
This is an extension of pseudo NMI that registers perf event
interrupts as NMIs.
It's helpful for sampling irqs-off contexts when using perf.
Signed-off-by: Lecopzer Chen
---
arch/arm64/Kconfig | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/arm64/Kconfig b
Lecopzer Chen (3):
arm_pmu: Add support for perf NMI interrupts registration
arm64: perf: Support NMI context for perf event ISR
arm64: Kconfig: Add support for the Perf NMI
arch/arm64/Kconfig | 10 +++
arch/arm64/kernel/perf_event.c | 36 ++--
drivers/perf
ion_buffer_add() inserts the ion_buffer into the rbtree every time an
ion_buffer is created, but it is never used after the ION reworking.
Also, since buffer_lock protects only the rbtree operation, remove it
as well.
Signed-off-by: Lecopzer Chen
Cc: YJ Chiang
Cc: Lecopzer Chen
---
drivers/staging/android/ion/ion.c | 36
The size argument passed into sparse_buffer_alloc() has already been
aligned to PAGE_SIZE or PMD_SIZE.
If the aligned size is not a power of 2 (e.g. 0x48), PTR_ALIGN() will
return a wrong value.
Use roundup() to round sparsemap_buf up to the next multiple of size.
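A quick illustration of why a mask-based align breaks for a non-power-of-2 step such as 0x48. The macro shapes here are illustrative analogues of the kernel's roundup() and PTR_ALIGN(), not the kernel code itself:

```c
#include <assert.h>
#include <stdint.h>

/* ROUNDUP works for any step; a mask-based align is only correct
 * when the step is a power of two, because `& ~(step - 1)` assumes
 * `step - 1` is a contiguous run of low bits. */
#define ROUNDUP(x, step)    ((((x) + (step) - 1) / (step)) * (step))
#define ALIGN_MASK(x, step) (((x) + (step) - 1) & ~((uintptr_t)(step) - 1))
```

For step 0x48, ROUNDUP(0x49, 0x48) yields 0x90, a multiple of 0x48, while the mask form yields a value that is not a multiple of 0x48 at all.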
Signed-off-by: Lecopzer Chen
ffc07d80 - ffc07f60 (30M)
Sparse_buffer_fini
Free ffc07f60 - ffc07f623000 (140K)
The reserved memory between ffc07d623000 - ffc07d80
(~=1.9M) is unfreed.
Explicitly free the redundant aligned memory.
Signed-off-by: Lecopzer Chen
Signed-off-by: Mark
Dear Sebastian,
Thanks for your review and helpful suggestion,
Since our platform is relatively simple, I just missed the part you referred to.
I will fix them in patch v2.
Thanks,
Lecopzer
Thanks a lot for the reply!
I just misunderstood how a PPI is registered and thought
I had a chance to eliminate the code.
This patch seems nonsense now, please ignore it.
Sorry to disturb you guys.
Thanks,
Lecopzer
ff-by: Lecopzer Chen
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Marc Zyngier
Cc: Julien Thierry
Cc: YJ Chiang
Cc: Lecopzer Chen
---
kernel/irq/manage.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 78f3dde
From: "Lecopzer Chen"
Emulate battery current (variable) and battery CHARGE_COUNTER
(same as battery_capacity) properties.
Signed-off-by: Lecopzer Chen
Cc: YJ Chiang
---
drivers/power/supply/test_power.c | 33 +++
1 file changed, 33 insertions(+)
di