On Fri, Mar 12, 2021 at 06:33:01PM -0800, Andrii Nakryiko wrote:
> On Fri, Mar 12, 2021 at 6:22 PM Sultan Alsawaf wrote:
> >
> > On Fri, Mar 12, 2021 at 05:31:14PM -0800, Andrii Nakryiko wrote:
> > > On Fri, Mar 12, 2021 at 1:43 PM Sultan Alsawaf wrote:
On Fri, Mar 12, 2021 at 05:31:14PM -0800, Andrii Nakryiko wrote:
> On Fri, Mar 12, 2021 at 1:43 PM Sultan Alsawaf wrote:
> >
> > From: Sultan Alsawaf
> >
> > We should be using the program fd here, not the perf event fd.
>
> Why? Can you elaborate on what is
From: Sultan Alsawaf
We should be using the program fd here, not the perf event fd.
Fixes: 63f2f5ee856ba ("libbpf: add ability to attach/detach BPF program to perf event")
Signed-off-by: Sultan Alsawaf
---
tools/lib/bpf/libbpf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
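For context on what the attach path does: libbpf's perf attach ultimately boils
down to a PERF_EVENT_IOC_SET_BPF ioctl, whose target is the perf event fd and
whose argument is the BPF program fd. A minimal userspace sketch of that
contract (illustrative only, not the disputed libbpf hunk itself):

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>

/* Attach an already-loaded BPF program to an open perf event. The perf
 * event fd is the ioctl target; the program fd is the ioctl argument. */
static int attach_bpf_to_perf_event(int perf_fd, int prog_fd)
{
	if (ioctl(perf_fd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0)
		return -errno;
	if (ioctl(perf_fd, PERF_EVENT_IOC_ENABLE, 0) < 0)
		return -errno;
	return 0;
}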
On Fri, Oct 16, 2020 at 01:16:18PM +0200, Hans de Goede wrote:
> Hi,
>
> On 9/22/20 11:19 AM, Jiri Kosina wrote:
> > On Wed, 16 Sep 2020, Sultan Alsawaf wrote:
> >
> >> From: Sultan Alsawaf
> >>
> >> This is a fixed resubmission of "[PATCH 0/2] i2c-hid: Save power by reducing i2c xfers with block reads"
On Tue, Sep 22, 2020 at 09:59:44PM +0200, Jiri Kosina wrote:
> On Tue, 22 Sep 2020, Wolfram Sang wrote:
>
> > > Hans, Benjamin, could you please give this patchset some smoke-testing? It
> > > looks good to me, but I'd like it to get some testing from your testing
> > > machinery before m
On Thu, Sep 17, 2020 at 10:57:04PM +0200, Wolfram Sang wrote:
> On Wed, Sep 16, 2020 at 10:22:55PM -0700, Sultan Alsawaf wrote:
> > From: Sultan Alsawaf
> >
> > According to the SMBus 3.0 protocol specification, block transfer limits
> > were increased from 32 bytes to 255 bytes. Remove the obsolete 32-byte
From: Sultan Alsawaf
According to the SMBus 3.0 protocol specification, block transfer limits
were increased from 32 bytes to 255 bytes. Remove the obsolete 32-byte
limitation.
Signed-off-by: Sultan Alsawaf
---
drivers/i2c/busses/i2c-designware-master.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
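For reference, the kernel's I2C_SMBUS_BLOCK_MAX constant is 32; a driver
honoring the SMBus 3.0 limit would validate block lengths along these lines
(sketch only; the 255-byte constant name below is hypothetical):

#include <linux/types.h>

#define SMBUS3_BLOCK_MAX	255	/* hypothetical name; SMBus 3.0 protocol limit */

/* A conforming block transfer starts with a length byte of 1..255. */
static bool smbus3_block_len_valid(u32 len)
{
	return len >= 1 && len <= SMBUS3_BLOCK_MAX;
}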
From: Sultan Alsawaf
The point of adding a byte to len in i2c_dw_recv_len() is to make sure
that tx_buf_len is nonzero, so that i2c_dw_xfer_msg() can let the i2c
controller know that the i2c transaction can end. Otherwise, the i2c
controller will think that the transaction can never end for
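A simplified sketch of that mechanism (stand-in types; the real
i2c_dw_recv_len() in drivers/i2c/busses/i2c-designware-master.c also accounts
for PEC bytes and reads already outstanding):

struct dw_sketch_msg { unsigned int len; };
struct dw_sketch_dev {
	struct dw_sketch_msg *msgs;
	int msg_read_idx;
	unsigned int tx_buf_len;
};

/* Once the device reports its block length, fix up the message length and
 * keep tx_buf_len nonzero so i2c_dw_xfer_msg() runs one more time and can
 * tag the final read command with the STOP condition. */
static void dw_sketch_recv_len(struct dw_sketch_dev *dev, unsigned int len)
{
	dev->msgs[dev->msg_read_idx].len = len;
	dev->tx_buf_len = len + 1;	/* the extra byte described above */
}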
From: Sultan Alsawaf
SMBus block reads can be broken because the read function will just skip
over bytes it doesn't like until reaching a byte that conforms to the
length restrictions for block reads. This is problematic when it isn't
known if the incoming payload is indeed a confor
From: Sultan Alsawaf
This is a fixed resubmission of "[PATCH 0/2] i2c-hid: Save power by reducing i2c
xfers with block reads". That original patchset did not have enough fixes for
the designware i2c adapter's I2C_M_RECV_LEN feature, which is documented
extensively in the original
From: Sultan Alsawaf
We have no way of knowing how large an incoming payload is going to be,
so the only strategy available up until now has been to always retrieve
the maximum possible report length over i2c, which can be quite
inefficient. For devices that send reports in block read format
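The mechanism being relied on is I2C_M_RECV_LEN: the device's own length byte
sizes the transfer, so the host no longer has to over-read. A hedged sketch of
such a read (illustrative, not the i2c-hid patch itself):

#include <linux/errno.h>
#include <linux/i2c.h>

/* buf must be large enough for a maximal block read plus the length byte. */
static int read_block_report(struct i2c_client *client, u8 *buf)
{
	struct i2c_msg msg = {
		.addr	= client->addr,
		.flags	= I2C_M_RD | I2C_M_RECV_LEN,
		.len	= 1,	/* adapter replaces this with the device's length byte */
		.buf	= buf,
	};
	int ret;

	ret = i2c_transfer(client->adapter, &msg, 1);
	if (ret < 0)
		return ret;
	return ret == 1 ? msg.len : -EIO;
}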
On Tue, Sep 15, 2020 at 02:55:48PM +0300, Jarkko Nikula wrote:
> Hi
>
> On 9/14/20 3:15 AM, Sultan Alsawaf wrote:
> > From: Sultan Alsawaf
> >
> > This is a squash of the following:
> >
> > i2c: designware: Fix transfer failures for invalid SMBus block
On Mon, Sep 07, 2020 at 05:20:31PM +0100, Will Deacon wrote:
> On Fri, Aug 07, 2020 at 12:16:35PM -0700, Sultan Alsawaf wrote:
> > From: Sultan Alsawaf
> >
> > There's no reason to hold an RCU read lock the entire time while
> > optimistically spinning for a
From: Sultan Alsawaf
This is a squash of the following:
i2c: designware: Fix transfer failures for invalid SMBus block reads
SMBus block reads can be broken because the read function will just skip
over bytes it doesn't like until reaching a byte that conforms to the
length restriction
On Tue, Sep 08, 2020 at 08:01:12PM +0200, Borislav Petkov wrote:
> On Tue, Sep 08, 2020 at 07:42:12PM +0200, Jason A. Donenfeld wrote:
> > Are you prepared to track down all the MSRs that might maybe do
> > something naughty?
>
> I'm not prepared - that's why this MSR filtering. To block *all* dir
From: Sultan Alsawaf
There's no reason to hold an RCU read lock the entire time while
optimistically spinning for a mutex lock. This can needlessly lengthen
RCU grace periods and slow down synchronize_rcu() when it doesn't brute
force the RCU grace period via rcupdate.rcu_expedited=1.
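The assumed shape of the change, sketched: take the RCU read lock around each
dereference of the owner rather than across the whole spin loop, so a long spin
can no longer hold up a grace period (the rwsem patch below follows the same
pattern):

#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

static bool spin_on_owner_sketch(struct mutex *lock)
{
	for (;;) {
		struct task_struct *owner;
		bool on_cpu;

		rcu_read_lock();	/* pin the owner task only while peeking */
		owner = __mutex_owner(lock);	/* internal helper; illustrative */
		on_cpu = owner && READ_ONCE(owner->on_cpu);
		rcu_read_unlock();	/* grace periods may advance between peeks */

		if (!on_cpu)
			break;		/* owner released the lock or went to sleep */
		cpu_relax();
	}
	return !__mutex_owner(lock);	/* worth attempting the acquire? */
}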
From: Sultan Alsawaf
There's no reason to hold an RCU read lock the entire time while
optimistically spinning for a rwsem. This can needlessly lengthen RCU
grace periods and slow down synchronize_rcu() when it doesn't brute
force the RCU grace period via rcupdate.rcu_expedited=1.
Signed-off-by: Sultan Alsawaf
On Wed, Jul 01, 2020 at 11:04:01AM +0300, Jarkko Nikula wrote:
> On 6/29/20 8:43 PM, Sultan Alsawaf wrote:
> > Hmm, for some reason in 5.8 I get the same problem, but 5.7 is fine. Could you
> > try this on 5.7 and see if it works?
> >
> > In the meantim
On Wed, Jun 17, 2020 at 02:17:19PM +0300, Jarkko Nikula wrote:
> On 6/16/20 6:49 PM, Sultan Alsawaf wrote:
> > From: Sultan Alsawaf
> >
> > We have no way of knowing how large an incoming payload is going to be,
> > so the only strategy available up until now has been
On Tue, Jun 16, 2020 at 09:02:54PM +0300, Andi Shyti wrote:
> Hi Sultan,
>
> > > > > so the only strategy available up until now has been to always
> > > > > retrieve
> > > > > the maximum possible report length over i2c, which can be quite
> > > > > inefficient. For devices that send reports in
On Tue, Jun 16, 2020 at 08:18:54PM +0300, Andi Shyti wrote:
> Hi Andy,
>
> > > so the only strategy available up until now has been to always retrieve
> > > the maximum possible report length over i2c, which can be quite
> > > inefficient. For devices that send reports in block read format, the i2
From: Sultan Alsawaf
We have no way of knowing how large an incoming payload is going to be,
so the only strategy available up until now has been to always retrieve
the maximum possible report length over i2c, which can be quite
inefficient. For devices that send reports in block read format
From: Sultan Alsawaf
SMBus block reads can be broken because the read function will just skip
over bytes it doesn't like until reaching a byte that conforms to the
length restrictions for block reads. This is problematic when it isn't
known if the incoming payload is indeed a confor
On Mon, Jun 15, 2020 at 07:07:42PM +0300, Andy Shevchenko wrote:
> On Mon, Jun 15, 2020 at 7:06 PM Sultan Alsawaf wrote:
> > On Mon, Jun 15, 2020 at 12:40:19PM +0300, Andy Shevchenko wrote:
> > > On Sun, Jun 14, 2020 at 02:02:54PM -0700, Sultan Alsawaf wrote:
> >
On Mon, Jun 15, 2020 at 12:40:19PM +0300, Andy Shevchenko wrote:
> On Sun, Jun 14, 2020 at 02:02:54PM -0700, Sultan Alsawaf wrote:
> > From: Sultan Alsawaf
> >
> > SMBus block reads can be broken because the read function will just skip
> > over bytes it doesn't
From: Sultan Alsawaf
Hi,
I noticed on my Dell Precision 15 5540 with an i9-9880H that simply putting my
finger on the touchpad would increase my system's power consumption by 4W, which
is quite considerable. Resting my finger on the touchpad would generate roughly
4000 i2c irqs per second
From: Sultan Alsawaf
We have no way of knowing how large an incoming payload is going to be,
so the only strategy available up until now has been to always retrieve
the maximum possible report length over i2c, which can be quite
inefficient. For devices that send reports in block read format
From: Sultan Alsawaf
SMBus block reads can be broken because the read function will just skip
over bytes it doesn't like until reaching a byte that conforms to the
length restrictions for block reads. This is problematic when it isn't
known if the incoming payload is indeed a confor
From: Sultan Alsawaf
This change was originally done in 2005 without any justification in
commit bda98685b855 ("[PATCH] x86: inline spin_unlock if
!CONFIG_DEBUG_SPINLOCK and !CONFIG_PREEMPT"). Perhaps the reasoning at
the time was that PREEMPT was still considered unstable and ne
From: Sultan Alsawaf
This change was originally done in 2005 without any justification in
commit bda98685b855 ("[PATCH] x86: inline spin_unlock if
!CONFIG_DEBUG_SPINLOCK and !CONFIG_PREEMPT"). Perhaps the reasoning at
the time was that PREEMPT was still considered unstable and ne
From: Sultan Alsawaf
In commit 5a7d202b1574, a logical AND was erroneously changed to an OR,
causing WaIncreaseLatencyIPCEnabled to be enabled unconditionally for
kabylake and coffeelake, even when IPC is disabled. Fix the logic so
that WaIncreaseLatencyIPCEnabled is only used when IPC is enabled.
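The shape of the error and of the fix, reconstructed from the commit message
(identifiers approximate; see the watermark code in intel_pm.c for the real
condition):

/* before (after 5a7d202b1574, wrong): fires when *either* side holds */
if (IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv) || dev_priv->ipc_enabled)
	latency += 4;	/* WaIncreaseLatencyIPCEnabled */

/* after: platform needs the workaround AND IPC is actually enabled */
if ((IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv)) && dev_priv->ipc_enabled)
	latency += 4;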
On Mon, Sep 16, 2019 at 11:57:32PM +0200, John Kacur wrote:
> Signed-off-by: John Kacur
> But please in the future
> 1. Don't cc lkml on this
> 2. Include the maintainers in your patch
Hi,
Thanks for the sign-off. I was following the instructions listed here:
https://wiki.linuxfoundation.org/rea
From: Sultan Alsawaf
Architecture-specific uaccess.h headers can have dependencies on
linux/uaccess.h (e.g., VERIFY_WRITE), so asm/uaccess.h cannot be included
directly. Since linux/uaccess.h includes asm/uaccess.h, just do that instead.
This fixes compile errors with certain kernels and architectures
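The fix itself is the usual one-line include swap:

-#include <asm/uaccess.h>
+#include <linux/uaccess.h>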
On Tue, Jul 23, 2019 at 10:56:05AM -0600, Andreas Dilger wrote:
> Do you have any kind of performance metrics that show this is an actual
> improvement in performance? This would be either macro-level benchmarks
> (e.g. fio, but this seems unlikely to show any benefit), or micro-level
> measuremen
From: Sultan Alsawaf
In order to prevent redundant entry creation by racing against itself,
mb_cache_entry_create scans through a hash-list of all current entries
in order to see if another allocation for the requested new entry has
been made. Furthermore, it allocates memory for a new entry
From: Sultan Alsawaf
Allocating pages with __get_free_page is slower than going through the
slab allocator to grab free pages out from a pool.
These are the results from running the code at the bottom of this
message:
[1.278602] speedtest: __get_free_page: 9 us
[1.278606] speedtest
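The test code is truncated in this archive view; a minimal reconstruction of
that kind of timing harness (assumed shape, kernel context) looks like:

#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/ktime.h>
#include <linux/printk.h>
#include <linux/slab.h>

static void __init speedtest(void)
{
	ktime_t start;
	void *p;
	int i;

	start = ktime_get();
	for (i = 0; i < 1000; i++) {
		p = (void *)__get_free_page(GFP_KERNEL);
		free_page((unsigned long)p);
	}
	pr_info("speedtest: __get_free_page: %lld us\n",
		ktime_us_delta(ktime_get(), start));

	start = ktime_get();
	for (i = 0; i < 1000; i++) {
		p = kmalloc(PAGE_SIZE, GFP_KERNEL);	/* slab-backed pool */
		kfree(p);
	}
	pr_info("speedtest: kmalloc: %lld us\n",
		ktime_us_delta(ktime_get(), start));
}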
On Fri, Jul 12, 2019 at 09:06:40AM +0200, Thomas Gleixner wrote:
> On Fri, 12 Jul 2019, Ming Lei wrote:
> > vmalloc() may sleep, so it is impossible to be called in atomic context.
>
> Allocations from atomic context should be avoided wherever possible and you
> really have to have a very convinci
From: Sultan Alsawaf
Typically, drivers allocate sg lists of sizes up to a few MiB in size.
The current algorithm deals with large sg lists by splitting them into
several smaller arrays and chaining them together. But if the sg list
allocation is large, and we know the size ahead of time, sg
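A sketch of the idea under discussion (not the patch itself): when the entry
count is known up front, a single flat array replaces the chained chunks, with
kvmalloc_array() falling back to vmalloc for large sizes; free with kvfree():

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static struct scatterlist *sg_alloc_flat(unsigned int nents, gfp_t gfp)
{
	struct scatterlist *sgl;

	sgl = kvmalloc_array(nents, sizeof(*sgl), gfp);
	if (sgl)
		sg_init_table(sgl, nents);	/* marks the last entry; no chaining */
	return sgl;
}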
On Wed, May 15, 2019 at 02:32:48PM -0400, Steven Rostedt wrote:
> I'm confused why you did this?
Oleg said that debug_locks_off() could've been called and thus prevented
lockdep complaints about simple_lmk from appearing. To eliminate any possibility
of that, I disabled debug_locks_off().
Oleg al
On Tue, May 14, 2019 at 12:44:53PM -0400, Steven Rostedt wrote:
> OK, this has gotten my attention.
>
> This thread is quite long, do you have a git repo I can look at, and
> also where is the first task_lock() taken before the
> find_lock_task_mm()?
>
> -- Steve
Hi Steve,
This is the git repo
On Fri, May 10, 2019 at 05:10:25PM +0200, Oleg Nesterov wrote:
> I am starting to think I am ;)
>
> If you have task1 != task2 this code
>
> task_lock(task1);
> task_lock(task2);
>
> should trigger print_deadlock_bug(), task1->alloc_lock and task2->alloc_lock
> are the "same" lock
On Thu, May 09, 2019 at 05:56:46PM +0200, Oleg Nesterov wrote:
> Impossible ;) I bet lockdep should report the deadlock as soon as find_victims()
> calls find_lock_task_mm() when you already have a locked victim.
I hope you're not a betting man ;)
With the following configured:
CONFIG_DEBUG_RT
On Tue, May 07, 2019 at 12:58:27PM +0200, Christian Brauner wrote:
> This is work that is ongoing and requires kernel changes to make it
> feasible. One of the things that I have been working on for quite a
> while is the whole file descriptor for processes thing that is important
> for LMKD (Even
On Tue, May 07, 2019 at 09:28:47AM -0700, Suren Baghdasaryan wrote:
> Hi Sultan,
> Looks like you are posting this patch for devices that do not use
> userspace LMKD solution due to them using older kernels or due to
> their vendors sticking to in-kernel solution. If so, I see couple
> logistical issues
mm code is pretty tricky :)
> On 05/06, Sultan Alsawaf wrote:
> >
> > +static unsigned long find_victims(struct victim_info *varr, int *vindex,
> > + int vmaxlen, int min_adj, int max_adj)
> > +{
> > + unsigned long pages_found =
On Tue, May 07, 2019 at 09:43:34AM +0200, Greg Kroah-Hartman wrote:
> Given that any "new" android device that gets shipped "soon" should be
> using 4.9.y or newer, is this a real issue?
It's certainly a real issue for those who can't buy brand new Android devices
without software bugs every six months.
On Tue, May 07, 2019 at 09:04:30AM +0200, Greg Kroah-Hartman wrote:
> Um, why can't "all" Android devices take the same patches that the Pixel
> phones are using today? They should all be in the public android-common
> kernel repositories that all Android devices should be syncing with on a
> weekly basis
reclaim and waits
until all of its victims' memory is freed before proceeding to kill more
processes.
Signed-off-by: Sultan Alsawaf
---
Hello everyone,
I've addressed some of the concerns that were brought up with the first version
of the Simple LMK patch. I understand that a kernel-based
On Mon, Apr 01, 2019 at 10:43:13PM +0200, Rasmus Villemoes wrote:
> Consider your patch replacing !strcmp(buf, "123") by !memcmp(buf, "123",
> 4). buf is known to point to a nul-terminated string. But it may point
> at, say, the second-last byte in a page, with the last byte in that page
> being a
On Mon, Mar 25, 2019 at 10:24:00PM +0100, Rasmus Villemoes wrote:
> What I'm worried about is your patch changing every single strcmp(,
> "literal") into a memcmp, with absolutely no way of knowing or checking
> anything about the other buffer. And actually, it doesn't have to be a
> BE arch with a
On Sun, Mar 24, 2019 at 10:17:49PM +0100, Rasmus Villemoes wrote:
> gcc already knows the semantics of these functions and can optimize
> accordingly. E.g. for strcpy() of a literal to a buffer, gcc readily
> compiles
The example you gave appears to get optimized accordingly, but there are
numerous
On Sat, Mar 23, 2019 at 08:31:53PM -0700, Nathan Chancellor wrote:
> Explicitly cc'ing some folks who have touched include/linux/string.h in
> the past and might want to take a look at this.
>
> Nathan
Thanks. One last revision with some nitpicks fixed is attached, though I doubt
it'll influence
I messed up the return value for strcat in the first patch. Here's a fixed
version, ready for some scathing reviews.
From: Sultan Alsawaf
When strcpy, strcat, and strcmp are used with a literal string, they can
be optimized to memcpy or memcmp calls. These alternatives are faster
since kn
From: Sultan Alsawaf
When strcpy, strcat, and strcmp are used with a literal string, they can
be optimized to memcpy or memcmp calls. These alternatives are faster
since knowing the length of a string argument beforehand allows
traversal through the string word at a time without being concerned
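The transformation being proposed, sketched (macro name hypothetical). This is
also where the objection quoted further up bites: a fixed-size memcmp() can
read past the nul of a shorter runtime string, e.g. off the end of a page:

#include <string.h>

/* Compare against a literal by its known size instead of scanning for nul. */
#define strcmp_lit(s, lit)	memcmp((s), (lit), sizeof(lit))

/* Hazard raised in review: if buf holds "ab" ending at a page boundary,
 * strcmp_lit(buf, "abc") reads sizeof("abc") == 4 bytes and may fault,
 * whereas strcmp() stops at buf's nul terminator. */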
On Thu, Mar 14, 2019 at 11:16:41PM -0400, Steven Rostedt wrote:
> How would you implement such a method in userspace? kill() doesn't take
> any parameters but the pid of the process you want to send a signal to,
> and the signal to send. This would require a new system call, and be
> quite a bit of
On Thu, Mar 14, 2019 at 10:54:48PM -0400, Joel Fernandes wrote:
> I'm not sure if that makes much semantic sense for how the signal handling is
> supposed to work. Imagine a parent sends SIGKILL to its child, and then does
> a wait(2). Because the SIGKILL blocks in your idea, then the wait cannot
>
On Thu, Mar 14, 2019 at 10:47:17AM -0700, Joel Fernandes wrote:
> About the 100ms latency, I wonder whether it is that high because of
> the way Android's lmkd is observing that a process has died. There is
> a gap between when a process memory is freed and when it disappears
> from the process-tab
On Tue, Mar 12, 2019 at 10:17:43AM -0700, Tim Murray wrote:
> Knowing whether a SIGKILL'd process has finished reclaiming is as far
> as I know not possible without something like procfds. That's where
> the 100ms timeout in lmkd comes in. lowmemorykiller and lmkd both
> attempt to wait up to 100ms
On Tue, Mar 12, 2019 at 09:05:32AM +0100, Michal Hocko wrote:
> The only way to control the OOM behavior pro-actively is to throttle
> allocation speed. We have memcg high limit for that purpose. Along with
> PSI, I can imagine a reasonably working user space early oom
> notifications and reasonabl
On Mon, Mar 11, 2019 at 03:15:35PM -0700, Suren Baghdasaryan wrote:
> This what LMKD currently is - a userspace RT process.
> My point was that this page allocation queue that you implemented
> can't be implemented in userspace, at least not without extensive
> communication with kernel.
Oh, that'
On Mon, Mar 11, 2019 at 05:11:25PM -0400, Joel Fernandes wrote:
> But the point is that a transient temporary memory spike should not be a
> signal to kill _any_ process. The reaction to kill shouldn't be so
> spontaneous that unwanted tasks are killed because the system went into
> panic mode. It
On Mon, Mar 11, 2019 at 01:10:36PM -0700, Suren Baghdasaryan wrote:
> The idea seems interesting although I need to think about this a bit
> more. Killing processes based on failed page allocation might backfire
> during transient spikes in memory usage.
This issue could be alleviated if tasks cou
tion bits are helpful insights too.
I'll take a look at PSI which Joel mentioned as well.
Thanks,
Sultan Alsawaf
On Sun, Mar 10, 2019 at 10:03:35PM +0100, Greg Kroah-Hartman wrote:
> On Sun, Mar 10, 2019 at 01:34:03PM -0700, Sultan Alsawaf wrote:
> > From: Sultan Alsawaf
> >
> > This is a complete low memory killer solution for Android that is small
> > and simple. It kills t
From: Sultan Alsawaf
This is a complete low memory killer solution for Android that is small
and simple. It kills the largest, least-important processes it can find
whenever a page allocation has completely failed (right after direct
reclaim). Processes are killed according to the priorities
converting add_interrupt_randomness()'s spinlocks to
use the irqsave primitive.
Signed-off-by: Sultan Alsawaf
---
drivers/char/random.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 38c6d1af6..1365017a7 100644
--- a/drivers/char/random.c
container_of simply does pointer arithmetic; it's not going to spit out NULL, so
this BUG_ON is unneeded.
Signed-off-by: Sultan Alsawaf
---
drivers/char/random.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 38c6
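To illustrate why the check is dead code (generic example; the random.c
identifiers are truncated above):

#include <linux/kernel.h>

struct wrapper {
	int	header;
	int	member;
};

static struct wrapper *wrapper_of(int *member_ptr)
{
	/* Pure pointer arithmetic: member_ptr minus offsetof(struct wrapper,
	 * member). A non-NULL member pointer cannot produce a NULL result,
	 * so the BUG_ON below can never fire; the patch drops its random.c
	 * equivalent. */
	struct wrapper *w = container_of(member_ptr, struct wrapper, member);

	/* BUG_ON(!w); -- provably dead */
	return w;
}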
This is still an issue in the latest kernel version.
Sultan
On Mon, Apr 30, 2018 at 8:12 AM Sultan Alsawaf wrote:
>
> From a872cf4f0bb57a7bb3c95ea557082fb7733344f8 Mon Sep 17 00:00:00 2001
> From: Sultan Alsawaf
> Date: Sun, 29 Apr 2018 20:04:35 -0700
> Subject: [PATCH] random
On Tue, May 01, 2018 at 08:56:04PM -0400, Theodore Y. Ts'o wrote:
> On Tue, May 01, 2018 at 05:43:17PM -0700, Sultan Alsawaf wrote:
> >
> > I've attached what I think is a reasonable stopgap solution until this is
> > actually fixed. If you're willing
>From 5be2efdde744d3c55db3df81c0493fc67dc35620 Mon Sep 17 00:00:00 2001
From: Sultan Alsawaf
Date: Tue, 1 May 2018 17:36:17 -0700
Subject: [PATCH] random: use urandom instead of random for now and speed up
crng init
With the fixes for CVE-2018-1108, /dev/random now requires user-provided
entropy
>From a872cf4f0bb57a7bb3c95ea557082fb7733344f8 Mon Sep 17 00:00:00 2001
From: Sultan Alsawaf
Date: Sun, 29 Apr 2018 20:04:35 -0700
Subject: [PATCH] random: remove unused argument from
add_interrupt_randomness()
The irq_flags parameter is not used. Remove it.
Signed-off-by: Sultan Alsawaf
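The change is mechanical; in essence (signature reconstructed from the
description, with the prototype in include/linux/random.h and the irq handler
call site updated to match):

-void add_interrupt_randomness(int irq, int irq_flags)
+void add_interrupt_randomness(int irq)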
about abusing high-resolution timers to get entropy? Since hrtimers can't
make guarantees down to the nanosecond, there's always a skew between the
requested expiry time and the actual expiry time.
Please see the attached patch and let me know just how horrible it is.
Sultan
>From b0d21c38558c66
>From cdc2a03f93fdec88ad040a212605e20ab97c3e19 Mon Sep 17 00:00:00 2001
From: Sultan Alsawaf
Date: Sun, 29 Apr 2018 20:04:35 -0700
Subject: [PATCH] random: remove unused argument from add_interrupt_randomness()
The irq_flags parameter is not used. Remove it.
Signed-off-by: Sultan Alsawaf
in place. But if you _do_
> want to cheat like this, you could instead just modify the condition
> to only relax the rate limiting when !crng_init().
Good idea. Attached a new patch that's less intrusive. It still fixes my issue,
of course.
Sultan
>From 6870b0383b88438d842599aa8608a26
software to simply not require
> cryptographic grade entropy before the user has logged in. Because
> it's better than the alternatives.
>
> - Ted
>
The attached patch fixes my crng init woes. With it, crng init completes 0.86
seconds into boot.
forever... which is not the impression I get from the dmesg output
> above. Boot clearly proceeds... somehow. So now I'm confused.
Hmm... Well, the attached patch (which redirects /dev/random to /dev/urandom)
didn't fix my boot issue, so I'm at a loss as well.
Sultan
>From 15f54
On Sun, Apr 29, 2018 at 08:41:01PM +0200, Pavel Machek wrote:
> Umm. No. https://www.youtube.com/watch?v=xneBjc8z0DE
Okay, but /dev/urandom isn't a solution to this problem because it isn't usable
until crng init is complete, so it suffers from the same init lag as
/dev/random.
Sultan
I'd also like to add that my high-spec x86 laptop exhibits the same issue as
my Edgar Chromebook.
Here's my dmesg: https://hastebin.com/dofejolobi.go
The most interesting line:
[ 90.811633] random: crng init done
I waited 90 seconds after boot to provide entropy myself, at which point crng
init completed.
On Sun, Apr 29, 2018 at 04:32:05PM +0200, Pavel Machek wrote:
> Hi!
>
> > This is why ultimately, we do need to attack this problem from both
> > ends, which means teaching userspace programs to only request
> > cryptographic-grade randomness when it is really needed --- and most
> > of the time,
> On Thu, Apr 26, 2018 at 10:20:44PM -0700, Sultan Alsawaf wrote:
>> I noted at least 20,000 mmc interrupts before I intervened in the boot
>> process to provide entropy
>> myself. That's just for mmc, so I'm sure there were even more interrupts
>> elsewhere.
> The CRNG changes were needed because were erroneously saying that the
> entropy pool was securely initialized before it really was. Saying
> that CRNG should be able to init on its own is much like saying, "Ted
> should be able to fly wherever he wants in his own personal Gulfstream
> V." It wo
> Hmm, it looks like the multiuser startup is getting blocked on snapd:
>
> 29.060s snapd.service
>
> graphical.target @1min 32.145s
> └─multi-user.target @1min 32.145s
> └─hddtemp.service @6.512s +28ms
> └─network-online.target @6.508s
> └─NetworkManager-wait-online.service @2
> Hmm, can you let the boot hang for a while? It should continue after
> a few minutes if you wait long enough, but wait a minute or two, then
> give it entropy so the boot can continue. Then can you use
> "systemd-analyze blame" or "systemd-analyize critical-chain" and we
> can see what process
> Thanks for the report!
>
> I assume since you're upgrading your own kernel, you must not be
> running Chrome OS on your Acer CB3-431 Chromebook (Edgar). Are you
> running Chromium --- or some Linux distribution on it?
>
> Thanks,
>
> - Ted
Correct, I'm running
I noticed "systems without sufficient boot randomness" and would like to add to
this.
With the changes to /dev/random going from 4.16.3 to 4.16.4, my low-spec
Chromebook does not reach the login screen upon boot (it stays stuck on a black
screen) until I provide a source of entropy to the system.