On Mon, Sep 24, 2012 at 03:02:35PM +, Luck, Tony wrote:
> > And my plan was to get rid of the fact that backends touch pstore->buf
> > directly. Backends would always receive an anonymous 'buf' pointer (we
> > already have a write_buf callback that does exactly this), and thus it
> > [...]
>
> It feels like we are just shuffling the lock problem from one place
> to another. In the [...]
On Thu, Sep 20, 2012 at 11:48:32PM +, Luck, Tony wrote:
> > True, but the lock is used to protect pstore->buf, I doubt that
> > any backend will actually want to grab it, no?
>
> The lock is doing double duty to protect the buffer, and the back-end driver.
>
> But even if we split it into two (one for the buffer, taken by pstore, and one
> internal to the backend [...]
On Thu, Sep 20, 2012 at 11:09:36PM +, Luck, Tony wrote:
> > Mm... why break?
>
> We don't know what the back-end driver will do if we allow another call
> while a previous one is still in progress. It might end up corrupting the
> backing non-volatile storage and losing some previously saved records.
>
> Existing drivers (ERST and EFI) are dependent on f/w ... so [...]
On Tue, Sep 18, 2012 at 01:43:44AM +0800, Chuansheng Liu wrote:
> Like the 8250 driver, when pstore is registered as a console,
> to avoid recursive spinlocks when a panic is in progress, change
> spin_lock_irqsave to spin_trylock_irqsave when oops_in_progress
> is true.
>
> Signed-off-by: liu chuansheng