On Tue, Oct 11, 2016 at 12:56 PM, Jani Nikula wrote:
> Fair enough. Please copy-paste some of the elaboration to the commit
> message. Ack from me, but it wouldn't hurt to get an ack from Daniel as
> well.

Would be nice if we can trade in some of the #ifdefry with a

On Tue, 11 Oct 2016, Chris Wilson wrote:
We currently capture the GPU state after we detect a hang. This is vital
for us to both triage and debug hangs in the wild (post-mortem
debugging). However, it comes at the cost of running some potentially
dangerous code (since it has to make very few assumptions about the state
of the driver) that