Moving away from the tactical for a minute, I think being able to track
these over time would be useful.  I can think of a few high-level
approaches and was wondering what others think.

1.  Use tags appropriately in JIRA and try to generate a report from that.
2.  Create a new Confluence page to log each time these occur (and the
root cause).
3.  A separate spreadsheet someplace (e.g. Google Sheet).

Thoughts?

-Micah


On Fri, Mar 1, 2019 at 8:55 AM Francois Saint-Jacques <
fsaintjacq...@gmail.com> wrote:

> Also just created https://issues.apache.org/jira/browse/ARROW-4728
>
> On Thu, Feb 28, 2019 at 3:53 AM Ravindra Pindikura <ravin...@dremio.com>
> wrote:
>
> >
> >
> > > On Feb 28, 2019, at 2:10 PM, Antoine Pitrou <anto...@python.org>
> wrote:
> > >
> > >
> > > On 28/02/2019 at 07:53, Ravindra Pindikura wrote:
> > >>
> > >>
> > >>> On Feb 27, 2019, at 1:48 AM, Antoine Pitrou <solip...@pitrou.net>
> > wrote:
> > >>>
> > >>> On Tue, 26 Feb 2019 13:39:08 -0600
> > >>> Wes McKinney <wesmck...@gmail.com> wrote:
> > >>>> hi folks,
> > >>>>
> > >>>> We haven't had a green build on master for about 5 days now (the
> last
> > >>>> one was February 21). Has anyone else been paying attention to this?
> > >>>> It seems we should start cataloging which tests and build
> environments
> > >>>> are the most flaky and see if there's anything we can do to reduce
> the
> > >>>> flakiness. Since we are dependent on anaconda.org for build
> toolchain
> > >>>> packages, it's hard to control for the 500 timeouts that occur
> there,
> > >>>> but I'm seeing other kinds of routine flakiness.
> > >>>
> > >>> Isn't it https://issues.apache.org/jira/browse/ARROW-4684 ?
> > >>
> > >> ARROW-4684 seems to be failing consistently in travis CI.
> > >>
> > >> Can I merge a change if this is the only CI failure ?
> > >
> > > Yes, you can.
> >
> > Thanks !
> >
> > >
> > > Regards
> > >
> > > Antoine.
> >
> >
>
