[jira] [Created] (ARROW-8887) [Java] Buffer size for complex vectors increases rapidly in case of clear/write loop

2020-05-21 Thread Projjal Chanda (Jira)
Projjal Chanda created ARROW-8887:
-

 Summary: [Java] Buffer size for complex vectors increases rapidly 
in case of clear/write loop
 Key: ARROW-8887
 URL: https://issues.apache.org/jira/browse/ARROW-8887
 Project: Apache Arrow
  Issue Type: Task
  Components: Java
Reporter: Projjal Chanda
Assignee: Projjal Chanda


Similar to https://issues.apache.org/jira/browse/ARROW-5232
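
Not the reported reproducer, but a minimal Java sketch of the clear/write loop pattern named in the summary (the ListVector type, loop bounds, and written values are illustrative assumptions):

{code:java}
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.complex.ListVector;
import org.apache.arrow.vector.complex.impl.UnionListWriter;

public class ClearWriteLoop {
  public static void main(String[] args) throws Exception {
    try (BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
         ListVector vector = ListVector.empty("values", allocator)) {
      for (int iter = 0; iter < 10; iter++) {
        vector.clear();        // release buffers before rewriting
        vector.allocateNew();
        UnionListWriter writer = vector.getWriter();
        for (int i = 0; i < 1024; i++) {
          writer.setPosition(i);
          writer.startList();
          writer.writeInt(i);
          writer.endList();
        }
        vector.setValueCount(1024);
        // Watch how the allocated buffer size evolves across iterations.
        System.out.println("iteration " + iter + ": " + vector.getBufferSize() + " bytes");
      }
    }
  }
}
{code}

If the reported size keeps growing from one iteration to the next even though the same amount of data is written each time, that matches the behaviour described in the summary.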



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [IPC] Stream representation of nested dictionaries

2020-05-21 Thread Wes McKinney
It is true that refactoring the IPC reader code to defer dictionary
reassembly given out-of-order dictionaries would be some work. The
worst-case scenario in the short term is that this part of the C++
implementation remains unimplemented, and demonstrating that it works
when dictionaries appear in depth-first/bottom-up order may be good
enough for the 1.0 release.

On Thu, May 21, 2020 at 1:12 PM Antoine Pitrou  wrote:
>
>
> Le 21/05/2020 à 19:46, Micah Kornfield a écrit :
> > Hi Antoine,
> > Can you expand on why that restriction is necessary/makes things easier?
> > It seems a little strange: since each dictionary batch has its ID attached,
> > I wouldn't think it would be hard for the reader to track their arrival in
> > any order.
>
> The problem is, you can't really instantiate the outer dictionary (as a
> C++ Array) without having the inner dictionary already.  We could work
> around that by keeping the DictionaryBatch around and decoding it
> lazily, but I don't know the cost of that refactor (nor the runtime
> consequences, e.g. react later to an error in the stream).
>
> Regards
>
> Antoine.


Re: [VOTE] Release Apache Arrow 0.17.1 - RC1

2020-05-21 Thread Neal Richardson
Looks like someone at Homebrew picked up the release before I could submit
a PR. So that's done too.

Neal

On Tue, May 19, 2020 at 3:24 PM Neal Richardson 
wrote:

> R submission to CRAN is done and accepted. I'm waiting to do Homebrew
> until after the website update, given their pushback last time.
>
> Neal
>
> On Tue, May 19, 2020 at 5:25 AM Uwe L. Korn  wrote:
>
>> Current status:
>>
>> 1.  [done] rebase (not required for a patch release)
>> 2.  [done] upload source
>> 3.  [done] upload binaries
>> 4.  [done|in-pr] update website
>> 5.  [done] upload ruby gems
>> 6.  [ ] upload js packages
>> 8.  [done] upload C# packages
>> 9.  [ ] upload rust crates
>> 10. [done] update conda recipes (dropped ppc64le support though)
>> 11. [done] upload wheels to pypi
>> 12. [nealrichardson] update homebrew packages
>> 13. [done] update maven artifacts
>> 14. [done|in-pr] update msys2
>> 15. [nealrichardson] update R packages
>> 16. [done|in-pr] update docs
>>
>> On Tue, May 19, 2020, at 12:06 AM, Krisztián Szűcs wrote:
>> > Current status:
>> >
>> > 1.  [done] rebase (not required for a patch release)
>> > 2.  [done] upload source
>> > 3.  [done] upload binaries
>> > 4.  [done|in-pr] update website
>> > 5.  [done] upload ruby gems
>> > 6.  [ ] upload js packages
>> > 8.  [done] upload C# packages
>> > 9.  [ ] upload rust crates
>> > 10. [in-progress|in-pr] update conda recipes
>> > 11. [done] upload wheels to pypi
>> > 12. [nealrichardson] update homebrew packages
>> > 13. [done] update maven artifacts
>> > 14. [done|in-pr] update msys2
>> > 15. [nealrichardson] update R packages
>> > 16. [done|in-pr] update docs
>> >
>> > On Mon, May 18, 2020 at 11:33 PM Sutou Kouhei 
>> wrote:
>> > >
>> > > >> 14. [ ] update msys2
>> > > >
>> > > > I'll do this.
>> > >
>> > > Oh, sorry. Krisztián already did!
>> > >
>> > > In <20200519.062731.1037230979568376433@clear-code.com>
>> > >   "Re: [VOTE] Release Apache Arrow 0.17.1 - RC1" on Tue, 19 May 2020
>> 06:27:31 +0900 (JST),
>> > >   Sutou Kouhei  wrote:
>> > >
>> > > >> 14. [ ] update msys2
>> > > >
>> > > > I'll do this.
>> > > >
>> > > > In <
>> cahm19a4wsm3hksf0ubixonu4ru+951viuuavdnzky_tynx-...@mail.gmail.com>
>> > > >   "Re: [VOTE] Release Apache Arrow 0.17.1 - RC1" on Mon, 18 May
>> 2020 22:37:50 +0200,
>> > > >   Krisztián Szűcs  wrote:
>> > > >
>> > > >> 1.  [done] rebase (not required for a patch release)
>> > > >> 2.  [done] upload source
>> > > >> 3.  [done] upload binaries
>> > > >> 4.  [done] update website
>> > > >> 5.  [done] upload ruby gems
>> > > >> 6.  [ ] upload js packages
>> > > >> No javascript changes were applied to the patch release, for
>> > > >> consistency we might want to choose to upload a 0.17.1 release
>> though.
>> > > >> 8.  [done] upload C# packages
>> > > >> 9.  [ ] upload rust crates
>> > > >> @Andy Grove the patch release doesn't affect the rust
>> implementation.
>> > > >> We can update the crates despite that no changes were made, not
>> sure
>> > > >> what policy should we choose here (same as with JS)
>> > > >> 10. [ ] update conda recipes
>> > > >> @Uwe Korn seems like arrow-cpp-feedstock have not picked up the new
>> > > >> release once again
>> > > >> 11. [done] upload wheels to pypi
>> > > >> 12. [nealrichardson] update homebrew packages
>> > > >> 13. [done] update maven artifacts
>> > > >> 14. [ ] update msys2
>> > > >> 15. [nealrichardson] update R packages
>> > > >> 16. [in-progress] update docs
>> > > >>
>> > > >> On Mon, May 18, 2020 at 10:29 PM Krisztián Szűcs
>> > > >>  wrote:
>> > > >>>
>> > > >>> Current status:
>> > > >>>
>> > > >>> 1.  [done] rebase (not required for a patch release)
>> > > >>> 2.  [done] upload source
>> > > >>> 3.  [done] upload binaries
>> > > >>> 4.  [done] update website
>> > > >>> 5.  [ ] upload ruby gems
>> > > >>> 6.  [ ] upload js packages
>> > > >>> 8.  [ ] upload C# packages
>> > > >>> 9.  [ ] upload rust crates
>> > > >>> 10. [ ] update conda recipes
>> > > >>> 11. [done] upload wheels to pypi
>> > > >>> 12. [nealrichardson] update homebrew packages
>> > > >>> 13. [done] update maven artifacts
>> > > >>> 14. [ ] update msys2
>> > > >>> 15. [nealrichardson] update R packages
>> > > >>> 16. [in-progress] update docs
>> > > >>>
>> > > >>> On Mon, May 18, 2020 at 9:39 PM Neal Richardson
>> > > >>>  wrote:
>> > > >>> >
>> > > >>> > I'm working on the R stuff and can do Homebrew again.
>> > > >>> >
>> > > >>> > Neal
>> > > >>> >
>> > > >>> > On Mon, May 18, 2020 at 12:30 PM Krisztián Szűcs <
>> szucs.kriszt...@gmail.com>
>> > > >>> > wrote:
>> > > >>> >
>> > > >>> > > Any help with the post release tasks is welcome!
>> > > >>> > >
>> > > >>> > > Checklist:
>> > > >>> > > 1.  [done] rebase (not required for a patch release)
>> > > >>> > > 2.  [done] upload source
>> > > >>> > > 3.  [in-progress] upload binaries
>> > > >>> > > 4.  [done] update website
>> > > >>> > > 5.  [ ] upload ruby gems
>> > > >>> > > 6.  [ ] upload js packages
>> > > >>> > > 8.  [ ] upload C# packages
>> > > >>> > > 9.  [ 

[jira] [Created] (ARROW-8886) [C#] Decide and implement appropriate behaviour for Array builder resize to negative size

2020-05-21 Thread Adam Szmigin (Jira)
Adam Szmigin created ARROW-8886:
---

 Summary: [C#] Decide and implement appropriate behaviour for Array 
builder resize to negative size
 Key: ARROW-8886
 URL: https://issues.apache.org/jira/browse/ARROW-8886
 Project: Apache Arrow
  Issue Type: Improvement
  Components: C#
Affects Versions: 0.17.1
Reporter: Adam Szmigin


h1. Summary

Currently, the {{ArrowBuffer.Builder}} class accepts a negative value passed to 
the {{Resize()}} method and treats it as though the caller passed zero.  This was 
implemented deliberately, as there is an explicit unit test to verify the 
behaviour.

However, it is also unusual.  By way of comparison:

* The {{System.Array.Resize()}} method throws 
{{ArgumentOutOfRangeException}} if a negative value is passed: 
https://docs.microsoft.com/en-us/dotnet/api/system.array.resize?view=netcore-3.1
* The Arrow C++ implementation will refuse to accept a negative length: 
https://github.com/apache/arrow/blob/master/cpp/src/arrow/array/builder_base.h#L194

h1. Acceptance Criteria

* The behaviour when a negative length is passed to a {{Resize()}} method 
_must_ be agreed upon.
* Appropriate changes _must_ be made to the codebase in accordance with the 
outcome of the above agreement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARROW-8885) [R] Don't include everything everywhere

2020-05-21 Thread Neal Richardson (Jira)
Neal Richardson created ARROW-8885:
--

 Summary: [R] Don't include everything everywhere
 Key: ARROW-8885
 URL: https://issues.apache.org/jira/browse/ARROW-8885
 Project: Apache Arrow
  Issue Type: Improvement
  Components: R
Reporter: Neal Richardson
Assignee: Neal Richardson
 Fix For: 1.0.0


I noticed that we were jamming all of our arrow #includes in one header file in 
the R bindings and then including that everywhere. That seemed wasteful and was 
probably making compilation slower.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [IPC] Stream representation of nested dictionaries

2020-05-21 Thread Antoine Pitrou


Le 21/05/2020 à 19:46, Micah Kornfield a écrit :
> Hi Antoine,
> Can you expand on why that restriction is necessary/makes things easier?
> It seems a little strange: since each dictionary batch has its ID attached,
> I wouldn't think it would be hard for the reader to track their arrival in
> any order.

The problem is, you can't really instantiate the outer dictionary (as a
C++ Array) without having the inner dictionary already.  We could work
around that by keeping the DictionaryBatch around and decoding it
lazily, but I don't know the cost of that refactor (nor the runtime
consequences, e.g. react later to an error in the stream).

Regards

Antoine.


[jira] [Created] (ARROW-8884) [C++] Listing files with S3FileSystem is slow

2020-05-21 Thread Francois Saint-Jacques (Jira)
Francois Saint-Jacques created ARROW-8884:
-

 Summary: [C++] Listing files with S3FileSystem is slow
 Key: ARROW-8884
 URL: https://issues.apache.org/jira/browse/ARROW-8884
 Project: Apache Arrow
  Issue Type: Improvement
  Components: C++
Reporter: Francois Saint-Jacques


Listing files on S3 is slow due to the recursive nature of the algorithm.

The following change modifies the behavior of the S3Result to include all 
objects but no "grouping" (directories). This dramatically lowers the number of 
HTTP calls. 
{code:c++}
diff --git a/cpp/src/arrow/filesystem/s3fs.cc b/cpp/src/arrow/filesystem/s3fs.cc
index 70c87f46ec..98a40b17a2 100644
--- a/cpp/src/arrow/filesystem/s3fs.cc
+++ b/cpp/src/arrow/filesystem/s3fs.cc
@@ -986,7 +986,7 @@ class S3FileSystem::Impl {
 if (!prefix.empty()) {
   req.SetPrefix(ToAwsString(prefix) + kSep);
 }
-req.SetDelimiter(Aws::String() + kSep);
+// req.SetDelimiter(Aws::String() + kSep);
 req.SetMaxKeys(kListObjectsMaxKeys);
 
 while (true) {

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [IPC] Stream representation of nested dictionaries

2020-05-21 Thread Micah Kornfield
Hi Antoine,
Can you expand on why that restriction is necessary/makes things easier?
It seems a little strange: since each dictionary batch has its ID attached,
I wouldn't think it would be hard for the reader to track their arrival in
any order.

Thanks,
Micah





On Thu, May 21, 2020 at 10:25 AM Antoine Pitrou  wrote:

>
> Hello,
>
> I'm working on getting nested dictionaries to work in the C++ IPC
> implementation, together with integration tests.
>
> My current implementation introduces a restriction.  Let's say we have
> the following schema field:
>
> - type: List
> - dictionaryEncoding:
>   - id: 123
>   - indexType: Int32
> - children[0]:
>   - type: String
>   - dictionaryEncoding:
> - id: 456
> - indexType: Int32
>
> then my C++ patch requires that the dictionary batch for the inner
> dictionary (id 456) appears before the dictionary batch for the outer
> dictionary (id 123).  It seems like a reasonable restriction, but I'd
> like to check if that's ok.  Also, should we add it to the spec?
>
> Regards
>
> Antoine.
>


[jira] [Created] (ARROW-8883) [Rust] [Integration Testing] Disable unsupported tests

2020-05-21 Thread Neville Dipale (Jira)
Neville Dipale created ARROW-8883:
-

 Summary: [Rust] [Integration Testing] Disable unsupported tests
 Key: ARROW-8883
 URL: https://issues.apache.org/jira/browse/ARROW-8883
 Project: Apache Arrow
  Issue Type: Sub-task
  Components: Integration, Rust
Affects Versions: 0.17.0
Reporter: Neville Dipale


Some of the integration test failures can be avoided by disabling unsupported 
tests, such as large lists and nested types.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[IPC] Stream representation of nested dictionaries

2020-05-21 Thread Antoine Pitrou


Hello,

I'm working on getting nested dictionaries to work in the C++ IPC
implementation, together with integration tests.

My current implementation introduces a restriction.  Let's say we have
the following schema field:

- type: List
- dictionaryEncoding:
  - id: 123
  - indexType: Int32
- children[0]:
  - type: String
  - dictionaryEncoding:
- id: 456
- indexType: Int32

then my C++ patch requires that the dictionary batch for the inner
dictionary (id 456) appears before the dictionary batch for the outer
dictionary (id 123).  It seems like a reasonable restriction, but I'd
like to check if that's ok.  Also, should we add it to the spec?

Regards

Antoine.


[jira] [Created] (ARROW-8882) [C#] Add .editorconfig to C# code

2020-05-21 Thread Eric Erhardt (Jira)
Eric Erhardt created ARROW-8882:
---

 Summary: [C#] Add .editorconfig to C# code
 Key: ARROW-8882
 URL: https://issues.apache.org/jira/browse/ARROW-8882
 Project: Apache Arrow
  Issue Type: Bug
  Components: C#
Reporter: Eric Erhardt


This allows for a consistent code format throughout the C# code in the repo. 
That way, when a new contributor submits a change, their editor will 
automatically format the code to match the existing code base.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARROW-8881) [Rust] Add large list and binary support

2020-05-21 Thread Neville Dipale (Jira)
Neville Dipale created ARROW-8881:
-

 Summary: [Rust] Add large list and binary support
 Key: ARROW-8881
 URL: https://issues.apache.org/jira/browse/ARROW-8881
 Project: Apache Arrow
  Issue Type: Sub-task
  Components: Rust
Affects Versions: 0.17.0
Reporter: Neville Dipale


Rust does not yet support large lists and large binary arrays. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARROW-8880) Make R Binary Install Friendlier

2020-05-21 Thread Brian Schultheiss (Jira)
Brian Schultheiss created ARROW-8880:


 Summary: Make R Binary Install Friendlier
 Key: ARROW-8880
 URL: https://issues.apache.org/jira/browse/ARROW-8880
 Project: Apache Arrow
  Issue Type: Improvement
  Components: R
Affects Versions: 0.17.1
 Environment: Linux (Ubuntu)
Reporter: Brian Schultheiss


When the R package install tries to run a binary install, it looks for an exact 
match on the binary version, e.g. "0.17.1.zip" from 
[https://dl.bintray.com/ursalabs/arrow-r/libarrow/bin/ubuntu-18.04/].

The problem is that even though "0.17.1" is pushed to CRAN as an official 
release, there is a time period (like right now) where Bintray does not have an 
official binary build, just a date-stamped build:

 

arrow-0.17.0.20200516.zip
arrow-0.17.0.20200517.zip
arrow-0.17.0.20200518.zip
arrow-0.17.0.zip
arrow-0.17.1.20200517.zip
arrow-0.17.1.20200519.zip
arrow-0.17.1.20200520.zip

 

I'd like to suggest adding a new environment variable trigger that would allow 
scanning Bintray for a recent timestamped version if the specific release 
version is not present.

I'd like to suggest enhancing the Linux code:

[https://github.com/apache/arrow/blob/02f7be33d1c32d1636323e6fb90c63cb01bf44af/r/tools/linuxlibs.R#L39-L47]

with scanning functionality:

{code:r}
try_download <- function(from_url, to_file, scan_dates = FALSE) {
  try(
    suppressWarnings(
      download.file(from_url, to_file, quiet = quietly)
    ),
    silent = quietly
  )
  if (!file.exists(to_file)) {
    if (scan_dates) {
      scan_dates <- format(Sys.Date() - (0:10), "%Y%m%d")
      for (scan_date in scan_dates) {
        base_url <- tools::file_path_sans_ext(from_url)
        ext <- tools::file_ext(from_url)
        scan_url <- sprintf("%s.%s.%s", base_url, scan_date, ext)
        if (try_download(from_url = scan_url, to_file, scan_dates = FALSE)) {
          return(TRUE)
        }
      }
      return(FALSE)
    } else {
      return(FALSE)
    }
  } else {
    return(TRUE)
  }
}
{code}

And then augment the calling function:

[https://github.com/apache/arrow/blob/02f7be33d1c32d1636323e6fb90c63cb01bf44af/r/tools/linuxlibs.R#L55]

 

with:

{code:r}
binary_scan_ok <- !identical(tolower(Sys.getenv("LIBARROW_BINARY_SCAN", "false")), "false")
if (try_download(binary_url, libfile, scan_dates = binary_scan_ok)) {
{code}

 

This would allow automated builds to set the scan option, and then find and 
install the most recent daily build in lieu of an official binary build being 
in place.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Arrow Flight connector for SQL Server

2020-05-21 Thread Uwe L. Korn
Hello Brendan,

welcome to the community. In addition to the folks at Dremio, I wanted to make 
you aware of the Python ODBC client library 
https://github.com/blue-yonder/turbodbc which provides a high-performance 
ODBC<->Arrow adapter. It is especially popular with MS SQL Server users as the 
fastest known way to retrieve query results as DataFrames in Python from SQL 
Server, considerably faster than pandas.read_sql or using pyodbc directly.

While it is the fastest known way, I can tell that there is still a lot of CPU 
time spent in the ODBC driver "transforming" results so that they match the ODBC 
interface. At least here, one could possibly get a lot better performance when 
retrieving large columnar results from SQL Server by going through Arrow Flight 
as an interface instead of being constrained to the less efficient ODBC for this 
use case. Currently there is a performance difference of 50x between reading the 
data from a Parquet file and reading the same data from a table in SQL Server (a 
simple SELECT, no filtering or so). As the client CPU is at 100% for nearly the 
full retrieval time, using a more efficient protocol for data transfer could 
roughly translate into a 10x speedup.

Best,
Uwe

On Wed, May 20, 2020, at 12:16 AM, Brendan Niebruegge wrote:
> Hi everyone,
> 
> I wanted to informally introduce myself. My name is Brendan Niebruegge, 
> I'm a Software Engineer in our SQL Server extensibility team here at 
> Microsoft. I am leading an effort to explore how we could integrate 
> Arrow Flight with SQL Server. We think this could be a very interesting 
> integration that would both benefit SQL Server and the Arrow community. 
> We are very early in our thoughts so I thought it best to reach out 
> here and see if you had any thoughts or suggestions for me. What would 
> be the best way to socialize my thoughts to date? I am keen to learn 
> and deepen my knowledge of Arrow as well so please let me know how I 
> can be of help to the community.
> 
> Please feel free to reach out anytime (email:brn...@microsoft.com)
> 
> Thanks,
> Brendan Niebruegge
> 
>


[jira] [Created] (ARROW-8879) [FlightRPC][Java] FlightStream should unwrap ExecutionExceptions

2020-05-21 Thread David Li (Jira)
David Li created ARROW-8879:
---

 Summary: [FlightRPC][Java] FlightStream should unwrap 
ExecutionExceptions
 Key: ARROW-8879
 URL: https://issues.apache.org/jira/browse/ARROW-8879
 Project: Apache Arrow
  Issue Type: Improvement
  Components: FlightRPC, Java
Affects Versions: 0.17.1
Reporter: David Li
Assignee: David Li


Currently FlightStream bubbles up a lot of exceptions as RuntimeException or 
ExecutionException, or just wraps them with CallStatus.INTERNAL. For 
RuntimeException, we should always check whether it is a gRPC 
StatusRuntimeException and convert it to the equivalent Flight exception; for 
ExecutionException, we should check whether the _cause_ is a gRPC exception and 
convert it.

Example: on master, FlightStream#getDescriptor reports all errors as 
CallStatus.INTERNAL, but we should inspect ExecutionException#getCause instead.

This is needed so that errors get properly reported, e.g. if a service sends a 
PERMISSION_DENIED error, the client should get that and not a RuntimeException, 
ExecutionException, or INTERNAL error.
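
A rough sketch of the unwrapping described above, as a hedged illustration only (the helper name and the gRPC-to-Flight status mapping below are assumptions, not the actual FlightStream code):

{code:java}
import java.util.concurrent.ExecutionException;

import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import org.apache.arrow.flight.CallStatus;
import org.apache.arrow.flight.FlightRuntimeException;

public final class ExceptionUnwrapping {
  /** Hypothetical helper: convert the cause of an ExecutionException into a Flight exception. */
  public static FlightRuntimeException unwrap(ExecutionException e) {
    Throwable cause = e.getCause() != null ? e.getCause() : e;
    if (cause instanceof FlightRuntimeException) {
      return (FlightRuntimeException) cause;
    }
    if (cause instanceof StatusRuntimeException) {
      // Map the gRPC status to the equivalent Flight status instead of blanket INTERNAL.
      Status.Code code = ((StatusRuntimeException) cause).getStatus().getCode();
      switch (code) {
        case PERMISSION_DENIED:
          return CallStatus.UNAUTHORIZED.withCause(cause).toRuntimeException();
        case UNAVAILABLE:
          return CallStatus.UNAVAILABLE.withCause(cause).toRuntimeException();
        default:
          return CallStatus.INTERNAL.withCause(cause).toRuntimeException();
      }
    }
    return CallStatus.INTERNAL.withCause(cause)
        .withDescription(String.valueOf(cause.getMessage())).toRuntimeException();
  }
}
{code}

With something along these lines, a server-side PERMISSION_DENIED would surface to the client as the corresponding Flight error rather than as an ExecutionException or a generic INTERNAL status.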



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[NIGHTLY] Arrow Build Report for Job nightly-2020-05-21-0

2020-05-21 Thread Crossbow


Arrow Build Report for Job nightly-2020-05-21-0

All tasks: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0

Failed Tasks:
- conda-linux-gcc-py36:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-linux-gcc-py36
- conda-linux-gcc-py37:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-linux-gcc-py37
- conda-linux-gcc-py38:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-linux-gcc-py38
- conda-osx-clang-py36:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-osx-clang-py36
- conda-osx-clang-py37:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-osx-clang-py37
- conda-osx-clang-py38:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-osx-clang-py38
- conda-win-vs2015-py36:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-win-vs2015-py36
- conda-win-vs2015-py37:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-win-vs2015-py37
- conda-win-vs2015-py38:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-azure-conda-win-vs2015-py38
- homebrew-cpp:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-homebrew-cpp
- homebrew-r-autobrew:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-homebrew-r-autobrew
- test-conda-python-3.7-spark-master:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.7-spark-master
- test-conda-python-3.8-dask-master:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.8-dask-master

Succeeded Tasks:
- centos-6-amd64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-centos-6-amd64
- centos-7-aarch64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-centos-7-aarch64
- centos-7-amd64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-centos-7-amd64
- centos-8-aarch64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-centos-8-aarch64
- centos-8-amd64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-centos-8-amd64
- debian-buster-amd64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-debian-buster-amd64
- debian-buster-arm64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-debian-buster-arm64
- debian-stretch-amd64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-debian-stretch-amd64
- debian-stretch-arm64:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-debian-stretch-arm64
- gandiva-jar-osx:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-gandiva-jar-osx
- gandiva-jar-xenial:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-travis-gandiva-jar-xenial
- nuget:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-nuget
- test-conda-cpp-valgrind:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-cpp-valgrind
- test-conda-cpp:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-cpp
- test-conda-python-3.6-pandas-0.23:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.6-pandas-0.23
- test-conda-python-3.6:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.6
- test-conda-python-3.7-dask-latest:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.7-dask-latest
- test-conda-python-3.7-hdfs-2.9.2:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.7-hdfs-2.9.2
- test-conda-python-3.7-kartothek-latest:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.7-kartothek-latest
- test-conda-python-3.7-kartothek-master:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.7-kartothek-master
- test-conda-python-3.7-pandas-latest:
  URL: 
https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-05-21-0-github-test-conda-python-3.7-pandas-latest
-