Hi, I am a Vertica support engineer, and we have open support requests
concerning NULL values and SQL type conversion during DataFrame reads and
writes over JDBC when connecting to a Vertica database. The stack traces
point to issues with the generic JDBCDialect in Spark SQL.
I saw that other vendors
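For context on where the generic dialect comes in: Spark SQL selects a vendor dialect by JDBC URL prefix and falls back to a generic type mapping when no dialect matches, which is exactly where NULL handling and type conversion can go wrong for an unsupported database. Below is a minimal standalone sketch of that dispatch in plain Java; it is not the actual `org.apache.spark.sql.jdbc` API, and the Vertica URL prefix and type override shown are illustrative assumptions.

```java
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Simplified stand-in for Spark's JdbcDialect: each dialect claims URLs by
// prefix and may override the JDBC-to-Catalyst type mapping.
abstract class Dialect {
    abstract boolean canHandle(String url);

    // Empty means "no override; fall back to the generic mapping".
    Optional<String> getCatalystType(int jdbcType) {
        return Optional.empty();
    }
}

// Hypothetical Vertica dialect: maps Types.TIMESTAMP explicitly instead of
// relying on the generic fallback.
class VerticaDialect extends Dialect {
    @Override
    boolean canHandle(String url) {
        return url.startsWith("jdbc:vertica:");
    }

    @Override
    Optional<String> getCatalystType(int jdbcType) {
        if (jdbcType == Types.TIMESTAMP) {
            return Optional.of("TimestampType");
        }
        return Optional.empty();
    }
}

public class DialectDemo {
    static final List<Dialect> DIALECTS = new ArrayList<>();

    static {
        DIALECTS.add(new VerticaDialect());
    }

    // First registered dialect that claims the URL wins; otherwise a generic
    // dialect with no type overrides is used.
    static Dialect dialectFor(String url) {
        return DIALECTS.stream()
                .filter(d -> d.canHandle(url))
                .findFirst()
                .orElse(new Dialect() {
                    @Override
                    boolean canHandle(String u) { return true; }
                });
    }

    public static void main(String[] args) {
        Dialect vertica = dialectFor("jdbc:vertica://host:5433/db");
        System.out.println(vertica.getCatalystType(Types.TIMESTAMP)
                .orElse("generic mapping"));   // uses the Vertica override
        System.out.println(dialectFor("jdbc:postgresql://host/db")
                .getCatalystType(Types.TIMESTAMP)
                .orElse("generic mapping"));   // falls back to generic
    }
}
```

The real fix for vendor-specific conversion problems is usually a dedicated dialect registered ahead of the generic fallback, rather than patching the fallback itself.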
+1 for another preview
Tom
On Monday, December 9, 2019, 12:32:29 AM CST, Xiao Li wrote:
I got a lot of great feedback from the community about the recent 3.0 preview
release. Since the last 3.0 preview release, we already have 353 commits
BTW, our Jenkins seems to be behind.
1. For the first item, `Support JDK 11 with Hadoop 2.7`:
At least, we need a new Jenkins job
`spark-master-test-maven-hadoop-2.7-jdk-11/`.
2. https://issues.apache.org/jira/browse/SPARK-28900 (Test Pyspark, SparkR
on JDK 11 with run-tests)
3.
That looks nice, thanks!
I checked the previous v2.4.4 release; it has around 130 commits (from
2.4.3 to 2.4.4), so
I think branch-2.4 already has enough commits for the next release.
A commit list from 2.4.3 to 2.4.4:
PartitionReader extends Closeable, so it seems reasonable to me to do the
same for DataWriter.
On Wed, Dec 11, 2019 at 1:35 PM Jungtaek Lim wrote:
> Hi devs,
>
> I'd like to propose adding an explicit close() to DataWriter as the place
> for resource cleanup.
>
> The rationale for the
Sounds good. Thanks for bringing this up!
On Wed, Dec 11, 2019 at 3:18 PM Takeshi Yamamuro wrote:
> That looks nice, thanks!
> I checked the previous v2.4.4 release; it has around 130 commits (from
> 2.4.3 to 2.4.4), so
> I think branch-2.4 already has enough commits for the next release.
>
> A
Hi devs,
I'd like to propose adding an explicit close() to DataWriter as the place
for resource cleanup.
The rationale for the proposal is the lifecycle of DataWriter. If the
scaladoc of DataWriter is correct, the lifecycle of a DataWriter instance
ends at either commit() or
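To make the proposed lifecycle concrete, here is a standalone sketch in plain Java; it is not the actual DataSourceV2 interface, and the buffered writer is a made-up implementation. The point it illustrates is that close() gives a single cleanup hook that runs regardless of whether the task ends in commit() or abort():

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the proposed interface: close() is the single
// place for resource cleanup, separate from commit()/abort() semantics.
interface DataWriter<T> extends Closeable {
    void write(T record);
    String commit();   // the real API returns a commit message object
    void abort();
}

// Hypothetical writer that buffers records; close() releases its resources
// no matter how the task ended.
class BufferingWriter implements DataWriter<String> {
    private final List<String> buffer = new ArrayList<>();
    private boolean closed = false;

    @Override
    public void write(String record) { buffer.add(record); }

    @Override
    public String commit() { return "committed " + buffer.size() + " records"; }

    @Override
    public void abort() { buffer.clear(); }

    @Override
    public void close() { closed = true; }  // always reached via try-with-resources

    boolean isClosed() { return closed; }
}

public class WriterLifecycleDemo {
    public static void main(String[] args) throws Exception {
        BufferingWriter w = new BufferingWriter();
        // try-with-resources guarantees close() runs after commit() or abort(),
        // which is the cleanup guarantee the proposal is after.
        try (DataWriter<String> writer = w) {
            writer.write("a");
            writer.write("b");
            System.out.println(writer.commit());
        }
        System.out.println("closed: " + w.isClosed());
    }
}
```

Without an explicit close(), each implementation has to duplicate cleanup logic in both commit() and abort(), and there is no hook at all if the task fails before reaching either.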
I am new and plan to be an individual contributor for bug fixes. I assume I
need to build the project if I'll be working on source code from the master
branch, since the binaries are behind. Does that make sense?
Please let me know if, in this case, I can still use a binary instead of