[
https://issues.apache.org/jira/browse/PHOENIX-1973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148153#comment-15148153
]
Hadoop QA commented on PHOENIX-1973:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12788013/PHOENIX-1973-6.patch
against master branch at commit 60ef7cd54e26fd1635e503c7d7981ba2cdf4c6fc.
ATTACHMENT ID: 12788013
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include
any new or modified tests.
Please justify why no new tests are needed for this
patch.
Also please list what manual steps were performed to
verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated
19 warning messages.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 lineLengths{color}. The patch introduces the following lines
longer than 100:
+ final String logicalNamesAsJson = TargetTableRefFunctions.LOGICAN_NAMES_TO_JSON.apply(tablesToBeLoaded);
+ outputStream.write(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength());
+ context.write(new TableRowkeyPair(Integer.toString(tableIndex), outputKey), aggregatedArray);
+ DataInputStream input = new DataInputStream(new ByteArrayInputStream(aggregatedArray.get()));
+ public static Function<List<TargetTableRef>,String> LOGICAN_NAMES_TO_JSON = new Function<List<TargetTableRef>,String>() {
+ public static Function<String,List<String>> NAMES_FROM_JSON = new Function<String,List<String>>() {
{color:green}+1 core tests{color}. The patch passed unit tests.
{color:red}-1 core zombie tests{color}. There are 5 zombie test(s):
at org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster.testMergeWithReplicas(TestRegionMergeTransactionOnCluster.java:362)
at org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testCompactionRecordDoesntBlockRolling(TestLogRolling.java:594)
at org.apache.hadoop.hbase.regionserver.wal.TestLogRollPeriod.testWithEdits(TestLogRollPeriod.java:125)
at org.apache.hadoop.hbase.regionserver.TestRemoveRegionMetrics.testMoveRegion(TestRemoveRegionMetrics.java:134)
at org.apache.hadoop.hbase.regionserver.wal.TestWALReplay.testReplayEditsWrittenViaHRegion(TestWALReplay.java:549)
Test results:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/258//testReport/
Javadoc warnings:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/258//artifact/patchprocess/patchJavadocWarnings.txt
Console output:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/258//console
This message is automatically generated.
> Improve CsvBulkLoadTool performance by moving keyvalue construction from map
> phase to reduce phase
> --------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-1973
> URL: https://issues.apache.org/jira/browse/PHOENIX-1973
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Rajeshbabu Chintaguntla
> Assignee: Sergey Soldatov
> Fix For: 4.7.0
>
> Attachments: PHOENIX-1973-1.patch, PHOENIX-1973-2.patch,
> PHOENIX-1973-3.patch, PHOENIX-1973-4.patch, PHOENIX-1973-5.patch,
> PHOENIX-1973-6.patch
>
>
> It's similar to HBASE-8768. The only difference is that we need to write a
> custom mapper and reducer in Phoenix. In the map phase we just derive the row
> key from the primary key columns and write the full text of the line as usual
> (to ensure sorting). In the reducer we then build the actual KeyValues by
> running the upsert query.
> This greatly reduces the amount of map output written to disk and the data
> transferred over the network.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)