[jira] Updated: (PIG-891) Fixing dfs statement for Pig
[ https://issues.apache.org/jira/browse/PIG-891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Dai updated PIG-891:
---------------------------
    Priority: Minor  (was: Major)

> Fixing dfs statement for Pig
> ----------------------------
>
> Key: PIG-891
> URL: https://issues.apache.org/jira/browse/PIG-891
> Project: Pig
> Issue Type: Bug
> Reporter: Daniel Dai
> Priority: Minor
>
> Several hadoop dfs commands are not supported, or are restrictive, in current Pig. We need to fix that. These include:
> 1. Several commands are not supported: lsr, dus, count, rmr, expunge, put, moveFromLocal, get, getmerge, text, moveToLocal, mkdir, touchz, test, stat, tail, chmod, chown, chgrp. A reference for these commands can be found at http://hadoop.apache.org/common/docs/current/hdfs_shell.html
> 2. None of the existing dfs commands support globbing.
> 3. Pig should provide a programmatic way to perform dfs commands. Several of them exist in PigServer, but not all of them.

--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
[jira] Created: (PIG-891) Fixing dfs statement for Pig
Fixing dfs statement for Pig
----------------------------

Key: PIG-891
URL: https://issues.apache.org/jira/browse/PIG-891
Project: Pig
Issue Type: Bug
Reporter: Daniel Dai

Several hadoop dfs commands are not supported, or are restrictive, in current Pig. We need to fix that. These include:
1. Several commands are not supported: lsr, dus, count, rmr, expunge, put, moveFromLocal, get, getmerge, text, moveToLocal, mkdir, touchz, test, stat, tail, chmod, chown, chgrp. A reference for these commands can be found at http://hadoop.apache.org/common/docs/current/hdfs_shell.html
2. None of the existing dfs commands support globbing.
3. Pig should provide a programmatic way to perform dfs commands. Several of them exist in PigServer, but not all of them.
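Point 2 refers to hadoop fs-style glob expansion. As a rough, self-contained illustration of what matching a glob against candidate paths involves (this sketch uses java.nio's glob matcher rather than Hadoop's FileSystem.globStatus, and the class, helper, and paths are invented for the example):

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GlobDemo {
    // Expand a glob against a list of candidate paths. Like hadoop fs globs,
    // java.nio's "glob:" syntax treats '*' as not crossing '/' boundaries.
    static List<String> expand(String glob, List<String> paths) {
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:" + glob);
        List<String> out = new ArrayList<>();
        for (String p : paths) {
            if (m.matches(Paths.get(p))) out.add(p);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> paths = Arrays.asList(
            "/user/hudson/input1.txt", "/user/hudson/input2.txt", "/user/hudson/job.jar");
        // Matches the two input files but not the jar.
        System.out.println(expand("/user/hudson/input*.txt", paths));
    }
}
```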
[jira] Commented: (PIG-695) Pig should not fail when error logs cannot be created
[ https://issues.apache.org/jira/browse/PIG-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732785#action_12732785 ]

Hadoop QA commented on PIG-695:
-------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12413830/PIG-695.patch
against trunk revision 794937.

    +1 @author. The patch does not contain any @author tags.
    -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no tests are needed for this patch.
    +1 javadoc. The javadoc tool did not generate any warning messages.
    +1 javac. The applied patch does not increase the total number of javac compiler warnings.
    +1 findbugs. The patch does not introduce any new Findbugs warnings.
    +1 release audit. The applied patch does not increase the total number of release audit warnings.
    +1 core tests. The patch passed core unit tests.
    +1 contrib tests. The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/134/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/134/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/134/console

This message is automatically generated.

> Pig should not fail when error logs cannot be created
> -----------------------------------------------------
>
> Key: PIG-695
> URL: https://issues.apache.org/jira/browse/PIG-695
> Project: Pig
> Issue Type: Bug
> Components: impl
> Affects Versions: 0.2.0
> Reporter: Santhosh Srinivasan
> Assignee: Santhosh Srinivasan
> Attachments: PIG-695.patch
>
> Currently, Pig validates the log file location and fails/exits when the log file cannot be created. Instead, it should print a warning and continue.
Build failed in Hudson: Pig-Patch-minerva.apache.org #134
See http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/134/
--
[...truncated 96471 lines of junit console output: MiniDFSCluster block reports (dfs.DataNode, dfs.StateChange), HExecutionEngine connecting to hdfs://localhost:50405 and to job tracker localhost:56232, MultiQueryOptimizer reporting MR plan size 1 before and after optimization, and JobControlCompiler setting up a single store job for job_200907172312_0002...]
[jira] Commented: (PIG-792) PERFORMANCE: Support skewed join in pig
[ https://issues.apache.org/jira/browse/PIG-792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732766#action_12732766 ]

Ying He commented on PIG-792:
-----------------------------

For MRCompiler, the job parallelism is reset to handle the case where parallelism is not specified. In that case, the sampling process uses (0.9 * default reducers) as the total number of reducers when allocating reducers to skewed keys, so the next MR job should use that value as its parallelism. If parallelism is specified, the rp returned from the sampling process equals the original value of op.

The format of the sampling output file is documented in SkewedPartitioner. POSkewedJoinFileSetter is removed; its logic is folded into SampleOptimizer. MapReduceOper keeps the file name of the sampling output, so that MapReduceLauncher can set it into the jobconf of the join job.

> PERFORMANCE: Support skewed join in pig
> ---------------------------------------
>
> Key: PIG-792
> URL: https://issues.apache.org/jira/browse/PIG-792
> Project: Pig
> Issue Type: Improvement
> Reporter: Sriranjan Manjunath
> Attachments: skewedjoin.patch
>
> Fragment replicated join has a few limitations:
> - one of the tables needs to be loaded into memory
> - the join is limited to two tables
> Skewed join partitions the table and joins the records in the reduce phase. It computes a histogram of the key space to account for skew in the input records. Further, it adjusts the number of reducers depending on the key distribution.
> We need to implement skewed join in pig.
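The reducer-allocation step described above, handing out the (0.9 * default reducers) budget across skewed keys in proportion to their sampled frequency, can be sketched roughly as follows. The class, method, and histogram data here are invented for illustration; this is not Pig's actual sampling code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SkewedReducerAllocator {
    // Given per-key record counts from a sampling pass, allocate a share of
    // the available reducers to each skewed key proportionally to its
    // frequency, with a floor of one reducer per key.
    static Map<String, Integer> allocate(Map<String, Long> histogram, int totalReducers) {
        long total = 0;
        for (long c : histogram.values()) total += c;
        Map<String, Integer> alloc = new LinkedHashMap<>();
        for (Map.Entry<String, Long> e : histogram.entrySet()) {
            int r = (int) Math.max(1, Math.round((double) e.getValue() / total * totalReducers));
            alloc.put(e.getKey(), r);
        }
        return alloc;
    }

    public static void main(String[] args) {
        Map<String, Long> hist = new LinkedHashMap<>();
        hist.put("hot", 900L);   // heavily skewed key
        hist.put("warm", 90L);
        hist.put("cold", 10L);
        // The hot key gets most of the 10-reducer budget.
        System.out.println(allocate(hist, 10));
    }
}
```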
[jira] Commented: (PIG-889) Pig can not access reporter of PigHadoopLog in Load Func
[ https://issues.apache.org/jira/browse/PIG-889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732732#action_12732732 ]

Olga Natkovich commented on PIG-889:
------------------------------------

I am reviewing this patch.

> Pig can not access reporter of PigHadoopLog in Load Func
> --------------------------------------------------------
>
> Key: PIG-889
> URL: https://issues.apache.org/jira/browse/PIG-889
> Project: Pig
> Issue Type: Improvement
> Components: impl
> Affects Versions: 0.4.0
> Reporter: Jeff Zhang
> Assignee: Jeff Zhang
> Fix For: 0.4.0
> Attachments: Pig_889_Patch.txt
>
> I'd like to increment a Counter in my own LoadFunc, but it throws a NullPointerException. It seems that the reporter is not initialized. I looked into this problem and found that PigHadoopLogger.getInstance().setReporter(reporter) needs to be called in PigInputFormat.
[jira] Commented: (PIG-889) Pig can not access reporter of PigHadoopLog in Load Func
[ https://issues.apache.org/jira/browse/PIG-889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732721#action_12732721 ]

Hadoop QA commented on PIG-889:
-------------------------------

+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12413822/Pig_889_Patch.txt
against trunk revision 794937.

    +1 @author. The patch does not contain any @author tags.
    +1 tests included. The patch appears to include 6 new or modified tests.
    +1 javadoc. The javadoc tool did not generate any warning messages.
    +1 javac. The applied patch does not increase the total number of javac compiler warnings.
    +1 findbugs. The patch does not introduce any new Findbugs warnings.
    +1 release audit. The applied patch does not increase the total number of release audit warnings.
    +1 core tests. The patch passed core unit tests.
    +1 contrib tests. The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/133/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/133/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/133/console

This message is automatically generated.

> Pig can not access reporter of PigHadoopLog in Load Func
> --------------------------------------------------------
>
> Key: PIG-889
> URL: https://issues.apache.org/jira/browse/PIG-889
> Project: Pig
> Issue Type: Improvement
> Components: impl
> Affects Versions: 0.4.0
> Reporter: Jeff Zhang
> Assignee: Jeff Zhang
> Fix For: 0.4.0
> Attachments: Pig_889_Patch.txt
>
> I'd like to increment a Counter in my own LoadFunc, but it throws a NullPointerException. It seems that the reporter is not initialized. I looked into this problem and found that PigHadoopLogger.getInstance().setReporter(reporter) needs to be called in PigInputFormat.
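The failure mode in the PIG-889 description, incrementing a counter through a singleton whose reporter has not yet been wired up by the framework, can be mocked in a few lines. The classes below are self-contained stand-ins named after, but distinct from, Pig's real PigHadoopLogger and Hadoop's Reporter; the guarded no-op behavior is one possible illustration, not the patch itself:

```java
public class ReporterDemo {
    // Stand-in for Hadoop's Reporter counter interface.
    public interface Reporter {
        void incrCounter(String group, String name, long amount);
    }

    // Stand-in for PigHadoopLogger: a singleton whose reporter is only set
    // later by the framework (per the issue, in PigInputFormat).
    public static class PigHadoopLoggerMock {
        private static final PigHadoopLoggerMock INSTANCE = new PigHadoopLoggerMock();
        private Reporter reporter;

        public static PigHadoopLoggerMock getInstance() { return INSTANCE; }
        public void setReporter(Reporter r) { reporter = r; }

        // Guarded increment: a no-op (rather than an NPE) when the
        // reporter has not been initialized yet.
        public boolean incr(String group, String name) {
            if (reporter == null) return false;
            reporter.incrCounter(group, name, 1L);
            return true;
        }
    }

    public static void main(String[] args) {
        PigHadoopLoggerMock log = PigHadoopLoggerMock.getInstance();
        System.out.println(log.incr("MyLoadFunc", "badRecords")); // false: reporter unset
        log.setReporter((g, n, a) -> System.out.println(g + ":" + n + " +" + a));
        System.out.println(log.incr("MyLoadFunc", "badRecords")); // true: counter incremented
    }
}
```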
Hudson build is back to normal: Pig-Patch-minerva.apache.org #133
See http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/133/
[jira] Work stopped: (PIG-695) Pig should not fail when error logs cannot be created
[ https://issues.apache.org/jira/browse/PIG-695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on PIG-695 stopped by Santhosh Srinivasan.

> Pig should not fail when error logs cannot be created
> -----------------------------------------------------
>
> Key: PIG-695
> URL: https://issues.apache.org/jira/browse/PIG-695
[jira] Updated: (PIG-695) Pig should not fail when error logs cannot be created
[ https://issues.apache.org/jira/browse/PIG-695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Santhosh Srinivasan updated PIG-695:
------------------------------------
    Status: Patch Available  (was: Open)

> Pig should not fail when error logs cannot be created
> -----------------------------------------------------
>
> Key: PIG-695
> URL: https://issues.apache.org/jira/browse/PIG-695
[jira] Work started: (PIG-695) Pig should not fail when error logs cannot be created
[ https://issues.apache.org/jira/browse/PIG-695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on PIG-695 started by Santhosh Srinivasan.

> Pig should not fail when error logs cannot be created
> -----------------------------------------------------
>
> Key: PIG-695
> URL: https://issues.apache.org/jira/browse/PIG-695
[jira] Updated: (PIG-695) Pig should not fail when error logs cannot be created
[ https://issues.apache.org/jira/browse/PIG-695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Santhosh Srinivasan updated PIG-695:
------------------------------------
    Attachment: PIG-695.patch

The attached patch ensures that Pig does not error out when the error log file is not writable.

> Pig should not fail when error logs cannot be created
> -----------------------------------------------------
>
> Key: PIG-695
> URL: https://issues.apache.org/jira/browse/PIG-695
> Project: Pig
> Issue Type: Bug
> Components: impl
> Affects Versions: 0.2.0
> Reporter: Santhosh Srinivasan
> Assignee: Santhosh Srinivasan
> Attachments: PIG-695.patch
>
> Currently, Pig validates the log file location and fails/exits when the log file cannot be created. Instead, it should print a warning and continue.
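The warn-and-continue behavior the PIG-695 patch proposes might look roughly like this. This is a self-contained sketch with invented names, not the actual patch:

```java
import java.io.File;
import java.io.IOException;

public class LogFileFallback {
    // Try to create the error log at the requested location; on failure,
    // print a warning and return null instead of aborting the run.
    static File openLogFile(String path) {
        File f = new File(path);
        try {
            File parent = f.getParentFile();
            if (parent != null && !parent.exists() && !parent.mkdirs()) {
                throw new IOException("cannot create directory " + parent);
            }
            if (!f.exists() && !f.createNewFile()) {
                throw new IOException("cannot create file " + f);
            }
            return f;
        } catch (IOException | SecurityException e) {
            System.err.println("WARN: could not create error log " + path
                    + " (" + e.getMessage() + "); continuing without it");
            return null;
        }
    }

    public static void main(String[] args) {
        String good = System.getProperty("java.io.tmpdir") + File.separator + "pig_err_demo.log";
        System.out.println(openLogFile(good) != null);  // true: writable location
        // The "bad" path treats the file just created as a directory, so creation fails.
        String bad = good + File.separator + "sub" + File.separator + "err.log";
        System.out.println(openLogFile(bad) == null);   // true: warns and continues
    }
}
```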
[jira] Commented: (PIG-878) Pig is returning too many blocks in the InputSplit
[ https://issues.apache.org/jira/browse/PIG-878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732596#action_12732596 ]

Hadoop QA commented on PIG-878:
-------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12413820/PIG-878.patch
against trunk revision 794937.

    +1 @author. The patch does not contain any @author tags.
    -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no tests are needed for this patch.
    +1 javadoc. The javadoc tool did not generate any warning messages.
    +1 javac. The applied patch does not increase the total number of javac compiler warnings.
    +1 findbugs. The patch does not introduce any new Findbugs warnings.
    +1 release audit. The applied patch does not increase the total number of release audit warnings.
    +1 core tests. The patch passed core unit tests.
    +1 contrib tests. The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/132/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/132/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/132/console

This message is automatically generated.

> Pig is returning too many blocks in the InputSplit
> --------------------------------------------------
>
> Key: PIG-878
> URL: https://issues.apache.org/jira/browse/PIG-878
> Project: Pig
> Issue Type: Bug
> Affects Versions: 0.3.0
> Reporter: Alan Gates
> Assignee: Alan Gates
> Priority: Critical
> Attachments: PIG-878.patch
>
> When SlicerWrapper builds a slice, it currently returns the 3 locations for every block in the file it is slicing, instead of the 3 locations for the block covered by that slice. This means Pig's odds of having its maps placed on nodes local to the data go way down.
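The core arithmetic behind the PIG-878 fix, reporting replica locations only for the blocks a slice actually covers, reduces to mapping a slice's byte range onto block indices. The helper below is invented for illustration and is not the SlicerWrapper code:

```java
import java.util.Arrays;

public class SliceBlocks {
    // A slice covering bytes [start, start+length) of a file with a fixed
    // block size touches only blocks floor(start/blockSize) through
    // floor((start+length-1)/blockSize); only those blocks' replica
    // locations need to be returned for the split.
    static int[] blocksForSlice(long start, long length, long blockSize) {
        int first = (int) (start / blockSize);
        int last = (int) ((start + length - 1) / blockSize);
        int[] blocks = new int[last - first + 1];
        for (int i = 0; i < blocks.length; i++) blocks[i] = first + i;
        return blocks;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // A slice at offset 64 MB of length 64 MB in a 64 MB-block file
        // covers only block 1, not every block in the file.
        System.out.println(Arrays.toString(blocksForSlice(64 * mb, 64 * mb, 64 * mb)));
    }
}
```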
Build failed in Hudson: Pig-Patch-minerva.apache.org #132
See http://hudson.zones.apache.org/hudson/job/Pig-Patch-minerva.apache.org/132/changes

Changes:

[daijy] PIG-888: Pig do not pass udf to the backend in some situation
[sms] PIG-728: All backend error messages must be logged to preserve the original error messages

--
[...truncated 96485 lines of junit console output: MiniDFSCluster block reports (dfs.DataNode, dfs.StateChange), HExecutionEngine connecting to hdfs://localhost:58697 and to job tracker localhost:45005, MultiQueryOptimizer reporting MR plan size 1 before and after optimization, and JobControlCompiler setting up a single store job for job_200907171643_0002...]
[jira] Updated: (PIG-888) Pig do not pass udf to the backend in some situation
[ https://issues.apache.org/jira/browse/PIG-888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Dai updated PIG-888:
---------------------------
    Resolution: Fixed
    Status: Resolved  (was: Patch Available)

> Pig do not pass udf to the backend in some situation
> ----------------------------------------------------
>
> Key: PIG-888
> URL: https://issues.apache.org/jira/browse/PIG-888
> Project: Pig
> Issue Type: Bug
> Components: impl
> Affects Versions: 0.3.0
> Reporter: Daniel Dai
> Assignee: Daniel Dai
> Fix For: 0.4.0
> Attachments: PIG-888-1.patch, PIG-888-2.patch
>
> If we use a udf and do not use register, in some situations the backend will complain that it cannot resolve the class. For example, the following script does not work:
> A = load '1.txt' using udf1();
> B = load '2.txt';
> C = join A by $0, B by $0;
[jira] Commented: (PIG-878) Pig is returning too many blocks in the InputSplit
[ https://issues.apache.org/jira/browse/PIG-878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732562#action_12732562 ]

Olga Natkovich commented on PIG-878:
------------------------------------

+1

> Pig is returning too many blocks in the InputSplit
> --------------------------------------------------
>
> Key: PIG-878
> URL: https://issues.apache.org/jira/browse/PIG-878
[jira] Updated: (PIG-889) Pig can not access reporter of PigHadoopLog in Load Func
[ https://issues.apache.org/jira/browse/PIG-889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Zhang updated PIG-889:
---------------------------
    Status: Patch Available  (was: Open)

> Pig can not access reporter of PigHadoopLog in Load Func
> --------------------------------------------------------
>
> Key: PIG-889
> URL: https://issues.apache.org/jira/browse/PIG-889
[jira] Updated: (PIG-889) Pig can not access reporter of PigHadoopLog in Load Func
[ https://issues.apache.org/jira/browse/PIG-889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Zhang updated PIG-889:
---------------------------
    Attachment: Pig_889_Patch.txt

Attaching the patch, including a test case. The test case shows that with this patch you can use the reporter to increment your own Counter in a LoadFunc; the test case fails if the patch is not applied.

> Pig can not access reporter of PigHadoopLog in Load Func
> --------------------------------------------------------
>
> Key: PIG-889
> URL: https://issues.apache.org/jira/browse/PIG-889
> Project: Pig
> Issue Type: Improvement
> Components: impl
> Affects Versions: 0.4.0
> Reporter: Jeff Zhang
> Assignee: Jeff Zhang
> Fix For: 0.4.0
> Attachments: Pig_889_Patch.txt
>
> I'd like to increment a Counter in my own LoadFunc, but it throws a NullPointerException. It seems that the reporter is not initialized. I looked into this problem and found that PigHadoopLogger.getInstance().setReporter(reporter) needs to be called in PigInputFormat.
[jira] Updated: (PIG-889) Pig can not access reporter of PigHadoopLog in Load Func
[ https://issues.apache.org/jira/browse/PIG-889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Zhang updated PIG-889:
---------------------------
    Attachment: (was: Pig_889_Patch.txt)

> Pig can not access reporter of PigHadoopLog in Load Func
> --------------------------------------------------------
>
> Key: PIG-889
> URL: https://issues.apache.org/jira/browse/PIG-889
[jira] Updated: (PIG-878) Pig is returning too many blocks in the InputSplit
[ https://issues.apache.org/jira/browse/PIG-878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan Gates updated PIG-878:
---------------------------
    Attachment: PIG-878.patch

Patch written collaboratively with Arun Murthy.

> Pig is returning too many blocks in the InputSplit
> --------------------------------------------------
>
> Key: PIG-878
> URL: https://issues.apache.org/jira/browse/PIG-878
> Project: Pig
> Issue Type: Bug
> Affects Versions: 0.3.0
> Reporter: Alan Gates
> Assignee: Alan Gates
> Priority: Critical
> Attachments: PIG-878.patch
>
> When SlicerWrapper builds a slice, it currently returns the 3 locations for every block in the file it is slicing, instead of the 3 locations for the block covered by that slice. This means Pig's odds of having its maps placed on nodes local to the data go way down.
[jira] Updated: (PIG-878) Pig is returning too many blocks in the InputSplit
[ https://issues.apache.org/jira/browse/PIG-878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan Gates updated PIG-878:
---------------------------
    Status: Patch Available  (was: Open)

> Pig is returning too many blocks in the InputSplit
> --------------------------------------------------
>
> Key: PIG-878
> URL: https://issues.apache.org/jira/browse/PIG-878
Hudson build is back to normal: Pig-trunk #506
See http://hudson.zones.apache.org/hudson/job/Pig-trunk/506/changes
[jira] Commented: (PIG-888) Pig do not pass udf to the backend in some situation
[ https://issues.apache.org/jira/browse/PIG-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732492#action_12732492 ]

Hudson commented on PIG-888:
----------------------------

Integrated in Pig-trunk #506 (See [http://hudson.zones.apache.org/hudson/job/Pig-trunk/506/])
: Pig do not pass udf to the backend in some situation

> Pig do not pass udf to the backend in some situation
> ----------------------------------------------------
>
> Key: PIG-888
> URL: https://issues.apache.org/jira/browse/PIG-888
> Project: Pig
> Issue Type: Bug
> Components: impl
> Affects Versions: 0.3.0
> Reporter: Daniel Dai
> Assignee: Daniel Dai
> Fix For: 0.4.0
> Attachments: PIG-888-1.patch, PIG-888-2.patch
>
> If we use a udf and do not use register, in some situations the backend will complain that it cannot resolve the class. For example, the following script does not work:
> A = load '1.txt' using udf1();
> B = load '2.txt';
> C = join A by $0, B by $0;
[jira] Commented: (PIG-728) All backend error messages must be logged to preserve the original error messages
[ https://issues.apache.org/jira/browse/PIG-728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732493#action_12732493 ]

Hudson commented on PIG-728:
----------------------------

Integrated in Pig-trunk #506 (See [http://hudson.zones.apache.org/hudson/job/Pig-trunk/506/])
: All backend error messages must be logged to preserve the original error messages

> All backend error messages must be logged to preserve the original error messages
> ---------------------------------------------------------------------------------
>
> Key: PIG-728
> URL: https://issues.apache.org/jira/browse/PIG-728
> Project: Pig
> Issue Type: Bug
> Affects Versions: 0.3.0
> Reporter: Santhosh Srinivasan
> Assignee: Santhosh Srinivasan
> Priority: Minor
> Fix For: 0.4.0
> Attachments: PIG-728_1.patch
>
> The current error handling framework logs backend error messages only when Pig is not able to parse the error message. Instead, Pig should log the backend error message irrespective of Pig's ability to parse backend error messages. On a side note, the use of instantiateFuncFromSpec in Launcher.java is not consistent and should avoid the use of class_name + "(" + string_constructor_args + ")".
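The behavior change PIG-728 requests, logging the raw backend message unconditionally and only then attempting to parse it, can be sketched as below. The class, the "ERROR nnnn:" pattern, and the fallback code 2998 are all invented for illustration; this is not Pig's Launcher API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BackendErrorLogger {
    // Log the raw backend message first, so a parse failure can never
    // swallow the original text; then try to extract an error code.
    static String handle(String rawBackendMessage) {
        System.err.println("Backend error: " + rawBackendMessage); // logged unconditionally
        Matcher m = Pattern.compile("ERROR (\\d+):").matcher(rawBackendMessage);
        return m.find() ? m.group(1) : "2998"; // invented fallback code for unparseable messages
    }

    public static void main(String[] args) {
        System.out.println(handle("ERROR 6017: job failed"));   // parsed code
        System.out.println(handle("OutOfMemoryError in task")); // fallback; raw text still logged
    }
}
```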