And one more thing, the tasktracker also shows:

2011-04-20 04:09:23,412 INFO org.apache.hadoop.mapred.TaskTracker:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
taskTracker/jobcache/job_201104200406_0001/attempt_201104200406_0001_m_000002_0/output/file.out
in any of the configured local directories


2011/4/20 pob <peterob...@gmail.com>

> That's from the jobtracker:
>
>
> 2011-04-20 03:36:39,519 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing rack-local task task_201104200331_0002_m_000000
> 2011-04-20 03:36:42,521 INFO org.apache.hadoop.mapred.TaskInProgress: Error
> from attempt_201104200331_0002_m_000000_3: java.lang.NumberFormatException:
> null
>         at java.lang.Integer.parseInt(Integer.java:417)
>         at java.lang.Integer.parseInt(Integer.java:499)
>         at
> org.apache.cassandra.hadoop.ConfigHelper.getRpcPort(ConfigHelper.java:250)
>         at
> org.apache.cassandra.hadoop.pig.CassandraStorage.setConnectionInformation(Unknown
> Source)
>         at
> org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(Unknown Source)
>         at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.mergeSplitSpecificConf(PigInputFormat.java:133)
>         at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.createRecordReader(PigInputFormat.java:111)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:588)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>         at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
>
> and the tasktracker:
>
> 2011-04-20 03:33:10,942 INFO org.apache.hadoop.mapred.TaskTracker:  Using
> MemoryCalculatorPlugin :
> org.apache.hadoop.util.LinuxMemoryCalculatorPlugin@3c1fc1a6
> 2011-04-20 03:33:10,945 WARN org.apache.hadoop.mapred.TaskTracker:
> TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is
> disabled.
> 2011-04-20 03:33:10,946 INFO org.apache.hadoop.mapred.IndexCache:
> IndexCache created with max memory = 10485760
> 2011-04-20 03:33:11,069 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction (registerTask): attempt_201104200331_0001_m_000000_1 task's
> state:UNASSIGNED
> 2011-04-20 03:33:11,072 INFO org.apache.hadoop.mapred.TaskTracker: Trying
> to launch : attempt_201104200331_0001_m_000000_1
> 2011-04-20 03:33:11,072 INFO org.apache.hadoop.mapred.TaskTracker: In
> TaskLauncher, current free slots : 2 and trying to launch
> attempt_201104200331_0001_m_000000_1
> 2011-04-20 03:33:11,986 INFO org.apache.hadoop.mapred.JvmManager: In
> JvmRunner constructed JVM ID: jvm_201104200331_0001_m_-926908110
> 2011-04-20 03:33:11,986 INFO org.apache.hadoop.mapred.JvmManager: JVM
> Runner jvm_201104200331_0001_m_-926908110 spawned.
> 2011-04-20 03:33:12,400 INFO org.apache.hadoop.mapred.TaskTracker: JVM with
> ID: jvm_201104200331_0001_m_-926908110 given task:
> attempt_201104200331_0001_m_000000_1
> 2011-04-20 03:33:12,895 INFO org.apache.hadoop.mapred.TaskTracker:
> attempt_201104200331_0001_m_000000_1 0.0%
> 2011-04-20 03:33:12,918 INFO org.apache.hadoop.mapred.JvmManager: JVM :
> jvm_201104200331_0001_m_-926908110 exited. Number of tasks it ran: 0
> 2011-04-20 03:33:15,919 INFO org.apache.hadoop.mapred.TaskRunner:
> attempt_201104200331_0001_m_000000_1 done; removing files.
> 2011-04-20 03:33:15,920 INFO org.apache.hadoop.mapred.TaskTracker:
> addFreeSlot : current free slots : 2
> 2011-04-20 03:33:38,090 INFO org.apache.hadoop.mapred.TaskTracker: Received
> 'KillJobAction' for job: job_201104200331_0001
> 2011-04-20 03:36:32,199 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction (registerTask): attempt_201104200331_0002_m_000000_2 task's
> state:UNASSIGNED
> 2011-04-20 03:36:32,199 INFO org.apache.hadoop.mapred.TaskTracker: Trying
> to launch : attempt_201104200331_0002_m_000000_2
> 2011-04-20 03:36:32,199 INFO org.apache.hadoop.mapred.TaskTracker: In
> TaskLauncher, current free slots : 2 and trying to launch
> attempt_201104200331_0002_m_000000_2
> 2011-04-20 03:36:32,813 INFO org.apache.hadoop.mapred.JvmManager: In
> JvmRunner constructed JVM ID: jvm_201104200331_0002_m_-134007035
> 2011-04-20 03:36:32,814 INFO org.apache.hadoop.mapred.JvmManager: JVM
> Runner jvm_201104200331_0002_m_-134007035 spawned.
> 2011-04-20 03:36:33,214 INFO org.apache.hadoop.mapred.TaskTracker: JVM with
> ID: jvm_201104200331_0002_m_-134007035 given task:
> attempt_201104200331_0002_m_000000_2
> 2011-04-20 03:36:33,711 INFO org.apache.hadoop.mapred.TaskTracker:
> attempt_201104200331_0002_m_000000_2 0.0%
> 2011-04-20 03:36:33,731 INFO org.apache.hadoop.mapred.JvmManager: JVM :
> jvm_201104200331_0002_m_-134007035 exited. Number of tasks it ran: 0
> 2011-04-20 03:36:36,732 INFO org.apache.hadoop.mapred.TaskRunner:
> attempt_201104200331_0002_m_000000_2 done; removing files.
> 2011-04-20 03:36:36,733 INFO org.apache.hadoop.mapred.TaskTracker:
> addFreeSlot : current free slots : 2
> 2011-04-20 03:36:50,210 INFO org.apache.hadoop.mapred.TaskTracker: Received
> 'KillJobAction' for job: job_201104200331_0002
>
>
>
>
> 2011/4/20 pob <peterob...@gmail.com>
>
>> Re point 2: it works with -x local, so the issue can't be in the
>> Pig -> DB (Cassandra) path.
>>
>> I'm using pig-0.8 and hadoop-0.20.2, both from the official sites.
>>
>>
>> Thanks
>>
>>
>> 2011/4/20 aaron morton <aa...@thelastpickle.com>
>>
>>> I'm guessing, but here goes: it looks like the Cassandra RPC port is not
>>> set. Did you follow these steps in contrib/pig/README.txt?
>>>
>>> Finally, set the following as environment variables (uppercase,
>>> underscored), or as Hadoop configuration variables (lowercase, dotted):
>>> * PIG_RPC_PORT or cassandra.thrift.port : the port thrift is listening
>>> on
>>> * PIG_INITIAL_ADDRESS or cassandra.thrift.address : initial address to
>>> connect to
>>> * PIG_PARTITIONER or cassandra.partitioner.class : cluster partitioner
>>>
>>> Hope that helps.
>>> Aaron
>>>
>>>
>>> On 20 Apr 2011, at 11:28, pob wrote:
>>>
>>> Hello,
>>>
>>> I configured the cluster following
>>> http://wiki.apache.org/cassandra/HadoopSupport. When I run
>>> pig example-script.pig -x local, everything is fine and I get correct
>>> results.
>>>
>>> The problem occurs with -x mapreduce.
>>>
>>> I'm getting these errors:
>>>
>>>
>>> 2011-04-20 01:24:21,791 [main] ERROR
>>> org.apache.pig.tools.pigstats.PigStats - ERROR:
>>> java.lang.NumberFormatException: null
>>> 2011-04-20 01:24:21,792 [main] ERROR
>>> org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
>>> 2011-04-20 01:24:21,793 [main] INFO
>>>  org.apache.pig.tools.pigstats.PigStats - Script Statistics:
>>>
>>> Input(s):
>>> Failed to read data from "cassandra://Keyspace1/Standard1"
>>>
>>> Output(s):
>>> Failed to produce result in "
>>> hdfs://ip:54310/tmp/temp-1383865669/tmp-1895601791"
>>>
>>> Counters:
>>> Total records written : 0
>>> Total bytes written : 0
>>> Spillable Memory Manager spill count : 0
>>> Total bags proactively spilled: 0
>>> Total records proactively spilled: 0
>>>
>>> Job DAG:
>>> job_201104200056_0005   ->      null,
>>> null    ->      null,
>>> null
>>>
>>>
>>> 2011-04-20 01:24:21,793 [main] INFO
>>>  
>>> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>>> - Failed!
>>> 2011-04-20 01:24:21,803 [main] ERROR org.apache.pig.tools.grunt.Grunt -
>>> ERROR 1066: Unable to open iterator for alias topnames. Backend error :
>>> java.lang.NumberFormatException: null
>>>
>>>
>>>
>>> ====
>>> That's from the job tasks web UI, the error from the task directly:
>>>
>>> java.lang.RuntimeException: java.lang.NumberFormatException: null
>>> at
>>> org.apache.cassandra.hadoop.ColumnFamilyRecordReader.initialize(ColumnFamilyRecordReader.java:123)
>>>  at
>>> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initialize(PigRecordReader.java:176)
>>> at
>>> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:418)
>>>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:620)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>>>  at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>> Caused by: java.lang.NumberFormatException: null
>>> at java.lang.Integer.parseInt(Integer.java:417)
>>>  at java.lang.Integer.parseInt(Integer.java:499)
>>> at
>>> org.apache.cassandra.hadoop.ConfigHelper.getRpcPort(ConfigHelper.java:233)
>>>  at
>>> org.apache.cassandra.hadoop.ColumnFamilyRecordReader.initialize(ColumnFamilyRecordReader.java:105)
>>> ... 5 more
>>>
>>>
>>>
>>> Any suggestions as to where the problem might be?
>>>
>>> Thanks,
>>>
>>>
>>>
>>
>
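
The README steps Aaron quotes boil down to exporting three variables before
launching Pig. A minimal sketch; the port, address, and partitioner values
below are placeholder assumptions, not values taken from this thread:

```shell
# Placeholder values: substitute your cluster's actual thrift port,
# a reachable Cassandra node address, and the configured partitioner class.
export PIG_RPC_PORT=9160                                   # port thrift listens on
export PIG_INITIAL_ADDRESS=127.0.0.1                       # initial node to connect to
export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner

# Then run the script against the cluster (shown commented out here):
# pig example-script.pig -x mapreduce
```

Leaving PIG_RPC_PORT (or cassandra.thrift.port) unset is consistent with the
stack trace above: ConfigHelper.getRpcPort passes the missing value straight
to Integer.parseInt, which raises NumberFormatException: null.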
