>
> mvn clean install -DskipTests -Pdist
>
>
> the error is:
>
>
> [INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @
> hadoop-common ---
> [WARNING] [protoc, --version] failed with error code 1
>
> help me out.
>
> thanks in advance.
>
--
Regards
Gordon Wang
08 PM, Avinash Kujur wrote:
> yes. protobuf is installed. libprotoc 2.4.1
> i checked.
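For context, the protoc goal in hadoop-maven-plugins shells out to `protoc --version` and fails the build when the binary is missing from PATH or its version does not match what the pom expects; Hadoop trunk at this point requires protoc 2.5.0, so libprotoc 2.4.1 would be rejected. A minimal plain-Java sketch of that version check (the method name and exact behavior are illustrative, not the plugin's actual code):

```java
// Illustrative stand-in for the hadoop-maven-plugins version check:
// compare the output of `protoc --version` against the required version.
public class ProtocCheck {
    // `protoc --version` prints a single line like "libprotoc 2.5.0"
    public static boolean versionOk(String versionOutput, String required) {
        return versionOutput.trim().equals("libprotoc " + required);
    }

    public static void main(String[] args) {
        System.out.println(versionOk("libprotoc 2.4.1", "2.5.0")); // prints false
        System.out.println(versionOk("libprotoc 2.5.0", "2.5.0")); // prints true
    }
}
```

So the usual fix here is to install protobuf 2.5.0 and make sure that `protoc` is the one found first on the build user's PATH.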
>
>
> On Wed, Mar 5, 2014 at 11:04 PM, Gordon Wang wrote:
>
>> Do you have protobuf installed on your build box?
>> you can use "which protoc" to check.
>> Looks
nJar.main(RunJar.java:212)
>
> I believe the issue is related to the changes in Hadoop 2, but where can I
> find an H2-compatible version?
>
> Thanks
>
> Follow the instructions in README.md from this GitHub site; basically:
>>
>> cd hadoop-lzo
>>
>> mvn clean package test
>>
>> To enable this at run time do:
>>
>> a. Copy the library to the hadoop
over
>>> UNIX domain sockets?
>>>
>>> <property>
>>>   <name>dfs.client.read.shortcircuit</name>
>>>   <value>true</value>
>>> </property>
>>> <property>
>>>   <name>dfs.domain.socket.path</name>
>>>   <value>/var/lib/hadoop/dn_socket</value>
>>> </property>
>>> <property>
>>>   <name>dfs.client.domain.socket.data.traffic</name>
>>>   <value>true</value>
>>> </property>
>>>
>>> --
>>> Best Regards,
>>> lijin bin
>>>
>>
>>
>>
>> --
>> Cheers
>> -MJ
>>
>
>
>
progress
> 2014-04-02 14:06:17,295 INFO org.apache.hadoop.mapred.ReduceTask:
> attempt_201402271518_0260_r_00_0 Scheduled 0 outputs (1 slow hosts
> and0 dup hosts)
> 2014-04-02 14:06:17,295 INFO org.apache.hadoop.mapred.ReduceTask:
> Penalized(slow) Hosts:
>
vileged(Native Method)
>>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>>> at org.apache.hadoop.mapred.Child.main(Child.java:249)
>>>>
>>>> one method I can come up with is to use a Combiner to save the sums of
>>>> some matrices and their counts,
>>>> but it still can't solve the problem because the combiner is not fully
>>>> controlled by me.
>>>>
>>>
>>>
>>
>
> }
> if (total > 1) {
>     divideWeights(result, total);
> }
> context.write(NullWritable.get(), result);
>
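One way around the combiner-is-optional problem (Hadoop may run a combiner zero, one, or several times) is to have the combiner emit partial sums plus element counts, and do the division only once, in the reducer. A minimal plain-Java sketch of the idea, with the MapReduce wiring omitted:

```java
// Sum-and-count partials: safe to merge in any order, any number of times,
// so they work as a combiner value type. The final divide happens once.
public class AverageSketch {
    static class PartialSum {
        double sum;
        long count;
        void add(double v) { sum += v; count++; }
        void merge(PartialSum other) { sum += other.sum; count += other.count; }
        double average() { return count == 0 ? 0.0 : sum / count; }
    }

    public static void main(String[] args) {
        PartialSum a = new PartialSum();   // e.g. produced by one combiner run
        a.add(2.0);
        a.add(4.0);
        PartialSum b = new PartialSum();   // another combiner's partial result
        b.add(6.0);
        a.merge(b);                        // reducer merges partials...
        System.out.println(a.average());   // ...then divides once: prints 4.0
    }
}
```

Because addition is associative and commutative, merging partials in any order (or not at all) gives the same final average, which is exactly the property a combiner needs.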
>
> On Thu, Apr 3, 2014 at 5:49 PM, Gordon Wang wrote:
>
>> What is the work in reducer ?
>> Do you have any memory intensive work in reducer(eg. cach
>
> What I want is exactly those HTML pages.
>
> Is there a repository for docs (the code repo does not contain these
> docs)?
>
> Thanks!
> ~t
>
>
bytes=112
> Reduce input records=6
> Reduce output records=6
> Spilled Records=12
> Shuffled Maps =8
> Failed Shuffles=0
> Merged Map outputs=8
> GC time elapsed (ms)=186
> CPU time spent (ms)=8890
> Physical memory (bytes) snapshot=1408913408
> Virtual memory (bytes) snapshot=5727019008
> Total committed heap usage (bytes)=1808990208
> Shuffle Errors
> BAD_ID=0
> CONNECTION=0
> IO_ERROR=0
> WRONG_LENGTH=0
> WRONG_MAP=0
> WRONG_REDUCE=0
> File Input Format Counters
> Bytes Read=58
> File Output Format Counters
> Bytes Written=40
>
> Thanks and Regards,
> -Rahul Singh
>
in userlogs.
>
>
> On Mon, Apr 14, 2014 at 12:29 PM, Gordon Wang wrote:
>
>> Hi Rahul,
>>
>> What is the log of reduce container ? Please paste the log and we can see
>> the reason.
>>
>>
>> On Mon, Apr 14, 2014 at 2:38 PM, Rahul Singh
>>
p daemons ?
>
> --
> Thanks,
> Ashwin
>
>
>
>
>
Any ideas?
-- Forwarded message --
From: Gordon Wang
Date: Fri, Jul 11, 2014 at 11:53 AM
Subject: How can we check the NumberReplicas for a block Under construction?
To: "hdfs-...@hadoop.apache.org"
Hi all,
Are there any admin tools or debug tools to check the Numb
r in 46910ms for sessionid
> 0x147000ee1f70137, closing socket connection and attempting reconnect
>
> 2014-08-01 04:21:03,703 INFO org.apache.hadoop.ha.ActiveStandbyElector:
> Session disconnected. Entering neutral mode...
>
>
> in this case.
>
>
>
> Prompt help is highly appreciated!!
>
> Regards,
> Satyam
>
s running. If we changed the application to read data from the local
> disk without changing any other business logic, the CPU utilization stayed
> stable. So we concluded that the CPU utilization is related to HDFS. We
> want to know whether this issue is really related to HDFS and whether there
> is any solution for it.
>
>
>
>
>
> Thanks a lot!
>
>
>
> BR/Shiyuan
>
>
>
>
>
> --
>
> Regards,
>
> *Stanley Shi,*
>
>
to 20M/s.
> Suppose I issue the balancer command on node A: do I just set the option in
> node A's hdfs-site.xml, or do I need to set it on all nodes of my HDFS
> cluster? Thanks a lot!
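For what it's worth, the bandwidth cap is enforced by each datanode, so setting it only on node A's hdfs-site.xml is not enough; it belongs in every datanode's hdfs-site.xml (or can be pushed at run time with `hdfs dfsadmin -setBalancerBandwidth <bytes per second>`). A sketch of the Hadoop 2.x property, with 20 MB/s shown:

```xml
<!-- hdfs-site.xml on each datanode -->
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>20971520</value> <!-- bytes per second; 20 MB/s -->
</property>
```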
>