BTW, please ask this kind of question on the Zeppelin user mailing list instead of
the Hadoop mailing list.
Jeff Zhang wrote on Thu, Oct 25, 2018 at 9:01 PM:
>>> java.sql.SQLException: Could not open client transport with JDBC Uri:
jdbc:hive2://localhost:1: java.net.ConnectException: Connection refused
(Connection refused)
It looks like a Hive connection issue. Can you first verify whether
you can connect to Hive with beeline?
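For reference, a beeline check against HiveServer2 might look like this (assuming the default port 10000 — the port in the quoted error appears truncated — and no authentication configured):

```shell
# Connect to HiveServer2 with the JDBC URI from hive-site.xml;
# host, port, and lack of credentials are assumptions here.
beeline -u jdbc:hive2://localhost:10000
```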
Lee Ming-Ta
I tried Hadoop 3.0 and can start DFS properly, but when I start YARN,
it fails with the following error:
ERROR: Cannot find configuration directory
"/Users/jzhang/Java/lib/hadoop-3.0.0/conf"
Actually, this is not the correct conf folder. It should be
I would suggest users run Hadoop on Linux; don't waste time on these OS-related
issues.
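For what it's worth, Hadoop 3.x ships its configuration under etc/hadoop rather than conf/, so pointing HADOOP_CONF_DIR there explicitly is one way around this (the install prefix below is taken from the error message and is otherwise an assumption):

```shell
# Hadoop 3.x keeps its config under etc/hadoop, not conf/;
# adjust the install prefix to your own layout.
export HADOOP_CONF_DIR=/Users/jzhang/Java/lib/hadoop-3.0.0/etc/hadoop
```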
zlgonzalez wrote on Sun, Jul 23, 2017 at 12:06 PM:
> Looks like you're running on Windows. Not sure if hadoop running on
> Windows even for single nodes is still supported...
>
> Thanks,
>
>
> "Both Client-Server and Server-Server compatibility is preserved within a
> major release"
>
> HTH
> Ravi.
>
> On Mon, Jun 26, 2017 at 5:21 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
>>
>> It looks like it can. But is there any document about the compatibility
>> between versions ? Thanks
>>
>>
>>
>
> tasks are done/running/pending?
>
>
>
> *From:* Jeff Zhang [mailto:zjf...@gmail.com]
> *Sent:* Wednesday, March 09, 2016 9:33 PM
> *To:* Frank Luo
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: how to use Yarn API to find task/attempt status
>
>
>
> I don't think
https://issues.apache.org/jira/browse/YARN-3238
Thanks Regards
Rohith Sharma K S
*From:* Jeff Zhang [mailto:zjf...@gmail.com]
*Sent:* 18 August 2015 09:11
*To:* user@hadoop.apache.org
*Subject:* Confusing Yarn RPC Configuration
I use yarn.resourcemanager.connect.max-wait.ms
.
--
Best Regards
Jeff Zhang
I see that AllocateResponse has AMCommand, which may request the AM to resync or
shut down, but I don't see that AMRMClientAsync#CallbackHandler has any method to
handle that. Should AMRMClientAsync#CallbackHandler add an onAMCommand
method?
--
Best Regards
Jeff Zhang
On Aug 13, 2015, at 6:55 AM, Jeff Zhang zjf...@gmail.com wrote:
I see that AllocateResponse has AMCommand which may request AM to resync
or shutdown, but I don't see AMRMClientAsync#CallbackHandler has any method
to handle that. Should AMRMClientAsync#CallbackHandler add method
Might be due to a performance issue of FileOutputCommitter, which is resolved in 2.7:
https://issues.apache.org/jira/browse/MAPREDUCE-4815
Best Regard,
Jeff Zhang
From: Ashish Kumar Singh ashish23...@gmail.com
Reply-To: user@hadoop.apache.org
/proxy/application_1430916889869_0003/A, appUser=jzhang
--
Best Regards
Jeff Zhang
Try to export JAVA_HOME in hadoop-env.sh
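For example, in hadoop-env.sh (the JDK path below is an assumption; use the path of your installed JDK):

```shell
# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh (conf/ on older releases);
# the JDK path is illustrative.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
```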
Best Regard,
Jeff Zhang
From: Anand Murali anand_vi...@yahoo.com
Reply-To: user@hadoop.apache.org
To: user@hadoop.apache.org, Anand Murali
anand_vi
(AMRMClientAsyncImpl.java:224)
--
Best Regards
Jeff Zhang
...
Best regards,
Morbious
--
Best Regards
Jeff Zhang
You can think of it as similar to Java's HashMap: the hashCode of an object
is used to distribute it into different buckets.
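That bucketing idea can be sketched as follows; the sign-bit masking mirrors how a non-negative bucket index is typically derived (a minimal sketch, not Hadoop's actual partitioner code — the class and method names are illustrative):

```java
public class BucketDemo {
    // Derive a bucket index from an object's hashCode, as a HashMap (or a
    // hash-based partitioner) does: mask off the sign bit so the index is
    // non-negative, then take the remainder by the bucket count.
    static int bucketFor(Object key, int numBuckets) {
        return (key.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }

    public static void main(String[] args) {
        // Equal objects always land in the same bucket.
        System.out.println(bucketFor("hello", 4) == bucketFor("hello", 4)); // prints true
    }
}
```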
Best Regard,
Jeff Zhang
From: xeonmailinglist-gmail
xeonmailingl...@gmail.com
Reply-To: user@hadoop.apache.org
)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:129)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
--
Best Regards
Jeff Zhang
BTW, I am using hadoop 2.6 and it's a single node cluster on mac.
On Fri, Feb 6, 2015 at 11:35 AM, Jeff Zhang zjf...@gmail.com wrote:
I intermittently hit this "Localizer failed" issue, but after I restart
YARN the problem goes away. Is this a known issue?
Here's log in node
-
ERROR 1200: file rc_app_error_test.pig, line 11, column 28 Unexpected
character 'D'
Details at logfile: /home/u377058/pig_1413774551268.log
please help us
thanks in advance.
--
Best Regards
Jeff Zhang
.
--
Best Regards
Jeff Zhang
to disable
Secret Manager.
Cheers
Oleg
--
Best Regards
Jeff Zhang
, Oleg Zhurakousky
oleg.zhurakou...@gmail.com wrote:
Thanks Jeff
Yes I am using 2.3 and the issue is still there.
Oleg
On Sun, Mar 16, 2014 at 3:10 AM, Jeff Zhang zjf...@gmail.com wrote:
Hi Oleg,
I hit the same issue when I start an unmanaged AM on the client side in a
thread.
You can start the ResourceManager without starting any NodeManager. The
source code of the ResourceManager and the NodeManager are in different sub
POM projects.
On Wed, Mar 12, 2014 at 7:06 AM, anu238 . anurag.guj...@gmail.com wrote:
Hi All,
I am sorry for the blast, I wanted to use only the
I believe that in the future the Spark functional-style API will dominate the
big data world, and very few people will use the native MapReduce API. Even now,
users usually use third-party MapReduce libraries such as Cascading,
Scalding, or Scoobi, or scripting languages like Hive and Pig, rather than the
native MapReduce API.
Hi Sugandha,
Take a gz file as an example: it is not splittable because of the compression
algorithm it uses. There is no guarantee that one record is located within
one block; if a record spans 2 blocks, your program will crash since you
cannot get the whole record.
On Wed, Feb 26, 2014 at
I guess I got the answer: the reason is that we need to pass the
environment to the AM, and there's no way to pass that to a thread, but it is
possible for a process. Could anyone confirm that?
On Wed, Feb 26, 2014 at 3:52 PM, Jeff Zhang jezh...@gopivotal.com wrote:
Hi all,
I look the source code
not sure why a token is used here; could anyone help explain that and guide
me on how to resolve my issue? Thanks
Jeff Zhang
abc.txt metadata, such as file name, file path and block
location? The abc.txt file data itself is on master 172.11.12.6 or node1
172.11.12.7; which directory is it located in?
Thanks.
- Original Message -
From: Jeff Zhang
To: user@hadoop.apache.org
Sent: Sunday, January 26, 2014 2:30 PM
in eclipse test Run
configuration.
+Vinod
On Jan 21, 2014, at 5:39 PM, Jeff Zhang jezh...@gopivotal.com wrote:
Hi all,
TestDistributedShell is a unit test for DistributedShell. I could run it
successfully in maven, but when I run it in eclipse, it failed. Do I need
any extra setting to make
You just need to run a MapReduce job to generate the data you want and then
load the data into a Hive table (create the table first if it does not exist).
These 2 steps are totally separate.
On Tue, Jan 21, 2014 at 4:21 PM, Ranjini Rathinam ranjinibe...@gmail.com wrote:
Hi,
Need to load the data
Hi all,
TestDistributedShell is a unit test for DistributedShell. I could run it
successfully in maven, but when I run it in eclipse, it failed. Do I need
any extra setting to make it run in eclipse ?
Here's the error message:
2014-01-22 09:38:20,733 INFO [AsyncDispatcher event handler]
Hi all,
I found the AuxiliaryService implementation a little confusing.
initializeApplication will be invoked on each container of one
application, including the AM, while stopApplication will only
be invoked on the AM when the job is done. It looks like there's a little
It depends on your input data. E.g., if your input consists of 10 files, each
65M, then each file will take 2 mappers, so overall it would cost 20
mappers, but the input size is actually 650M rather than 20*64=1280M.
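The arithmetic above can be sketched as a per-file ceiling division over the block size, ignoring FileInputFormat's small split-slop allowance (class and method names here are illustrative):

```java
public class SplitCount {
    // One split per full-or-partial block of a file: ceiling division.
    static long splitsForFile(long fileSize, long blockSize) {
        return (fileSize + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;       // 64M block
        long fileSize = 65L * 1024 * 1024;        // each input file is 65M
        long perFile = splitsForFile(fileSize, blockSize);
        System.out.println(perFile);              // prints 2 (splits -> mappers per file)
        System.out.println(10 * perFile);         // prints 20 (mappers for 10 such files)
    }
}
```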
On Tue, Dec 3, 2013 at 4:28 PM, ch huang justlo...@gmail.com wrote:
i run the MR
to read
the YARN-paper yet? http://www.socc2013.org/home/program
Kim
On Mon, Nov 25, 2013 at 9:34 PM, Jeff Zhang jezh...@gopivotal.com wrote:
Hi ,
I am reading the yarn code, so wondering whether there's any design
document for the yarn. I found the blog post on hortonworks is very useful
J ha...@cloudera.com wrote:
Are you talking of the new Application History Server (which is
generic) or the Job History Server (which is part of the MR project
and only tracks/shows MR jobs)?
On Tue, Nov 26, 2013 at 7:50 AM, Jeff Zhang jezh...@gopivotal.com
wrote:
I have configured
Thanks for the tips
On Tue, Nov 26, 2013 at 6:17 PM, Harsh J ha...@cloudera.com wrote:
When constructing the AM's java command in your launcher/driver, you
could add in your remote debug JVM arguments, which should work.
On Tue, Nov 26, 2013 at 7:34 AM, Jeff Zhang jezh...@gopivotal.com
Hi,
I built a customized ApplicationMaster but have some issues; is it
possible for me to remote-debug the ApplicationMaster? Thanks
I have configured the YARN history server, but it looks like it can only
show me the history logs of MapReduce jobs. I still cannot see the
logs of non-MapReduce jobs. How can I see the history logs of a non-MapReduce
job?
Hi,
I am reading the YARN code and wondering whether there's any design
document for YARN. I found the blog post on Hortonworks very useful,
but a more detailed document would be helpful. Thanks
Did you set the YARN framework in mapred-site.xml as follows?
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
On Tue, Nov 26, 2013 at 1:27 PM, ch huang justlo...@gmail.com wrote:
hi, mailing list:
I run terasort in my Hadoop cluster, and it runs as a
for
CORP.EBAY.COM in the [realms] section. Or if you actually have
appropriate dns service records for kerberos, you can use dns_lookup_kdc =
true.
Daryn
On Apr 25, 2013, at 12:36 AM, Jeff Zhang wrote:
Hi all,
I could connect to hadoop cluster by ssh tunnel before when there's
:
PriviledgedActionException
as:jianfezh...@corp.ebay.COMcause:javax.security.sasl.SaslException:
GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Cannot get
kdc for realm CORP.EBAY.COM)]
--
Best Regards
Jeff Zhang
-loading-tp32871902p32871902.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
--
Best Regards
Jeff Zhang
Hi all,
I'd like to select N random records from a large amount of data using
Hadoop; I just wonder how I can achieve this. Currently my idea is to let
each mapper task select N / mapper_number records. Does anyone have such
experience?
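The per-mapper selection step described above is often done with reservoir sampling, which picks k records uniformly in a single pass. A minimal sketch (the class and method names are illustrative, not part of any Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Reservoir {
    // Select k items uniformly at random from a stream in one pass
    // (Algorithm R): fill the reservoir with the first k items, then
    // replace a random slot with probability k / itemsSeen.
    static <T> List<T> sample(Iterable<T> stream, int k, Random rnd) {
        List<T> reservoir = new ArrayList<>(k);
        int seen = 0;
        for (T item : stream) {
            seen++;
            if (reservoir.size() < k) {
                reservoir.add(item);
            } else {
                int j = rnd.nextInt(seen);          // uniform in [0, seen)
                if (j < k) reservoir.set(j, item);  // keep with prob k/seen
            }
        }
        return reservoir;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 100; i++) data.add(i);
        System.out.println(sample(data, 5, new Random()).size()); // prints 5
    }
}
```

Each mapper could keep such a reservoir of N / mapper_number records and emit it at the end; note the combined result is only approximately uniform unless the input is evenly distributed across mappers.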
--
Best Regards
Jeff Zhang
rpc port. If yes,
give a try to,
hadoop fs -fs hdfs://10.249.68.39:9000 -ls /
Regards,
Tanping
-Original Message-
From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Thursday, May 26, 2011 11:59 PM
To: core-u...@hadoop.apache.org
Subject: Can not access hadoop cluster from outside
It's hdfs://10.249.68.39:9000
On Fri, May 27, 2011 at 3:06 PM, Harsh J ha...@cloudera.com wrote:
What is your ${fs.default.name} set to?
On Fri, May 27, 2011 at 12:29 PM, Jeff Zhang zjf...@gmail.com wrote:
Hi all,
I hit a weird problem: I cannot access the Hadoop cluster from outside
aborted. exception: Call to /
10.249.68.39:9000 failed on local exception: java.io.IOException: Connection
reset by peer
Has anyone met this problem before? I guess it may be some network
configuration problem, but I am not sure what's wrong. Thanks
--
Best Regards
Jeff Zhang
.
Amar
On 5/10/11 2:02 PM, Jeff Zhang zjf...@gmail.com wrote:
Hi all,
I just remember there's a property for setting the number of failed tasks
that can be tolerated in one job. Does anyone know the property name?
--
Best Regards
Jeff Zhang
Use the FileSystem interface, which has an implementation for each type of
file system, such as HDFS and the local FS.
2011/5/9 ltomuno ltom...@163.com
Using Java, you would write new File("/tmp/common"),
but /tmp/common is an HDFS file.
How do I implement this feature?
Thanks
--
Best Regards
Jeff Zhang
Are there any other ways to add an extra jar to the CLASSPATH?
--
Best Regards
Jeff Zhang
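One common approach for client-side commands like hadoop fs -text is the HADOOP_CLASSPATH environment variable (the jar path below is illustrative):

```shell
# HADOOP_CLASSPATH prepends extra jars to the classpath of client-side
# commands such as `hadoop fs -text`; the jar path is illustrative.
export HADOOP_CLASSPATH=/path/to/extra-serde.jar
# then e.g.: hadoop fs -text /data/part-00000
```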
Another workaround I can think of is to have my own copy of Hadoop and
copy the extra jars into it, but that results in more maintenance effort.
On Wed, Mar 23, 2011 at 9:19 AM, Jeff Zhang zjf...@gmail.com wrote:
Hi all,
When I use command hadoop fs -text I need to add extra jar
it to the -libjars flag.
If it is in the job initiation..bundle it in your job jar for fun.
Cheers
James.
On 2011-03-22, at 7:35 PM, Jeff Zhang wrote:
Another work around I can think of is that have my own copy of hadoop,
and
copy extra jars to my hadoop. But it result into more maintenance effort
, Jeff Zhang wrote:
Actually I don't use the jar for a MapReduce job; I only need it to display
a sequence file.
On Wed, Mar 23, 2011 at 9:41 AM, James Seigel ja...@tynt.com wrote:
Hello, some quick advice for you
which portion of your job needs the jar? if answer = mapper
Thanks Boris
On Tue, Nov 30, 2010 at 9:12 AM, Boris Shkolnik bo...@yahoo-inc.com wrote:
FsShell.java:delete().
On 11/25/10 1:24 AM, Jeff Zhang zjf...@gmail.com wrote:
Hi all,
I checked the source code of Trash; it seems it will periodically remove
files under ${USER_HOME}/.Trash.
But I did
.
Thanks
Best Regards
--
-李平
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang
external-archived
hadoop-0.20.2-core.jar file, the class JobConf didn't have this method. How
can I add it to the list ? I couldn't even edit the JobConf.class because the
source code is unavailable.
any link to where is this issue handled ?
Thanks,
Maha
--
Best Regards
Jeff Zhang
Shuja-ur-Rehman Baig
--
Best Regards
Jeff Zhang
Any pointers or help will be highly appreciated.
Thanks,
Bibek
[0]
http://hadoop.apache.org/common/docs/r0.20.1/api/org/apache/hadoop/mapred/RecordReader.html#getPos%28%29
[1] http://www.slideshare.net/sh1mmer/upgrading-to-the-new-map-reduce-api
--
Best Regards
Jeff Zhang
Hi all,
I didn't have the opportunity to attend Hadoop World 2010, but I am still
curious to learn about the conference and hear from the attendees.
It would be great if someone could share some materials from the
conference. Thanks.
--
Best Regards
Jeff Zhang
with block defragmentation etc. ?
Thanks,
-Rakesh
--
Best Regards
Jeff Zhang
? (Not the job scheduler)
Big thanks,
Shen
--
Best Regards
Jeff Zhang
AM, Shen LI geminialex...@gmail.com wrote:
Hi, thank you very much for your reply. I want to run my own algorithm for
this part to see if we can achieve a better outcome in a specific scenario. So
how can I modify it?
Thanks a lot!
Shen
On Thu, Oct 7, 2010 at 6:33 PM, Jeff Zhang zjf...@gmail.com
is that I have
mapred.job.reuse.jvm.num.tasks=-1 and jvm GC doesn't always start when
it should.
Thanks in Advance,
Vitaliy S
--
Best Regards
Jeff Zhang
may not need)
On Mon, Sep 13, 2010 at 8:02 PM, Matthew John
tmatthewjohn1...@gmail.com wrote:
When it comes to the Writer, I can see the append and appendRaw methods, but
the many next methods in the Reader are confusing!
Can you give further info on them?
Matthew
--
Best Regards
Jeff Zhang
br.com.hotwords.udf.ADOValidaUrlV2(redirv2Result::group::redirV2ComEliminacaoDeIpsRepetidos::url,
redirv2Result::group::redirV2ComEliminacaoDeIpsRepetidos::idParceiro,0)
Does anyone have any idea?
Thanks in advance.
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang
hadoop-core.jar .
Thanks,
Matthew John
--
Best Regards
Jeff Zhang
:
org.apache.hadoop.mapred.MetafileInputFormat.. and so on ...
Thanks,
Matthew
--
Best Regards
Jeff Zhang
to shut down the
cluster first. I was wondering what will happen to the current files on HDFS
(with 128M block size). Are they still there and usable? If so, what is the
block size of those legacy files?
Thanks,
-Gang
--
Best Regards
Jeff Zhang
John
--
Best Regards
Jeff Zhang
not able to figure out how to rebuild the core jar and examples jar
containing the modified sort.
--
Best Regards
Jeff Zhang
the output file should be like this
raju CSE
krishan IT
siva MECH
venkat CIVIL
How to do this pig script?
thanks
ramanaiah
--
Best Regards
Jeff Zhang
.
Please share if you have any inputs.
Thanks,
Somdip.
--
Best Regards
Jeff Zhang
it is related to the connection between Pig
and Hadoop.
Can someone tell me how to connect Pig and Hadoop?
Thanks.
--
Best Regards
Jeff Zhang
, 2010, at 6:10 PM, Jeff Zhang wrote:
Did you put the Hadoop conf on the classpath? It seems you are still using
the local file system but connecting to Hadoop's JobTracker.
Make sure you set the correct configuration in core-site.xml,
hdfs-site.xml, and mapred-site.xml, and put them on the classpath.
On Thu, Aug 26
...@apple.com wrote:
Yes they are running.
On Aug 26, 2010, at 6:59 PM, Jeff Zhang wrote:
Execute the jps command in a shell to see whether the NameNode and JobTracker
are running correctly.
On Fri, Aug 27, 2010 at 9:49 AM, rahul rmalv...@apple.com wrote:
Hi Jeff,
I transferred the hadoop conf files
the
error remains the same.
Can you suggest something further ?
Thanks,
Rahul
On Aug 26, 2010, at 7:07 PM, Jeff Zhang wrote:
Can you look at the JobTracker log or access the JobTracker web UI?
It seems you cannot connect to the JobTracker according to your log:
Caused by: java.io.IOException: Call
using the TupleFormat, which
outputs text with an additional parenthesis that would cause a subsequent
PigStorage LOAD to get extra parenthesis characters, right?
On Thu, Aug 19, 2010 at 1:50 AM, Jeff Zhang zjf...@gmail.com wrote:
I am afraid you should write your own LoadFunc to interpret
and
intermediate state in memory before the join.
So is BinStorage just storing the Tuples in an internal binary format that
is easily converted back to a Tuple when loaded (i.e. no csv parsing
necessary)?
Thanks.
On Fri, Aug 20, 2010 at 12:06 AM, Jeff Zhang zjf...@gmail.com wrote:
What do you
it all
in memory? Or does it have memory management ala buffer managers in
DBMS's?
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang
expecting to come across millions of data points.
Thanks for the response by the way. I thought that Hadoop set the number of
splits, regardless of file size, to just 1 by default.
Erik
On 17 August 2010 11:44, Jeff Zhang zjf...@gmail.com wrote:
What size is your input ? If the input size
Regards
Jeff Zhang
!
regards,
Wenhao
--
~_~
--
Harsh J
www.harshj.com
--
~_~
--
Best Regards
Jeff Zhang
)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
Any suggestions?
-Todd
--
Best Regards
Jeff Zhang
be the “best” or “a” way to make the DISTINCT command from pig
in a java map reduce program?
Let me know if anyone is interested in this. I’d like to get some sharing
going.
Cheers
James.
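One common answer: map each record to an output key (with an empty value), let the shuffle group identical keys, and have the reducer emit each key once. Below is a minimal in-memory sketch of that logic, with a sorted set standing in for the sort/shuffle phase (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

public class DistinctMR {
    // DISTINCT in MapReduce terms: the map emits each record as the key
    // (value ignored); the shuffle sorts and groups identical keys; the
    // reducer emits each key exactly once. A TreeSet plays the role of the
    // sort/shuffle phase in this in-memory sketch.
    static List<String> distinct(List<String> records) {
        TreeSet<String> shuffled = new TreeSet<>(records); // sort + group keys
        return new ArrayList<>(shuffled);                  // one output per key
    }

    public static void main(String[] args) {
        System.out.println(distinct(Arrays.asList("b", "a", "b", "c", "a")));
        // prints [a, b, c]
    }
}
```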
--
Best Regards
Jeff Zhang
Hi all,
I'd like to use the C++ Thrift bindings for HDFS, but I find that there's no
reflection_limited_types.h, which is included by hadoopfs_types.h.
--
Best Regards
Jeff Zhang
as that which has
the
most links between itself and the registration node. I'm running into
difficulties here, and wondered whether Hadoop might offer an alternative
approach.
Any pointers would be greatly appreciated.
Thanks,
Tim
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang
$Receiver.opWriteBlock(DataTransferProtocol.java:390)
at
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
--
Best Regards
Jeff Zhang
in order to point Pig to
my Hadoop cluster?
Thanks
Dave Viner
--
Best Regards
Jeff Zhang
? Is it
possible to use streaming data to write to HDFS?
Thanks in advance.
Oleg.
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang