Hi Experts,
It is really weird that DistCp could successfully get the file from a
FileZilla FTP server on Windows 7, but failed against the IIS FTP server on
the same Windows 7 OS (yet I can get the file using wget directly: 'wget
ftp://Viewer:passw...@hostname1.com:21/ftp_file1.txt'). I tried several
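For reference, a DistCp pull from an FTP source might look like the sketch below; the hostname, credentials, and HDFS target path are placeholders taken from the wget example, not verified values:

```shell
# Sketch only: host, credentials, and destination are placeholders.
hadoop distcp \
  ftp://Viewer:password@hostname1.com:21/ftp_file1.txt \
  hdfs:///user/hadoop/ftp_file1.txt
```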
Susheel:
Since I am new to this, what log file should I look for in the log dir, and what
should I be looking for?
Thanks
Sent from my iPhone
On 27-Apr-2015, at 2:07 pm, Susheel Kumar Gadalay skgada...@gmail.com wrote:
jps listing is not showing namenode daemon.
Verify why namenode is
Many thanks Wellington , but what should I look for.
Regards
Anand
Sent from my iPhone
On 27-Apr-2015, at 2:34 pm, Wellington Chevreuil
wellington.chevre...@gmail.com wrote:
Hello Anand,
Per your original email, this would be:
I think the heartbeat failures are caused by the nodes hanging.
I found a bug report associated with this problem.
https://issues.apache.org/jira/browse/HDFS-7489
https://issues.apache.org/jira/browse/HDFS-7496
https://issues.apache.org/jira/browse/HDFS-7531
There might be some FATAL/ERROR/WARN or Exception messages in this log file
that can explain why the NN process is dying. Can you paste some of the last
lines of the log file?
On 27 Apr 2015, at 09:37, Susheel Kumar Gadalay skgada...@gmail.com wrote:
jps listing is not showing namenode daemon.
Hello Anand,
This error means the NN could not find its metadata directory. You probably need
to run the 'hadoop namenode -format' command before trying to start HDFS.
…
2015-04-27 15:21:42,696 WARN
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception
loading fsimage
Because you are probably not defining dfs.namenode.name.dir, the NN metadata
directory is being created under /tmp and is getting deleted once the process is
restarted.
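The metadata location can be pinned by setting dfs.namenode.name.dir in hdfs-site.xml; the path below is only an illustrative example:

```xml
<!-- hdfs-site.xml: keep NN metadata out of /tmp.
     The path is an illustrative example. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/anand_vihar/hadoop-data/namenode</value>
</property>
```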
On 27 Apr 2015, at 11:50, Anand Murali anand_vi...@yahoo.com wrote:
Wellington:
I have done it at installation time. I shall
Dear Wellington:
Many thanks for your help. Deeply appreciate it. It seems to work. I have tried
shutting down and starting up twice and tested hdfs dfs -ls /, and it connects
to hdfs.
Once again many thanks.
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph:
Dear Wellington:
You were right. There is an error with respect to temp files. Please find the
attached log file. Appreciate your help.
Thanks
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph: (044)- 28474593/ 43526162 (voicemail)
On Monday, April 27, 2015 2:46
Wellington:
I have done it at installation time. I shall try once again. However, I request
you to look at this URL and maybe let me know your views/suggestions. BTW, if I
uninstall and re-install, this error goes away for that session.
Thanks.
Anand Murali 11/7, 'Anand Vihar', Kandasamy St,
Hi,
I have several directories that contain several files written with
SequenceFileOutputFormat, with org.apache.hadoop.io.Text as both key and
value. I want to merge all these files into one.
I have looked at the join example [1], but it is not working. How do I
merge SequenceFiles?
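One way to merge them, assuming both key and value really are Text, is to read every part file with SequenceFile.Reader and append the records to a single SequenceFile.Writer. This is an untested sketch against the Hadoop 2.x API; the input directory and output path are taken from the command line:

```java
// Sketch: merge all SequenceFiles under args[0] into the single file args[1].
// Assumes Text keys and Text values, per the question.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class MergeSequenceFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path(args[0]);   // directory of part files
        Path output = new Path(args[1]);     // merged output file

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(output),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(Text.class))) {
            Text key = new Text();
            Text value = new Text();
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (!status.isFile()) continue;   // skip _SUCCESS dirs etc.
                try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
                        SequenceFile.Reader.file(status.getPath()))) {
                    while (reader.next(key, value)) {
                        writer.append(key, value);
                    }
                }
            }
        }
    }
}
```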
Yes, it looks like we’re running up against YARN-2578. That’s very unfortunate.
Thanks for everyone’s investigation and input.
mn
On Apr 26, 2015, at 10:38 PM, Rohith Sharma K S rohithsharm...@huawei.com
wrote:
Hi
I had seen this issue in my cluster without HA configured when
jps listing is not showing namenode daemon.
Verify why namenode is not up from the logs.
On 4/27/15, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
Please find below.
and_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$
start-dfs.sh
Starting namenodes on [localhost]
localhost: starting
Hello Anand,
Per your original email, this would be:
/home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-namenode-Latitude-E5540.out
Cheers.
On 27 Apr 2015, at 09:41, Anand Murali anand_vi...@yahoo.com wrote:
Susheel:
Since I am new to this, what log file should I look for in the log
I had read somewhere that 2.7 has lots of issues, so you should wait for 2.7.1,
where most of them are being addressed.
On Mon, Apr 27, 2015 at 2:14 PM, 조주일 tjst...@kgrid.co.kr wrote:
I think the heartbeat failures are caused by the nodes hanging.
I found a bug report associated with this problem.
Hello,
I have a sequence of jobs that depend on each other, output of one job
is input for the next one. Also, there is a loop in one part of the
sequence, containing two jobs executing in a row.
Until now I was able to run this job by simply creating Job objects and
using
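A common pattern for such a driver is to call Job#waitForCompletion for each stage and wrap the looping stages in a while loop. In this sketch, createJobA/createJobB/createJobC and checkConvergence are hypothetical helpers standing in for your own job setup and termination test:

```java
// Sketch of a driver chaining dependent jobs, with a loop over two stages.
// createJobA/B/C and checkConvergence are hypothetical helpers.
Job jobA = createJobA(input, tmp1);
if (!jobA.waitForCompletion(true)) System.exit(1);  // blocks until A finishes

boolean converged = false;
while (!converged) {
    Job jobB = createJobB(tmp1, tmp2);   // output of A (or previous C) feeds B
    if (!jobB.waitForCompletion(true)) System.exit(1);
    Job jobC = createJobC(tmp2, tmp1);   // output of B feeds C
    if (!jobC.waitForCompletion(true)) System.exit(1);
    converged = checkConvergence(tmp1);  // e.g. inspect a job counter
}
```

Note that Job objects are single-use, so the jobs inside the loop must be recreated on every iteration, as above.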
Sergey Kazakov asked me to reply that our main issue is HDFS-7443.
2015-04-24 1:34 GMT+05:00 Sean Busbey bus...@cloudera.com:
I'd love to see a 2.6.1 release with
* HADOOP-11674
* HADOOP-11710
On Thu, Apr 23, 2015 at 12:00 PM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
I
The MapReduce ApplicationMaster supports only one job. You can say that (YARN
ResourceManager + a bunch of MR ApplicationMasters (one per job) = JobTracker).
Tez does have a notion of multiple DAGs per YARN app.
For your specific use-case, you can force that user to a queue and limit how
much
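With the CapacityScheduler, that could be sketched in capacity-scheduler.xml roughly as below; the queue name "batch" and the percentages are illustrative values only:

```xml
<!-- Sketch: dedicate a small "batch" queue for that user's jobs.
     Queue name and percentages are illustrative. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,batch</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>90</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.batch.capacity</name>
  <value>10</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.batch.maximum-capacity</name>
  <value>20</value>
</property>
```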
Conceptually, the MR application master is similar to the old JobTracker.
If so, can I submit multiple jobs to the same MR application master? It
looks like an odd use case; the context is that we have users generating
lots of MR jobs, and he currently has a little crude scheduler that
Hi,
IMHO, upgrading *with downtime* after 2.7.1 is the best option left.
Thanks.
Drake 민영근 Ph.D
kt NexR
On Mon, Apr 27, 2015 at 5:46 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
I had read somewhere that 2.7 has lots of issues, so you should wait for 2.7.1,
where most of them are being addressed.
Hello Marko,
Job#waitForCompletion is implemented as a polling loop around
Job#isComplete and Job#isSuccessful. Both of those calls are non-blocking.
http://hadoop.apache.org/docs/r2.7.0/api/org/apache/hadoop/mapreduce/Job.html#isComplete()
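So you can submit several jobs without blocking and poll them yourself. A rough sketch, where buildJob is a hypothetical helper that returns a configured Job:

```java
// Sketch: submit jobs asynchronously, then poll isComplete().
// buildJob(i) is a hypothetical helper returning a configured Job.
List<Job> jobs = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    Job job = buildJob(i);
    job.submit();                  // returns immediately, unlike waitForCompletion
    jobs.add(job);
}
boolean allDone = false;
while (!allDone) {
    allDone = true;
    for (Job job : jobs) {
        if (!job.isComplete()) {   // non-blocking status check
            allDone = false;
        }
    }
    Thread.sleep(1000);            // avoid hammering the RM
}
```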
Tx, I am moving this discussion to the dev lists for progress. Will include
these tickets for discussion. Feel free to pitch in there if you need more.
+Vinod
On Apr 27, 2015, at 6:45 AM, Dmitry Simonov dimmobor...@gmail.com wrote:
Sergey Kazakov asked me to reply,
Hi there,
I have an application that is trying to launch an AM, but the localization is
failing with the below error message. The resource visibility is set to PRIVATE,
which means the localizer will run through a container executor as the user that
submitted the job. I checked that hdfs dfs -ls /