for fixing this
abomination. Sad that this code was released GA.
Sorry folks. HDFS/Mapred is really cool tech, I'm just jaded about this
kind of silliness.
In my Not So Humble Opinion.
Chris
On Sat, Mar 23, 2013 at 1:12 AM, Harsh J ha...@cloudera.com wrote:
NameNode does not persist block locations; there is a chapter for this.
but only for MRv1.
On Mar 23, 2013 1:50 PM, Sai Sai saigr...@yahoo.in wrote:
Just wondering if there is any step-by-step explanation/article of the MR
output we get when we run a job, either in Eclipse or Ubuntu.
Any help is appreciated.
Thanks
Sai
--
Harsh J
configuration so that it can load
classes from the jar.
If the above makes sense I will file JIRA with patch, otherwise, what am I
missing?
Thank you,
Alex Baranau
--
Harsh J
? If I enter the
HADOOP_HEAPSIZE beyond this, it doesn't run the hadoop command and fails to
instantiate a JVM.
Any comments would be appreciated!
Thank you!
With Regards,
Abhishek S
--
Harsh J
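For context on the HADOOP_HEAPSIZE question above: the value is interpreted in megabytes and is usually set in conf/hadoop-env.sh, and a failure to instantiate the JVM beyond a certain value is often a sign of a 32-bit JVM, which typically cannot address much more than 2-3 GB of heap. A minimal sketch (the value is an example, not from the thread):

```shell
# conf/hadoop-env.sh -- value is in MB, not bytes (example value)
export HADOOP_HEAPSIZE=2000
```

If a larger heap is genuinely needed, running a 64-bit JVM is the usual way out.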
.
Do you have a specific use case in mind?
Thanks
On Wed, Mar 20, 2013 at 9:07 AM, oualid ait wafli
oualid.aitwa...@gmail.com wrote:
Hi,
Which is the best, HBase or Cassandra?
What are the criteria to compare those tools (HBase and Cassandra)?
Thanks
--
Harsh J
/20 Harsh J ha...@cloudera.com
Hi,
The relevant property is dfs.data.dir (or dfs.datanode.data.dir) and
it's present in the hdfs-site.xml file at each DataNode, under the
$HADOOP_HOME/conf/ directory usually. Look for asymmetrical configs among
various DNs for that property.
On Tue, Mar 19
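Such a setting, in each DataNode's hdfs-site.xml, might look like the following sketch (the paths are examples, not from the thread; multiple disks are comma-separated):

```xml
<!-- hdfs-site.xml on a DataNode; paths below are illustrative only -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
</property>
```

A DataNode whose value lists fewer (or smaller) disks than its peers will report a correspondingly smaller capacity, which is the asymmetry to look for.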
and
to install open jdk 6.
Many thanks for your patience.
--
Harsh J
--Carnegie Mellon University
--
Harsh J
. Do I need to install hadoop on the web server so it can communicate
with the hadoop cluster, or is there any other way to get the files from
hadoop to the web server, similar to a database where you only need to
connect using a driver?
--
Harsh J
working on a project to devise a solution for the small files problem,
and I am using HDFS federation. I want to integrate our web server with
HDFS. So I need the Eclipse plugin for this version. Please help me out.
--
Harsh J
of the new fsimage successful? If so,
why were the old fsimage.ckpt not deleted?
Or did we lose some data?
Regards Elmar
--
Harsh J
.
or is it a problem of the network itself? (I already checked that bond0 is
1 Gb.)
Thanks
Patai
On Wed, Feb 27, 2013 at 11:06 PM, Harsh J ha...@cloudera.com wrote:
The latter (from other machines, inbound to where the reduce is
running, onto the reduce's local disk, via mapred.local.dir
of combiner classes and so on
sequentially) in more detail?
Thanks very much.
--
Harsh J
I correct to
think that 'client' could be anyone (my laptop in the network that reaches
namenode) with access to the cluster with hadoop installed locally?
Thanks in advance.
--
Harsh J
help; this problem has puzzled me for a
long time.
BRs
Geelong
--
From Good To Great
--
Harsh J
.
Best regards,
Jens
--
Harsh J
anyone know where the Zookeeper is getting the
classpath/library information? Do I need to restart my Zookeeper? Not sure
what the problem is. Any suggestions would be awesome. Thank you.
--
Harsh J
with it via -libjars and such, and setting the
MR option to have it take precedence over the hadoop-provided jars of
the same kind.
... How do I do this?
Thanks Harsh.
On Tue, Mar 19, 2013 at 12:19 PM, Harsh J ha...@cloudera.com wrote:
ZK is showing its runtime JVM classpath (from the JVM
it.
Thanks!
--Jeremy
--
Harsh J
Correction to my previous post: I completely missed
https://issues.apache.org/jira/browse/MAPREDUCE-4520 which covers the
MR config ends already in 2.0.3. My bad :)
On Wed, Mar 20, 2013 at 5:34 AM, Harsh J ha...@cloudera.com wrote:
You can leverage YARN's CPU Core scheduling feature
hints on how to debug this?
Jens
--
Harsh J
volume,
which means that the capacity will be defined by the disk mounted on the file
system of the Hadoop user's temp directory. But I can't find detailed
instructions about this.
Why is the capacity of the other nodes about 50 GB?
This bothers me a lot.
BRs
Geelong
2013/3/19 Harsh J ha
be slow unless you raise the allowed
bandwidth.
On Wed, Mar 20, 2013 at 7:37 AM, Tapas Sarangi tapas.sara...@gmail.com wrote:
Any more follow ups ?
Thanks
-Tapas
On Mar 19, 2013, at 9:55 AM, Tapas Sarangi tapas.sara...@gmail.com wrote:
On Mar 18, 2013, at 11:50 PM, Harsh J ha...@cloudera.com
= new
Text(externalip_starttime_endtime);
context.write(key, new Text(outValue));
}
}
}
--
Harsh J
name on the cluster. I tried to find where I can specify my username in the
configuration files, but was unsuccessful.
Can anyone point me how to specify my username in the config files?
Thanks.
Dan
--
Harsh J
been issued and approved by
Sporting Index Ltd.
Outbound email has been scanned for viruses and SPAM
--
Harsh J
=\\:levi:supergroup:rwxr-xr-x}}
And this is my curl command:
curl -i -X PUT "http://localhost:50070/webhdfs/v1/levi1?op=MKDIRS"
--
Harsh J
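One likely cause of a WebHDFS permission error like the one above is a missing user.name query parameter: with the default pseudo-authentication, requests without it run as an anonymous user rather than as "levi". The host, port, and names below come from the thread, but the fix itself is an assumption; a minimal sketch of building the MKDIRS URL:

```python
def webhdfs_mkdirs_url(host: str, port: int, path: str, user: str) -> str:
    """Build a WebHDFS MKDIRS URL that includes the user.name parameter."""
    return f"http://{host}:{port}/webhdfs/v1{path}?op=MKDIRS&user.name={user}"

# curl -i -X PUT "<url>" would then create the directory as that user.
url = webhdfs_mkdirs_url("localhost", 50070, "/levi1", "levi")
```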
--
Bertrand Dechoux
--
Harsh J
, for any user, its next job
will be executed when its prior job is finished?
--
Thanks and Regards
Jagmohan Chauhan
MSc student,CS
Univ. of Saskatchewan
IEEE Graduate Student Member
http://homepage.usask.ca/~jac735/
--
Harsh J
From your email header:
List-Unsubscribe: mailto:common-user-unsubscr...@hadoop.apache.org
On Wed, Mar 13, 2013 at 10:42 AM, Alex Luya alexander.l...@gmail.com wrote:
can't find a way to unsubscribe from this list.
--
Harsh J
native access (JNA) to do this. Has
anyone used JNA with hadoop and been successful? Are there problems I'll
encounter?
Please let me know.
Thanks,
-Julian
--
Harsh J
interacts with stdin
and stdout and cannot make modifications to the hdfs. Or did you mean that
I should use hadoop pipes to write a c/c++ application?
Anyway, I hope that you can help me clear things up in my head.
Thanks,
-Julian
On Sun, Mar 17, 2013 at 2:50 AM, Harsh J ha...@cloudera.com
It is well explained in thread:
http://stackoverflow.com/questions/9678180/change-file-split-size-in-hadoop
.
Regards,
Zheyi.
On Fri, Mar 15, 2013 at 8:49 AM, YouPeng Yang
yypvsxf19870...@gmail.comwrote:
s
--
Harsh J
org.apache.hadoop.util.RunJar. But I couldn't find API docs for the hadoop-common
jar. Please direct me to the location.
--
Harsh J
)
Is there any workaround? I really do not want to rewrite my project...
Thank you very much.
Regards,
Zheyi.
--
Harsh J
Regards,
Christian.
--
Harsh J
this concept? Also, what are the other heap
space related properties which we can use with the above and how?
Thanks,
Gaurav
--
Harsh J
to whatever data you can throw at it.
Paul
On 8 March 2013 10:57, Harsh J ha...@cloudera.com wrote:
Hi,
When you implement code that starts memory-storing value copies for
every record (even if of just a single key), things are going to break
in big-data-land. Practically, post-partitioning
...@gmail.com wrote:
Hey there
were you able to find a resolution to this problem?
--
--
Harsh J
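Harsh's point in this thread — code that memory-stores value copies for every record will break on big data — can be sketched with an illustrative reducer that keeps only a running aggregate instead of materializing the values (a generic example, not code from the thread):

```python
from typing import Iterable, Tuple

def sum_reducer(key: str, values: Iterable[str]) -> Tuple[str, int]:
    # Stream over the values iterator, holding a single running total,
    # instead of building list(values) in memory for the key.
    total = 0
    for v in values:
        total += int(v)
    return key, total
```

The same shape applies to max/min/count aggregations; only computations that genuinely need all values at once (e.g. a median) force buffering, and those usually need a secondary sort instead.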
the info of cluster setup of H we
r working on.
Thanks
Sai
--
Harsh J
on screen showing
me the progress of the maps/reduces, but when I check my log directory
inside the hadoop folder, I do not see any log files for this job nor does
the jobtracker log file have any information for this job.
Sayan Kole
--
Harsh J
for the data on
jobtracker.php jobdetails.jsp?
Has anyone run into this problem of trying to track Hadoop progress from a
remote machine programmatically?
Any help is appreciated,
-Kyle
--
Harsh J
--
Harsh J
, the master)
when I do hadoop dfs -cp?
Many thanks.
Bill
--
Harsh J
there.
Is there anything better, which would be closer to the fault-tolerant
architecture of Hadoop itself?
Thank you,
Mark
--
Harsh J
talking to the server,
and failing the map task if the service does not work out.
Thank you,
Mark
On Wed, Mar 6, 2013 at 11:21 PM, Harsh J ha...@cloudera.com wrote:
Can the mapper not directly talk to whatever application server the
Windows server runs? Is the work needed to be done in the map
?
--
Harsh J
rack awareness and computation migration but haven't really found much
code relating to either one - leading me to believe I'm not supposed to have
to write code to deal with this.
Anyway, could someone please help me out or set me straight on this?
Thanks,
-Julian
--
Harsh J
make a path qualified using the FileSystem object?
i.e. path.makeQualified(FileSystem.get()) ?
--
Jay Vyas
http://jayunit100.blogspot.com
--
Harsh J
want to know where the mapper, partitioner, and combiner classes are
set for a particular filesplit
while executing a job.
Thank You
--
Thanx and Regards
Vikas Jadhav
--
Thanx and Regards
Vikas Jadhav
--
Harsh J
)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
Regards,
samir.
--
--
Harsh J
[ OK ]
Hadoop namenode is dead and pid file exists [FAILED]
Hadoop secondarynamenode is running [ OK ]
Thanks,
--
Harsh J
namenode is dead and pid file exists [FAILED]
Hadoop secondarynamenode is running [ OK ]
Thanks,
On Wed, Jan 23, 2013 at 11:15 PM, Mohit Vadhera
project.linux.p...@gmail.com wrote:
On Wed, Jan 23, 2013 at 10:41 PM, Harsh J ha...@cloudera.com wrote
: main: Assertion `fs' failed.
Aborted
Not sure what i need to do now to get this example working.
--Phil
--
Harsh J
bytes of my non-input
file, and the other reads the last two bytes of my non-input file! How can I
make a job with just one map task?
Thanks,
Mike
--
Harsh J
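One common way to get a single map task (assuming MRv1, which this thread appears to use) is to raise the minimum split size above the total input size, so FileInputFormat produces one split; the property name is from MRv1 and the value here is illustrative:

```xml
<!-- mapred-site.xml or per-job configuration: with the minimum split
     size set above the input size, one split covers the whole input
     (value in bytes; illustrative) -->
<property>
  <name>mapred.min.split.size</name>
  <value>1099511627776</value>
</property>
```

The alternative is an InputFormat whose isSplitable() returns false, which also yields one map task per input file.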
--
--
Harsh J
hdfs hdfs 4096 Feb 28 02:36 namesecondary
New Path
# ll /mnt/san1/hdfs/cache/hdfs/
total 4
drwxr-xr-x 3 hdfs hdfs 4096 Feb 28 02:08 dfs
# ll /mnt/san1/hdfs/cache/hdfs/dfs/
total 4
drwxr-xr-x 2 hdfs hdfs 4096 Feb 28 02:36 namesecondary
Thanks,
On Thu, Feb 28, 2013 at 1:59 PM, Harsh J
4096 Feb 28 11:28 namesecondary
New location
$ sudo ls -l /mnt/san1/hdfs/hdfs/dfs/
total 8
drwx--. 3 hdfs hdfs 4096 Feb 28 11:28 data
drwxr-xr-x 2 hdfs hdfs 4096 Feb 28 11:28 namesecondary
Thanks,
On Fri, Mar 1, 2013 at 12:14 PM, Harsh J ha...@cloudera.com wrote:
I believe I
.
please guide me
-Dhanasekaran
Did I learn something today? If not, I wasted it.
--
Nitin Pawar
--
Harsh J
know what machines that host is copying data from/to?
Regards,
Patai
--
Harsh J
)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:237)
Is there any versioning issue for HBase?
How to resolve it?
Thanks in advance.
--
Harsh J
appreciated.
Thanks
Sai
--
Harsh J
differences.
3. is there any advanced docs about HDFS Federation.
thanks.
Regards.
--
Harsh J
keep concurrency.
why they get differences.
3. is there any advanced docs about HDFS Federation.
thanks.
Regards.
--
Harsh J
and switched the MapFile to an HBase table and see about 30% network used
(which makes sense, as now that 50GB data isn't always local).
What is going on here? How can I debug to see what data is being
transferred over the network?
--
Harsh J
.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
--
Harsh J
framework.
I'm using hadoop 1.0.3. and pig 0.10.0
I need some help around this.
Thanks!
Lucas
--
Harsh J
Abhishek
On Feb 22, 2013, at 1:03 AM, Harsh J ha...@cloudera.com wrote:
HDFS does not have such a client-side feature, but your applications
can use Apache Zookeeper to coordinate and implement this on their own
- it can be used to achieve distributed locking. While at ZooKeeper,
also checkout
very easy.
On Fri, Feb 22, 2013 at 5:17 AM, abhishek abhishek.dod...@gmail.com wrote:
Hello,
How can I impose a read lock for a file in HDFS,
so that only one user (or one application) can access a file in HDFS at any
point of time?
Regards
Abhi
--
--
Harsh J
become a part of Hadoop ?
Thanks a lot in advance.
Regards,
Nikhil
--
Harsh J
, 2013 at 2:14 PM, Harsh J ha...@cloudera.com wrote:
You can instead use 'fs -cat' and the 'head' coreutil, as one example:
hadoop fs -cat 100-byte-dfs-file | head -c 5 > 5-byte-local-file
On Wed, Feb 20, 2013 at 3:38 AM, jamal sasha jamalsha...@gmail.com
wrote:
Hi,
I was wondering
100-byte-dfs-file | tail -c 5 > 5-byte-local-file
will have to download the entire file.
Is there a way to jump into a certain position in a file and cat from
there?
JM
2013/2/20, Harsh J ha...@cloudera.com:
Hi JM,
I am not sure how dangerous it is, since we're using a pipe here
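The pipe patterns discussed above can be sketched with a local stand-in file (with HDFS, the producer would be `hadoop fs -cat /path/file` rather than reading the local file; note the tail variant still pulls the whole stream through the pipe):

```shell
# Local stand-in for an HDFS file
printf 'abcdefghij' > /tmp/sample-bytes

# First 5 bytes (HDFS form: hadoop fs -cat /path/file | head -c 5 > first5)
head -c 5 /tmp/sample-bytes > /tmp/first5

# Last 5 bytes (HDFS form: hadoop fs -cat /path/file | tail -c 5 > last5)
tail -c 5 /tmp/sample-bytes > /tmp/last5
```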
brilliant insights?
Thanks
Chris
--
Harsh J
??
Thanks !!
--
Cheers,
Mayur.
--
Harsh J
)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: com.sap
Regards,
samir.
--
--
Harsh J
specify the package
hierarchy of your main class. I knew it already and I am specifying it, but it
doesn't work.
I would be much obliged to anyone who helped me.
Regards,
--
Harsh J
--
Harsh J
To simplify my previous post, your IPs for the master/slave/etc. in
/etc/hosts file should match the ones reported by ifconfig always.
In proper deployments, IP is static. If IP is dynamic, we'll need to
think of some different ways.
On Tue, Feb 19, 2013 at 9:53 PM, Harsh J ha...@cloudera.com
Oops. I just noticed Hemanth has been answering on a dupe thread as
well. Lets drop this thread and carry on there :)
On Tue, Feb 19, 2013 at 11:14 PM, Harsh J ha...@cloudera.com wrote:
Hi,
The new error usually happens if you compile using Java 7 and try to
run via Java 6 (for example
.
-- Keith Wiley
--
Harsh J
hdfspath localpath
can we specify to copy not the full file but, say, x MB of it to the local drive?
Is something like this possible?
Thanks
Jamal
--
Harsh J
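There is no built-in flag for a partial copyToLocal, but the cat-and-head pipe works; a sketch using a local stand-in file (with HDFS the producer would be `hadoop fs -cat hdfspath`; sizes and paths are examples):

```shell
x=2                              # desired size in MB
bytes=$((x * 1024 * 1024))       # head -c takes a byte count

# Local stand-in for a 3 MB HDFS file
dd if=/dev/zero bs=1024 count=3072 > /tmp/big-standin 2>/dev/null

# HDFS form: hadoop fs -cat hdfspath | head -c "$bytes" > localpath
cat /tmp/big-standin | head -c "$bytes" > /tmp/partial-copy
```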
), or am I actually doing it wrong?
The Hadoop Version is 2.0.0-cdh4.1.2
Regards
Julian
--
Harsh J
-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoNotFoundException
--
Harsh J
details section is bad
--
Harsh J
(exec_str)
Now, I am trying to grab this output to do some manipulation on it.
For example, count the number of files?
I looked into the subprocess module, but these are not native shell
commands, hence I am not sure whether I can apply those concepts.
How to solve this?
Thanks
--
Harsh J
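One way to grab the listing for manipulation is subprocess.check_output, which captures stdout directly; a minimal sketch, assuming the `hadoop` binary is on the PATH (the parsing helper is illustrative and only skips the usual "Found N items" header):

```python
import subprocess

def count_listed_files(ls_output: str) -> int:
    """Count entries in `hadoop fs -ls`-style output."""
    lines = [l for l in ls_output.splitlines() if l.strip()]
    if lines and lines[0].startswith("Found"):
        lines = lines[1:]  # drop the "Found N items" header line
    return len(lines)

def count_hdfs_files(path: str) -> int:
    # Capture the hadoop CLI's stdout rather than letting it print.
    out = subprocess.check_output(["hadoop", "fs", "-ls", path], text=True)
    return count_listed_files(out)
```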
...@iastate.edu
--
Robert Dyer
rd...@iastate.edu
--
Robert Dyer
rd...@iastate.edu
--
Harsh J
://hadoop.6.n7.nabble.com/Running-hadoop-on-directory-structure-tp67904.html
Sent from the common-user mailing list archive at Nabble.com.
--
Harsh J
10.232.29.4:40031 got version 4 expected version 7
2013-02-13 12:16:33,181 INFO namenode.FSNamesystem - Roll Edit Log from
10.232.29.14
==
==
--
Harsh J
,
Raymond Liu
--
Harsh J
!
-- Homer Simpson
--
Harsh J
(Text.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new
org.apache.hadoop.fs.Path(args[0]));
final org.apache.hadoop.fs.Path output = new org.a
--
Harsh J
using RSA keys in pem format.
(It doesn't work)
ssh user@host
Permission denied (publickey).
(It works)
ssh -i ~/key.pem user@host
The nodes in mapreduce communicate using ssh. How do I configure ssh, or
mapreduce, to work with the pem file?
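To make plain `ssh user@host` pick up a pem identity (which the start/stop scripts rely on; the Hadoop daemons themselves communicate over TCP/RPC, not ssh), an entry in ~/.ssh/config on the node running the scripts should work; the host pattern and user below are assumptions:

```
# ~/.ssh/config (file mode 600); applies the key to every host matched
Host *
    User user
    IdentityFile ~/key.pem
```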
--
Best regards,
P
--
Harsh J
reason why either of these jobs should suddenly, and without
jobs being run, increase their consumption of resources in a serious way?
--
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com
--
Harsh J
none of which work.
--
Harsh J
I was incorrect here: MR2 does support this; I failed to look for the
right constant reference and there were two.
On Wed, Feb 13, 2013 at 11:32 PM, Harsh J ha...@cloudera.com wrote:
What version are you specifically asking about?
The MR2 (2.x) does not have this anymore in use (regression
it is not that. An INFRA issue?
On Thu, Feb 14, 2013 at 12:43 AM, Mayank Bansal may...@apache.org wrote:
HI Guys,
All the documentation links are broken on apache.
http://hadoop.apache.org/docs/r0.20.2/
Does anybody know how to fix this?
Thanks,
Mayank
--
Harsh J
, maintaining a distributed database such as HBase can't be
justified.
Many thanks.
Cao
--
Harsh J
My reply to your questions is inline.
On Wed, Feb 13, 2013 at 10:59 AM, Harsh J ha...@cloudera.com wrote:
Please do not use the general@ lists for any user-oriented questions.
Please redirect them to user@hadoop.apache.org lists, which is where
the user community and questions lie.
I've
:
mapred.tasktracker.map.tasks.maximum 4
mapred.tasktracker.reduce.tasks.maximum 2
Should I change the parameters on hadoop XML configuration files?
Yes, as these are per *tasktracker* properties, not client ones.
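Per the answer above, these are tasktracker-side settings, so they go into mapred-site.xml on each TaskTracker node (followed by a TaskTracker restart); a sketch using the values from the thread:

```xml
<!-- mapred-site.xml on every TaskTracker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```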
Please advise.
--
Harsh J