we apologize if you receive multiple copies of this message
===
CALL FOR PAPERS
2013 Workshop on
Middleware for HPC and Big Data Systems
MHPC '13
as part of Euro-Par 2013, Aachen, Germany
Hi
I am new to the Hadoop world. Could you please tell me what the Hadoop stack is?
Thanks,
Burberry
On Mon, Apr 22, 2013 at 10:19 AM, Keith Wiley kwi...@keithwiley.com wrote:
Simple question: When I issue a hadoop fs -du command and/or when I view
the namenode web UI to see HDFS disk
Hi Keith,
The fs -du command computes the length of files and does not report the
replicated on-disk size. HDFS disk utilization, on the other hand, is a
simple report of used/free disk space, which does include replicated data.
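For example, the two views can be compared directly (the path below is a placeholder, not from the original message):

  hadoop fs -du /user/keith/data   # per-file logical lengths; replication not counted
  hadoop dfsadmin -report          # cluster-wide used/free space, including replicas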
On Mon, Apr 22, 2013 at 10:49 PM, Keith Wiley kwi...@keithwiley.com
To change the MR AM's default log level from INFO, set the job config:
yarn.app.mapreduce.am.log.level to DEBUG or whatever level you
prefer.
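For example, assuming the driver uses ToolRunner/GenericOptionsParser, the property can be set per job on the command line (jar, class, and path names here are placeholders):

  hadoop jar my-job.jar MyDriver -D yarn.app.mapreduce.am.log.level=DEBUG input/ output/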
On Mon, Apr 22, 2013 at 6:21 PM, Nitzan Raanan
raanan.nit...@comverse.com wrote:
Hi
How do I enable the DEBUG level for the YARN application master process?
Hi Everyone
Today I am testing about 2 TB of data on my cluster; there are several failed
map tasks and reduce tasks on the same node.
Here is the log
Map failed:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any
valid local directory for output/spill0.out
at
Does your node5 have adequate free space and a proper multi-disk
mapred.local.dir configuration set on it?
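For reference, a minimal mapred-site.xml sketch spreading task-local space across two disks (the mount points are illustrative, not from the original message):

  <property>
    <name>mapred.local.dir</name>
    <value>/disk1/mapred/local,/disk2/mapred/local</value>
  </property>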
On Tue, Apr 23, 2013 at 12:41 PM, 姚吉龙 geelong...@gmail.com wrote:
Hi Everyone
Today I am testing about 2 TB of data on my cluster; there are several failed
map tasks and reduce tasks on the same node.
Thanks
That worked!
BR
Raanan Nitzan
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Tuesday, April 23, 2013 9:42 AM
To: user@hadoop.apache.org
Subject: Re: How to open DEBUG level for YARN application master ?
To change the MR AM's default log level from INFO, set
Hello,
Sorting is done by the sort comparator, which sorts based on the
value of the key. A possible solution would be the following:
you could write a custom class that implements
WritableComparable (let's call it MyCompositeFieldWritableComparable), and that will
store
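A minimal sketch of such a composite key, assuming two Text fields (field names are illustrative, not from the original message):

  import java.io.DataInput;
  import java.io.DataOutput;
  import java.io.IOException;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.io.WritableComparable;

  public class MyCompositeFieldWritableComparable
      implements WritableComparable<MyCompositeFieldWritableComparable> {
    private final Text primary = new Text();    // field the partitioner groups on
    private final Text secondary = new Text();  // field that drives the sort order

    public void set(String p, String s) { primary.set(p); secondary.set(s); }

    @Override public void write(DataOutput out) throws IOException {
      primary.write(out);
      secondary.write(out);
    }

    @Override public void readFields(DataInput in) throws IOException {
      primary.readFields(in);
      secondary.readFields(in);
    }

    @Override public int compareTo(MyCompositeFieldWritableComparable o) {
      int cmp = primary.compareTo(o.primary);                    // sort on primary first,
      return cmp != 0 ? cmp : secondary.compareTo(o.secondary);  // then on secondary
    }

    @Override public int hashCode() { return primary.hashCode(); }

    @Override public boolean equals(Object o) {
      return o instanceof MyCompositeFieldWritableComparable
          && compareTo((MyCompositeFieldWritableComparable) o) == 0;
    }
  }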
Hi,
Just a question about the implementation of Map/Reduce.
I've been thinking about the output of the map stage.
Logically all of the records emitted by the mapper have to be partitioned and
sorted before they go into the reducers. (We can ignore the partitioning for the
moment and so I'm just
I have set two disks available for temp files: one is /usr, another is /sda.
But I found that the first, /usr, is full while /sda has not been used.
Why would this happen, especially when the first path is full?
[image: inline image 1]
2013/4/23 Harsh J ha...@cloudera.com
Does your node5 have adequate free space
Sent from Samsung Mobile
Lol
On Apr 23, 2013 10:39 AM, Gustavo Ioschpe gustavo.iosc...@bigdata.inf.br
wrote:
Sent from Samsung Mobile
+ mapred dev
On Tue, Apr 16, 2013 at 2:19 PM, Rahul Bhattacharjee
rahul.rec@gmail.com wrote:
Hi,
I have a question related to Hadoop's InputSampler, which is used for
investigating the data set beforehand using random selection, sampling,
etc. It is mainly used for total sort, used in
Asking for help! I'm facing the problem of "no datanode to stop". The namenode
has been started but the datanode can't be started. What should I do on the
namenode and datanode? Thank you very much.
2013/4/19 超级塞亚人 shel...@gmail.com
I have a problem. Our cluster has 32 nodes. Each disk is 1TB. I wanna
Hi,
I'm getting my hands on Hadoop. One thing I really want to know is how you
launch MR jobs in a development environment.
I'm currently using Eclipse 3.7 with the Hadoop plugin from Hadoop 1.0.2. With
this plugin I can manage HDFS and submit jobs to the cluster. But the strange
thing is, every job
You need to generate a jar file, pass any parameters that are not fixed at run
time, and run it with Hadoop, like: hadoop jar jarfilename.jar parameters
Thanks & Regards
∞
Shashwat Shriparv
On Tue, Apr 23, 2013 at 6:51 PM, Han JU ju.han.fe...@gmail.com wrote:
Hi,
I'm getting my hands on
Regards,
Neeraj Mahajan
Hello Han,
The reason behind this is that the jobs are running inside
Eclipse itself and not getting submitted to your cluster. Please see if
this link helps:
http://cloudfront.blogspot.in/2013/03/mapreduce-jobs-running-through-eclipse.html#.UXaQsDWH6IQ
Warm Regards,
Tariq
You need to send the request to this address :
user-unsubscr...@hadoop.apache.org
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 23, 2013 at 7:14 PM, neeraj.maha...@absicorp.com wrote:
Regards,
Neeraj Mahajan
Hi there,
Could you please show me your config files and DN error logs?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 23, 2013 at 4:35 PM, 超级塞亚人 shel...@gmail.com wrote:
Asking for help! I'm facing the problem of "no datanode to stop". The namenode
has been
On Tue, Apr 23, 2013 at 9:23 PM, Mohammad Tariq donta...@gmail.com wrote:
What should I do on namenode and datanode? Thank you very much
As Tariq has asked, can you provide datanode log snapshots?
Thanks & Regards
∞
Shashwat Shriparv
Transferring to user list (hdfs-dev bcc'd).
Hi Kevin,
The datanodes are definitely more disposable than the namenodes. If a
Sqoop command unexpectedly consumes a lot of resources, then stealing
resources from the namenode could impact performance of the whole cluster.
Stealing resources from a
MR has a local mode that does what you want. Pig has the ability to use this
mode. I did a quick search but didn't immediately find a good link to
documentation, but hopefully this gets you going in the right direction.
Daryn
On Apr 22, 2013, at 6:01 PM, David Gkogkritsiani wrote:
Hello,
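For reference, forcing the local mode Daryn describes in a Hadoop 1.x driver might look like this minimal sketch (the property names are the standard Hadoop 1.x ones; the class is illustrative, not from the original message):

  import org.apache.hadoop.conf.Configuration;

  public class LocalModeConf {
    public static Configuration create() {
      Configuration conf = new Configuration();
      conf.set("mapred.job.tracker", "local");  // run map/reduce in-process (Hadoop 1.x)
      conf.set("fs.default.name", "file:///");  // use the local filesystem
      return conf;
    }
  }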
I execute the line:
sqoop import --connect
'jdbc:sqlserver://nbreports:1433;databaseName=productcatalog' --username
USER --password PASSWORD --table CatalogProducts
And I get the following output:
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME
Hi Kevin,
1. What's the output from hadoop fs -cat CatalogProducts/part-m-0
2. Can you re-run with the --verbose option - i.e.
sqoop import --connect
'jdbc:sqlserver://nbreports:1433;databaseName=productcatalog'
--username USER --password PASSWORD --table CatalogProducts
--verbose
If
Hi
Sorry to interrupt you, but nobody answered my question on the Hadoop mailing
list. I have met an issue after I changed the content of hdfs-site.xml to add
another dfs.data.dir in my cluster. /usr/hadoop/tmp/dfs/data is the default
value; /sda is the new one:
  <property>
    <name>data.dfs.dir</name>
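For reference, the Hadoop 1.x property name is spelled dfs.data.dir (note the word order), and multiple directories are given comma-separated; a sketch, with an illustrative data subdirectory under /sda:

  <property>
    <name>dfs.data.dir</name>
    <value>/usr/hadoop/tmp/dfs/data,/sda/dfs/data</value>
  </property>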
Hi, I would like to know how much memory our data takes on the namenode per
block, file, and directory.
For example, the metadata size of a file.
When I store some files in HDFS, how can I get the memory size taken on the
namenode?
Are there any tools or commands to test the memory size taken on
Thanks for the reply.
I will try to implement it. I think there is a problem in my case, where I have
modified the mapper's write function, context.write, and tried to write the same
key-value pair multiple times. Also, for this purpose I have modified the
partitioner class; my partitioner class doesn't return a single value
I implemented my own InputFormat/RecordReader, and I am trying to run it with
Hadoop Pipes. I understand I could pass properties to a Pipes program by
either:

  <property>
    <name>hadoop.pipes.java.recordreader</name>
    <value>false</value>
  </property>

or alternatively -D
Hi Rahul,
The limitation of using InputSampler is that K and OK (I mean the
map INKEY and OUTKEY) must both be of the same type.
Technically, this is because while collecting the samples (i.e., an
array list of keys) in the writePartitionFile method, it uses the INKEY as the
key. And for writing
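A minimal sketch of that wiring on the newer org.apache.hadoop.mapreduce API, using Text keys end-to-end so the INKEY and OUTKEY types match (paths and sampler settings are illustrative, not from the original message):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
  import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
  import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

  public class TotalSortSetup {
    public static void main(String[] args) throws Exception {
      Job job = new Job(new Configuration(), "total-sort");
      job.setInputFormatClass(KeyValueTextInputFormat.class); // Text input keys
      job.setMapOutputKeyClass(Text.class);                   // INKEY == OUTKEY
      job.setPartitionerClass(TotalOrderPartitioner.class);
      FileInputFormat.addInputPath(job, new Path("input"));
      TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
          new Path("_partitions"));
      // Sample ~1% of input keys, up to 1000 samples, to pick partition boundaries.
      InputSampler.writePartitionFile(job,
          new InputSampler.RandomSampler<Text, Text>(0.01, 1000));
    }
  }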
Hi,
I have observed that there are multiple ways to write the driver method of a
Hadoop program.
The following method is given in the Hadoop Tutorial by Yahoo:
http://developer.yahoo.com/hadoop/tutorial/module4.html
public void run(String inputPath, String outputPath) throws Exception {
JobConf conf =
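For contrast, a minimal sketch of the other common style: a ToolRunner-based driver on the newer org.apache.hadoop.mapreduce API (class and job names are illustrative, not from the original message):

  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  public class MyDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
      Job job = new Job(getConf(), "my-job");  // new-API equivalent of JobConf setup
      job.setJarByClass(MyDriver.class);
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));
      return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
      // ToolRunner parses generic options such as -D before calling run().
      System.exit(ToolRunner.run(new MyDriver(), args));
    }
  }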