There is a @Metric MutableCounterLong bytesWritten attribute in
DataNodeMetrics; is it used for IO/sec statistics?
2013/8/31 Jitendra Yadav jeetuyadav200...@gmail.com
Hi,
For IO/sec statistics I think MutableCounterLongRate and
MutableCounterLong are more useful than the others, and for xceiver
Hi,
I am trying to import a table from Oracle into HDFS. I am getting the
following error:
ERROR manager.SqlManager: Error executing statement:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not
establish the connection
Also, if both are defined, the framework will use the RawComparator. I hope
you have registered the comparator in a static block as follows:
static {
    WritableComparator.define(PairOfInts.class, new Comparator());
}
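To see why a raw comparator can skip deserialization entirely, here is a minimal sketch in plain Java (no Hadoop dependency), assuming PairOfInts.write(out) emits its two ints via DataOutput, which writes big-endian. For non-negative ints, unsigned lexicographic byte order then matches numeric order, so the comparator can work directly on the serialized bytes:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: compare two serialized (int, int) pairs directly on their bytes,
// the way a Hadoop RawComparator would, without deserializing them first.
class RawPairCompare {
    // Produce the same byte layout that a PairOfInts.write(DataOutput)
    // writing two ints would produce (big-endian, 8 bytes total).
    static byte[] serialize(int a, int b) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(a);
        out.writeInt(b);
        return bos.toByteArray();
    }

    // Unsigned lexicographic comparison over the raw bytes.
    // Note: this matches numeric order only for non-negative ints,
    // because the sign bit inverts the byte ordering for negatives.
    static int compareBytes(byte[] x, byte[] y) {
        int n = Math.min(x.length, y.length);
        for (int i = 0; i < n; i++) {
            int cmp = (x[i] & 0xff) - (y[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return x.length - y.length;
    }
}
```

The saving is exactly that no object allocation or readFields call happens per comparison during the sort.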
Regards
Ravi Magham
On Sat, Aug 31, 2013 at 1:23 PM, Ravi Kiran
Hi ,
Can you check if you are able to ping or telnet to the IP address and
port of the Oracle database from your machine. I have a hunch that the Oracle
listener is stopped. If so, start it.
The commands to check the status, and to start the listener if it isn't running:
$ lsnrctl status
$ lsnrctl start
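If telnet isn't available, the same reachability check can be done programmatically. A small sketch (the host and port below are placeholders; substitute your Oracle server's address and listener port, typically 1521):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: programmatic equivalent of "telnet host port" — returns true
// only if a TCP connection to the listener can be established.
class ListenerCheck {
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;   // something accepted the connection
        } catch (Exception e) {
            return false;  // refused, timed out, or unresolvable host
        }
    }
}
```

A false result here matches the "Network Adapter could not establish the connection" symptom: the JDBC driver never got a TCP connection to the listener.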
Thank you for your help Shahab.
I guess I wasn't being too clear. My logic is that I use a custom type as
key and in order to deserialize it on the compute nodes, I need an extra
piece of information (also a custom type).
To use an analogy, a Text is serialized by writing the length of the
: java.net.UnknownHostException: bdatadev
edit your /etc/hosts file
Regards,
Som Shekhar Sharma
+91-8197243810
On Sat, Aug 31, 2013 at 2:05 AM, Narlin M hpn...@gmail.com wrote:
Looks like I was pointing to incorrect ports. After correcting the port
numbers,
conf.set("fs.defaultFS",
Adeel,
To add to Yong's points
a) Consider tuning the number of threads in the reduce tasks and the task
tracker process (mapred.reduce.parallel.copies).
b) See if the map output can be compressed to ensure there is less IO.
c) Increase io.sort.factor to ensure the framework merges a
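A sketch of how these knobs might look in mapred-site.xml — the property names are the old-style ones from this era, and the values below are purely illustrative, not recommendations:

```xml
<!-- Illustrative values only; tune for your hardware and job profile. -->
<property>
  <name>mapred.reduce.parallel.copies</name>
  <value>10</value> <!-- parallel fetch threads per reduce during shuffle -->
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value> <!-- compress intermediate map output to cut shuffle IO -->
</property>
<property>
  <name>io.sort.factor</name>
  <value>50</value> <!-- number of streams merged at once during the sort -->
</property>
```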
Hello Hadoopers,
The default port for a datanode is 50075. I am able to change the namenode
default port by changing
dfs.namenode.http-address.ns1 and dfs.namenode.http-address.ns2 in the
hdfs-site.xml of my 2 namenodes.
How do I change the default port of my multiple datanodes?
Let's say that
you have some machines in Europe and some in the US. I think you just need the
IPs and to configure them in your cluster setup;
it will work...
On Sat, Aug 31, 2013 at 7:52 AM, Jun Ping Du j...@vmware.com wrote:
Hi,
Although you can set a datacenter layer in your network topology,
I would, but bdatadev is not one of my servers; it seems like a random
host name. I can't figure out how or where this name got generated. That's
what's puzzling me.
On 8/31/13 5:43 AM, Shekhar Sharma shekhar2...@gmail.com wrote:
: java.net.UnknownHostException: bdatadev
edit your /etc/hosts
The server_address that was mentioned in my original post is not
pointing to bdatadev. I should have mentioned this in my original post,
sorry I missed that.
On 8/31/13 8:32 AM, Narlin M hpn...@gmail.com wrote:
I would, but bdatadev is not one of my servers, it seems like a random
host name. I
Can you please check whether you are able to access HDFS using the Java
API, and whether you are also able to run an MR job.
Regards,
Som Shekhar Sharma
+91-8197243810
On Sat, Aug 31, 2013 at 7:08 PM, Narlin M hpn...@gmail.com wrote:
The server_address that was mentioned in my original post is not
pointing to
The only problem, I guess, is that Hadoop won't be able to replicate data from one
data center to another, but I guess I can identify datanodes or namenodes
from another data center. Correct me if I am wrong.
On Sat, Aug 31, 2013 at 7:00 PM, Visioner Sadak visioner.sa...@gmail.comwrote:
lets say that
you
Your cluster is using HDFS HA, and therefore requires a few more
configs than just fs.defaultFS, etc.
You need to use the right set of cluster client configs. If you don't
have them at /etc/hadoop/conf and /etc/hbase/conf on your cluster edge
node to pull from, try asking your cluster
Looking at the hdfs-default.xml should help with such questions:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
The property you need is dfs.datanode.http.address
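For example, hdfs-site.xml on a datanode might carry something like the following — the port 50080 here is just an example value (the default is 50075):

```xml
<property>
  <name>dfs.datanode.http.address</name>
  <!-- 0.0.0.0 binds all interfaces; only the port is being changed here -->
  <value>0.0.0.0:50080</value>
</property>
```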
On Sat, Aug 31, 2013 at 6:47 PM, Visioner Sadak
visioner.sa...@gmail.com wrote:
Hello
Thanks Harsh. For a cluster, should I enter multiple IP addresses under
the dfs.datanode.http.address tag,
as I have 4 datanodes?
On Sat, Aug 31, 2013 at 9:44 PM, Harsh J ha...@cloudera.com wrote:
Looking at the hdfs-default.xml should help with such questions:
Thanks for the information. So is the reason the raw comparator is
faster that we can use the bytes to do the comparison? So if I use
the signature of compareTo in my raw comparator that receives two
WritableComparable objects
public int compare(WritableComparable a,
You can maintain per-DN configs if you wish to restrict the HTTP
server to only the public IP, but otherwise use a wildcard
0.0.0.0:PORT, if you were only just looking to change the port.
On Sat, Aug 31, 2013 at 9:49 PM, Visioner Sadak
visioner.sa...@gmail.com wrote:
thanks harsh for a cluster
cool thanks a ton harsh!!!
On Sat, Aug 31, 2013 at 9:53 PM, Harsh J ha...@cloudera.com wrote:
You can maintain per-DN configs if you wish to restrict the HTTP
server to only the public IP, but otherwise use a wildcard
0.0.0.0:PORT, if you were only just looking to change the port.
On Sat,
What do you think, friends? I think Hadoop clusters can run on multiple data
centers using federation.
On Sat, Aug 31, 2013 at 8:39 PM, Visioner Sadak visioner.sa...@gmail.comwrote:
The only problem, I guess, is that Hadoop won't be able to replicate data from one
data center to another, but I guess I can
Hi John
This exception indicates an error from the container process. If the
container process exits with a non-zero exit code, it will be logged.
In case of such errors, you'd better look at the per-container log to see
what's happening there.
Jian
On Fri, Aug 30, 2013 at 10:03 AM, Jian Fang
Yes, MutableCounterLong helps to gather DataNode read/write statistics.
There are more options available within this metric.
Regards
Jitendra
On 8/31/13, lei liu liulei...@gmail.com wrote:
There is @Metric MutableCounterLong bytesWritten attribute in
DataNodeMetrics, which is used to IO/sec
I want to write a custom WritableComparable object with two List objects
within it...
public class CompositeKey implements WritableComparable {
private List<JsonKey> groupBy;
private List<JsonKey> sortBy;
...
}
What I am not sure about is how to write the
readFields and write methods for this object.
Please send email to:
user-subscr...@hadoop.apache.org
On Sat, Aug 31, 2013 at 12:36 PM, Surendra , Manchikanti
surendra.manchika...@gmail.com wrote:
-- Surendra Manchikanti
The idea behind write(…) and readFields(…) is simply that of ordering.
You need to write your custom objects (i.e. a representation of them)
in order, and read them back in the same order.
An example way of serializing a list would be to first serialize the
length (so you know how many you'll be
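The length-first pattern described above can be sketched in plain Java with DataOutputStream/DataInputStream (no Hadoop needed; a List<String> stands in for List<JsonKey>, whose elements would each call their own write/readFields):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: serialize a list by writing its length first, then each element
// in order; reading reverses the exact same sequence.
class ListSerDe {
    static byte[] writeList(List<String> items) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(items.size());      // length first, so the reader
        for (String s : items) {         // knows how many elements follow
            out.writeUTF(s);             // a JsonKey would do s.write(out)
        }
        return bos.toByteArray();
    }

    static List<String> readList(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int n = in.readInt();            // read the length back first
        List<String> items = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            items.add(in.readUTF());     // a JsonKey would do k.readFields(in)
        }
        return items;
    }
}
```

In a CompositeKey.write/readFields you would simply apply this pattern twice, once for groupBy and once for sortBy, in the same order on both sides.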
Personally, I don't know a way to access job configuration parameters in
a custom implementation of Writables (at least not an elegant and
appropriate one; of course hacks of various kinds can be done). Maybe experts
can chime in?
One idea that I thought about was to use MapWritable (if you have not