Dear developer,
I am looking for a solution where I can apply the SingleColumnValueFilter
to select only rows whose value equals the value I mention in the value
parameter, and nothing other than the value I pass.
Example:
SingleColumnValueFilter colValFilter = new
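The snippet above is cut off. For what it's worth, the usual HBase recipe for "only rows where this column equals this value" is a SingleColumnValueFilter built with CompareOp.EQUAL, plus filter.setFilterIfMissing(true) so rows lacking the column are dropped as well. As a hedge against API drift, here is the intended row-selection logic sketched in plain Java (no HBase classes; all names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EqualsValueFilterSketch {
    // Keep only the rows whose entry for 'col' equals 'wanted' --
    // the behaviour SingleColumnValueFilter with CompareOp.EQUAL plus
    // setFilterIfMissing(true) gives on a real HBase scan.
    static List<Map<String, String>> filterRows(List<Map<String, String>> rows,
                                                String col, String wanted) {
        return rows.stream()
                   .filter(r -> wanted.equals(r.get(col)))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = Arrays.asList(
                Map.of("cf:status", "ACTIVE"),
                Map.of("cf:status", "DELETED"));
        // Only the ACTIVE row survives the filter.
        System.out.println(filterRows(rows, "cf:status", "ACTIVE"));
    }
}
```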
Dear Hadoop/HBase developers,
Has anyone worked with HBase MapReduce using multiple tables as input?
Any URL, link, or example would help me a lot.
Thanks in advance.
Thanks,
samir.
connector --all
Exception has occurred during processing command
Exception: com.sun.jersey.api.client.ClientHandlerException Message:
java.net.ConnectException: Connection refused
Regards,
samir.
On Tue, Oct 8, 2013 at 12:16 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Dear Sqoop users,
Dear All,
I am getting an error like the one mentioned below; has anyone else hit this with Sqoop 2?
Error:
sqoop:000 set server --host hostname1 --port 8050 --webapp sqoop
Server is set successfully
sqoop:000 show server -all
Server host: hostname1
Server port: 8050
Server webapp: sqoop
sqoop:000 show version --all
Dear Hadoop/Sqoop users,
Is there any way to call a Sqoop command without hard-coding the password for
the specific RDBMS? If we hard-code the password, it will be a huge
security issue.
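Two documented Sqoop options avoid a hard-coded password: `-P` prompts for it at runtime, and newer Sqoop 1.x releases (1.4.4+) also support `--password-file`, which reads it from a file locked down to the submitting user. Host, database, and path names below are placeholders:

```
# Prompt interactively (nothing stored in scripts or shell history):
sqoop import --connect jdbc:mysql://dbhost/dbname --username dbuser -P --table mytable

# Or read the password from a protected file (chmod 400 it first):
sqoop import --connect jdbc:mysql://dbhost/dbname --username dbuser \
  --password-file /user/dbuser/.db.password --table mytable
```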
Regards,
samir.
Dear Hive/Hadoop developers,
I was running a Hive map-side join, and along with the output data I could
see some empty files from the map stage. Why is that, and how can I ignore these files?
Regards,
samir.
Dear All,
Did anyone face this issue?
While loading a huge dataset into a Hive table, Hive restricts me from
querying the same table.
I have set hive.support.concurrency=true, but it still shows
conflicting lock present for TABLENAME mode SHARED
<property>
  <name>hive.support.concurrency</name>
Dear All,
Has anyone faced this type of issue?
I am getting an error while processing a Sequence file with LZO compression
in a Hive query, on the CDH 4.3.x distribution.
Error logs:
SET hive.exec.compress.output=true;
SET
Hi all,
How can I get the map output file name inside the mapper?
Or:
how can I change the map output file name?
By default it looks like part-m-00000, part-m-00001, etc.
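For reference, the default names come from a fixed pattern ("part-" plus a task-type letter plus a zero-padded task id), built inside Hadoop's FileOutputFormat. A tiny plain-Java illustration of that pattern (the helper name is mine, not Hadoop API):

```java
public class OutputNameSketch {
    // Hadoop's FileOutputFormat derives the default file name from the
    // task id: "part-" + task-type letter ("m" for map) + 5-digit id.
    static String defaultName(int taskId) {
        return String.format("part-m-%05d", taskId);
    }

    public static void main(String[] args) {
        System.out.println(defaultName(0));  // part-m-00000
        System.out.println(defaultName(1));  // part-m-00001
    }
}
```

To actually change the base name, the usual suggestion is the MultipleOutputs class (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs), whose write(...) overload taking a baseOutputPath lets a mapper write files under names other than "part".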
Regards,
samir.
Hi All,
I am able to connect to the Hadoop (source) cluster once SSH is
established.
But I wanted to know: if I want to pull some data using distcp from the
secured source Hadoop box to another Hadoop cluster, I am not able to ping
the NameNode machine. In this setup, how do I run distcp and how am I
able to see the HDFS on server1 from server2?
On Tue, May 28, 2013 at 5:17 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I am able to connect to the Hadoop (source) cluster once SSH is
established.
But I wanted to know: if I want to pull some data using distcp from
Hi all,
We tried to pull data from the upstream cluster (CDH3) to the
downstream system (running CDH4), using distcp to copy the data;
it was throwing an exception because of the version difference.
I wanted to know if there is any solution to pull the data from CDH3 to CDH4.
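A commonly documented workaround for copying between incompatible HDFS versions is to run distcp on the destination (CDH4) cluster and read the source over the version-independent HFTP interface. Host names below are placeholders (50070 is the usual CDH3 NameNode HTTP port):

```
# Run this on the CDH4 (destination) cluster:
hadoop distcp hftp://cdh3-namenode:50070/path/on/source \
              hdfs://cdh4-namenode:8020/path/on/dest
```

HFTP is read-only, which is fine here since the copy only reads from CDH3.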
/ops_mgt.html#copytable
What kind of help do you need?
JM
2013/3/20 samir das mohapatra samir.help...@gmail.com:
Hi All,
Can you help me to copy one hbase table to another cluster hbase
(Table
copy) .
Regards,
samir
Simply open org.apache.hadoop.hbase.mapreduce.CopyTable, look into it, and do
almost the same thing for your needs?
JM
2013/3/20 samir das mohapatra samir.help...@gmail.com:
Thanks for the reply.
I need to copy the HBase table into another cluster through Java
code.
Any example?
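For what it's worth, CopyTable can also be driven from the command line without writing Java; the ZooKeeper quorum and table name below are placeholders:

```
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase mytable
```

From Java, invoking the same class through ToolRunner is the usual route, and its source is a good template for a custom copy job.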
...@gmail.com wrote:
Use distcp.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Thu, Mar 14, 2013 at 3:40 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Regards,
samir.
samir das mohapatra samir.help...@gmail.com wrote:
How can I pull delta data, i.e. filtered data rather than the whole dataset? As of
now I know we can copy the whole dataset through distcp; could you please correct
me if I am wrong, or suggest any other way to pull efficiently?
For example: get data based on a filter condition.
Hi All,
Is there any way to get notified by HBase once some record gets
updated, like a database trigger?
Regards,
samir.
You can use a custom Partitioner for that.
Regards,
Samir.
On Wed, Mar 13, 2013 at 2:29 PM, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hi
I am specifying the requirement again with an example.
I have a use case where I need to shuffle the same (key, value) pair to multiple
reducers.
For
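Since a stock Partitioner maps each key to exactly one reducer, the common workaround for "same (key, value) pair to multiple reducers" is to emit the pair once per target reducer with a replica tag, and partition on the tag. A plain-Java sketch of that idea (no Hadoop classes; all names are mine, not Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

public class ReplicatingPartitionSketch {
    // Mapper side: emit ("key#i", value) once for each target replica i.
    static List<String[]> replicate(String key, String value, int copies) {
        List<String[]> out = new ArrayList<>();
        for (int i = 0; i < copies; i++) {
            out.add(new String[] { key + "#" + i, value });
        }
        return out;
    }

    // Partitioner side: the replica tag decides the reducer, so the same
    // logical key lands on 'copies' different reducers.
    static int getPartition(String taggedKey, int numReducers) {
        int tag = Integer.parseInt(taggedKey.substring(taggedKey.indexOf('#') + 1));
        return tag % numReducers;
    }

    public static void main(String[] args) {
        for (String[] kv : replicate("k1", "v1", 3)) {
            System.out.println(kv[0] + " -> reducer " + getPartition(kv[0], 4));
        }
    }
}
```

In a real job the replicate step goes in the Mapper and getPartition in a class extending Hadoop's Partitioner; the reducer would strip the "#i" suffix before processing.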
Hi All,
I have a very fundamental doubt: I have a file of size 1.5 KB and the block
size is the default, but I could see that two mappers got created during
the job. Could you please help me understand the whole picture of why this is?
Regards,
samir.
Through the RecordReader and FileStatus you can get it.
On Tue, Mar 12, 2013 at 4:08 PM, Roth Effy effyr...@gmail.com wrote:
Hi, everyone,
I want to join the k-v pairs in reduce(), but how do I get the record
position?
For now, what I thought of is to save the context status, but the Context class doesn't
The problem I can see in your log file is: no free map slot is available for the
job.
I think you have to increase the block size to reduce the number of map tasks,
because you are passing big data as input.
The ideal approach is to first increase:
1) the block size,
2) the map-side sort buffer,
3) JVM reuse, etc.
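On an MRv1-era (CDH3/CDH4) cluster, the three knobs above correspond roughly to the properties below; the values are illustrative, not recommendations:

```xml
<!-- hdfs-site.xml: larger blocks => fewer splits => fewer map tasks -->
<property>
  <name>dfs.block.size</name>
  <value>268435456</value> <!-- 256 MB -->
</property>

<!-- mapred-site.xml: map-side sort buffer -->
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>

<!-- mapred-site.xml: reuse one JVM for many tasks of a job -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value> <!-- unlimited reuse -->
</property>
```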
Austin,
I think you have to use a partitioner to spawn more than one reducer for a
small data set.
The default partitioner will give you only one reducer; you have to
override it and implement your own logic to spawn more than one reducer.
On Tue, Mar 5, 2013 at 1:27 AM, Austin Chungath
Any help...
On Fri, Mar 1, 2013 at 12:06 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I am facing one problem: how do I specify the schema name before the
table name when executing the sqoop import statement?
$ sqoop import --connect jdbc:sap://host:port/db_name --driver
A few more things:
the same setup was working on an Ubuntu machine (dev cluster); it is only failing under
CentOS 6.3 (prod cluster).
On Thu, Feb 28, 2013 at 9:06 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I am facing a strange issue: in a cluster of 1k machines I
could
Hi All,
I am facing one problem: how do I specify the schema name before the table
name when executing the sqoop import statement?
$ sqoop import --connect jdbc:sap://host:port/db_name --driver
com.sap.db.jdbc.Driver --table SchemaName.Test -m 1 --username
--password
Hi All,
Can anyone share an example of how to run a Sqoop import on the results of a SQL
statement?
For example:
sqoop import --connect jdbc:. --driver xxx
After this, if I specify a --query select statement, it is not even
recognized as a valid Sqoop statement.
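A sketch of the documented free-form-query syntax: with --query, Sqoop requires the literal token $CONDITIONS in the WHERE clause, a --target-dir, and either --split-by or -m 1. Connection details below are placeholders:

```
sqoop import \
  --connect jdbc:mysql://dbhost/dbname --username dbuser -P \
  --query 'SELECT a.id, a.name FROM accounts a WHERE $CONDITIONS' \
  --split-by a.id \
  --target-dir /user/dbuser/accounts
```

Note that --query cannot be combined with --table, which is a common cause of "not recognized" errors.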
Hi All,
Using Sqoop, how can I pull an entire database into HDFS instead of going table
by table?
How did you do it?
Is there some trick?
Regards,
samir.
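Sqoop's import-all-tables tool is built for exactly this; connection details below are placeholders:

```
sqoop import-all-tables \
  --connect jdbc:mysql://dbhost/dbname --username dbuser -P \
  --warehouse-dir /user/dbuser/warehouse
```

Each table lands in its own directory under the warehouse dir; tables generally need single-column primary keys (or run with -m 1).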
thanks all.
On Wed, Feb 27, 2013 at 4:41 PM, Jagat Singh jagatsi...@gmail.com wrote:
You might want to read this
http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_literal_sqoop_import_all_tables_literal
On Wed, Feb 27, 2013 at 10:09 PM, samir das mohapatra
samir.help
...
Sent from a remote device. Please excuse any typos...
Mike Segel
On Feb 27, 2013, at 5:15 AM, samir das mohapatra samir.help...@gmail.com
wrote:
thanks all.
On Wed, Feb 27, 2013 at 4:41 PM, Jagat Singh jagatsi...@gmail.com wrote:
You might want to read this
http://sqoop.apache.org
-- Forwarded message --
From: samir das mohapatra samir.help...@gmail.com
Date: Mon, Feb 25, 2013 at 3:05 PM
Subject: ISSUE IN CDH4.1.2: transferring data between different HDFS
clusters (using distcp)
To: cdh-u...@cloudera.org
Hi All,
I am getting the below error; can anyone help?
yes
On Mon, Feb 25, 2013 at 3:30 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
Does this match your issue?
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/kIPOvrFaQE8
On Mon, Feb 25, 2013 at 3:20 PM, samir das mohapatra
samir.help...@gmail.com wrote
I am using CDH4.1.2 with MRv1 not YARN.
On Mon, Feb 25, 2013 at 3:47 PM, samir das mohapatra
samir.help...@gmail.com wrote:
yes
On Mon, Feb 25, 2013 at 3:30 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
Does this match your issue?
https://groups.google.com/a/cloudera.org/forum
This looks more like a SAP-side fault than a Hadoop-side one, and you should
ask on their forums with the stack trace posted.
On Thu, Feb 21, 2013 at 11:58 AM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
Can you please tell me why I am getting an error while loading data from
SAP HANA?
Hi All,
I wanted to know how to connect Hive (Hadoop CDH4 distribution) with
MicroStrategy.
Any help would be much appreciated.
Waiting for your response.
Note: it is a little bit urgent; does anyone have experience with this?
Thanks,
samir
mailing list and do not copy this to hdfs-user.
On Thu, Feb 7, 2013 at 7:20 AM, samir das mohapatra
samir.help...@gmail.com wrote:
Any Suggestion...
On Thu, Feb 7, 2013 at 4:17 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I could not see the hive meta
Hi All,
I am using CDH4 with MRv1. When I run any Hadoop MapReduce
program from Java, all the map tasks are assigned to one node. They are supposed
to be distributed among the cluster's nodes.
Note: 1) my JobTracker web UI is showing 500 nodes;
2) when it comes to
Hi All,
I wanted to know how to connect Hadoop with MicroStrategy.
Any help would be much appreciated.
Waiting for your response.
Note: any URL or example would be really helpful for me.
Thanks,
samir
Hi all,
We need connectivity between SAP HANA and Hadoop.
If you have any experience with that, can you please share some documents
and examples with me? It would be really helpful.
thanks,
samir
We are using Cloudera Hadoop.
On Thu, Jan 31, 2013 at 2:12 AM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I wanted to know how to connect Hadoop with MicroStrategy.
Any help would be much appreciated.
Waiting for your response.
Note: any URL or example will be really help
Hi All,
My company wants to adopt the right Apache Hadoop distribution
for its production as well as dev environments. Can anyone suggest which one
will be good going forward?
Hint:
they want to know both the pros and cons.
Regards,
samir.
thanks all.
On Thu, Jan 31, 2013 at 11:19 AM, Satbeer Lamba satbeer.la...@gmail.com wrote:
I might be wrong but have you considered distcp?
On Jan 31, 2013 11:15 AM, samir das mohapatra samir.help...@gmail.com
wrote:
Hi All,
Does anyone know how to load data from one Hadoop cluster (CDH4
Just try to apply:
$ chmod -R 755 /home/wj/apps/apache-nutch-1.6
then try again.
On Wed, Jan 23, 2013 at 9:23 PM, 吴靖 qhwj2...@126.com wrote:
hi, everyone!
I want to use Nutch to crawl web pages, but a problem comes up, as the
log shows. I think it may be a permissions problem, but I am
            , new Path(args[1]));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new SortByNorm1(), args);
        System.exit(exitCode);
    }
On Tue, May 29, 2012 at 1:55 PM, samir das mohapatra
Yes, Hadoop is designed for huge-dataset computation.
It may not be good for small datasets.
On Wed, May 30, 2012 at 6:53 AM, liuzhg liu...@cernet.com wrote:
Hi,
Mike, Nitin, Devaraj, Soumya, samir, Robert
Thank you all for your suggestions.
Actually, I want to know if hadoop has any advantage
In your log details I could not find the NN starting.
It is a problem of the NN itself.
Harsh also suggested the same.
On Sun, May 27, 2012 at 10:51 PM, Rohit Pandey rohitpandey...@gmail.com wrote:
Hello Hadoop community,
I have been trying to set up a double node Hadoop cluster
Step-wise details (Ubuntu 10.x version): go through them properly and run the steps one
by one; this will solve your problem. (You can change the path, IP, and host name as
you like.)
-
1. Start the terminal
        ]));
        SequenceFileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
        return 0;
    }
}
On Wed, May 30, 2012 at 6:57 PM, samir das mohapatra
samir.help...@gmail.com wrote:
PFA.
On Wed, May 30, 2012 at 2:45 AM, Mark question markq2...@gmail.com wrote:
Hi Samir, can you email me your main
, 2012, at 7:40 AM, samir das mohapatra samir.help...@gmail.com
wrote:
Hi All,
Did anyone work on Hadoop with LDAP integration?
Please help me with the same.
Thanks
samir
Yes, it is possible by using MultipleInputs to feed multiple mappers
(basically 2 different mappers).
Step 1:
MultipleInputs.addInputPath(conf, new Path(args[0]), TextInputFormat.class,
    Mapper1.class);
MultipleInputs.addInputPath(conf, new Path(args[1]), TextInputFormat.class,
    Mapper2.class);
Hi Mark,
public void map(LongWritable offset, Text val,
                OutputCollector<FloatWritable, Text> output, Reporter reporter)
        throws IOException {
    output.collect(new FloatWritable(1.0f), val); // change 1 to 1.0f; then it will work
}
Let me know the status after the change.
On Wed, May
Hi Mark,
See the output for that same application.
I am not getting any error.
On Wed, May 30, 2012 at 1:27 AM, Mark question markq2...@gmail.com wrote:
Hi guys, this is a very simple program, trying to use TextInputFormat and
SequenceFileoutputFormat. Should be easy but I get the
Hi All,
How do I configure an external jar which is used by the application
internally?
For example:
JDBC, the Hive driver, etc.
Note: I don't have permission to start and stop the Hadoop machines,
so I need to configure it at the application level (not the Hadoop level).
If we will put
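One documented way to ship extra jars with a job without touching the cluster is the GenericOptionsParser -libjars flag (backed by the DistributedCache). Paths and class names below are placeholders:

```
# Ship hive-jdbc.jar with the job; it is placed on each task's classpath.
hadoop jar myjob.jar com.example.MyDriver \
  -libjars /local/path/hive-jdbc.jar \
  /input /output
```

Note that -libjars is honored only when the driver goes through ToolRunner/GenericOptionsParser.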
://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapreduce/lib/input/MultipleInputs.html
.
On Thu, May 24, 2012 at 1:17 AM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
How do I compare two input files in an M/R job?
Let A be a log file of around 30 GB
and B a log file
Hi
This could be due to one of the following reasons:
1) The NameNode (http://wiki.apache.org/hadoop/NameNode) does not have
any available DataNodes;
2) the NameNode is not able to start properly;
3) otherwise, some IP issue.
Note: please use localhost instead of 127.0.0.1 (if it is
local).
Follow URL:
http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
Thanks
samir
On Sat, May 19, 2012 at 11:30 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi
This could be due to one of the following reasons:
1) The NameNode (http
Hi,
Your requirement is that your M/R job should see the full XML file while operating.
(If that is right, then please take the approach below.)
You can put this XML file in the DistributedCache, which is shared
across the M/R tasks, so that your job gets the whole XML instead of a chunk of the data.
Thanks
Samir
Hi financeturd financet...@yahoo.com,
From my point of view, the second setup, like below, is the good approach:
{Separate server} -- {JBoss server}
and then
{Separate server} -- HDFS
thanks
samir
On Sat, May 12, 2012 at 6:00 AM, financeturd financeturd
financet...@yahoo.com wrote:
Hello,
We
Hi Mohit,
1) Hadoop is more portable on Linux, Ubuntu, or any non-DOS file
system;
since you are running Hadoop on Windows, that could be the problem, because Hadoop
will generate some partial output files for temporary use.
2) Another thing is that you are running Hadoop version 0.19; I think