No, of course not; I blanked it.
On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar praveen...@gmail.com wrote:
Is this property correct?
<property>
  <name>fs.default.name</name>
  <value>-BLANKED</value>
</property>
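For context, fs.default.name lives in core-site.xml and normally points at the NameNode's HDFS URI. A hypothetical filled-in example (the host and port here are placeholders, not the poster's blanked value):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- hypothetical NameNode address; replace with your own host:port -->
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```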
Regards
Prav
On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas fatih.hal
?sortby=publicationDate
which helps with solving this type of problem as well.
Best wishes
Mirko
2014-03-16 9:07 GMT+00:00 Fatih Haltas fatih.hal...@nyu.edu:
Dear All,
I have just restarted the machines of my Hadoop cluster. Now I am trying
to restart the Hadoop cluster again, but I am getting an error
://docs.oracle.com/javase/6/docs/api/constant-values.html#java.sql.Types.OTHER
On Mon, Jul 22, 2013 at 04:03:42PM +0400, Fatih Haltas wrote:
Hi everyone,
I am trying to import data from Postgres to HDFS but, unfortunately, I am
getting this error. What should I do?
I would be really obliged.
argument --schema to specify a schema other than
the default public one, for example:
sqoop list-tables --connect ... -- --schema private_schema
Jarcec
On Sun, Jul 21, 2013 at 03:18:53PM +0400, Fatih Haltas wrote:
Hi everyone,
I had another problem while trying to import or list tables.
Is there a --schema option to list schemas other than public?
I am not able to list tables under non-public schemas.
On Mon, Jul 22, 2013 at 12:39 PM, Fatih Haltas fatih.hal...@nyu.edu wrote:
Hi Jarek,
Thanks for your help. I am using Sqoop 1.4.3, but --schema
Finally, I found the mails between Jarek and Vantesh; the correct usage is
sqoop import --connect jdbc:postgresql://192.168.194.158:5432/pgsql --username pgsql --password XXX -- --schema fatih
As far as I read in the emails, it was a bug that was then fixed by Vantesh. Thanks.
Hi everyone,
I had another problem while trying to import or list tables on PostgreSQL
via Sqoop.
I am using this command
./sqoop list-databases --connect jdbc:postgresql://192.168.194.158:5432/pgsql --username pgsql -P
It only sees the tables under the public schema; the others are not
solved it? Thanks.
Regards,
Shahab
On Tue, Jul 16, 2013 at 9:58 AM, Fatih Haltas fatih.hal...@nyu.edu wrote:
Thanks Shahab, I solved my problem in another way.
Hi everyone,
I am trying to import data from PostgreSQL to HDFS, but I am having some
problems. Here are the details:
Sqoop Version: 1.4.3
Hadoop Version: 1.0.4
1) When I use this command:
./sqoop import-all-tables --connect jdbc:postgresql://192.168.194.158:5432/IMS --username
Hi Everyone,
I am trying to import data from PostgreSQL to HDFS via Sqoop; however, all
the examples I found on the internet talk about Hive, HBase, and similar
systems running within Hadoop.
I am not using any of these systems. Isn't it possible to import data
without having those kinds of
-into-hadoop-hdfs
Regards
On Jul 10, 2013, at 9:59 AM, Fatih Haltas fatih.hal...@nyu.edu wrote:
Hi Everyone,
I am trying to import data from PostgreSQL to HDFS via Sqoop; however,
all the examples I found on the internet talk about Hive, HBase, and similar
systems running within Hadoop.
I am
Hi everyone,
Does anybody know which version of Sqoop supports Hadoop 1.0.4?
Is Sqoop the right way to import data from PostgreSQL to Hadoop?
I will be really obliged if you can help me.
Thank you very much.
What about PostgreSQL? Do you think it differs?
On Mon, Jul 8, 2013 at 2:04 PM, Fatih Haltas fatih.hal...@nyu.edu wrote:
I just tried Sqoop 1.4.3 with Hadoop 1.0.4 and did some data import from
MySQL, and it works.
And the other answer is enough for me.
Thank you so much.
On Mon, Jul 8
Thanks Nitin. I really appreciate your quick answers.
On Mon, Jul 8, 2013 at 3:17 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
Technically, it should not make a difference as long as you have the correct
JDBC drivers on the Java classpath.
On Mon, Jul 8, 2013 at 4:44 PM, Fatih Haltas fatih.hal
I want to take the reduce output as key and value, then pass them to a
new reduce as the input key and input value.
So, is there any Map-Reduce-Reduce kind of method?
Thanks to all.
://incubator.apache.org/projects/tez.html soon. Meanwhile, you can
read the proposal behind this project at
http://wiki.apache.org/incubator/TezProposal. Initial sources are at
https://svn.apache.org/repos/asf/incubator/tez/trunk/.
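Until something like Tez is available, the usual workaround is to chain two MapReduce jobs: the first job's reduce output is fed to a second job whose map phase is (close to) the identity, followed by a second reduce. A minimal pure-Python sketch of that dataflow, with hypothetical word-count-style data (this is not Hadoop code, just the shape of the idea):

```python
from itertools import groupby
from operator import itemgetter

def run_reduce(pairs, reduce_fn):
    """Group (key, value) pairs by key and apply reduce_fn, like one reduce phase."""
    pairs = sorted(pairs, key=itemgetter(0))  # the shuffle/sort step
    return [reduce_fn(k, [v for _, v in group])
            for k, group in groupby(pairs, key=itemgetter(0))]

# First reduce: sum the counts per word.
def sum_counts(key, values):
    return (key, sum(values))

# Second reduce: regroup by count to find which words share a frequency.
def words_per_count(key, values):
    return (key, sorted(values))

mapped = [("a", 1), ("b", 1), ("a", 1), ("c", 1), ("b", 1)]
first = run_reduce(mapped, sum_counts)
# "Identity mapper" of the second job: re-key the first job's output.
second = run_reduce([(count, word) for word, count in first], words_per_count)
print(first)   # [('a', 2), ('b', 2), ('c', 1)]
print(second)  # [(1, ['c']), (2, ['a', 'b'])]
```

In real Hadoop this is two Job configurations run back to back, with the first job's output directory as the second job's input.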
On Sun, Mar 24, 2013 at 6:33 PM, Fatih Haltas fatih.hal...@nyu.edu
is at
http://search-hadoop.com/m/RH5AP11ob2o1.
On Wed, Mar 20, 2013 at 6:10 PM, Fatih Haltas fatih.hal...@nyu.edu
wrote:
Hi Everyone,
I am trying to implement the Secondary Sort algorithm on my data, but I am
having trouble with my combiner.
When I do not use a combiner, grouping works well
Hi Everyone,
I am trying to implement the Secondary Sort algorithm on my data, but I am
having trouble with my combiner.
When I do not use a combiner, grouping works well; I mean, one reduce task
runs for every pair sharing the same first element.
However, when I set the Combiner to the Reducer class
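A likely cause, stated generally: a combiner may run zero, one, or many times on map output, so reusing the Reducer class as the combiner is only safe when the reduce function is associative and commutative and emits the same key/value types it consumes. In secondary sort specifically, the combiner operates on the composite keys and can merge records the grouping comparator was meant to keep apart. A hypothetical pure-Python sketch (not Hadoop code) of why associativity matters:

```python
from itertools import groupby
from operator import itemgetter

def reduce_by_key(pairs, fn):
    """Apply fn to the list of values for each key, like one reduce phase."""
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: fn([v for _, v in g]) for k, g in groupby(pairs, key=itemgetter(0))}

data = [("k", 1), ("k", 2), ("k", 3), ("k", 4)]

# Associative reduce (sum): safe to reuse as a combiner.
no_combine = reduce_by_key(data, sum)["k"]  # 10
# Simulate two map tasks whose combiners pre-summed their partial outputs:
with_combine = reduce_by_key([("k", sum([1, 2])), ("k", sum([3, 4]))], sum)["k"]  # still 10

# Non-associative reduce (mean): reusing it as a combiner changes the answer.
def mean(vs):
    return sum(vs) / len(vs)

no_combine_mean = reduce_by_key(data, mean)["k"]  # 2.5
with_combine_mean = reduce_by_key(
    [("k", mean([1])), ("k", mean([2, 3, 4]))], mean)["k"]  # (1.0 + 3.0) / 2 = 2.0
```

So if the combined and uncombined runs disagree, the reduce logic is not combiner-safe as written.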
Hi Everyone,
I would like to have two different outputs (containing different columns of the same
input text file).
When I googled a bit, I found the MultipleOutputs classes. Is this the common way
of doing it, or is there any way to create a context/iterable kind of
thing? Is there a context array? Is it possible to
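For what it's worth, MultipleOutputs is indeed the usual Hadoop mechanism for writing several named outputs from one job. The underlying idea, sketched outside Hadoop in plain Python with hypothetical tab-separated rows, is just routing different column projections to different named sinks:

```python
def split_columns(lines, spec):
    """Route column projections of each row to named outputs.

    spec maps an output name to the column indices it should keep,
    much like MultipleOutputs maps named outputs to what you write to them.
    """
    outputs = {name: [] for name in spec}
    for line in lines:
        cols = line.split("\t")
        for name, indices in spec.items():
            outputs[name].append("\t".join(cols[i] for i in indices))
    return outputs

rows = ["alice\t30\tparis", "bob\t25\toslo"]
out = split_columns(rows, {"ages": [0, 1], "cities": [0, 2]})
print(out["ages"])    # ['alice\t30', 'bob\t25']
print(out["cities"])  # ['alice\tparis', 'bob\toslo']
```

In a real reducer you would call multipleOutputs.write(name, key, value) per projection instead of appending to lists.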
Hi Austin,
I am not sure whether you made this kind of mistake or not, but in any case
I would like to point it out:
you might be trying to read the whole set of input values for a key (that is,
the corresponding mapper output) in your reducer from beginning to end while
merging them into one.
I mean, while trying to add each newly arriving reducer input value to the
already merged values, in order to construct the whole set of input values for
that key, you might be re-reading every input value (the mapper outputs) from
beginning to end.
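The reason this matters: in Hadoop the values for a key arrive at the reducer as a one-shot iterator, so the merge has to accumulate in a single pass; you cannot rewind and re-scan from the beginning for each new value. A small Python sketch of that constraint (hypothetical data, not Austin's code):

```python
# A reducer's values behave like a one-shot iterator: one pass only.
values = iter(["x", "y", "z"])

merged = []
for v in values:            # single pass: accumulate as each value arrives
    merged.append(v)

second_pass = list(values)  # empty: the iterator is already exhausted
print(merged, second_pass)  # ['x', 'y', 'z'] []
```

So any merge logic that needs repeated passes has to buffer the values itself, which for large keys defeats the streaming design.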
On Tue, Mar 5, 2013 at 1:46 PM, Fatih
Hi Ring,
Can you post the output of the jar tf <filename> command, so we can see the
package organization of the jar you created?
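Since a jar is just a zip archive, the same package listing can also be produced with Python's zipfile module when the jar tool is not handy. A small sketch (the entry names here are hypothetical; with a real file you would pass its path to ZipFile):

```python
import io
import zipfile

def list_jar(jar_bytes):
    """List the entries of a jar (zip) archive to inspect its package layout."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return jar.namelist()

# Build a tiny in-memory "jar" to demonstrate; for a real file use
# zipfile.ZipFile("myjob.jar").namelist().
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    jar.writestr("com/example/WordCount.class", b"")

print(list_jar(buf.getvalue()))
# ['META-INF/MANIFEST.MF', 'com/example/WordCount.class']
```

The directory prefixes of the .class entries must match the classes' package declarations, which is exactly what jar tf lets you verify.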
On Saturday, March 2, 2013, the user springring wrote:
Hi,
I want to use:
hadoop jar my dir/hadoop-streaming-0.20.2-cdh3u3.jar -inputformat
Hi all,
First, I would like to thank you all, especially Hemanth and Harsh.
I solved my problem; it was exactly the Java version and Hadoop version
incompatibility. Now I can run my compiled and jarred MapReduce program.
I have a different question now. I wrote some code that finds the
| grep version
minor version: 0
major version: 50
Please paste the output of this - we can verify what the problem is.
Thanks
Hemanth
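The numbers Hemanth asks about come straight from the .class file header: bytes 4-5 hold the minor version and bytes 6-7 the major version (50 corresponds to Java 6, 51 to Java 7). A small Python sketch that reads them, shown on a hand-built header rather than a real file:

```python
import struct

def class_file_version(data):
    """Return (minor, major) from the first 8 bytes of a Java .class file."""
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a class file")
    return minor, major

# Header of a class compiled for Java 6 (major version 0x32 = 50):
header = bytes.fromhex("cafebabe00000032")
print(class_file_version(header))  # (0, 50)
```

With a real file you would pass open("Foo.class", "rb").read(8), which gives the same information as the javap output quoted above.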
On Sat, Feb 23, 2013 at 4:45 PM, Fatih Haltas fatih.hal...@nyu.edu wrote:
Hi again,
Thanks for your help, but now I am struggling with the same
I keep getting the Child Error; I googled but could not solve the
problem. Did anyone encounter the same problem before?
[hadoop@ADUAE042-LAP-V conf]$ hadoop jar
/home/hadoop/project/hadoop-1.0.4/hadoop-examples-1.0.4.jar
aggregatewordcount /home/hadoop/project/hadoop-data/NetFlow
Hi everyone,
I know it is a common mistake not to specify the class name when
trying to run a jar; however,
although I did specify it, I am still getting the ClassNotFoundException.
What may be the reason for it? I have been struggling with this problem for
more than 2 days.
I just wrote
(java.lang.Class)
On Tuesday, February 19, 2013, Fatih Haltas wrote:
Hi everyone,
I know it is a common mistake not to specify the class name when
trying to run a jar; however,
although I did specify it, I am still getting the ClassNotFoundException.
What may be the reason for it? I have
to reflect the right
package structure.
Also, the error you are getting seems to indicate that you have compiled
using JDK 7. Note that some versions of Hadoop are supported only on JDK 6.
Which version of Hadoop are you using?
Thanks
Hemanth
On Tuesday, February 19, 2013, Fatih Haltas