Hi,
could you please try,
ssh -o ServerAliveInterval=10 -L 10004:localhost:10004 murat@10.0.0.100
2014-08-25 20:52 GMT+03:00 murat migdisoglu murat.migdiso...@gmail.com:
Hello,
Due to some firewall restrictions, I need to connect from Tableau to
HiveServer2 through an SSH tunnel..
I
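For reference, the same tunnel can be kept in ~/.ssh/config so it does not have to be retyped each time. A minimal sketch, assuming the host, user, and port from the command above; the alias name "hive-tunnel" is made up here:

```
# Hypothetical ~/.ssh/config entry equivalent to the ssh command above
Host hive-tunnel
    HostName 10.0.0.100
    User murat
    ServerAliveInterval 10
    LocalForward 10004 localhost:10004
```

After that, `ssh hive-tunnel` opens the same forwarded port.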
Hi, thx for your answer. I don't think the SSH tunnel is the issue (btw,
why port 10004 and not 1?)
Does the ODBC driver connect to any other port/service (Hive metastore etc.)?
On Tue, Aug 26, 2014 at 9:55 AM, Kadir Sert kadirser...@gmail.com wrote:
Hi,
could you please try,
ssh -o
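On the port question: HiveServer2 listens on 10000 by default, so 10004 only makes sense if the server was reconfigured. A sketch of the relevant hive-site.xml property, assuming a non-default setup on this cluster:

```xml
<!-- hive-site.xml: HiveServer2 Thrift port (the default is 10000) -->
<property>
  <name>hive.server2.thrift.port</name>
  <value>10004</value>
</property>
```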
Hi,
Is it not a good idea to model the key as a Text type?
I have a large number of sequence files that have a bunch of key-value
pairs. I will read these seq files inside the map, hence my map needs only
the filenames. I believe, with CombineFileInputFormat, the map will run on nodes
where data is already
http://community.jaspersoft.com/jaspersoft-aws/connect-emr
I think that's because of hive version.
2014-08-26 12:13 GMT+03:00 murat migdisoglu murat.migdiso...@gmail.com:
Hi, thx for your answer. I don't think the SSH tunnel is the issue (btw, why
port 10004 and not 1?)
Does the ODBC driver
Hi!
I have just installed hadoop on my Windows x64 machine. I followed carefully the
instructions in https://wiki.apache.org/hadoop/Hadoop2OnWindows but in the
3.5 and 3.6 points I have some problems I cannot handle.
Thanks for reporting this, Mark.
It appears the artifacts are published to
https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-common/2.5.0/,
but haven't propagated to
http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/
I am following up on this, and
Hello,
We have an 11-node Hadoop cluster installed from the Hortonworks RPM doc:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1.html
The cluster was working fine until it went into Safe Mode during the
execution of a job, with this message on the
Looking at the error message, it looks like your namenode is not formatted.
On the namenode, could you run
hadoop namenode -format
Hope it helps.
Kind regards
Olivier
On 26 Aug 2014 11:08, Blanca Hernandez blanca.hernan...@willhaben.at
wrote:
Hi!
I have just installed hadoop in my
Hi All,
I have a data set in text CSV files, compressed using gzip.
Each record has around 100 fields. I need to filter the
data by applying various checks, like: 1. type of field, 2. nullable?,
3. min/max length, 4. value belongs to a predefined list, 5. value
substitution.
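A minimal sketch of such a record filter in Python — the field names, rule set, and substitution map below are hypothetical placeholders; a real job would run a check like this inside the map step rather than standalone:

```python
import csv
import io

# Hypothetical rule set: one entry per field to be checked.
RULES = {
    "age":     {"type": int, "nullable": False, "min_len": 1, "max_len": 3},
    "country": {"type": str, "nullable": True, "allowed": {"US", "DE", "TR"}},
    "status":  {"type": str, "nullable": False, "substitute": {"N/A": ""}},
}

def check_record(record):
    """Apply type / nullable / length / allowed-list / substitution checks
    to one dict record. Returns (cleaned_record, list_of_errors)."""
    errors = []
    cleaned = dict(record)
    for field, rule in RULES.items():
        value = record.get(field, "")
        # 5. value substitution (first, so later checks see the clean value)
        value = rule.get("substitute", {}).get(value, value)
        cleaned[field] = value
        # 2. nullable?
        if value == "":
            if not rule.get("nullable", True):
                errors.append(f"{field}: empty but not nullable")
            continue
        # 1. type of field (try converting; failure means wrong type)
        try:
            rule.get("type", str)(value)
        except ValueError:
            errors.append(f"{field}: not a valid {rule['type'].__name__}")
        # 3. min/max length
        if not rule.get("min_len", 0) <= len(value) <= rule.get("max_len", 10**6):
            errors.append(f"{field}: length {len(value)} out of range")
        # 4. value belongs to a predefined list
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{field}: {value!r} not in allowed list")
    return cleaned, errors

if __name__ == "__main__":
    rows = csv.DictReader(io.StringIO("age,country,status\n42,US,N/A\n,XX,ok\n"))
    for row in rows:
        print(check_record(row))
```

The same function can be applied line by line to gzip input opened with `gzip.open(path, "rt")`, one record at a time, so memory use stays flat regardless of file size.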
And the namenode does not even start: 14/08/26 12:01:09 WARN
namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
Have you formatted HDFS (step 3.4)?
On Tue, Aug 26, 2014 at 3:08 AM, Blanca Hernandez
blanca.hernan...@willhaben.at
One more follow-up, in case someone stumbles across this in the future. From
what we can tell, the Hadoop security initialization is very sensitive to
startup order, and this has been confirmed by discussions with other people.
The only thing that we've been able to make work at all reliably
You can force the namenode to get out of safe mode: hadoop dfsadmin
-safemode leave
On Tue, Aug 26, 2014 at 11:05 PM, Vincent Emonet vincent.emo...@gmail.com
wrote:
Hello,
We have an 11-node Hadoop cluster installed from the Hortonworks RPM doc:
I am not sure this is what you want, but you can try this shell command:
find [DATANODE_DIR] -name [blockname]
On Tue, Aug 26, 2014 at 6:42 AM, Demai Ni nid...@gmail.com wrote:
Hi, folks,
New in this area. Hoping to get a couple of pointers.
I am using Centos and have Hadoop set up using
would you please paste the code in the loop?
On Sat, Aug 23, 2014 at 2:47 PM, rab ra rab...@gmail.com wrote:
Hi
By default, it is true in Hadoop 2.4.1. Nevertheless, I have set it to
true explicitly in hdfs-site.xml. Still, I am not able to achieve append.
Regards
On 23 Aug 2014 11:20,
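For reference, the property being discussed is spelled like this in hdfs-site.xml — a sketch only, since (as the poster says) it already defaults to true on 2.4.1:

```xml
<!-- hdfs-site.xml: explicitly enable append (default is true on Hadoop 2.x) -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```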
You can leave safe mode:
Namenode in safe mode how to leave:
http://www.unmeshasreeveni.blogspot.in/2014/04/name-node-is-in-safe-mode-how-to-leave.html
On Wed, Aug 27, 2014 at 9:38 AM, Stanley Shi s...@pivotal.io wrote:
You can force the namenode to get out of safe mode: hadoop dfsadmin