I am getting this error when running that JSP as a servlet:
Apr 15, 2012 2:34:07 PM org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.22.
Apr 15, 2012 2:34:07 PM org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities:
Thanks.
The native snappy libraries I have installed. However, I use the
normal jars that you get when downloading Hadoop, I am not compiling
Hadoop myself.
I do not want to use the snappy codec (I don't care about compression
at the moment), but it seems it is needed anyway? I added this to the
Can you restart tasktrackers once and run the job again? It refreshes the
class path.
On Sun, Apr 15, 2012 at 11:58 AM, Bas Hickendorff
hickendorff...@gmail.com wrote:
Hello John,
I did restart them (in fact, I did a full reboot of the machine). The
error is still there.
I guess my question is: is it expected that Hadoop needs to do
something with the SnappyCodec when mapred.compress.map.output is set
to false?
Regards,
Bas
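(For reference, a minimal mapred-site.xml fragment with the map-output compression settings being discussed; property names are the 0.20-era ones used elsewhere in this thread. Note that codec classes listed in io.compression.codecs are loaded regardless of this flag, which may be why SnappyCodec is touched even with compression off.)

```xml
<!-- mapred-site.xml: disable compression of intermediate map output -->
<property>
  <name>mapred.compress.map.output</name>
  <value>false</value>
</property>
<!-- the codec below is only consulted when the flag above is true -->
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
```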
On Sun, Apr 15, 2012 at 12:04 PM,
Dear all,
I was wondering if it is possible to format the HDFS at boot time. I have some
VM's that are pre-set and pre-configured with Hadoop (datanodes [slaves] and a
namenode [master]), and I'm looking for a way to obtain a cluster from them
out of the box, as they're launched (including the
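(One common approach is a first-boot guard: format HDFS only if a marker file is absent, so reboots of the pre-built VM do not wipe the filesystem. A sketch, with hypothetical paths; the real format command on the namenode VM would be something like `hadoop namenode -format -force`.)

```shell
#!/bin/sh
# Sketch of a first-boot HDFS format guard. format_once runs the given
# command only when the marker file does not yet exist, then records the
# marker so subsequent boots skip the format.
format_once() {
    marker="$1"; shift
    if [ ! -f "$marker" ]; then
        "$@" && touch "$marker"
    fi
}

# On the actual namenode VM this would be invoked from an init script as:
# format_once /var/lib/hadoop/.hdfs-formatted hadoop namenode -format -force
```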
That is odd. Why would it crash when your m/r job did not rely on Snappy?
One possibility: maybe because your input is Snappy-compressed, Hadoop is
detecting that compression and trying to use the Snappy codec to decompress it?
Jay Vyas
MMSB
UCHC
On Apr 15, 2012, at 5:08 AM, Bas
hi Madhu,
After making the modification in /etc/hosts it's working fine.
Thanks a lot :)
Kind Regards
Sijit Dhamale
(+91 9970086652)
On Fri, Apr 13, 2012 at 10:49 AM, madhu phatak phatak@gmail.com wrote:
Please check contents of /etc/hosts for the hostname and ipaddress mapping.
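(A typical mapping that satisfies this check looks like the fragment below; the hostnames and addresses are illustrative. The important points are that every cluster node resolves to its real IP, and that the machine's own hostname is not bound only to 127.0.0.1.)

```
# /etc/hosts
127.0.0.1    localhost
192.168.1.10 master
192.168.1.11 slave1
```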
On Thu, Apr
You need three things:
1. Install Snappy to a place the system can pick it up automatically, or add it to your java.library.path.
2. Add the full class name of the codec to io.compression.codecs.
3. Verify the setting from the Hive CLI:
hive> set io.compression.codecs;
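(Step 2 above amounts to a core-site.xml entry like the following; the class names are the standard codecs shipped with Hadoop, so verify they match your build before copying.)

```xml
<!-- core-site.xml: register the Snappy codec alongside the defaults -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```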
I am a newbie to Unix/Hadoop and have basic questions about CDH3 setup.
I installed CDH3 on an Ubuntu 11.0 Unix box. I want to set up a pseudo-distributed
cluster where I can run my Pig jobs in mapreduce mode.
How do I achieve that?
1. I could not find core-site.xml, hdfs-site.xml, and mapred-site.xml.
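(A minimal pseudo-distributed setup is sketched below; the ports are common CDH3 defaults and the files usually live under /etc/hadoop/conf on a package install, but both may differ on your box.)

```xml
<!-- core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>
<!-- hdfs-site.xml: single node, so one replica -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<!-- mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:8021</value>
</property>
```

With these in place, running `pig` without the `-x local` flag submits jobs to the JobTracker, i.e. mapreduce mode.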
Hi,
I use hadoop cloudera 0.20.2-cdh3u0.
I have a program which uploads local files to HDFS every hour.
Basically, I open a gzip input stream with in = new GZIPInputStream(fin); and
write to an HDFS file. After less than two days it hangs, at
FSDataOutputStream.close(86).
Here is the
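(The code in the message is cut off above. Purely as an illustrative sketch, not the poster's actual code: the usual pattern is to close both streams deterministically, so a failed or skipped close cannot leave a stream wedged across hourly uploads. Shown here with plain java.util.zip streams; on HDFS the OutputStream would be the FSDataOutputStream from FileSystem.create().)

```java
import java.io.*;
import java.util.zip.*;

public class GzipCopy {
    // Copy a gzip-compressed input stream to an output stream,
    // closing both even if the copy fails partway through.
    public static void copy(InputStream fin, OutputStream out) throws IOException {
        try (GZIPInputStream in = new GZIPInputStream(fin)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a small payload through gzip to demonstrate the copy.
        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        try (GZIPOutputStream gzOut = new GZIPOutputStream(gz)) {
            gzOut.write("hello hdfs".getBytes("UTF-8"));
        }
        ByteArrayOutputStream plain = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(gz.toByteArray()), plain);
        System.out.println(plain.toString("UTF-8"));
    }
}
```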
Prashant,
Post your questions to cdh-u...@cloudera.org.
Follow CDH3 installation guide. After installing package and individual
components you need to configure all configuration files like core-site.xml,
hdfs-site.xml etc.
Thanks
Manish
Sent from my BlackBerry, pls excuse typo
Hi Mingxi,
In your thread dump, did you check the DataStreamer thread? Is it running?
If the DataStreamer thread is not running, then this issue is most likely the same as
HDFS-2850.
Did you find any OOME in your clients?
Regards,
Uma
From: Mingxi Wu