I have an input file in which the last column is the class label:
7.4 0.29 0.5 1.8 0.042 35 127 0.9937 3.45 0.5 10.2 7 1
10 0.41 0.45 6.2 0.071 6 14 0.99702 3.21 0.49 11.8 7 -1
7.8 0.26 0.27 1.9 0.051 52 195 0.9928 3.23 0.5 10.9 6 1
6.9 0.32 0.3 1.8 0.036 28 117 0.99269 3.24 0.48 11 6 1
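For what it's worth, a minimal sketch of parsing one such line (whitespace-separated features with the label last) might look like the following; the class and method names here are my own invention, not from any particular library:

```java
import java.util.Arrays;

public class SampleParser {
    // Holds one parsed row: all columns but the last as features,
    // the last column as the integer class label.
    static final class Sample {
        final double[] features;
        final int label;
        Sample(double[] features, int label) {
            this.features = features;
            this.label = label;
        }
    }

    // Split on whitespace; everything but the last token is a feature,
    // the last token is the class label.
    static Sample parse(String line) {
        String[] tokens = line.trim().split("\\s+");
        double[] features = new double[tokens.length - 1];
        for (int i = 0; i < tokens.length - 1; i++) {
            features[i] = Double.parseDouble(tokens[i]);
        }
        int label = Integer.parseInt(tokens[tokens.length - 1]);
        return new Sample(features, label);
    }

    public static void main(String[] args) {
        Sample s = parse("7.4 0.29 0.5 1.8 0.042 35 127 0.9937 3.45 0.5 10.2 7 1");
        System.out.println(Arrays.toString(s.features) + " -> " + s.label);
    }
}
```

In a MapReduce job the same split would typically happen inside the map() method, once per input line.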
Hi,
I don't understand this part of your answer: "read the other as a
side-input directly by creating a client".
If I consider both inputs through the InputFormat, this means that a job
will contain both input paths in its configuration, and this is enough to
work. So, what is the other? Is it the
Duh. Try running ntpdate as root (sudo).
On Fri, Feb 27, 2015 at 11:39 PM, daemeon reiydelle daeme...@gmail.com
wrote:
try ntpdate -b -p8 whichever server
However, you flat-out should not be seeing 13 minutes. Something is wrong.
Suggest ntpdate -d -b -p8 whichever server and look at the results.
On Thu, Feb 26, 2015 at 10:54 AM, Jan van Bemmelen j...@tokyoeye.net wrote:
Hi Tariq,
So this is not really a Hadoop
Maybe you can check with the Cloudera support team.
On Sat, Feb 28, 2015 at 1:24 PM, Krish Donald gotomyp...@gmail.com wrote:
Hi,
I searched a lot on Google but couldn't find a well-documented table of
Cloudera Manager versions and release dates, i.e.
Version    Release date
CM 4.1
When the access fails, do you have a way to check the utilization on
the target node ... i.e. was the target node's utilization at 100%?
On Thu, Feb 26, 2015 at 10:30 PM, hadoop.supp...@visolve.com wrote:
Hello Krishna,
The exception seems to be IP-specific. It might have occurred due to
My thinking ... in your map step, take each frame and tag it with an
appropriate unique key. Your reducers (if used) then do the frame analysis.
If doing frame sequences, then you need to decide the granularity vs. the time
each node spends executing. Same sort of process that is done for e.g.
Hi,
I searched a lot on Google but couldn't find a well-documented table of
Cloudera Manager versions and release dates, i.e.
Version    Release date
CM 4.1     June 2013
CM 4.2     Nov 2013
etc.
The dates above are hypothetical.
Does anybody have data like this?
Thanks
Krish
Hi All,
I have many jobs failing because the AM tries to rerun them at a very
short interval (only 6 seconds). How can I increase the interval to a
bigger value?
https://dl.dropboxusercontent.com/u/33705885/2015-02-27_145104.png
Thank you.
Looks like this is related:
https://issues.apache.org/jira/browse/YARN-964
On Fri, Feb 27, 2015 at 4:29 AM, Nur Kholis Majid
nur.kholis.ma...@gmail.com wrote:
Hi All,
I have many jobs failing because the AM tries to rerun them at a very
short interval (only 6 seconds). How can I increase the interval
Hi,
I would like to have a MapReduce job that reads input data from 2 HDFS clusters.
Is this possible?
Thanks,
Hi experts,
I'm Carmen, and I'm trying to write a simple class that submits a Job,
with a jena-elephas mapper and input and output formats, to my Hadoop
cluster with only one master and one slave, but I get this error:
java.lang.RuntimeException: java.lang.InstantiationException
at
It is entirely possible. You should treat one of them as the primary input,
read through the InputFormat/Mapper, and read the other as a side-input
directly by creating a client.
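To make that suggestion concrete, here is a hedged sketch of the pattern in plain Java: the side input is loaded into an in-memory map up front, and each primary record is joined against it. The loader below reads from a generic Reader purely for illustration; in a real Mapper you would do this once in setup(), opening the side file with the HDFS client (e.g. FileSystem.get(conf).open(path)). All class and method names are hypothetical:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

public class SideInputJoin {

    // Load the side input into memory as key -> value.
    // In a real Mapper this would run once in setup(), with the stream
    // opened via the HDFS client: FileSystem.get(conf).open(new Path(...)).
    static Map<String, String> loadSideInput(Reader source) throws IOException {
        Map<String, String> table = new HashMap<>();
        BufferedReader reader = new BufferedReader(source);
        String line;
        while ((line = reader.readLine()) != null) {
            String[] parts = line.split("\t", 2);  // key<TAB>value
            if (parts.length == 2) {
                table.put(parts[0], parts[1]);
            }
        }
        return table;
    }

    // What map() would do per primary record: look up the side table
    // and emit the joined record.
    static String join(String primaryKey, String primaryValue,
                       Map<String, String> side) {
        String extra = side.getOrDefault(primaryKey, "MISSING");
        return primaryKey + "\t" + primaryValue + "\t" + extra;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> side =
            loadSideInput(new StringReader("k1\tred\nk2\tblue\n"));
        System.out.println(join("k1", "42", side));
    }
}
```

This only works while the side input fits in each mapper's memory; for two large inputs you would use a reduce-side join instead.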
+Vinod
On Feb 27, 2015, at 7:22 AM, xeonmailinglist xeonmailingl...@gmail.com wrote:
Hi,
I would like to have a
That's an old JIRA. The right solution is not an AM-retry interval but
launching the AM somewhere else.
Why is your AM failing in the first place? If it is due to a full disk, the
situation should be better with YARN-1781 - can you use the configuration
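For reference, the closest standard knob I know of caps how many times the AM is retried rather than spacing the retries out; a yarn-site.xml fragment might look like the following (the value shown is only an example):

```xml
<!-- yarn-site.xml: cluster-wide cap on ApplicationMaster attempts.
     Individual applications can request fewer attempts, but not more. -->
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>2</value>
</property>
```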
Dear Jan,
I changed the date on the node with sudo date *newdatetimestring*.
Thanks for your help.
Regards,
On Thu, Feb 26, 2015 at 6:31 PM, Jan van Bemmelen j...@tokyoeye.net wrote:
Hi Tariq,
You seem to be using debian or ubuntu. The documentation here will guide
you through setting up
Any thoughts ?
Thanks,
On Thu, Feb 26, 2015 at 7:30 PM, Manoj Samel manojsamelt...@gmail.com
wrote:
On a Kerberos-based Hadoop cluster, a kinit is done and then an oozie command
is executed. This works every time (thus no setup issues), except once it
failed with the following error.
Error:
Is it possible to use Maven's multi-threaded mode to build Hadoop?
When I try mvn -T2 package -Pdist,native-win -DskipTests -Dtar, it fails
with this message:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-assembly-plugin:2.3:single
(package-mapreduce) on project hadoop-mapreduce: Failed