Some of each.
There is now some momentum for moving the random forest implementation
forward a bit.
But the real problem is that developing a real solution for this takes
quite a bit of effort, and supporting it takes more.
If you have a partial or full solution to the resource issue, that would
Hi,
I am running a small Java program that basically writes a small input data
set to the Hadoop FileSystem, runs Mahout Canopy and KMeans clustering, and
then outputs the content of the data.
In my hadoop.properties I have included the core-site.xml definition for
the Java program to connect to my
security.UserGroupInformation:
PriviledgedActionException as:cyril
I'm not entirely sure, but it sounds like a permissions issue to me. Check that
all the files are owned by the user cyril and not by root.
Also, did you start Hadoop as root and run the program as cyril? Hadoop might
also complain about
Thank you for the reply Chris,
I can create and write fine on the file system, and the file is there when I
check Hadoop. So I do not think the problem is privileges. As I read it,
the Canopy Driver is looking for the file under the class files
(/home/cyrille/DataWriter/src/testdata_seq/) instead of
Fuzzy KMeans will use a lot of heap memory because every vector is
observed (with weighting) by every cluster. This makes the cluster
centers (and other vectors) much denser than with any of the other
clustering algorithms. Figure you are storing 90k doubles in each vector,
and each
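As a rough back-of-envelope check (my arithmetic, not from the thread; the cluster count of 20 is an assumed example), a dense vector of 90k doubles costs about 0.7 MB in raw payload alone, before JVM object overhead:

```java
// Back-of-envelope heap estimate for dense vectors in Fuzzy KMeans.
// Assumes 8 bytes per double and ignores JVM object/array overhead,
// so real usage will be higher.
public class HeapEstimate {
    static long bytesForDenseVectors(long cardinality, long numVectors) {
        return cardinality * Double.BYTES * numVectors;
    }

    public static void main(String[] args) {
        // 90k doubles per dense center; 20 clusters is a made-up example count.
        long bytes = bytesForDenseVectors(90_000, 20);
        System.out.println(bytes + " bytes, ~" + bytes / (1024 * 1024) + " MiB");
    }
}
```

With sparse text vectors densified this way, the per-center cost dominates quickly as cardinality and cluster count grow.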
Well then, do all the various folders exist on the Hadoop fs?
I also had a similar problem a while ago where my program ran fine, but then I
did something (no idea what) and Hadoop started complaining. To fix it I had to
put everything on the Hadoop fs, i.e. move all local fs path to/data to
Thank you again Chris.
Yes it is a typo.
After careful reading of the output, my program is exactly doing what you
describe.
I am trying to do everything in the Hadoop fs, but it is creating files on both
the Hadoop fs and the class fs, and some files are missing. When I run AND copy
the missing file from
On 29 Mar 2013, at 17:05, Cyril Bogus wrote:
Whoops, sorry about the empty mail last time.
I have one last suggestion, though I'm not sure it'll work:
you could try putting the path names as hdfs://testdata_seq/clusters.
Apart from that, I'm out of ideas.
One thing that you could try is just using _absolute paths_ everywhere. So,
something on HDFS is hdfs://... whereas something on your local file system
is file://...
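Dan's point can be illustrated with plain java.net.URI (a sketch reusing the host, port, and paths already quoted in this thread; no Hadoop dependency needed to see the scheme distinction):

```java
import java.net.URI;

public class PathSchemes {
    public static void main(String[] args) {
        // Fully qualified: the scheme pins down which file system is meant.
        URI onHdfs  = URI.create("hdfs://super:54310/user/cyril/testdata_seq");
        URI onLocal = URI.create("file:///home/cyrille/DataWriter/src/testdata_seq/");

        System.out.println(onHdfs.getScheme());  // hdfs
        System.out.println(onLocal.getScheme()); // file

        // An unqualified path has no scheme; Hadoop resolves it against the
        // default file system, which is where "Wrong FS" surprises come from.
        URI relative = URI.create("testdata_seq/clusters");
        System.out.println(relative.getScheme()); // null
    }
}
```

Hadoop's Path class accepts the same fully qualified forms, so spelling out `hdfs://...` or `file://...` everywhere removes the ambiguity.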
On Fri, Mar 29, 2013 at 7:05 PM, Cyril Bogus cyrilbo...@gmail.com wrote:
Hello,
I checked the implementation of GenericDataModel for adding and removing
preferences after instantiation. Those methods (setPreference(long, long,
float) and removePreference(long, long)) throw
UnsupportedOperationException. I'd like to know whether there is an
important reason for not
Yes, it's OK. You need to take care of thread safety though, which will be
hard. The other problem is that changing the underlying data doesn't
necessarily invalidate caches above it. You'll have to consider that
part as well. I suppose this is part of why it was conceived as a
model where the data is
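To make the thread-safety point concrete, here is a hypothetical sketch (none of these class or method names are Mahout's) of guarding a mutable preference store with a read-write lock; note it still leaves the cache-invalidation problem open:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical mutable preference store, NOT Mahout's API.
// Readers share the lock; writers get exclusive access.
public class MutablePreferenceStore {
    private final Map<Long, Map<Long, Float>> prefs = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void setPreference(long userID, long itemID, float value) {
        lock.writeLock().lock();
        try {
            prefs.computeIfAbsent(userID, k -> new HashMap<>()).put(itemID, value);
            // A real model would also have to invalidate any similarity or
            // neighborhood caches built on top of this data here.
        } finally {
            lock.writeLock().unlock();
        }
    }

    public Float getPreference(long userID, long itemID) {
        lock.readLock().lock();
        try {
            Map<Long, Float> items = prefs.get(userID);
            return items == null ? null : items.get(itemID);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

The locking itself is the easy part; as noted above, the hard part is that every cached structure derived from the data becomes stale on each write.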
I kind of saw this coming, since I felt like file:/// would be appended, but
here is the error I get if I do it:
ERROR: java.lang.IllegalArgumentException: Wrong FS:
hdfs://super:54310/user/cyril/testdata_seq, expected: file:///
On Fri, Mar 29, 2013 at 1:27 PM, Dan Filimon
Maybe this helps?
http://www.opensourceconnections.com/2013/03/24/hdfs-debugging-wrong-fs-expected-file-exception/
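For reference, the usual cause of that "Wrong FS ... expected: file:///" error is that the client Configuration never loaded core-site.xml, so the default file system falls back to the local one. A core-site.xml along these lines on the program's classpath (or added via Configuration.addResource) typically fixes it; this is a sketch using the host and port from the error above, not the poster's confirmed config:

```xml
<!-- core-site.xml: sketch of a Hadoop 1.x client config.
     Host/port taken from the error message in this thread. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://super:54310</value>
  </property>
</configuration>
```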
On Fri, Mar 29, 2013 at 9:27 PM, Cyril Bogus cyrilbo...@gmail.com wrote:
THANK YOU SO MUCH DAN...
It even solved another problem I was having with Sqoop, which couldn't connect
to the hdfs through Java programming.
On Fri, Mar 29, 2013 at 3:30 PM, Dan Filimon dangeorge.fili...@gmail.com wrote:
Happy to help! :)
On Fri, Mar 29, 2013 at 9:38 PM, Cyril Bogus cyrilbo...@gmail.com wrote: