I wonder if there is anything like this in the south Germany area.
Bob
On Thu Bob Schulze b.schu...@ecircle.com wrote:
I wonder if there is anything like this in the south Germany area.
There are at least two developers in or close to Munich that I am aware of.
You should contact @larsgeorge - he expressed interest in a Munich
Hadoop User Group as well.
Isabel
Hi Bob,
I am working on getting a Munich HUG meeting going, which would happen
in between the Berlin ones. I am currently looking for a sponsor that
would offer a room and a projector. If I get that organized I would simply
set a date and send an invitation on the usual channels and the
Reshu Jain wrote:
Hi
I wanted to propose IBM's General Parallel File System™ (GPFS™) as the
distributed filesystem. GPFS™ is well known for its unmatched
scalability and
leadership in file system performance, and it is now IBM’s premier
storage virtualization solution. More information at
Are you certain that your records are being split into key and value the way
you expect? That is the usual reason for odd join behavior.
I haven't used the join code past 19.1, however.
On Wed, Nov 18, 2009 at 12:42 PM, Edmund Kohlwey ekohl...@gmail.com wrote:
I'm using Cloudera's distribution
Lars,
Will email this evening. In the meantime I will speak to the folks from whom we
requested the space. I would guess that Jan would be better so we have
time to set up everything. However it may be good to arrange a meetup in
Munich prior to this. We are near Erding and so only about 40
Hi Brian
You mean that I can find the HDFS API doc via Google or Baidu (Baidu is
my nearest search engine)?
But I still cannot find any *hdfs* API docs with keywords like
org.apache.hadoop.hdfs.DFSUtil, org.apache.hadoop.hdfs.DFSUtil api doc,
or hadoop hdfs api doc.
Do I miss
Steve Loughran wrote:
Michael Thomas wrote:
IPs are passed to the rack awareness script. We use 'dig' to do the
reverse lookup to find the hostname, as we also embed the rack id in
the worker node hostnames.
It might be nice to have some example scripts up on the wiki, to give
people a
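A minimal sketch of such a topology script, assuming the rack id is embedded
in the worker hostnames as described above (the worker-rack<N> pattern here is
a hypothetical stand-in for whatever naming scheme a site actually uses):

  #!/bin/bash
  # Hadoop invokes the script named by topology.script.file.name with one
  # or more IPs as arguments and expects one rack path per argument on stdout.
  for ip in "$@"; do
    # Reverse-resolve the IP; dig returns the name with a trailing dot.
    host=$(dig +short -x "$ip" | sed 's/\.$//')
    # Pull the rack id out of a hostname like worker-rack3-node12.
    rack=$(echo "$host" | sed -n 's/.*rack\([0-9][0-9]*\).*/\1/p')
    if [ -n "$rack" ]; then
      echo "/rack${rack}"
    else
      echo "/default-rack"
    fi
  done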
Can I just change the block size in the config and restart or do I have to
reformat? It's okay if what is currently in the file system stays at the old
block size if that's possible?
Feel free to add this here:
http://wiki.apache.org/hadoop/topology_rack_awareness_scripts
On Thu, Nov 19, 2009 at 11:18 AM, Michael Thomas tho...@hep.caltech.edu wrote:
Steve Loughran wrote:
Michael Thomas wrote:
IPs are passed to the rack awareness script. We use 'dig' to do the
reverse
On Thu, Nov 19, 2009 at 11:24 AM, Raymond Jennings III
raymondj...@yahoo.com wrote:
Can I just change the block size in the config and restart or do I have to
reformat? It's okay if what is currently in the file system stays at the old
block size if that's possible?
Raymond,
The block
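For what it's worth, the block size is a per-file property fixed when a file
is written, so no reformat is needed; existing files keep their old block size
and only newly created files pick up the new value. A sketch of the 0.20-era
setting:

  <!-- hdfs-site.xml: affects files created after the change -->
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>  <!-- 128 MB, in bytes -->
  </property>

It can also be overridden for a single upload, e.g.
  hadoop fs -D dfs.block.size=268435456 -put local.file /dest/file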
Hi.
I have a strange case of missing files, which most probably were randomly
deleted by my application.
Does HDFS provide any auditing tools for tracking who deleted what and
when?
Thanks in advance.
I don't know about the auditing tools, but I have seen files get randomly
deleted in dev setups when using hadoop with the default hadoop.tmp.dir
setting, which is /tmp/hadoop-${user.name}.
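On dev boxes the usual fix is to point hadoop.tmp.dir somewhere persistent,
since dfs.name.dir and dfs.data.dir default to directories under it and OS
cleanup jobs will happily empty /tmp. A sketch, with a hypothetical path:

  <!-- core-site.xml -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop/tmp</value>  <!-- hypothetical persistent location -->
  </property>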
On Thu, Nov 19, 2009 at 9:03 AM, Stas Oskin stas.os...@gmail.com wrote:
Hi.
I have a strange case
Everything online says that replication will be taken care of automatically,
but I've had a file (that I uploaded through the put command on one node)
sitting with a replication of 1 for three days.
Oh, I bet the node's replication level overrode the master's... yeah.
Thanks.
On Thu, Nov 19, 2009 at 10:17 AM, Boris Shkolnik bo...@yahoo-inc.com wrote:
What is your configured replication level?
(<name>dfs.replication</name> in hdfs-site.xml or hdfs-default.xml)
One can specify
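For reference, a sketch of the setting Boris means; note that replication is
read from the configuration of the client doing the write, which is how a
node-local value can override the master's:

  <!-- hdfs-site.xml on the client submitting the writes -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>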
Hey Mike,
1) What was the initial replication factor requested? It will always stay at
that level until you request a new one.
2) I think to manually change a file's replication it is hadoop dfsadmin
-setrep or something like that. Don't trust what I wrote, trust the help
output.
3) If a
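For the record, in 0.20-era releases the per-file command Brian means is
hadoop fs -setrep rather than dfsadmin; check hadoop fs -help on your own
version:

  # Set replication to 3 for one path; -w waits until it takes effect,
  # and -R would recurse over a directory tree.
  hadoop fs -setrep -w 3 /path/to/file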
On 11/19/2009 10:25 AM, Brian Bockelman wrote:
Hey Mike,
1) What was the initial replication factor requested? It will always stay at
that level until you request a new one.
2) I think to manually change a file's replication it is hadoop dfsadmin
-setrep or something like that. Don't trust
I just set up a hadoop cluster. When I try to write to it from my java code,
I get the error below. When using the core-site.xml, do I need to specify a
user?
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=DrWho,
and setrep is a good tool to add to my arsenal. thanks.
On Thu, Nov 19, 2009 at 10:28 AM, Michael Thomas tho...@hep.caltech.edu wrote:
On 11/19/2009 10:25 AM, Brian Bockelman wrote:
Hey Mike,
1) What was the initial replication factor requested? It will always stay
at that level until you
Just in case anyone comes across this again, I figured out that it was a
bug in the local job runner.
https://issues.apache.org/jira/browse/MAPREDUCE-1223
On 11/19/09 7:37 AM, Jason Venner wrote:
Are you certain that your records are being split into key and value the way
you expect? That is
Hadoop will perform a 'whoami' to identify the user that is making the
HDFS request. If you have not turned off file permissions in the Hadoop
configuration, the user name will be matched to the permission settings
related to the path you are going after. Think of it as a mechanism
similar (but
You can run your MR program under the matching *nix account; make sure it
is the same as your HDFS directory's user and group.
Or you can turn off the HDFS permission checking in the configuration.
2009/11/20, Ananth T. Sarathy ananth.t.sara...@gmail.com:
I just set up a hadoop cluster. When I try to write to it from my java
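A sketch of the "turn off permissions" route suggested above; fine for a
single-user dev cluster, not for anything shared:

  <!-- hdfs-site.xml on the NameNode: disables permission checking -->
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>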
Hello all,
I am not sure if the question is framed right!
Let's say user1 launches an instance of hadoop on a *single node*, and hence
he has permission to create/delete files on HDFS or launch M/R jobs.
Now what should I do if user2 wants to use the same instance of hadoop which
is launched by