Re: JAVA_HOME is not set

2012-07-05 Thread Simon
I think you should set JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre

JAVA_HOME is the base directory of the Java installation; Hadoop expects to
find the java executable at $JAVA_HOME/bin/java.
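
For example (just a quick sanity check from the shell, reusing the path from
your mail; adjust it if your JDK lives elsewhere):

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre   # the base dir, not .../bin/java
ls $JAVA_HOME/bin/java        # should list the java binary
$JAVA_HOME/bin/java -version  # should print the JVM version

In your session you exported JAVA_HOME pointing at the java binary itself; it
needs to be the directory two levels above it.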

Regards,
Simon


On Thu, Jul 5, 2012 at 12:42 PM, Ying Huang  wrote:

> Hello,
> I am installing hadoop according to this page:
> https://cwiki.apache.org/BIGTOP/how-to-install-hadoop-distribution-from-bigtop.html
> I think I have successfully installed hadoop on my Ubuntu 12.04 x64.
> Then I went on to the "Running Hadoop" step; below are the commands I ran.
> Why does it say that my JAVA_HOME is not set?
> ------------------------------------------------------------------------
> root@ubuntu32:/usr/lib/hadoop# export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
> root@ubuntu32:/usr/lib/hadoop# sudo -u hdfs hadoop namenode -format
> Error: JAVA_HOME is not set.
> root@ubuntu32:/usr/lib/hadoop# ls $JAVA_HOME -al
> -rwxr-xr-x 1 root root 5588 May  2 20:14 /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
> root@ubuntu32:/usr/lib/hadoop#
>
> ------------------------------------------------------------------------
>
>
> --
>
>
> Best Regards
> Ying Huang
>
>


Re: JAVA_HOME is not set

2012-07-05 Thread Simon
Did you configure JAVA_HOME in file hadoop-env.sh?
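
If not, a minimal sketch of the relevant line (assuming the same OpenJDK path
as in your mail, and that your hadoop-env.sh lives in the usual conf/ or
/etc/hadoop/conf directory):

# in hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre

Setting it there is more reliable than exporting it in your shell, because
sudo normally resets the environment, so a JAVA_HOME exported by root is not
seen when you run "sudo -u hdfs hadoop ...".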

Simon


On Thu, Jul 5, 2012 at 1:02 PM, Ying Huang  wrote:

>  Following your suggestion, I tried the commands below, but it still fails:
> 
> root@ubuntu32:/usr/lib/hadoop# export
> JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre
>
> root@ubuntu32:/usr/lib/hadoop# sudo -u hdfs hadoop namenode -format
> Error: JAVA_HOME is not set.
> root@ubuntu32:/usr/lib/hadoop# ls $JAVA_HOME -al
> total 20
> drwxr-xr-x 5 root root 4096 Jul  3 20:43 .
> drwxr-xr-x 5 root root 4096 Jul  3 20:43 ..
> lrwxrwxrwx 1 root root   50 May  2 20:14 ASSEMBLY_EXCEPTION ->
> ../../java-7-openjdk-common/jre/ASSEMBLY_EXCEPTION
> drwxr-xr-x 2 root root 4096 Jul  3 20:43 bin
> drwxr-xr-x 8 root root 4096 Jul  3 20:43 lib
> drwxr-xr-x 4 root root 4096 Jul  3 20:43 man
> lrwxrwxrwx 1 root root   50 May  2 20:14 THIRD_PARTY_README ->
> ../../java-7-openjdk-common/jre/THIRD_PARTY_README
> root@ubuntu32:/usr/lib/hadoop#
>
> ----
>
> On 07/05/2012 11:53 AM, Simon wrote:
>
> I think you should set  JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre
>
>  JAVA_HOME is the base location of java, where it can find the java
> executable $JAVA_HOME/bin/java
>
>  Regards,
> Simon
>
>
> On Thu, Jul 5, 2012 at 12:42 PM, Ying Huang  wrote:
>
>> Hello,
>> I am installing hadoop according to this page:
>> https://cwiki.apache.org/BIGTOP/how-to-install-hadoop-distribution-from-bigtop.html
>> I think I have successfully installed hadoop on my Ubuntu 12.04 x64.
>> Then I went on to the "Running Hadoop" step; below are the commands I ran.
>> Why does it say that my JAVA_HOME is not set?
>>
>> --
>> root@ubuntu32:/usr/lib/hadoop# export
>> JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
>> root@ubuntu32:/usr/lib/hadoop# sudo -u hdfs hadoop namenode -format
>> Error: JAVA_HOME is not set.
>> root@ubuntu32:/usr/lib/hadoop# ls $JAVA_HOME -al
>> -rwxr-xr-x 1 root root 5588 May  2 20:14
>> /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java
>> root@ubuntu32:/usr/lib/hadoop#
>>
>>
>> --
>>
>>
>> --
>>
>>
>> Best Regards
>> Ying Huang
>>
>>
>
>
> --
>
>
> Best Regards
> Ying Huang
>
>


Re: libhdfs on hadoop 0.20.0 release

2009-10-19 Thread Simon
Maybe you need to run ./configure first to generate Makefile for your
specific system?
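
A rough sketch of the generic autotools flow (paths as in the guide you
quoted; if there is no configure script in that directory, autoreconf can
usually generate one from the shipped autoconf files):

cd src/c++/libhdfs
[ -x ./configure ] || autoreconf -if   # only needed if configure is missing
./configure
make

On amd64 you may still need the OS_ARCH=amd64 tweak mentioned in the note you
quoted (HADOOP-3344).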

On Mon, Oct 19, 2009 at 6:06 PM, 杨杰  wrote:

> NOTE: for amd64 architecture, libhdfs will not compile unless you edit
> the Makefile in src/c++/libhdfs/Makefile and set OS_ARCH=amd64
> (probably the same for others too). See
> [https://issues.apache.org/jira/browse/HADOOP-3344 HADOOP-3344]
>
> Common build problems include not finding the libjvm.so in
> JAVA_HOME/jre/lib/OS_ARCH/server or not finding fuse in
> FUSE_HOME or /usr/local.
>
>
>
> In the guide, it's suggested to modify the Makefile, but in the 0.20.0
> release there is no such file, only a Makefile.am and a Makefile.in. I don't
> know how to change them. As a result my "libhdfs.so" never gets built, which
> has puzzled me for a long time.
>
> Does anyone have experience configuring fuse-dfs on an Ubuntu server
> (amd64)? Could you please give me some guidance?
>
> Thank you!
>
> --
> Yang Jie(杨杰)
> Group of CLOUD, Xi'an Jiaotong University
> Department of Computer Science and Technology, Xi’an Jiaotong University
>
> hi.baidu.com/thinkdifferent
> PHONE: 86 1346888 3723
> TEL: 86 29 82665263 EXT. 24
> MSN: xtyangjie2...@yahoo.com.cn
>


Re: Problem with building hadoop 0.21

2011-02-27 Thread Simon
Hey,

Can you let us know why you want to replace all the jar files? That usually
does not work, especially with development code built from trunk.
So, just use the build you have successfully compiled; don't mix in replaced
jar files.

Hope it can work.

Simon

2011/2/27 朱韬 

> Hi, guys:
>  I checked out the source code from
> http://svn.apache.org/repos/asf/hadoop/mapreduce/trunk/. Then I compiled it using
> this script:
>  #!/bin/bash
> export JAVA_HOME=/usr/share/jdk1.6.0_14
> export CFLAGS=-m64
> export CXXFLAGS=-m64
> export ANT_HOME=/opt/apache-ant-1.8.2
> export PATH=$PATH:$ANT_HOME/bin
> ant -Dversion=0.21.0 -Dcompile.native=true
> -Dforrest.home=/home/hadoop/apache-forrest-0.9 clean tar
> Everything was OK up to these steps. Then I replaced
> hadoop-mapred-0.21.0.jar, hadoop-mapred-0.21.0-sources.jar,
> hadoop-mapred-examples-0.21.0.jar, hadoop-mapred-test-0.21.0.jar, and
> hadoop-mapred-tools-0.21.0.jar in release 0.21.0 with the compiled jar files
> from the step above. I also added my scheduler to lib. When starting the
> customized Hadoop, I encountered the problems below:
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/security/RefreshUserMappingsProtocol
>at java.lang.ClassLoader.defineClass1(Native Method)
>at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
>at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
>at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
>at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
>at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
>at java.security.AccessController.doPrivileged(Native Method)
>at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
>at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
> 10.61.0.6: starting tasktracker, logging to
> /home/hadoop/hadoop-green-0.1.0/logs/hadoop-hadoop-tasktracker-hdt0.hypercloud.ict.out
> 10.61.0.143: starting tasktracker, logging to
> /home/hadoop/hadoop-green-0.1.0/logs/hadoop-hadoop-tasktracker-hdt1.hypercloud.ict.out
> 10.61.0.7: starting tasktracker, logging to
> /home/hadoop/hadoop-green-0.1.0/logs/hadoop-hadoop-tasktracker-hdt2.hypercloud.ict.out
> 10.61.0.6: Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/io/SecureIOUtils$AlreadyExistsException
> 10.61.0.6: Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException
> 10.61.0.6:  at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
> 10.61.0.6:  at java.security.AccessController.doPrivileged(Native
> Method)
> 10.61.0.6:  at
> java.net.URLClassLoader.findClass(URLClassLoader.java:188)
> 10.61.0.6:  at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
> 10.61.0.6:  at
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> 10.61.0.6:  at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
> 10.61.0.6:  at
> java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
> 10.61.0.6: Could not find the main class:
> org.apache.hadoop.mapred.TaskTracker.  Program will exit.
> 10.61.0.143: Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/io/SecureIOUtils$AlreadyExistsException
> 10.61.0.143: Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException
> 10.61.0.143:at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
> 10.61.0.143:at java.security.AccessController.doPrivileged(Native
> Method)
> 10.61.0.143:at
> java.net.URLClassLoader.findClass(URLClassLoader.java:188)
> 10.61.0.143:at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
> 10.61.0.143:at
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> 10.61.0.143:at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
> 10.61.0.143:at
> java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
> 10.61.0.143: Could not find the main class:
> org.apache.hadoop.mapred.TaskTracker.  Program will exit.
> 10.61.0.7: Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/io/SecureIOUtils$AlreadyExistsException
> 10.61.0.7: Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException
> 10.61.0.7:  at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
> 10.61.0.7:  at java.security.AccessController.doPrivileged(Native
> Method)
> 10.61.0.7:  at
> java.net.URLClassLoader.findClass(URLClassLoader.java:188)
> 10.61.0.7:  at java.lang.ClassLoader.loadClass(ClassLoader.ja

Re: Hadoop Case Studies?

2011-02-27 Thread Simon
I think you could also use the PageRank algorithm as an example with Hadoop.

Simon -

On Sun, Feb 27, 2011 at 9:20 PM, Lance Norskog  wrote:

> This is an exercise that will appeal to undergrads: pull the Craigslist
> personals ads from several cities, and do text classification. Given a
> training set of all the cities, attempt to classify test ads by city.
> (If Peter Harrington is out there, I stole this from you.)
>
> Lance
>
> On Sun, Feb 27, 2011 at 4:55 PM, Ted Dunning 
> wrote:
> > Ted,
> >
> > Greetings back at you.  It has been a while.
> >
> > Check out Jimmy Lin and Chris Dyer's book about text processing with
> > hadoop:
> >
> > http://www.umiacs.umd.edu/~jimmylin/book.html
> >
> >
> > On Sun, Feb 27, 2011 at 4:34 PM, Ted Pedersen 
> wrote:
> >
> >> Greetings all,
> >>
> >> I'm teaching an undergraduate Computer Science class that is using
> >> Hadoop quite heavily, and would like to include some case studies at
> >> various points during this semester.
> >>
> >> We are using Tom White's "Hadoop The Definitive Guide" as a text, and
> >> that includes a very nice chapter of case studies which might even
> >> provide enough material for my purposes.
> >>
> >> But, I wanted to check and see if there were other case studies out
> >> there that might provide motivating and interesting examples of how
> >> Hadoop is currently being used. The idea is to find material that goes
> >> beyond simply saying "X uses Hadoop" to explaining in more detail how
> >> and why X are using Hadoop.
> >>
> >> Any hints would be very gratefully received.
> >>
> >> Cordially,
> >> Ted
> >>
> >> --
> >> Ted Pedersen
> >> http://www.d.umn.edu/~tpederse
> >>
> >
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>



-- 
Regards,
Simon


Re: a hadoop input format question

2011-02-27 Thread Simon
First, I think your Hadoop version is a bit too old; maybe try a release at or
above 0.20.
Then try to run the sort sample with the following command:
bin/hadoop jar hadoop-*-examples.jar sort [-m <#maps>] [-r <#reduces>]
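
For example, with fully qualified class names and hypothetical HDFS paths (as
far as I remember, the example driver loads -inFormat/-outFormat by class
name, so the package prefix matters):

bin/hadoop jar hadoop-*-examples.jar sort \
  -inFormat org.apache.hadoop.mapred.TextInputFormat \
  -outFormat org.apache.hadoop.mapred.TextOutputFormat \
  /user/shivani/datain /user/shivani/dataout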
 

HTH.
Simon
On Fri, Feb 25, 2011 at 5:37 PM, Shivani Rao  wrote:

> I am running basic hadoop examples on amazon emr and I am stuck at a very
> simple place. I am apparently not passing the right "classname" for
> inputFormat
>
> From hadoop documentation it seems like "TextInputFormat" is a valid option
> for input format
>
> I am running a simple sort example using mapreduce.
>
> Here are the command variations I tried, all in vain:
>
>
> $usr/local/hadoop/bin/hadoop jar /path to hadoop
> examples/hadoop-0.18.0-examples.jar sort -inFormat TextInputFormat
> -outFormat TextOutputFormat /path to datainput/datain/ /path to data
> output/dataout
>
> The sort function does not declare "TextInputFormat" in its import list.
> Could that be a problem? Could it be a version problem?
>
>
> Any help is appreciated!
> Shivani
>
>
>
> --
> Research Scholar,
> School of Electrical and Computer Engineering
> Purdue University
> West Lafayette IN
> web.ics.purdue.edu/~sgrao <http://web.ics.purdue.edu/%7Esgrao>
>



-- 
Regards,
Simon


Re: TaskTracker not starting on all nodes

2011-02-27 Thread Simon
Hey Bikash,

Maybe you can manually start a tasktracker on the node and see if there are
any error messages. Also, don't forget to check your configuration files for
mapreduce and hdfs, and make sure the datanode can start successfully first.
After all these steps, you can submit a job on the master node and see whether
there is any communication between the failed nodes and the master node.
Post your error messages here if possible.
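
For the manual start, a minimal sketch (run on one of the failing nodes; the
paths assume a standard tarball layout):

bin/hadoop-daemon.sh start tasktracker       # start just the tasktracker
jps                                          # check whether TaskTracker shows up
tail -n 100 logs/hadoop-*-tasktracker-*.log  # look for bind/connection errors

If the process dies right away, the tail of that log usually tells you why.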

HTH.
Simon -

On Sat, Feb 26, 2011 at 10:44 AM, bikash sharma wrote:

> Thanks James. Well all the config. files and shared keys are on a shared
> storage that is accessed by all the nodes in the cluster.
> At times, everything runs fine on initialization, but at other times the
> same problem persists, so I was a bit confused.
> Also, I checked the TaskTracker logs on those nodes; there does not seem to
> be any error.
>
> -bikash
>
> On Sat, Feb 26, 2011 at 10:30 AM, James Seigel  wrote:
>
> > Maybe your ssh keys aren’t distributed the same on each machine or the
> > machines aren’t configured the same?
> >
> > J
> >
> >
> > On 2011-02-26, at 8:25 AM, bikash sharma wrote:
> >
> > > Hi,
> > > I have a 10 nodes Hadoop cluster, where I am running some benchmarks
> for
> > > experiments.
> > > Surprisingly, when I initialize the Hadoop cluster
> > > (hadoop/bin/start-mapred.sh), in many instances, only some nodes have
> > > TaskTracker process up (seen using jps), while other nodes do not have
> > > TaskTrackers. Could anyone please explain?
> > >
> > > Thanks,
> > > Bikash
> >
> >
>



-- 
Regards,
Simon


Re: Re: Problem with building hadoop 0.21

2011-02-28 Thread Simon
I mean: can you make your changes against the 0.21 version of Hadoop itself,
rather than putting jars built from trunk into the 0.21 release? There may be
API incompatibilities between the two. Or you can try downloading the source
code of version 0.21 and repeating your steps against it.
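
Roughly like this (the exact branch path is from memory, so please verify it
in the repository browser first):

svn checkout http://svn.apache.org/repos/asf/hadoop/mapreduce/branches/branch-0.21/ mapreduce-0.21
cd mapreduce-0.21
ant -Dversion=0.21.0 -Dcompile.native=true clean tar

That way the jars you deploy are built from the same code line as the 0.21.0
release you are running.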

Thanks
Simon

2011/2/28 朱韬 

> Hi Simon:
>   I modified some code related to the scheduler and designed a customized
> scheduler. When I built the modified code, the problems described above came
> up. I suspected there was something wrong with my code, but after I built the
> out-of-the-box code, the same problems still existed. Can you tell me how to
> build and deploy a customized Hadoop?
> Thank you!
>
>   zhutao
>
>
>
>
>
> At 2011-02-28 11:21:16,Simon  wrote:
>
> >Hey,
> >
> >Can you let us know why you want to replace all the jar files? That
> usually
> >does not work, especially for development code in the code base.
> >So, just use the one you have successfully compiled, don't replace jar
> >files.
> >
> >Hope it can work.
> >
> >Simon
> >
> >2011/2/27 朱韬 
> >
> >> Hi, guys:
> >>  I checked out the source code from
> >> http://svn.apache.org/repos/asf/hadoop/mapreduce/trunk/. Then I compiled it using
> >> this script:
> >>  #!/bin/bash
> >> export JAVA_HOME=/usr/share/jdk1.6.0_14
> >> export CFLAGS=-m64
> >> export CXXFLAGS=-m64
> >> export ANT_HOME=/opt/apache-ant-1.8.2
> >> export PATH=$PATH:$ANT_HOME/bin
> >> ant -Dversion=0.21.0 -Dcompile.native=true
> >> -Dforrest.home=/home/hadoop/apache-forrest-0.9 clean tar
> >> Everything was OK up to these steps. Then I replaced
> >> hadoop-mapred-0.21.0.jar, hadoop-mapred-0.21.0-sources.jar,
> >> hadoop-mapred-examples-0.21.0.jar, hadoop-mapred-test-0.21.0.jar, and
> >> hadoop-mapred-tools-0.21.0.jar in release 0.21.0 with the compiled jar files
> >> from the step above. I also added my scheduler to lib. When starting the
> >> customized Hadoop, I encountered the problems below:
> >> Exception in thread "main" java.lang.NoClassDefFoundError:
> >> org/apache/hadoop/security/RefreshUserMappingsProtocol
> >>at java.lang.ClassLoader.defineClass1(Native Method)
> >>at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
> >>at
> >> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
> >>at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
> >>at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
> >>at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
> >>at java.security.AccessController.doPrivileged(Native Method)
> >>at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
> >>at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
> >> 10.61.0.6: starting tasktracker, logging to
> >>
> /home/hadoop/hadoop-green-0.1.0/logs/hadoop-hadoop-tasktracker-hdt0.hypercloud.ict.out
> >> 10.61.0.143: starting tasktracker, logging to
> >>
> /home/hadoop/hadoop-green-0.1.0/logs/hadoop-hadoop-tasktracker-hdt1.hypercloud.ict.out
> >> 10.61.0.7: starting tasktracker, logging to
> >>
> /home/hadoop/hadoop-green-0.1.0/logs/hadoop-hadoop-tasktracker-hdt2.hypercloud.ict.out
> >> 10.61.0.6: Exception in thread "main" java.lang.NoClassDefFoundError:
> >> org/apache/hadoop/io/SecureIOUtils$AlreadyExistsException
> >> 10.61.0.6: Caused by: java.lang.ClassNotFoundException:
> >> org.apache.hadoop.io.SecureIOUtils$AlreadyExistsException
> >> 10.61.0.6:  at
> java.net.URLClassLoader$1.run(URLClassLoader.java:200)
> >> 10.61.0.6:  at java.security.AccessController.doPrivileged(Native
> >> Method)
> >> 10.61.0.6:  at
> >> java.net.URLClassLoader.findClass(URLClassLoader.java:188)
> >> 10.61.0.6:  at
> java.lang.ClassLoader.loadClass(ClassLoader.java:307)
> >> 10.61.0.6:  at
> >> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> >> 10.61.0.6:  at
> java.lang.ClassLoader.loadClass(ClassLoader.java:252)
> >> 10.61.0.6:  at
> >> java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
> >> 10.61.0.6: Could not find the main class:
> >> org.apache.hadoop.mapred.TaskTracker.  Program will exit.
> >> 10.61.0.143: Exception in thread "main" java.lang.NoClassDefFoundError:
> >> org/apache/hadoop/io/SecureIOUtils$AlreadyExistsException
> >> 10.61.0.143: Caused

Re: WritableName can't load class ... for custom WritableClasses

2011-03-19 Thread Simon
It is hard to judge without the code, but my guess is that your
TermFreqArrayWritable is not properly compiled or not visible on the classpath
of the job that reads the sequence file.
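
A quick way to check (the jar name here is hypothetical):

jar tf yourjob.jar | grep TermFreqArrayWritable

The sequence file header stores the value class by name, and the reader
resolves it with Class.forName, so the class has to be visible on the
classpath of the reading job. Moving it out of the default package into a
named package, and using its fully qualified name, also tends to avoid this
kind of lookup problem.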

HTH.
Simon

On Fri, Mar 18, 2011 at 7:23 PM, maha  wrote:

> Hi,
>
>  The following was working fine with Hadoop Writables.
> Now, I'm using my custom Writable class called "TermFreqArrayWritable" to
> produce a Sequence File with key=LongWritable and
> value=TermFreqArrayWritable.
>
>  However, when I try to read the produced Sequence File using its Reader, I
> get the following:
>
> java.lang.RuntimeException: java.io.IOException: WritableName can't load
> class: TermFreqArrayWritable
>at
> org.apache.hadoop.io.SequenceFile$Reader.getValueClass(SequenceFile.java:1615)
>at
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1555)
>at
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1428)
>at
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1417)
>at
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1412)
>at
> SequenceFileReader_HadoopJob.main(SequenceFileReader_HadoopJob.java:46)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>at java.lang.reflect.Method.invoke(Method.java:597)
>at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.io.IOException: WritableName can't load class:
> TermFreqArrayWritable
>at org.apache.hadoop.io.WritableName.getClass(WritableName.java:73)
>at
> org.apache.hadoop.io.SequenceFile$Reader.getValueClass(SequenceFile.java:1613)
>... 10 more
> Caused by: java.lang.ClassNotFoundException: TermFreqArrayWritable
>at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>at java.security.AccessController.doPrivileged(Native Method)
>at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>at java.lang.Class.forName0(Native Method)
>at java.lang.Class.forName(Class.java:247)
>at
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
>at org.apache.hadoop.io.WritableName.getClass(WritableName.java:71)
>... 11 more
>
> By the way, my SequenceFileReader has those lines:
>
>LongWritable key = new LongWritable();
>TermFreqArrayWritable value = new TermFreqArrayWritable();
>while(reader.next(key,value)){
>System.out.println("key: "+ key.toString());
>System.out.println("value: "+ value.toString());
>}
> and TermFreqArrayWritable is inside the same project under a default
> package.
>
>
> Has any one tried their custom Writable with SequenceFiles ?
>
> Thank you,
> Maha
>
>


-- 
Regards,
Simon


Re: running local hadoop job in windows

2011-03-19 Thread Simon
As far as I know, Hadoop currently only runs under *nix-like systems; correct
me if I am wrong. If you want to run it under Windows, you can try Cygwin as
the environment.

Thanks
Simon

On Fri, Mar 18, 2011 at 7:11 PM, Mark Kerzner  wrote:

> No, I hoped that it is not absolutely necessary for that kind of use. I am
> not even issuing the "hadoop -jar" command, but it is pure "java -jar". It
> is true though that my Ubuntu has a Hadoop set up, so maybe it is doing a
> lot of magic behind my back.
>
> I did not want to have my inexperienced Windows users to have to install
> cygwin for just trying the package.
>
> Thank you,
> Mark
>
> On Fri, Mar 18, 2011 at 6:06 PM, Stephen Boesch  wrote:
>
> > presumably you ran this under cygwin?
> >
> > 2011/3/18 Mark Kerzner 
> >
> > > Hi, guys,
> > >
> > > I want to give my users a sense of what my hadoop application can do,
> and
> > I
> > > am trying to make it run in Windows, with this command
> > >
> > > java -jar dist\FreeEed.jar
> > >
> > > This command runs my hadoop job locally, and it works in Linux.
> However,
> > in
> > > Windows I get the error listed below. Since I am running completely
> > > locally,
> > > I don't see why it is trying to do what it does. Is there a workaround?
> > >
> > > Thank you,
> > > Mark
> > >
> > > Error:
> > >
> > > 11/03/18 17:57:43 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> > > processName
> > > =JobTracker, sessionId=
> > > java.io.IOException: Failed to set permissions of path:
> > > file:/tmp/hadoop-Mark/ma
> > > pred/staging/Mark-1397630897/.staging to 0700
> > >at
> > > org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFile
> > > System.java:526)
> > >at
> > > org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSys
> > > tem.java:500)
> > >at
> > > org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.jav
> > > a:310)
> > >at
> > > org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:18
> > > 9)
> > >at
> > > org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmi
> > > ssionFiles.java:116)
> > >at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:799)
> > >at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:793)
> > >at java.security.AccessController.doPrivileged(Native Method)
> > >at javax.security.auth.Subject.doAs(Unknown Source)
> > >at
> > > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInforma
> > > tion.java:1063)
> > >at
> > > org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:7
> > > 93)
> > >at org.apache.hadoop.mapreduce.Job.submit(Job.java:465)
> > >at
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:495)
> > >at org.frd.main.FreeEedProcess.run(FreeEedProcess.java:66)
> > >at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > >at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> > >at org.frd.main.FreeEedProcess.main(FreeEedProcess.java:71)
> > >at
> org.frd.main.FreeEedMain.runProcessing(FreeEedMain.java:88)
> > >at
> > org.frd.main.FreeEedMain.processOptions(FreeEedMain.java:65)
> > >at org.frd.main.FreeEedMain.main(FreeEedMain.java:31)
> > >
> >
>



-- 
Regards,
Simon


Re: Re:Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

2011-05-27 Thread Simon
First you need to make sure that your dfs daemons are running.
You can start your namenode and datanode separately on the master and slave
nodes, and see what happens, with the following commands:

hadoop namenode
hadoop datanode

The chances are that your datanode cannot be started correctly.
Let us know what the error logs say if there are errors.
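
Once both daemons stay up, a quick way to confirm that the datanode has
actually registered with the namenode is:

hadoop dfsadmin -report   # should list the datanode and a non-zero configured capacity

If the report still shows 0 nodes / 0 capacity, look at the datanode log and
check that your hostnames do not resolve to 127.0.0.1.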

HTH~

Thanks
Simon

2011/5/27 Xu, Richard 

> That setting is 3.
>
> From: DAN [mailto:chaidong...@163.com]
> Sent: Thursday, May 26, 2011 10:23 PM
> To: common-user@hadoop.apache.org; Xu, Richard [ICG-IT]
> Subject: Re:Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203
> cluster
>
> Hi, Richard
>
> Pay attention to "Not able to place enough replicas, still in need of 1".
> Please confirm the correct setting of "dfs.replication" in hdfs-site.xml.
>
> Good luck!
> Dan
> --
>
>
> At 2011-05-27 08:01:37, "Xu, Richard" <richard...@citi.com> wrote:
>
>
>
> >Hi Folks,
>
> >
>
> >We try to get hbase and hadoop running on clusters, take 2 Solaris servers
> for now.
>
> >
>
> >Because of the incompatibility issue between hbase and hadoop, we have to
> stick with hadoop 0.20.2-append release.
>
> >
>
> >It is very straightforward to get hadoop-0.20.203 running, but we have been
> stuck for several days with hadoop-0.20.2, even the official release, not just
> the append version.
>
> >
>
> >1. Once try to run start-mapred.sh(hadoop-daemon.sh --config
> $HADOOP_CONF_DIR start jobtracker), following errors shown in namenode and
> jobtracker logs:
>
> >
>
> >2011-05-26 12:30:29,169 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place
> enough replicas, still in need of 1
>
> >2011-05-26 12:30:29,175 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 9000, call addBlock(/tmp/hadoop-cfadm/mapred/system/
> jobtracker.info, DFSCl
>
> >ient_2146408809) from 169.193.181.212:55334: error: java.io.IOException:
> File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be
> replicated to 0 n
>
> >odes, instead of 1
>
> >java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
>
> >at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>
> >at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>
> >at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> >at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>
> >at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>
> >at java.lang.reflect.Method.invoke(Method.java:597)
>
> >at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>
> >at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>
> >at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>
> >at java.security.AccessController.doPrivileged(Native Method)
>
> >at javax.security.auth.Subject.doAs(Subject.java:396)
>
> >at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
> >
>
> >
>
> >2. Also, Configured Capacity is 0, cannot put any file to HDFS.
>
> >
>
> >3. in datanode server, no error in logs, but tasktracker logs has the
> following suspicious thing:
>
> >2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
>
> >2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 41904: starting
>
> >2011-05-25 23:36:10,852 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 41904: starting
>
> >2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 41904: starting
>
> >2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 41904: starting
>
> >2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 41904: starting
>
> >.
>
> >2011-05-25 23:36:10,855 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 63 on 41904: starting
>
> >2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker:
> TaskTracker up at: localhost/127.0.0.1:41904
>
> >2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker:
> Starting tracker tracker_loanps3d:localhost/127.0.0.1:41904
>
> >
>
> >
>
> >I have tried all suggestions found so far, including
>
> > 1) remove hadoop-name and hadoop-data folders and reformat namenode;
>
> > 2) clean up all temp files/folders under /tmp;
>
> >
>
> >But nothing works.
>
> >
>
> >Your help is greatly appreciated.
>
> >
>
> >Thanks,
>
> >
>
> >RX
>
>


-- 
Regards,
Simon


Re: IP address or host name

2009-08-24 Thread Simon Willnauer
You can either try to set the "master.com" name in your /etc/hosts
file or if that does not work for some reason you can try to set the
name in your configured DNS server.
You should make sure that your hostname is not mapped to 127.0.0.1,
otherwise hadoop will bind its sockets to loopback. That would explain
why your local datanode can connect but others can't.
Make sure you format your dfs again, otherwise you will get the same FS
exception again.
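
A minimal /etc/hosts sketch for the master (IP and hostname taken from your
mail; the important part is that master.com is NOT also listed on the
127.0.0.1 line):

127.0.0.1      localhost
192.68.42.221  master.com   master

With that in place you can keep hdfs://master.com:9000 in fs.default.name and
avoid the Wrong FS exception you get with the raw IP form.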

simon

On Mon, Aug 24, 2009 at 6:25 PM, Nelson, William wrote:
> I'm new to hadoop.
> I'm running 0.19.2 on a Centos 5.2  cluster.
> I have been having problems with the nodes connecting to the master (even 
> when the firewall is off) using the hostname  in the hadoop-site.xml but it 
> will connect using the IP address.
>  This is also true trying to connect to port 9000 with telnet. If I start 
> hadoop with hostnames in the hadoop-site.xml, I get  Connection refused. When 
> I use IP addresses in the hadoop-site.xml I can connect with telnet using 
> either the IP address or hostname.
> The datanode running on the master node can connect with either IP address or 
> hostname in the hadoop-site.xml.
> I have found this problem posted a couple of times but have not found the
> answer yet.
>
>
> Datanodes on slaves can't connect but the datanode on master can connect.
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://master.com:9000</value>
> </property>
>
> Everybody can connect.
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://192.68.42.221:9000</value>
> </property>
>
> Unfortunately  using IP addresses creates another problem when I try to run 
> the job: Wrong FS exception
>
>
> Previous posts refer to https://issues.apache.org/jira/browse/HADOOP-5191 but 
> it appears the work around is to switch back to host names, which I can't get 
> to work.
>
>
>
> Thanks in advance for any help.
>
>
>
> Bill
>
>
>
>
>


Re: IP address or host name

2009-08-24 Thread Simon Willnauer
happy to help :)

simon

On Mon, Aug 24, 2009 at 7:21 PM, Nelson, William wrote:
> Thanks for the quick reply. The host name on the master was bound to the
> loopback interface.
> All is well.
> Bill
>
> -Original Message-
> From: Simon Willnauer [mailto:simon.willna...@googlemail.com]
> Sent: Monday, August 24, 2009 12:46 PM
> To: common-user@hadoop.apache.org
> Subject: Re: IP address or host name
>
> You can either try to set the "master.com" name in your /etc/hosts
> file or if that does not work for some reason you can try to set the
> name in your configured DNS server.
> You should make sure that your hostname is not mapped to 127.0.0.1,
> otherwise hadoop will bind its sockets to loopback. That would explain
> why your local datanode can connect but others can't.
> Make sure you format your dfs again, otherwise you will get the same FS
> exception again.
>
> simon
>
> On Mon, Aug 24, 2009 at 6:25 PM, Nelson, William wrote:
>> I'm new to hadoop.
>> I'm running 0.19.2 on a Centos 5.2  cluster.
>> I have been having problems with the nodes connecting to the master (even 
>> when the firewall is off) using the hostname  in the hadoop-site.xml but it 
>> will connect using the IP address.
>>  This is also true trying to connect to port 9000 with telnet. If I start 
>> hadoop with hostnames in the hadoop-site.xml, I get  Connection refused. 
>> When I use IP addresses in the hadoop-site.xml I can connect with telnet 
>> using either the IP address or hostname.
>> The datanode running on the master node can connect with either IP address 
>> or hostname in the hadoop-site.xml.
>> I have found this problem posted a couple of times but have not found the
>> answer yet.
>>
>>
>> Datanodes on slaves can't connect but the datanode on master can connect.
>> <property>
>>   <name>fs.default.name</name>
>>   <value>hdfs://master.com:9000</value>
>> </property>
>>
>> Everybody can connect.
>> <property>
>>   <name>fs.default.name</name>
>>   <value>hdfs://192.68.42.221:9000</value>
>> </property>
>>
>> Unfortunately  using IP addresses creates another problem when I try to run 
>> the job: Wrong FS exception
>>
>>
>> Previous posts refer to https://issues.apache.org/jira/browse/HADOOP-5191 
>> but it appears the work around is to switch back to host names, which I 
>> can't get to work.
>>
>>
>>
>> Thanks in advance for any help.
>>
>>
>>
>> Bill
>>
>>
>>
>>
>>
>


missing libraries when starting hadoop daemon

2009-09-17 Thread Simon Chu
had...@zoe:/opt/hadoop-0.18.3> hadoop start-all.sh
Exception in thread "main" java.lang.NoClassDefFoundError: start-all/sh
Caused by: java.lang.ClassNotFoundException: start-all.sh
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
Could not find the main class: start-all.sh.  Program will exit.


Can someone help me figure out how to include these libraries? By setting
LD_LIBRARY_PATH?

Simon


Re: dump configuration

2011-09-28 Thread Simon Dong
Or http://jobtracker:50030/conf
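
For example, to check a single property from the command line (the hostname is
an assumption, and this servlet may not exist on every 0.20-based build):

curl -s http://jobtracker:50030/conf | grep mapred.user.jobconf.limit

Otherwise, the job.xml linked from the web UI for a submitted job shows the
configuration that particular job actually ran with.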

-SD

On Wed, Sep 28, 2011 at 2:39 PM, Raj V  wrote:
> The xml configuration file is also available under hadoop logs on the 
> jobtracker.
>
> Raj
>
>
>
>>
>>From: "GOEKE, MATTHEW (AG/1000)" 
>>To: "common-user@hadoop.apache.org" 
>>Sent: Wednesday, September 28, 2011 2:27 PM
>>Subject: RE: dump configuration
>>
>>You could always check the web-ui job history for that particular run, open 
>>the job.xml, and search for what the value of that parameter was at runtime.
>>
>>Matt
>>
>>-Original Message-
>>From: patrick sang [mailto:silvianhad...@gmail.com]
>>Sent: Wednesday, September 28, 2011 4:00 PM
>>To: common-user@hadoop.apache.org
>>Subject: dump configuration
>>
>>Hi hadoopers,
>>
>>I was looking for a way to dump the hadoop configuration in order to check
>>whether what I have just changed in mapred-site.xml has really kicked in.
>>
>>Found that HADOOP-6184 is exactly what I want, but the thing is I am running
>>CDH3u0, which is 0.20.2 based.
>>
>>I wonder if anyone here has a trick to dump the hadoop configuration; it
>>doesn't need to be JSON, as long as I can check whether what I changed in the
>>configuration file has really kicked in.
>>
>>PS, I changed this: "mapred.user.jobconf.limit"
>>
>>-P


InputFormat Problem

2011-10-21 Thread Simon Klausner
Hi,

 

I'm trying to define my own InputFormat and RecordReader; however, I'm
getting a type mismatch error in the createRecordReader method of the
InputFormat class.

 

Here is the inputformat:

http://codepad.org/wdr2NqBe

 

here is the recordreader:

  http://codepad.org/9cmY6BjS

 

I get the error in the InputFormat class at line 20: return new
PDFLinkRecordReader();

 

error: type mismatch: cannot convert PDFLinkRecordReader to
RecordReader.

How can I fix this problem? I checked the following tutorial:

http://developer.yahoo.com/hadoop/tutorial/module5.html

I don't see my mistake.

 

Best regards



PathFilter File Glob

2012-02-24 Thread Heeg, Simon
Hello,

I would like to use a PathFilter to filter, by regular expression, the files
that are read by the TextInputFormat, but I don't know how to apply the filter;
I cannot find a setter. Unfortunately Google was not my friend with this issue,
and "The Definitive Guide" does not help that much. I am using Hadoop
0.20.2-cdh3u3.

Please Help!

Kind regards
Simon

Deutsche Telekom AG
Products & Innovation
Simon Heeg
Werkstudent
T-Online-Allee 1, 64295 Darmstadt
+49 6151 680-7835 (Tel.)
E-Mail: s.h...@telekom.de
www.telekom.com



Re: Delivery Status Notification (Failure)

2010-06-10 Thread Simon Narowki
Thanks Abhishek for your answer. But sorry, I still don't understand... What
do you mean by "the runtime/programming support needed for MapReduce"?

Could you please mention some other implementations of MapReduce?

Cheers
Simon


On Thu, Jun 10, 2010 at 10:35 PM, abhishek sharma  wrote:

> Hadoop is an open source implementation of the runtime/programming
> support needed for MapReduce.
> Several different implementations of MapReduce are possible. Google
> has its own that is different from Hadoop.
>
> Abhishek
>
> On Thu, Jun 10, 2010 at 1:32 PM, Simon Narowki 
> wrote:
> > Dear all,
> >
> > I am a new Hadoop user and am a little bit confused about the difference
> > between Hadoop and MapReduce. Could anyone please clear this up for me?
> >
> > Thanks!
> > Simon
> >
>


Re: Delivery Status Notification (Failure)

2010-06-10 Thread Simon Narowki
Hi Edson,

Thank you for the answer. That's right, MapReduce is the Google framework
based on the two functions Map and Reduce. If I understood it correctly, Hadoop
is an implementation of the Map and Reduce functions of MapReduce. My question
is: does Hadoop include Google's MapReduce framework as well?


Regards
Simon



On Thu, Jun 10, 2010 at 10:44 PM, Edson Ramiro  wrote:

> Hi Simon,
>
> MapReduce is a framework developed by Google that uses a
> programming model based on two functions called Map and Reduce.
>
> Both the framework and the programming model are called MapReduce, right?
>
> Hadoop is an open-source implementation of MapReduce.
>
> HTH,
>
> --
> Edson Ramiro Lucas Filho
> http://www.inf.ufpr.br/erlf07/
>
>
> On 10 June 2010 17:40, Simon Narowki  wrote:
>
> > Thanks Abhishek for your answer. But sorry still I don't understand...
> What
> > do you mean by the "the runtime/programming support needed for
> MapReduce"?
> >
> > Could you please mention some other implementations of MapReduce?
> >
> > Cheers
> > Simon
> >
> >
> > On Thu, Jun 10, 2010 at 10:35 PM, abhishek sharma 
> > wrote:
> >
> > > Hadoop is an open source implementation of the runtime/programming
> > > support needed for MapReduce.
> > > Several different implementations of MapReduce are possible. Google
> > > has its own that is different from Hadoop.
> > >
> > > Abhishek
> > >
> > > On Thu, Jun 10, 2010 at 1:32 PM, Simon Narowki <
> simon.naro...@gmail.com>
> > > wrote:
> > > > Dear all,
> > > >
> > > > I am a new Hadoop user and am confused a little bit about the
> > difference
> > > > between Hadoop and MapReduce. Could anyone please clear me?
> > > >
> > > > Thanks!
> > > > Simon
> > > >
> > >
> >
>


Delivery Status Notification (Failure)

2010-06-10 Thread Simon Narowki
Dear all,

I am a new Hadoop user and am a little bit confused about the difference
between Hadoop and MapReduce. Could anyone please clear this up for me?

Thanks!
Simon


33 Days left to Berlin Buzzwords 2011

2011-05-04 Thread Simon Willnauer
hey folks,

Berlin Buzzwords 2011 is close: only 33 days left until the big Search,
Store and Scale open-source crowd gathers in Berlin on June 6th/7th.

The conference again focuses on the topics search,
data analysis and NoSQL. It is to take place on June 6/7th 2011 in Berlin.

We are looking forward to two awesome keynote speakers who shaped the world of
open source data analysis: Doug Cutting (founder of Apache Lucene and Hadoop)
as well as Ted Dunning (Chief Application Architect at MapR Technologies and an
active developer at Apache Hadoop and Mahout).

We are amazed by the amount and quality of the talk submissions we got. As a
result, this year we have added one more track to the main conference. If you
haven't done so already, make sure to book your ticket now - early bird tickets
have been sold out since April 7th and there might not be many tickets left.

As we would like to give visitors of our main conference a reason to stay in
town for the whole week, we have been talking to local co-working spaces and
companies asking them for free space and WiFi to host Hackathons right after the
main conference - that is on June 8th through 10th.

If you would like to gather with fellow developers and users of your project,
fix bugs together, hack on new features or give users a hands-on introduction to
your tools, please submit your workshop proposal to our wiki:

http://berlinbuzzwords.de/node/428

Please note that slots are assigned on a first come first serve basis. We are
doing our best to get you connected, however space is limited.

The deal is simple: We get you in touch with a conference room provider. Your
event gets promoted in our schedule. Co-Ordination however is completely up to
you: Make sure to provide an interesting abstract, provide a Hackathon
registration area - see the Barcamp page for a good example:

http://berlinbuzzwords.de/wiki/barcamp

Attending Hackathons requires a Berlin Buzzwords ticket and (then free)
registration at the Hackathon in question.

Hope I see you all around in Berlin,

Simon