Did you previously run with the local dfsbroker from the same location?
If so (assuming this is a fresh install and you don't have another
Hyperspace process running) run "cap cleandb" first and then "cap start".
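
A minimal sequence, as a sketch (assuming the stock Capfile that ships with the
Hypertable distribution, which also provides a "stop" task; run it from the
directory containing your Capfile):

  cap stop      # make sure nothing is still holding the BerkeleyDB environment
  cap cleandb   # wipes Hyperspace/BerkeleyDB state -- only safe on a fresh install
  cap start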

Also check that the user you're running as has write permission to the path
"/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/hyperspace".

-Sanjit


On Fri, May 28, 2010 at 9:08 AM, Harshada <[email protected]> wrote:

> Thanks Kevin,
>
> I reinstalled Hadoop from scratch and now I could do "cap dist"
> successfully.
>
> But when I start the servers using "cap start", I get the following error:
>
>  * executing `start'
>  ** transaction: start
>  * executing `start_hyperspace'
>   * executing "/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-hyperspace.sh --config=/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg"
>     servers: ["master"]
>    [master] executing command
>
> *** [err :: master] /opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/ht-env.sh: line 95: 14096 Segmentation fault      (core dumped) $HEAPCHECK $VALGRIND $HYPERTABLE_HOME/bin/$servercmd --pidfile $pidfile "$@" >&$logfile
>
>  ** [out :: master] Waiting for Hyperspace to come up...
>  ** [out :: master] Waiting for Hyperspace to come up...
>  ** [out :: master] Waiting for Hyperspace to come up...
>  ** [out :: master] Waiting for Hyperspace to come up...
>  ** [out :: master] Waiting for Hyperspace to come up...
>
> It never comes up!
>
> The log at log/Hyperspace.log says:
>
> 1275062567 INFO Hyperspace.Master : (/opt/hypertable/hypertable-0.9.2.8-alpha/src/cc/Hyperspace/Master.cc:145) BerkeleyDB base directory = '/opt/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/hyperspace'
> 1275062567 INFO Hyperspace.Master : (/opt/hypertable/hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:304) BDB ERROR:unable to join the environment
> 1275062567 INFO Hyperspace.Master : (/opt/hypertable/hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:304) BDB ERROR:Recovery function for LSN 2 3158676 failed
> 1275062567 INFO Hyperspace.Master : (/opt/hypertable/hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:304) BDB ERROR:PANIC: Permission denied
> 1275062567 FATAL Hyperspace.Master : (/opt/hypertable/hypertable-0.9.2.8-alpha/src/cc/Hyperspace/BerkeleyDbFilesystem.cc:358) Received DB_EVENT_PANIC event
>
> Any pointers would be appreciated.
>
> Thanks.
>
> On May 27, 4:19 pm, Kevin Yuan <[email protected]> wrote:
> > I think you should make sure that the HDFS is running normally by
> > checking its log files.
> >
> > And, firewalls? (just wild guesses)
> >
> > -Kevin
> >
> > On May 27, 2:20 pm, Harshada <[email protected]> wrote:
> >
> > > Thank you for the reply.
> >
> > > First of all, I am sorry for posting this query to the wrong thread. If
> > > you can, please migrate it to the -user mailing list.
> >
> > > I checked the log file for DfsBroker.hadoop, it says:
> >
> > > Num CPUs=2
> > > HdfsBroker.Port=38030
> > > HdfsBroker.Reactors=2
> > > HdfsBroker.Workers=20
> > > HdfsBroker.Server.fs.default.name=hdfs://localhost:54310
> > > 10/05/27 05:01:58 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 0 time(s).
> > > 10/05/27 05:01:59 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 1 time(s).
> > > 10/05/27 05:02:00 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 2 time(s).
> > > 10/05/27 05:02:01 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 3 time(s).
> > > 10/05/27 05:02:02 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 4 time(s).
> > > 10/05/27 05:02:03 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 5 time(s).
> > > 10/05/27 05:02:04 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 6 time(s).
> > > 10/05/27 05:02:05 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 7 time(s).
> > > 10/05/27 05:02:06 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 8 time(s).
> > > 10/05/27 05:02:07 INFO ipc.Client: Retrying connect to server:
> > > localhost/127.0.0.1:54310. Already tried 9 time(s).
> > > 27 May, 2010 5:02:07 AM org.hypertable.DfsBroker.hadoop.HdfsBroker <init>
> > > SEVERE: ERROR: Unable to establish connection to HDFS.
> > > ShutdownHook called
> > > Exception in thread "Thread-1" java.lang.NullPointerException
> > >         at org.hypertable.DfsBroker.hadoop.main$ShutdownHook.run(main.java:69)
> >
> > > ---------------------------
> >
> > > But HDFS is running, because jps on the master gives me the following output:
> >
> > > e...@erts-server:~$ jps
> > > 32538 SecondaryNameNode
> > > 32270 NameNode
> > > 32388 DataNode
> > > 310 TaskTracker
> > > 32671 JobTracker
> > > 21233 Jps
> > > -----------------------------------------------------
> >
> > > > Is there a reason you're using 0.9.2.8 and not 0.9.3.1 (the latest and
> > > > greatest)?
> >
> > > Oh, OK, thanks for the info. But since 0.9.2.8 was installed successfully,
> > > I'll continue with it for the moment.
> >
> > > > Do you have HDFS running and if so make sure the permissions for the
> > > > /hypertable dir are set correctly.
> >
> > > Yes. I followed http://code.google.com/p/hypertable/wiki/UpAndRunningWithHadoop
> > > and http://code.google.com/p/hypertable/wiki/DeployingHypertable.
> >
> > > A doubt: do I always need the user on the slave and master machines to be
> > > the same? Currently I have 'erts' as the user on the master and one slave
> > > (which are on the same machine) and 'harshada' as the user on the other
> > > slave. So whenever I use '$cap dist' or '$cap shell cap>date', it asks for
> > > the password of e...@slave, which does not exist, so authentication fails.
> > > I am in the process of setting up the same user on all the machines, but
> > > until then I thought I would get as much as possible running.
> > > Is this the reason why DfsBroker.hadoop is also failing?
> >
> > > If yes, then I had better wait and set up the same user on all the
> > > machines.
> >
> > > PS: though Hadoop requires the same installation paths on all the machines,
> > > I managed it with symbolic links, even though my users (and hence their
> > > $HOMEs) were different.
> >
> > > > Beyond that, try taking a look at <HT_INSTALL_DIR>/log/DfsBroker.hadoop.log
> > > > to figure out what's going on.
> >
> > > > -Sanjit
> >
> > > > On Wed, May 26, 2010 at 4:35 PM, Harshada <[email protected]> wrote:
> > > > > Hi,
> >
> > > > > I am installing Hypertable 0.9.2.8 on Hadoop. I have successfully set
> > > > > up Hadoop and it's working. When I start the servers using 'cap start',
> > > > > the DFS broker doesn't come up. The output of cap start is:
> >
> > > > > e...@erts-server:~/hypertable/hypertable-0.9.2.8-alpha/conf$ cap start
> > > > >  * executing `start'
> > > > >  ** transaction: start
> > > > >  * executing `start_hyperspace'
> > > > >  * executing "/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-hyperspace.sh --config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg"
> > > > >    servers: ["127.0.0.1"]
> > > > >    [127.0.0.1] executing command
> > > > >  ** [out :: 127.0.0.1] Hyperspace appears to be running (12170):
> > > > >  ** [out :: 127.0.0.1] erts 12170 1 0 04:40 ? 00:00:00 /home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/Hyperspace.Master --pidfile /home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/run/Hyperspace.pid --verbose --config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg
> > > > >    command finished
> > > > >  * executing `start_master'
> > > > >  * executing "/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-dfsbroker.sh hadoop --config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n /home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-master.sh --config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg"
> > > > >    servers: ["127.0.0.1"]
> > > > >    [127.0.0.1] executing command
> > > > >  ** [out :: 127.0.0.1] DFS broker: available file descriptors: 1024
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] Waiting for DFS Broker (hadoop) to come up...
> > > > >  ** [out :: 127.0.0.1] ERROR: DFS Broker (hadoop) did not come up
> > > > >    command finished
> > > > > failed: "sh -c '/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-dfsbroker.sh hadoop --config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg &&\\\n /home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/bin/start-master.sh --config=/home/erts/hypertable/hypertable-0.9.2.8-alpha/0.9.2.8/conf/hypertable.cfg'" on 127.0.0.1
> >
> > > > > ---------------------------------------------------------------
> >
> > > > > Here, 127.0.0.1 is my master.
> >
> > > > > My hypertable.cfg looks like:
> >
> > > > > #
> > > > > # hypertable.cfg
> > > > > #
> >
> > > > > # HDFS Broker
> > > > > HdfsBroker.Port=38030
> > > > > HdfsBroker.fs.default.name=hdfs://localhost:54310
> > > > > HdfsBroker.Workers=20
> >
> > > > > # Ceph Broker
> > > > > CephBroker.Port=38030
> > > > > CephBroker.Workers=20
> > > > > CephBroker.MonAddr=10.0.1.245:6789
> >
> > > > > # Local Broker
> > > > > DfsBroker.Local.Port=38030
> > > > > DfsBroker.Local.Root=fs/local
> >
> > > > > # DFS Broker - for clients
> > > > > DfsBroker.Host=localhost
> > > > > DfsBroker.Port=38030
> >
> > > > > # Hyperspace
> > > > > Hyperspace.Replica.Host=localhost
> > > > > Hyperspace.Replica.Port=38040
> > > > > Hyperspace.Replica.Dir=hyperspace
> > > > > Hyperspace.Replica.Workers=20
> >
> > > > > # Hypertable.Master
> > > > > Hypertable.Master.Host=localhost
> > > > > Hypertable.Master.Port=38050
> > > > > Hypertable.Master.Workers=20
> >
> > > > > # Hypertable.RangeServer
> > > > > Hypertable.RangeServer.Port=38060
> >
> > > > > Hyperspace.KeepAlive.Interval=30000
> > > > > Hyperspace.Lease.Interval=1000000
> > > > > Hyperspace.GracePeriod=200000
> >
> > > > > # ThriftBroker
> > > > > ThriftBroker.Port=38080
> > > > > ------------------------------------------
> >
> > > > > Note: it does not have the Hyperspace.Master.Host=localhost property.
> >
> > > > > Capfile:
> >
> > > > > set :source_machine, "127.0.0.1"
> > > > > set :install_dir,  "/home/erts/hypertable/hypertable-0.9.2.8-alpha"
> > > > > set :hypertable_version, "0.9.2.8"
> > > > > set :default_dfs, "hadoop"
> > > > > set :default_config, "/home/erts/hypertable/hypertable.cfg"
> >
> > > > > role :master, "127.0.0.1"
> > > > > role :hyperspace, "127.0.0.1"
> > > > > role :slave, "127.0.0.1", "10.129.125.12"
> > > > > role :localhost, "127.0.0.1"
> > > > > ------------------------------------
> >
> > > > > Any idea why DFS Broker is failing?
> >
> > > > > Thanks,
> > > > > Harshada
> >
