Thanks Doug,

It worked.
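For the record, here is roughly what I did based on your instructions (the location of the Capfile is just my choice, and the exact commands below are only a sketch):
--------------------------------------------------------------------
# copy the example Capfile and edit the roles/IPs in the copy
cp /opt/hypertable/0.9.2.7/conf/Capfile.cluster /opt/hypertable/0.9.2.7/conf/Capfile

# run cap from the directory that contains the Capfile
cd /opt/hypertable/0.9.2.7/conf
cap shell        # or: cap -f /opt/hypertable/0.9.2.7/conf/Capfile shell
--------------------------------------------------------------------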
When I run the date command in the cap shell, I get the following
result:
--------------------------------------------------------------------
cap> date
 ** [out :: 192.168.2.85] Thu Dec 10 11:29:45 IST 2009
 ** [out :: 192.168.2.86] Thu Dec 10 11:29:18 IST 2009
 ** [out :: 192.168.2.69] Thu Dec 10 11:29:53 IST 2009
--------------------------------------------------------------------

This shows that the clocks on the servers are not in sync. Do we need
to do something else to synchronize them, or should the date command
have taken care of that itself?

Later we tried running cap dist, and here is the output:
--------------------------------------------------------------------
[r...@localhost conf]# cap dist
  * executing `dist'
 ** transaction: start
  * executing `copy_config'
  * executing "rsync /opt/hypertable/0.9.2.7/conf/hypertable.cfg /opt/
hypertable/0.9.2.7/conf"
    servers: ["192.168.2.69"]
    [192.168.2.69] executing command
    command finished
  * executing `rsync_installation'
  * executing "rsync -av --exclude=log --exclude=run --exclude=demo --
exclude=fs --exclude=hyperspace 192.168.2.69:/opt/hypertable/0.9.2.7 /
opt/hypertable"
    servers: ["192.168.2.69", "192.168.2.85", "192.168.2.86"]
Password:
    [192.168.2.69] executing command
    [192.168.2.86] executing command
    [192.168.2.85] executing command
*** [err :: 192.168.2.85] Host key verification failed.
*** [err :: 192.168.2.85] rsync: connection unexpectedly closed (0
bytes received so far) [receiver]
*** [err :: 192.168.2.85] rsync error: unexplained error (code 255) at
io.c(632) [receiver=3.0.4]
*** [err :: 192.168.2.86] Host key verification failed.
*** [err :: 192.168.2.86] rsync: connection unexpectedly closed (0
bytes received so far) [receiver]
*** [err :: 192.168.2.86]
*** [err :: 192.168.2.86] rsync error: unexplained error (code 255) at
io.c(632) [receiver=3.0.4]
*** [err :: 192.168.2.86]
 ** [out :: 192.168.2.69] receiving incremental file list
 ** [out :: 192.168.2.69]
 ** [out :: 192.168.2.69] sent 343 bytes  received 68029 bytes
45581.33 bytes/sec
 ** [out :: 192.168.2.69] total size is 635939178  speedup is 9301.16
    command finished
failed: "sh -c 'rsync -av --exclude=log --exclude=run --exclude=demo --
exclude=fs --exclude=hyperspace 192.168.2.69:/opt/hypertable/0.9.2.7 /
opt/hypertable'" on 192.168.2.85,192.168.2.86
--------------------------------------------------------------------

What might be the problem?
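Looking at the errors, my guess is that the rsync runs on each slave and pulls from 192.168.2.69 over SSH, so 192.168.2.85 and 192.168.2.86 would need 192.168.2.69's host key in their known_hosts files. Would something like the following, run once on each of the two slaves as the user Capistrano logs in as, be the right fix? This is just a sketch on my part, not something I have tried yet:
--------------------------------------------------------------------
# accept 192.168.2.69's host key interactively ...
ssh 192.168.2.69 true

# ... or append it to known_hosts non-interactively
ssh-keyscan 192.168.2.69 >> ~/.ssh/known_hosts
--------------------------------------------------------------------
The Password: prompt above also suggests that the slaves cannot reach 192.168.2.69 over SSH without a password yet; I assume something like ssh-copy-id from each slave to 192.168.2.69 would be needed for that as well.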

Thanks
Vivek

On Dec 10, 10:13 am, Doug Judd <[email protected]> wrote:
> The Capfile.cluster file in the conf directory of the tarball is meant to be
> an example.  You need to copy it somewhere and rename it to "Capfile" and
> make your edits.  Then run the 'cap' command in the same directory that
> contains 'Capfile' (or you can run 'cap -f <location-of-Capfile>').  With
> the Capfile set up correctly you should be able to start the system with 'cap
> start', stop the system with 'cap stop', and scrub the database clean with
> 'cap cleandb'.
>
> - Doug
>
> On Wed, Dec 9, 2009 at 9:03 PM, [email protected] 
> <[email protected]>wrote:
>
> > Hi,
>
> > I have installed Hypertable and Hadoop. I want to use Hypertable on a
> > Hadoop cluster of 3 machines.
> > I want to run the Namenode on 192.168.2.69, the Jobtracker on 192.168.2.85,
> > and the Tasktracker and Datanode on 192.168.2.86.
>
> > Please check whether my configurations are right or not.
> > Files in the directory /opt/hypertable/0.9.2.7/conf are: Capfile.cluster,
> > Capfile.localhost, hypertable.cfg and Metadata.xml
>
> > --------------------------------------------------------------------------
> > Content of Capfile.cluster
> > set :source_machine, "192.168.2.69"
> > set :install_dir,  "/opt/hypertable"
> > set :hypertable_version, "0.9.2.7"
> > set :default_dfs, "hadoop"
> > set :default_config, "/opt/hypertable/0.9.2.7/conf/hypertable.cfg"
>
> > role :master, "192.168.2.69"
> > role :slave,  "192.168.2.69", "192.168.2.85", "192.168.2.86"
> > role :localhost, "192.168.2.69"
> > --------------------------------------------------------------------------
> > Content of Capfile.localhost (Not modified because cluster setup is
> > needed)
> > set :source_machine, "localhost"
> > set :install_dir,  "/opt/hypertable"
> > set :hypertable_version, "0.9.2.7"
> > set :default_dfs, "local"
> > set :default_config, "/opt/hypertable/#{hypertable_version}/conf/
> > hypertable.cfg"
>
> > role :master, "localhost"
> > role :slave,  "localhost"
> > role :localhost, "localhost"
> > --------------------------------------------------------------------------
> > Content of Hypertable.cfg
> > #
> > # hypertable.cfg
> > #
>
> > # Global properties
> > Hypertable.Request.Timeout=180000
>
> > # HDFS Broker
> > HdfsBroker.Port=38030
> > HdfsBroker.fs.default.name=hdfs://192.168.2.69:9000
> > HdfsBroker.Workers=20
>
> > # Ceph Broker
> > CephBroker.Port=38030
> > CephBroker.Workers=20
> > CephBroker.MonAddr=10.0.1.245:6789
>
> > # Local Broker
> > DfsBroker.Local.Port=38030
> > DfsBroker.Local.Root=fs/local
>
> > # DFS Broker - for clients
> > DfsBroker.Host=192.168.2.69
> > DfsBroker.Port=38030
>
> > # Hyperspace
> > Hyperspace.Master.Host=192.168.2.69
> > Hyperspace.Master.Port=38040
> > Hyperspace.Master.Dir=hyperspace
> > Hyperspace.Master.Workers=20
>
> > # Hypertable.Master
> > Hypertable.Master.Host=192.168.2.69
> > Hypertable.Master.Port=38050
> > Hypertable.Master.Workers=20
>
> > # Hypertable.RangeServer
> > Hypertable.RangeServer.Port=38060
>
> > Hyperspace.KeepAlive.Interval=30000
> > Hyperspace.Lease.Interval=1000000
> > Hyperspace.GracePeriod=200000
>
> > # ThriftBroker
> > ThriftBroker.Port=38080
>
> > ------------------------------------------------------------------------------
>
> > When I run "cap shell" I get the cap prompt, but when I try to
> > execute date (to sync all the machines) it does not show any result.
> > Do I need to run "capify /opt/hypertable/0.9.2.7/conf"? If so, what
> > should be added to the deploy.rb file? It throws an error when I run
> > "cap deploy" (do I really need to do that?).
> > Also, when I run "cap dist" it says "the task 'dist' does not
> > exist".
>
> > Please guide me on starting the cluster machines through Capistrano.
>
> > Thanks
> > Vivek
>


