The DfsBroker log contains a couple of errors, e.g.:
java.lang.IllegalAccessError: org/apach$
at org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
at
org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLeng$
at
org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:$
at java.lang.Thread.run(Thread.java:679)
(the error messages seem to be truncated - maybe that's my Google Mail
client)
I would check the Hadoop logs to see whether they contain any errors.
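If it helps, something like the following can pull the error lines out of the Hadoop daemon logs (a rough sketch - the log directory and file naming here are assumptions, adjust them to your installation; the demo greps a temporary sample file so it runs standalone):

```shell
# Sketch: scan Hadoop daemon logs for ERROR/Exception lines.
# LOG_DIR below is a stand-in -- on a real cluster point it at your
# Hadoop log directory (often $HADOOP_HOME/logs). Here we create a
# small sample log so the commands can be tried as-is.
LOG_DIR=$(mktemp -d)
cat > "$LOG_DIR/hadoop-root-datanode-dfs1.log" <<'EOF'
2012-09-12 10:37:01 INFO  DataNode: heartbeat ok
2012-09-12 10:37:02 ERROR DataNode: java.io.IOException: disk full
EOF
# Print the last few error lines from every daemon log, case-insensitively:
grep -hiE 'error|exception' "$LOG_DIR"/hadoop-*.log | tail -n 20
```

On a real cluster you would run the grep against the NameNode and DataNode logs around the time the DfsBroker errors appear.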
bye
Christoph
2012/9/12 Mehmet Ali Cetinkaya <[email protected]>
> Hi Christoph,
>
> thank you for the quick answer.
>
> This is DfsBroker.hadoop.log;
>
> Sep 12, 2012 10:33:06 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:60499 ; Wed Sep 12 10:33:06 EEST 2012] Connection
> Established
> Closed 0 input streams and 0 output streams
> Sep 12, 2012 10:33:06 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:60499 ; Wed Sep 12 10:33:06 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:60499
> Num CPUs=8
> HdfsBroker.Port=38030
> HdfsBroker.Reactors=8
> HdfsBroker.Workers=20
> HdfsBroker.Hadoop.ConfDir=/hadoop/conf
> Adding hadoop configuration file /hadoop/conf/hdfs-site.xml
> Adding hadoop configuration file /hadoop/conf/core-site.xml
> HdfsBroker.dfs.client.read.shortcircuit=false
> HdfsBroker.dfs.replication=2
> HdfsBroker.Server.fs.default.name=hdfs://dfs1:9000
> Sep 12, 2012 10:36:59 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:50074 ; Wed Sep 12 10:36:59 EEST 2012] Connection
> Established
> Sep 12, 2012 10:36:59 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:50074 ; Wed Sep 12 10:36:59 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:50074
> Sep 12, 2012 10:37:01 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:57138 ; Wed Sep 12 10:37:01 EEST 2012] Connection
> Established
> Sep 12, 2012 10:37:01 AM org.hypertable.AsyncComm.IOHandlerData
> handle_message_body
> WARNING: Received response for non-pending event
> (id=0,version=1,total_len=38)
> Sep 12, 2012 10:37:01 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
> INFO: Testing for existence of file '/hypertable/servers/master/log/mml
> Sep 12, 2012 10:37:01 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
> INFO: Readdir('/hypertable/servers/master/log/mml')
> Sep 12, 2012 10:37:01 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
> INFO: Getting length of file '/hypertable/servers/master/log/mml/0'
> (accurate: true)
> Exception in thread "ApplicationQueueThread 3"
> java.lang.IllegalAccessError: tried to $
> at
> org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
> at
> org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLeng$
> at
> org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:$
> at java.lang.Thread.run(Thread.java:679)
> Sep 12, 2012 10:40:01 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:57138 ; Wed Sep 12 10:40:01 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:57138
> Sep 12, 2012 11:37:33 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:52875 ; Wed Sep 12 11:37:33 EEST 2012] Connection
> Established
> Closed 0 input streams and 0 output streams
> java.nio.channels.ClosedByInterruptException
> at
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibl$
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:359)
> at
> org.hypertable.AsyncComm.IOHandlerData.SendBuf(IOHandlerData.java:241)
> at
> org.hypertable.AsyncComm.IOHandlerData.SendMessage(IOHandlerData.java:268)
> at org.hypertable.AsyncComm.Comm.SendResponse(Comm.java:122)
> at
> org.hypertable.AsyncComm.ResponseCallback.response_ok(ResponseCallback.java$
> at
> org.hypertable.DfsBroker.hadoop.RequestHandlerShutdown.run(RequestHandlerSh$
> at
> org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:$
> at java.lang.Thread.run(Thread.java:679)
>
> Num CPUs=8
> HdfsBroker.Port=38030
> HdfsBroker.Reactors=8
> HdfsBroker.Workers=20
> HdfsBroker.Hadoop.ConfDir=/hadoop/conf
> Adding hadoop configuration file /hadoop/conf/hdfs-site.xml
> Adding hadoop configuration file /hadoop/conf/core-site.xml
> HdfsBroker.dfs.client.read.shortcircuit=false
> HdfsBroker.dfs.replication=2
> HdfsBroker.Server.fs.default.name=hdfs://dfs1:9000
> Sep 12, 2012 11:38:39 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:53823 ; Wed Sep 12 11:38:39 EEST 2012] Connection
> Established
> Sep 12, 2012 11:38:39 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:53823 ; Wed Sep 12 11:38:39 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:53823
> Sep 12, 2012 11:38:40 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:60824 ; Wed Sep 12 11:38:40 EEST 2012] Connection
> Established
> Sep 12, 2012 11:38:40 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:60824 ; Wed Sep 12 11:38:40 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:60824
> Sep 12, 2012 11:39:11 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:45029 ; Wed Sep 12 11:39:11 EEST 2012] Connection
> Established
> Sep 12, 2012 11:39:21 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Mkdirs
> INFO: Making directory '/hypertable/servers/rs1/log/user'
> Sep 12, 2012 11:39:21 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
> INFO: Testing for existence of file '/hypertable/servers/rs1/log/rsml
> Sep 12, 2012 11:39:21 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
> INFO: Readdir('/hypertable/servers/rs1/log/rsml')
> Sep 12, 2012 11:39:21 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
> INFO: Getting length of file '/hypertable/servers/rs1/log/rsml/0'
> (accurate: true)
> Exception in thread "ApplicationQueueThread 4"
> java.lang.IllegalAccessError: tried to $
> at
> org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
> at
> org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLeng$
> at
> org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:$
> at java.lang.Thread.run(Thread.java:679)
> Sep 12, 2012 11:40:02 AM org.hypertable.AsyncComm.IOHandler DeliverEvent
> INFO: [/172.16.200.52:58580 ; Wed Sep 12 11:40:02 EEST 2012] Connection
> Established
> Sep 12, 2012 11:40:02 AM org.hypertable.AsyncComm.IOHandlerData
> handle_message_body
> WARNING: Received response for non-pending event
> (id=0,version=1,total_len=38)
> Sep 12, 2012 11:40:02 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
> INFO: Testing for existence of file '/hypertable/servers/master/log/mml
> Sep 12, 2012 11:40:02 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Readdir
> INFO: Readdir('/hypertable/servers/master/log/mml')
> Sep 12, 2012 11:40:02 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
> INFO: Getting length of file '/hypertable/servers/master/log/mml/0'
> (accurate: true)
> Exception in thread "ApplicationQueueThread 7"
> java.lang.IllegalAccessError: org/apach$
> at
> org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
> at
> org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLeng$
> at
> org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:$
> at java.lang.Thread.run(Thread.java:679)
> Sep 12, 2012 11:42:21 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:45029 ; Wed Sep 12 11:42:21 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:45029
> Sep 12, 2012 11:43:02 AM org.hypertable.DfsBroker.hadoop.ConnectionHandler
> handle
> INFO: [/172.16.200.52:58580 ; Wed Sep 12 11:43:02 EEST 2012] Disconnect -
> COMM broken $
> Closed 0 input streams and 0 output streams for client connection /
> 172.16.200.52:58580
>
>
>
> ------------------------------
> *From:* Christoph Rupp <[email protected]>
> *To:* [email protected]
> *Sent:* Wednesday, September 12, 2012 12:37 PM
> *Subject:* Re: [hypertable-dev] "HYPERTABLE request timeout" Error
>
> Hi,
>
> the logs imply that the problem is caused by the DfsBroker - can you
> verify that it's running and maybe check the DfsBroker logs and/or HDFS
> logs as well?
>
> bye
> Christoph
>
> 2012/9/12 Mehmet Ali Cetinkaya <[email protected]>
>
> Hello,
>
> I have installed Hypertable and Hadoop. I want to use HT on a Hadoop
> cluster of 2 machines.
>
> I'm using Python to write data to HT and I have inserted 2-3 million rows.
> But now I get a RangeServer crash error: some simple insert or select HQL
> statements fail with a timeout. For example, a query like "select * from
> urls where row=^'test'" gives no error, but "select domain:test_com_tr
> from urls cell_limit 200" does not work.
>
> Anyway, Hyperspace, the DfsBroker (hadoop) and the RangeServer started
> successfully when I restarted HT, but the Hypertable Master didn't start.
>
> And I don't want to use cleandb. How can I start HT successfully?
>
> regards,
> mali
>
> this is the log of Hypertable.Master.log file;
>
> root@dfs1:/opt/hypertable/current/log/archive/2012-09/12# tail -f
> Hypertable.Master.log
> lease-interval=1000000
> logging-level=warn
> pidfile=/opt/hypertable/current/run/Hypertable.Master.pid
> port=38050
> reactors=8
> timeout=180000
> verbose=true
> 1347435601 ERROR Hypertable.Master : main
> (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:255):
> Hypertable::Exception: Error getting length of DFS file:
> /hypertable/servers/master/log/mml/0 - HYPERTABLE request timeout
> at virtual int64_t Hypertable::DfsBroker::Client::length(const
> Hypertable::String&, bool)
> (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:451)
> at virtual int64_t Hypertable::DfsBroker::Client::length(const
> Hypertable::String&, bool)
> (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:445): Event:
> type=ERROR "HYPERTABLE request timeout" from=127.0.0.1:38030
> ^X^Z
> [1]+ Stopped tail -f Hypertable.Master.log
> root@dfs1:/opt/hypertable/current/log/archive/2012-09/12#
> root@dfs1:/opt/hypertable/current/log/archive/2012-09/12# tail -f
> Hypertable.Master.log
> lease-interval=1000000
> logging-level=warn
> pidfile=/opt/hypertable/current/run/Hypertable.Master.pid
> port=38050
> reactors=8
> timeout=180000
> verbose=true
> 1347435601 ERROR Hypertable.Master : main
> (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:255):
> Hypertable::Exception: Error getting length of DFS file:
> /hypertable/servers/master/log/mml/0 - HYPERTABLE request timeout
> at virtual int64_t Hypertable::DfsBroker::Client::length(const
> Hypertable::String&, bool)
> (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:451)
> at virtual int64_t Hypertable::DfsBroker::Client::length(const
> Hypertable::String&, bool)
> (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:445): Event:
> type=ERROR "HYPERTABLE request timeout" from=127.0.0.1:38030
> CPU cores count=8
> CephBroker.MonAddr=10.0.1.245:6789
> DfsBroker.Local.Root=fs/local
> DfsBroker.Port=38030
> HdfsBroker.Hadoop.ConfDir=/hadoop/conf
> HdfsBroker.Workers=20
> Hyperspace.GracePeriod=200000
> Hyperspace.KeepAlive.Interval=30000
> Hyperspace.Lease.Interval=1000000
> Hyperspace.Replica.Dir=hyperspace
> Hyperspace.Replica.Host=[dfs1]
> Hyperspace.Replica.Port=38040
> Hypertable.Logging.Level=warn
> Hypertable.Master.Port=38050
> Hypertable.Master.Reactors=8
> Hypertable.RangeServer.MemoryLimit=5000000000
> Hypertable.RangeServer.MemoryLimit.Percentage=50
> Hypertable.RangeServer.Port=38060
> Hypertable.RangeServer.QueryCache.MaxMemory=5000000000
> Hypertable.RangeServer.Range.SplitSize=3000000000
> Hypertable.RangeServer.Scanner.BufferSize=3000000000
> Hypertable.RangeServer.Scanner.Ttl=14400000
> Hypertable.Request.Timeout=180000
> Hypertable.Verbose=true
> ThriftBroker.Port=38080
> dfs-port=38030
> grace-period=200000
> hs-host=[dfs1]
> hs-port=38040
> keepalive=30000
> lease-interval=1000000
> logging-level=warn
> pidfile=/opt/hypertable/current/run/Hypertable.Master.pid
> port=38050
> reactors=8
> timeout=180000
> verbose=true
> 1347439382 ERROR Hypertable.Master : main
> (/root/src/hypertable/src/cc/Hypertable/Master/main.cc:255):
> Hypertable::Exception: Error getting length of DFS file:
> /hypertable/servers/master/log/mml/0 - HYPERTABLE request timeout
> at virtual int64_t Hypertable::DfsBroker::Client::length(const
> Hypertable::String&, bool)
> (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:451)
> at virtual int64_t Hypertable::DfsBroker::Client::length(const
> Hypertable::String&, bool)
> (/root/src/hypertable/src/cc/DfsBroker/Lib/Client.cc:445): Event:
> type=ERROR "HYPERTABLE request timeout" from=127.0.0.1:38030
> --
> You received this message because you are subscribed to the Google Groups
> "Hypertable Development" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/hypertable-dev?hl=en.
>
>
>