Interesting - this issue would certainly go away with local mode as
there's no thrift call to fail. I'd very much prefer to run HMS as a
centralized service though.

Thanks for the info - I'll have to take a look at how the thrift
client handles timeouts/reconnects/etc.
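
My first thought is a thin wrapper that reopens the transport and retries
once when the socket has gone stale - something roughly like this untested
sketch (standard libthrift and generated metastore client classes; the
host, port and timeout values are placeholders, not anything from our setup):

  import org.apache.hadoop.hive.metastore.api.Database;
  import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
  import org.apache.thrift.protocol.TBinaryProtocol;
  import org.apache.thrift.transport.TFramedTransport;
  import org.apache.thrift.transport.TSocket;
  import org.apache.thrift.transport.TTransport;
  import org.apache.thrift.transport.TTransportException;

  public class ReconnectingMetastoreClient {
    private final String host = "metastore-host";  // placeholder
    private final int port = 9083;                 // typical metastore port
    private final int timeoutMs = 60 * 1000;       // matches socket.timeout=60

    private TTransport transport;
    private ThriftHiveMetastore.Client client;

    private void open() throws TTransportException {
      transport = new TFramedTransport(new TSocket(host, port, timeoutMs));
      transport.open();
      client = new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));
    }

    /** Reopen the transport and retry once if the idle socket has gone stale. */
    public Database getDatabase(String name) throws Exception {
      if (transport == null || !transport.isOpen()) {
        open();
      }
      try {
        return client.get_database(name);
      } catch (TTransportException e) {
        // idle connection was dropped or timed out: reconnect and retry once
        transport.close();
        open();
        return client.get_database(name);
      }
    }
  }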

--travis


On Wed, Aug 1, 2012 at 11:57 AM, Edward Capriolo <edlinuxg...@gmail.com> wrote:
> The two setup options are:
>
> cli->thriftmetastore->jdbc
>
> cli->jdbc (used to be called local mode)
>
> Local mode has fewer moving parts, so I prefer it.
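>
> Roughly, the hive-site.xml difference looks like this (typical values
> from memory; host names are placeholders, and local mode also needs
> the usual ConnectionDriverName/ConnectionUserName/ConnectionPassword
> settings):
>
>   remote metastore (cli->thriftmetastore->jdbc):
>     hive.metastore.uris = thrift://metastore-host:9083
>
>   local mode (cli->jdbc), with hive.metastore.uris left unset:
>     javax.jdo.option.ConnectionURL = jdbc:mysql://db-host/metastore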
>
> On Wed, Aug 1, 2012 at 2:54 PM, Travis Crawford
> <traviscrawf...@gmail.com> wrote:
>> Oh interesting - you're saying instead of running a single
>> HiveMetaStore thrift service, most users use the embedded
>> HiveMetaStore mode and have each CLI instance connect to the DB
>> directly?
>>
>> --travis
>>
>>
>> On Wed, Aug 1, 2012 at 11:47 AM, Edward Capriolo <edlinuxg...@gmail.com> 
>> wrote:
>>> I feel that interface is very rarely used in the wild. The only use
>>> case I can figure out for it is people with very in-depth Hive
>>> experience who do not wish to interact with Hive through the QL
>>> language. That being said, I would think the coverage might be a
>>> little weak there. With the local metastore, users have DataNucleus
>>> providing support for reconnection, etc.
>>>
>>> On Wed, Aug 1, 2012 at 2:35 PM, Travis Crawford
>>> <traviscrawf...@gmail.com> wrote:
>>>> I'm using the thrift metastore via TFramedTransport. What value do you
>>>> specify for hive.metastore.client.socket.timeout? I'm using 60.
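>>>>
>>>> (If I remember the docs right, that value is interpreted as seconds,
>>>> i.e. hive.metastore.client.socket.timeout=60 means a 60-second read
>>>> timeout on the thrift socket.)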
>>>>
>>>> If I open the CLI, run "show tables", wait out the timeout period, and
>>>> then run "show tables" again, the CLI hangs in:
>>>>
>>>> "main" prio=10 tid=0x000000004151a000 nid=0x448 runnable [0x0000000041b42000]
>>>>    java.lang.Thread.State: RUNNABLE
>>>>         at java.net.SocketInputStream.socketRead0(Native Method)
>>>>         at java.net.SocketInputStream.read(SocketInputStream.java:129)
>>>>         at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>>         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>         at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
>>>>         at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
>>>>         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>         at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
>>>>         at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
>>>>         at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
>>>>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>>>>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_database(ThriftHiveMetastore.java:374)
>>>>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database(ThriftHiveMetastore.java:361)
>>>>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:705)
>>>>         at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1077)
>>>>         at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1066)
>>>>         at org.apache.hadoop.hive.ql.exec.DDLTask.showTables(DDLTask.java:2004)
>>>>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:325)
>>>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
>>>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1329)
>>>>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1115)
>>>>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:948)
>>>>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
>>>>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
>>>>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
>>>>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:750)
>>>>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
>>>>
>>>> --travis
>>>>
>>>>
>>>> On Wed, Aug 1, 2012 at 11:31 AM, Edward Capriolo <edlinuxg...@gmail.com> 
>>>> wrote:
>>>>> Are you communicating with a thrift metastore or a JDBC metastore? I
>>>>> have had connections open for long periods of time and never
>>>>> remember seeing them time out.
>>>>>
>>>>> Edward
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 1, 2012 at 12:01 PM, Travis Crawford
>>>>> <traviscrawf...@gmail.com> wrote:
>>>>>> Hey Hive gurus -
>>>>>>
>>>>>> Does anyone know how the CLI handles metastore connection timeouts? It
>>>>>> seems that if I leave a CLI session idle for more than
>>>>>> hive.metastore.client.socket.timeout seconds and then run "show tables",
>>>>>> the CLI hangs for the timeout period and then throws a SocketTimeoutException.
>>>>>> Restarting the CLI and running the same "show tables" always works.
>>>>>>
>>>>>> Does anyone else see this? My hive.metastore.client.socket.timeout is
>>>>>> set to 60 - is that a reasonable value?
>>>>>>
>>>>>> --travis
