[ https://issues.apache.org/jira/browse/HIVE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214796#comment-15214796 ]

Gopal V commented on HIVE-12049:
--------------------------------

MaxRows = 1000

!old-driver-profiles.png!

The hot codepath with the new driver is 

{code}
 Stacks at 2016-03-28 01:10:19 PM (uptime 7m 58 sec)

 faeb41dd-3869-40cc-860b-748f505d5565 eab06890-8bb8-478f-877a-9282f5b4d64e
HiveServer2-Handler-Pool: Thread-788 [RUNNABLE]
*** java.util.concurrent.ConcurrentHashMap.putAll(Map) ConcurrentHashMap.java:1084
*** java.util.concurrent.ConcurrentHashMap.<init>(Map) ConcurrentHashMap.java:852
*** org.apache.hadoop.conf.Configuration.<init>(Configuration) Configuration.java:713
*** org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf) HiveConf.java:3460
*** org.apache.hive.service.cli.operation.SQLOperation.getConfigForOperation() SQLOperation.java:529
*** org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(FetchOrientation, long) SQLOperation.java:360
*** org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationHandle, FetchOrientation, long) OperationManager.java:280
org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(OperationHandle, FetchOrientation, long, FetchType) HiveSessionImpl.java:786
org.apache.hive.service.cli.CLIService.fetchResults(OperationHandle, FetchOrientation, long, FetchType) CLIService.java:452
org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(TFetchResultsReq) ThriftCLIService.java:743
org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService$Iface, TCLIService$FetchResults_args) TCLIService.java:1557
org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(Object, TBase) TCLIService.java:1542
org.apache.thrift.ProcessFunction.process(int, TProtocol, TProtocol, Object) ProcessFunction.java:39
org.apache.thrift.TBaseProcessor.process(TProtocol, TProtocol) TBaseProcessor.java:39
org.apache.hive.service.auth.TSetIpAddressProcessor.process(TProtocol, TProtocol) TSetIpAddressProcessor.java:56
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() TThreadPoolServer.java:286
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) ThreadPoolExecutor.java:1142
java.util.concurrent.ThreadPoolExecutor$Worker.run() ThreadPoolExecutor.java:617
java.lang.Thread.run() Thread.java:745
{code}
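Reading the trace bottom-up: every {{FetchResults}} RPC lands in {{SQLOperation.getConfigForOperation()}}, which constructs a fresh {{HiveConf}}, and the {{HiveConf}}/{{Configuration}} copy constructors copy the entire property map ({{ConcurrentHashMap.putAll}}) on each fetch. A minimal sketch of the obvious fix direction, caching the operation-scoped config after the first fetch, using stand-in classes (these are illustrative, not the actual Hive types):

```java
import java.util.HashMap;

// Hypothetical stand-ins for HiveConf/SQLOperation; shows the caching shape,
// so the expensive map copy runs once per operation, not once per fetch RPC.
class FakeConf extends HashMap<String, String> {
    static int copies = 0;            // counts copy-constructor invocations
    FakeConf() {}
    FakeConf(FakeConf other) {        // analogous to new HiveConf(HiveConf)
        super(other);                 // analogous to ConcurrentHashMap.putAll
        copies++;
    }
}

class FakeOperation {
    private final FakeConf sessionConf;
    private FakeConf operationConf;   // cached after the first fetch

    FakeOperation(FakeConf sessionConf) { this.sessionConf = sessionConf; }

    // Hot path in the trace built a new conf on *every* call; here the copy
    // happens once and is reused by every subsequent getNextRowSet().
    FakeConf getConfigForOperation() {
        if (operationConf == null) {
            operationConf = new FakeConf(sessionConf);
        }
        return operationConf;
    }
}
```

With this shape, a client paging through results in 1000-row fetches triggers one config copy per operation instead of one per RPC.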

> Provide an option to write serialized thrift objects in final tasks
> -------------------------------------------------------------------
>
>                 Key: HIVE-12049
>                 URL: https://issues.apache.org/jira/browse/HIVE-12049
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: Rohit Dholakia
>            Assignee: Rohit Dholakia
>         Attachments: HIVE-12049.1.patch, HIVE-12049.11.patch, 
> HIVE-12049.12.patch, HIVE-12049.13.patch, HIVE-12049.14.patch, 
> HIVE-12049.2.patch, HIVE-12049.3.patch, HIVE-12049.4.patch, 
> HIVE-12049.5.patch, HIVE-12049.6.patch, HIVE-12049.7.patch, 
> HIVE-12049.9.patch, new-driver-profiles.png, old-driver-profiles.png
>
>
> For each fetch request to HiveServer2, we pay the penalty of deserializing 
> the row objects and translating them into a different representation suitable 
> for the RPC transfer. In moderate- to high-concurrency scenarios, this can 
> result in significant CPU and memory wastage. By having each task write the 
> appropriate thrift objects to the output files, HiveServer2 can simply stream 
> a batch of rows over the wire without incurring any of the additional cost of 
> deserialization and translation.
> This can be implemented by writing a new SerDe, which the FileSinkOperator 
> can use to write thrift-formatted row batches to the output file. Since 
> {{hive.query.result.fileformat}} is pluggable, we can set it to SequenceFile 
> and write a batch of thrift-formatted rows as a value blob. The FetchTask can 
> then simply read the blob and send it over the wire. On the client side, the 
> *DBC driver can read the blob and, since it is already formatted the way the 
> driver expects, continue building the ResultSet as it does in the current 
> implementation.
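The flow in the description above can be sketched as follows. This is an illustration of the principle only: the class and method names ({{RowBatchBlob}}, {{encodeBatch}}, etc.) are hypothetical, and plain {{DataOutputStream}} framing stands in for thrift encoding into a SequenceFile value.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of "write serialized batches in the final task": rows are encoded
// once at write time, and the fetch path hands the blob back untouched.
class RowBatchBlob {
    // Task side: encode a batch of rows into one value blob
    // (stands in for thrift-encoding a row batch as a SequenceFile value).
    static byte[] encodeBatch(List<String> rows) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(rows.size());
            for (String row : rows) out.writeUTF(row);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Server side: a fetch returns the stored blob as-is -- no per-row
    // deserialize/re-serialize work on the HiveServer2 hot path.
    static byte[] fetchBlob(byte[] stored) {
        return stored;
    }

    // Client side: the driver decodes the blob it already understands.
    static List<String> decodeBatch(byte[] blob) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(blob));
            int n = in.readInt();
            List<String> rows = new ArrayList<>(n);
            for (int i = 0; i < n; i++) rows.add(in.readUTF());
            return rows;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The point the description makes is visible in {{fetchBlob}}: the server's per-fetch work shrinks to returning bytes, and all row-level encoding cost is paid once, in the task that produced the batch.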



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
