[ https://issues.apache.org/jira/browse/HIVE-14901 ]
Norris Lee updated HIVE-14901:
------------------------------
    Status: Patch Available  (was: In Progress)

> HiveServer2: Use user supplied fetch size to determine #rows serialized in
> tasks
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>         Attachments: HIVE-14901.1.patch, HIVE-14901.2.patch, HIVE-14901.patch
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the maximum number of rows that tasks write per blob. Ideally, however, we should use the user-supplied fetch size (which can be extracted from the request parameter of ThriftCLIService.FetchResults) to decide how many rows to serialize into a blob in the tasks. {{hive.server2.thrift.resultset.max.fetch.size}} should then act only as an upper bound, so that an overly large client request cannot cause OOM in tasks or in HS2; the intended clamping logic is sketched below.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
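
The behavior the description asks for boils down to taking the minimum of the client's requested fetch size and the server-side ceiling, with a fallback when the client sends no usable value. Below is a minimal sketch of that clamping, assuming hypothetical names ({{FetchSizeClamp}}, {{effectiveFetchSize}}, and the {{SERVER_MAX_FETCH_SIZE}} constant stand in for the real HiveConf lookup and ThriftCLIService plumbing); it is not the actual HIVE-14901 patch.

{code:java}
// Hypothetical sketch: clamp the client-requested fetch size to the
// server ceiling (hive.server2.thrift.resultset.max.fetch.size).
public final class FetchSizeClamp {

    // Stand-in for the configured ceiling; the value here is illustrative.
    private static final int SERVER_MAX_FETCH_SIZE = 10_000;

    /**
     * Returns the number of rows a task should serialize per blob:
     * the user-supplied fetch size when it is positive, clamped to the
     * server maximum; otherwise the server maximum itself.
     */
    static int effectiveFetchSize(long userRequestedRows) {
        if (userRequestedRows <= 0) {
            // No usable client value; fall back to the server ceiling.
            return SERVER_MAX_FETCH_SIZE;
        }
        // Clamp so a huge client request cannot OOM tasks or HS2.
        return (int) Math.min(userRequestedRows, SERVER_MAX_FETCH_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(effectiveFetchSize(500));        // 500: client value honored
        System.out.println(effectiveFetchSize(1_000_000));  // 10000: clamped to server max
        System.out.println(effectiveFetchSize(-1));         // 10000: fallback to ceiling
    }
}
{code}

In a real implementation the ceiling would presumably be read from HiveConf rather than a constant, and the requested size taken from the maxRows field of the Thrift FetchResults request, but the min-with-fallback shape of the logic stays the same.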