[jira] [Updated] (HIVE-14901) HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
[ https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lefty Leverenz updated HIVE-14901:
----------------------------------
    Labels: TODOC2.2  (was: )

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>              Labels: TODOC2.2
>         Attachments: HIVE-14901.1.patch, HIVE-14901.2.patch, HIVE-14901.3.patch, HIVE-14901.4.patch, HIVE-14901.5.patch, HIVE-14901.6.patch, HIVE-14901.7.patch, HIVE-14901.8.patch, HIVE-14901.9.patch, HIVE-14901.patch
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the maximum number of rows that we write in tasks. However, we should ideally use the user-supplied value (which can be extracted from the ThriftCLIService.FetchResults request parameter) to decide how many rows to serialize in a blob in the tasks. We should, however, keep {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound, so that we don't go OOM in tasks and HS2.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
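The bounding rule the description asks for can be sketched in a few lines. This is a hypothetical illustration, not the committed patch: the method name `effectiveFetchSize`, the class, and the default value are all assumptions; the actual change lives in HiveServer2's Thrift result-set serialization path.

```java
// Hypothetical sketch of the requested behavior: serialize per blob the
// number of rows the client asked for in FetchResults, capped at the
// configured hive.server2.thrift.resultset.max.fetch.size upper bound.
public class FetchSizeBound {
    // Assumed default for illustration only; the real default comes from HiveConf.
    static final int DEFAULT_MAX_FETCH_SIZE = 10000;

    // Returns the number of rows to serialize: the client-requested fetch
    // size bounded above by the configured maximum, so neither tasks nor
    // HS2 buffer an unbounded number of rows. A non-positive request falls
    // back to the configured maximum.
    static int effectiveFetchSize(int requestedFetchSize, int configuredMax) {
        if (requestedFetchSize <= 0) {
            return configuredMax;
        }
        return Math.min(requestedFetchSize, configuredMax);
    }

    public static void main(String[] args) {
        // A small client request is honored as-is.
        System.out.println(effectiveFetchSize(500, DEFAULT_MAX_FETCH_SIZE));   // 500
        // An oversized request is clamped to the server-side bound.
        System.out.println(effectiveFetchSize(50000, DEFAULT_MAX_FETCH_SIZE)); // 10000
    }
}
```

The design point is that the config key stops being the per-blob row count itself and becomes only a ceiling on whatever the client sends through the FetchResults request.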
Vaibhav Gumashta updated HIVE-14901:
    Target Version/s: 2.2.0  (was: 2.1.0)
Vaibhav Gumashta updated HIVE-14901:
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to master. Thanks [~norrisl]!
Norris Lee updated HIVE-14901:
    Attachment: (was: HIVE-14901.9.patch)
Norris Lee updated HIVE-14901:
    Status: Patch Available  (was: In Progress)
Norris Lee updated HIVE-14901:
    Status: In Progress  (was: Patch Available)
Norris Lee updated HIVE-14901:
    Attachment: HIVE-14901.9.patch
Norris Lee updated HIVE-14901:
    Status: Patch Available  (was: In Progress)
Norris Lee updated HIVE-14901:
    Attachment: (was: HIVE-14901.9.patch)
Norris Lee updated HIVE-14901:
    Status: In Progress  (was: Patch Available)
Norris Lee updated HIVE-14901:
    Attachment: HIVE-14901.9.patch
Norris Lee updated HIVE-14901:
    Status: Patch Available  (was: In Progress)
Norris Lee updated HIVE-14901:
    Status: In Progress  (was: Patch Available)
Norris Lee updated HIVE-14901:
    Attachment: HIVE-14901.8.patch
Norris Lee updated HIVE-14901:
    Status: Patch Available  (was: In Progress)
Norris Lee updated HIVE-14901:
    Attachment: HIVE-14901.7.patch
Norris Lee updated HIVE-14901:
    Status: Patch Available  (was: In Progress)
Norris Lee updated HIVE-14901:
    Status: In Progress  (was: Patch Available)
Norris Lee updated HIVE-14901:
    Status: Patch Available  (was: In Progress)
Norris Lee updated HIVE-14901:
    Status: In Progress  (was: Patch Available)
Norris Lee updated HIVE-14901:
    Attachment: HIVE-14901.6.patch
Norris Lee updated HIVE-14901:
    Attachment: HIVE-14901.5.patch
Norris Lee updated HIVE-14901: Status: Patch Available (was: Open)
Norris Lee updated HIVE-14901: Attachment: (was: HIVE-14901.5.patch)
Norris Lee updated HIVE-14901: Status: Open (was: Patch Available)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.5.patch
Norris Lee updated HIVE-14901: Status: Patch Available (was: In Progress)
Norris Lee updated HIVE-14901: Status: In Progress (was: Patch Available)
Norris Lee updated HIVE-14901: Status: In Progress (was: Patch Available)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.4.patch
Norris Lee updated HIVE-14901: Status: Patch Available (was: In Progress)
Norris Lee updated HIVE-14901: Status: Patch Available (was: Open)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.3.patch
Norris Lee updated HIVE-14901: Status: Open (was: Patch Available)
Norris Lee updated HIVE-14901: Attachment: (was: HIVE-14901.3.patch)
Norris Lee updated HIVE-14901: Status: Patch Available (was: In Progress)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.3.patch
Norris Lee updated HIVE-14901: Status: In Progress (was: Patch Available)
Norris Lee updated HIVE-14901: Status: Patch Available (was: In Progress)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.2.patch
Norris Lee updated HIVE-14901: Status: In Progress (was: Patch Available)
Norris Lee updated HIVE-14901: Status: Patch Available (was: In Progress)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.1.patch
Norris Lee updated HIVE-14901: Status: In Progress (was: Patch Available)
Norris Lee updated HIVE-14901: Target Version/s: 2.1.0; Status: Patch Available (was: Open)
Norris Lee updated HIVE-14901: Attachment: HIVE-14901.patch
Vaibhav Gumashta updated HIVE-14901: Assignee: Norris Lee (was: Ziyang Zhao)
Vaibhav Gumashta updated HIVE-14901: Affects Version/s: 2.1.0