Github user dyozie commented on a diff in the pull request:

    https://github.com/apache/incubator-hawq-docs/pull/59#discussion_r87917402
  
    --- Diff: bestpractices/querying_data_bestpractices.html.md.erb ---
    @@ -4,6 +4,23 @@ title: Best Practices for Querying Data
     
     To obtain the best results when querying data in HAWQ, review the best practices described in this topic.
     
    +## <a id="virtual_seg_performance"></a>Factors Impacting Query Performance
    +
    +The number of virtual segments used for a query directly impacts the query's performance. The following factors can affect the degree of parallelism of a query:
    +
    +-   **Cost of the query**. Small queries use fewer segments, while larger queries use more. Some techniques used in defining resource queues can influence both the number of virtual segments and the general resources allocated to queries. For more information, see [Best Practices for Using Resource Queues](managing_resources_bestpractices.html#topic_hvd_pls_wv).
    +-   **Available resources at query time**. If more resources are available in the resource queue, those resources will be used.
    +-   **Hash table and bucket number**. If the query involves only hash-distributed tables, the query's parallelism is fixed (equal to the hash table bucket number) under the following conditions:
    +   - the bucket number (`bucketnum`) configured for all of the hash tables is the same.
    +   - the table size for randomly distributed tables is no more than 1.5 times larger than the size allotted for the hash tables.
    +
    +  Otherwise, the number of virtual segments depends on the query's cost: hash-distributed table queries will behave like queries on randomly distributed tables.
    +-   **Query Type**. For queries with some user-defined functions, or for external tables where calculating resource costs is difficult, then the number of virtual segments is controlled by the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, as well as by the ON clause and the location list of external tables. If the query has a hash result table (e.g. `INSERT INTO hash_table`), the number of virtual segments must be equal to the bucket number of the resulting hash table. If the query is performed in utility mode, such as for `COPY` and `ANALYZE` operations, the virtual segment number is calculated by different policies.
    --- End diff ---
    
    This sentence needs some clarifying edits.  The best I can suggest is 
something like:  "It can be difficult to calculate resource costs for queries 
that reference either user-defined functions or external tables. For these 
queries, the number of virtual segments is controlled by the 
`hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` 
parameters, as well as by the ON clause and the location list of external 
tables."

