Hi Anil,

Most likely, your query takes a long time because the SQL query runs in a
single thread. The only workaround for now is to add more nodes.

However, the query is quite simple, so you can run a ScanQuery per partition
in parallel to iterate over PERSON_CACHE.
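
For reference, here is a minimal sketch of that approach (my own, not tested
against your setup): one local ScanQuery per primary partition of PERSON_CACHE,
submitted to a thread pool. The pool size, the processing step and the Person
value class are assumptions taken from your earlier mail.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ParallelPartitionScan {
    /** Scans the local primary partitions of PERSON_CACHE in parallel. */
    public static void scanLocally(Ignite ignite) throws InterruptedException {
        IgniteCache<String, Person> cache = ignite.cache("PERSON_CACHE");

        // Primary partitions owned by the local node.
        int[] parts = ignite.affinity("PERSON_CACHE")
            .primaryPartitions(ignite.cluster().localNode());

        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        for (int part : parts) {
            final int p = part;

            pool.submit(() -> {
                // Local scan over a single partition; no SQL engine involved.
                ScanQuery<String, Person> qry = new ScanQuery<>(p);
                qry.setLocal(true);

                try (QueryCursor<Cache.Entry<String, Person>> cur = cache.query(qry)) {
                    for (Cache.Entry<String, Person> e : cur) {
                        // Process the entry, e.g. look up its PersonDetail records by eqId.
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}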


On Fri, Feb 17, 2017 at 5:29 AM, Anil <anilk...@gmail.com> wrote:

> Hi Andrey,
>
> Yes, an index is available on eqId of the PersonDetail object.
>
> The query plan shows a scan of the Person cache, not the PersonDetail cache.
>
> And I think the above ComputeTask is executed by only one thread, not by one
> thread per partition. Can parallelism be achieved here?
>
> Thanks.
>
>
>
> On 17 February 2017 at 02:32, Andrey Mashenkov <andrey.mashen...@gmail.com
> > wrote:
>
>> Hi Anil,
>>
>> 1. It seems some node has entered the topology but cannot finish the
>> partition map exchange because a long-running transaction or something else
>> is holding a lock on a partition.
>>
>> 2. /* PERSON_CACHE.PERSON.__SCAN_ */ says that no index is used for this
>> query and a full scan will be performed. Do you have an index on the
>> PersonDetail.eqId field?
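
For reference, a minimal sketch of how such an index could be declared,
assuming annotation-based query configuration; only the eqId field name comes
from the thread, the rest is a placeholder:

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class PersonDetail {
    /** Indexed so that "where eqId = ?" can use an index lookup instead of a full scan. */
    @QuerySqlField(index = true)
    private String eqId;

    // ... remaining fields of the real PersonDetail class ...
}

The class also has to be registered with the cache configuration, e.g. via
CacheConfiguration#setIndexedTypes, for the annotation to take effect.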
>>
>> On Thu, Feb 16, 2017 at 6:50 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Val,
>>>
>>> I have created a ComputeTask which scans the local cache and propagates its
>>> information to child records in another cache. Both caches are collocated so
>>> that parent and child records fall on the same node and partition.
>>>
>>> 1. I see the following warning in the logs when the compute task is running:
>>>
>>>  GridCachePartitionExchangeManager:480 - Failed to wait for partition
>>> map exchange [topVer=AffinityTopologyVersion [topVer=6, minorTopVer=0],
>>> node=c7a3957b-a3d0-4923-8e5d-e95430c7e66e]. Dumping pending objects
>>> that might be the cause:
>>>
>>> Should I worry about this warning? What could be the reason for it?
>>>
>>> 2.
>>>
>>> QueryCursor<Entry<String, Person>> cursor = cache.query(
>>>     new SqlQuery<String, Person>(Person.class, "select * from Person")
>>>         .setLocal(true));
>>>
>>> for (Entry<String, Person> row : cursor) {
>>>     String eqId = row.getValue().getEqId(); // (String) row.get(0);
>>>
>>>     QueryCursor<Entry<AffinityKey<String>, PersonDetail>> dCursor =
>>>         detailsCache.query(
>>>             new SqlQuery<AffinityKey<String>, PersonDetail>(PersonDetail.class,
>>>                 "select * from DETAIL_CACHE.PersonDetail where eqId = ?")
>>>                 .setLocal(true).setArgs(eqId));
>>>
>>>     for (Entry<AffinityKey<String>, PersonDetail> d : dCursor) {
>>>         // Add person info to the person detail and push it to the person
>>>         // detail data streamer.
>>>     }
>>> }
>>>
>>>
>>> I see in the logs that the query is taking a long time:
>>>
>>>
>>> Query execution is too long [time=23309 ms, sql='SELECT
>>> "PERSON_CACHE".Person._key, "PERSON_CACHE".PERSON._val from Person', plan=
>>> SELECT
>>>     PERSON_CACHE.PERSON._KEY,
>>>     PERSON_CACHE.PERSON._VAL
>>> FROM PERSON_CACHE.PERSON
>>>     /* PERSON_CACHE.PERSON.__SCAN_ */
>>> , parameters=[]]
>>>
>>> Are there any issues with the above approach?
>>>
>>>
>>> Thanks.
>>>
>>>
>>> On 11 February 2017 at 04:18, vkulichenko <valentin.kuliche...@gmail.com
>>> > wrote:
>>>
>>>> Looks OK, except that the first query should also be local, I guess. Also
>>>> note that you used the split adapter, so you didn't actually map the jobs to
>>>> nodes, leaving this to Ignite. This means there is a chance that some nodes
>>>> will get more than one job and some will get none. Round-robin balancing is
>>>> used by default, so this should not happen, at least on a stable topology,
>>>> but theoretically there is no guarantee. Use the map method instead to
>>>> manually map jobs to nodes, or just use the broadcast() method.
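
A minimal sketch of the broadcast() route (assuming the per-node work is the
local query loop from earlier in the thread; the cache name and the closure
body are placeholders, not Val's code):

import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class BroadcastUpdate {
    /** Runs the per-node update once on every node holding PERSON_CACHE data. */
    public static void run(Ignite ignite) {
        ignite.compute(ignite.cluster().forDataNodes("PERSON_CACHE"))
            .broadcast(new IgniteRunnable() {
                @IgniteInstanceResource
                private transient Ignite localIgnite;

                @Override public void run() {
                    // Local work: e.g. the setLocal(true) query loop over
                    // PERSON_CACHE and DETAIL_CACHE shown earlier in the thread.
                }
            });
    }
}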
>>>>
>>>> Jobs are executed in parallel in the public thread pool.
>>>>
>>>> -Val
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>


-- 
Best regards,
Andrey V. Mashenkov
