[jira] [Resolved] (GEODE-1775) CI failure: ParallelWANPropagationClientServerDUnitTest.testParallelPropagationWithClientServer

2017-05-11 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-1775.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> CI failure: 
> ParallelWANPropagationClientServerDUnitTest.testParallelPropagationWithClientServer
> ---
>
> Key: GEODE-1775
> URL: https://issues.apache.org/jira/browse/GEODE-1775
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Grace Meilen
>Assignee: Dan Smith
>  Labels: ci, flaky
> Fix For: 1.2.0
>
>
> {noformat}
> :geode-wan:distributedTest
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest
>  > testParallelPropagationWithClientServer FAILED
> com.gemstone.gemfire.test.dunit.RMIException: While invoking 
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest$$Lambda$19/1746236140.run
>  in VM 4 running on Host 9ff79c8190b7 with 8 VMs
> at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:389)
> at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:355)
> at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:293)
> at 
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest.testParallelPropagationWithClientServer(ParallelWANPropagationClientServerDUnitTest.java:59)
> Caused by:
> com.gemstone.gemfire.cache.NoSubscriptionServersAvailableException: 
> com.gemstone.gemfire.cache.NoSubscriptionServersAvailableException: Primary 
> discovery failed.
> at 
> com.gemstone.gemfire.cache.client.internal.QueueManagerImpl.getAllConnections(QueueManagerImpl.java:198)
> at 
> com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeOnQueuesAndReturnPrimaryResult(OpExecutorImpl.java:550)
> at 
> com.gemstone.gemfire.cache.client.internal.PoolImpl.executeOnQueuesAndReturnPrimaryResult(PoolImpl.java:763)
> at 
> com.gemstone.gemfire.cache.client.internal.RegisterInterestOp.execute(RegisterInterestOp.java:63)
> at 
> com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.registerInterest(ServerRegionProxy.java:376)
> at 
> com.gemstone.gemfire.internal.cache.LocalRegion.processSingleInterest(LocalRegion.java:3968)
> at 
> com.gemstone.gemfire.internal.cache.LocalRegion.registerInterest(LocalRegion.java:4058)
> at 
> com.gemstone.gemfire.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3873)
> at 
> com.gemstone.gemfire.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3867)
> at 
> com.gemstone.gemfire.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3863)
> at 
> com.gemstone.gemfire.internal.cache.wan.WANTestBase.createClientWithLocator(WANTestBase.java:2154)
> at 
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest.lambda$testParallelPropagationWithClientServer$cb73cba9$3(ParallelWANPropagationClientServerDUnitTest.java:59)
> Caused by:
> 
> com.gemstone.gemfire.cache.NoSubscriptionServersAvailableException: Primary 
> discovery failed.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (GEODE-2824) FunctionException: No target node found when executing hasNext on Lucene result

2017-05-08 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-2824:
-
Fix Version/s: 1.2.0

> FunctionException: No target node found when executing hasNext on Lucene 
> result
> ---
>
> Key: GEODE-2824
> URL: https://issues.apache.org/jira/browse/GEODE-2824
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: xiaojian zhou
> Fix For: 1.2.0
>
>
> The stack trace below is thrown during a race condition, when a node is 
> closing while hasNext is called on a Lucene result.
> It looks like there was a CacheClosedException, but this execution was unable 
> to find a target node to retry on, so it threw a FunctionException.
> We have code to unwrap CacheClosedExceptions from function exceptions; 
> however, this was just an ordinary function exception. The underlying cause 
> is that the cache is closing at this time.
> We should probably wrap all function exceptions in a LuceneQueryException or 
> equivalent, since a user would probably not expect a FunctionException when 
> calling Lucene methods.
> The stack trace:
> {noformat}
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeOnMultipleNodes(PartitionedRegion.java:3459)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3367)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:228)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:376)
> at 
> org.apache.geode.internal.cache.partitioned.PRFunctionStreamingResultCollector.getResult(PRFunctionStreamingResultCollector.java:178)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:112)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148)
> {noformat}





[jira] [Resolved] (GEODE-2824) FunctionException: No target node found when executing hasNext on Lucene result

2017-05-08 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2824.
--
Resolution: Fixed

> FunctionException: No target node found when executing hasNext on Lucene 
> result
> ---
>
> Key: GEODE-2824
> URL: https://issues.apache.org/jira/browse/GEODE-2824
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: xiaojian zhou
>
> The stack trace below is thrown during a race condition, when a node is 
> closing while hasNext is called on a Lucene result.
> It looks like there was a CacheClosedException, but this execution was unable 
> to find a target node to retry on, so it threw a FunctionException.
> We have code to unwrap CacheClosedExceptions from function exceptions; 
> however, this was just an ordinary function exception. The underlying cause 
> is that the cache is closing at this time.
> We should probably wrap all function exceptions in a LuceneQueryException or 
> equivalent, since a user would probably not expect a FunctionException when 
> calling Lucene methods.
> The stack trace:
> {noformat}
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeOnMultipleNodes(PartitionedRegion.java:3459)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3367)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:228)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:376)
> at 
> org.apache.geode.internal.cache.partitioned.PRFunctionStreamingResultCollector.getResult(PRFunctionStreamingResultCollector.java:178)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:112)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148)
> {noformat}





[jira] [Resolved] (GEODE-1734) Lucene search for a single entry is returning multiple results

2017-05-04 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-1734.
--
Resolution: Fixed

Fixed in GEODE-2241 revision 0182a1bb744d25fe490d142dfed7d9a6f20b2713

> Lucene search for a single entry is returning multiple results
> --
>
> Key: GEODE-1734
> URL: https://issues.apache.org/jira/browse/GEODE-1734
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: William Markito Oliveira
>Assignee: xiaojian zhou
>
> Searching for a unique entry returns multiple results, even though the key 
> is the same. It should return a single result.
> {code}
> gfsh>lucene search --name=customerRegionAll 
> --queryStrings="firstName:Jdfmlevjenzwgd" --region=/customer 
> --defaultField=displayName
> key  |
>value   | score
>  | 
> -
>  | -
> 70dbdb7f-648e-415e-880d-15631f87a523 | 
> PDX[16777220,org.example.domain.model.CustomerEntity]{active=false, 
> addresses=.. | 12.798602
> 70dbdb7f-648e-415e-880d-15631f87a523 | 
> PDX[16777220,org.example.domain.model.CustomerEntity]{active=false, 
> addresses=.. | 12.798602
> 70dbdb7f-648e-415e-880d-15631f87a523 | 
> PDX[16777220,org.example.domain.model.CustomerEntity]{active=false, 
> addresses=.. | 12.798602
> {code}





[jira] [Updated] (GEODE-2848) While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in remote members before stopping the AsyncEventQueue

2017-05-03 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-2848:
-
Fix Version/s: 1.2.0

> While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in 
> remote members before stopping the AsyncEventQueue
> -
>
> Key: GEODE-2848
> URL: https://issues.apache.org/jira/browse/GEODE-2848
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
> Fix For: 1.2.0
>
>
> This causes a NullPointerException in BatchRemovalThread getAllRecipients 
> like:
> {noformat}
> [fine 2017/04/24 14:27:29.163 PDT gemfire4_r02-s28_3222  
> tid=0x6b] BatchRemovalThread: ignoring exception
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.getAllRecipients(ParallelGatewaySenderQueue.java:1776)
>   at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.run(ParallelGatewaySenderQueue.java:1722)
> {noformat}
> This message is currently only logged at fine level and doesn't cause any 
> real issues.
> The simple fix is to check for null in getAllRecipients like:
> {noformat}
> PartitionedRegion pReg = (PartitionedRegion) cache.getRegion((String) pr);
> if (pReg != null) {
>   recipients.addAll(pReg.getRegionAdvisor().adviseDataStore());
> }
> {noformat}
> Another more complex fix is to change the destroyIndex sequence.
> The current destroyIndex sequence is:
> # stops and destroys the AEQ in the initiator (including the underlying PR)
> # closes the repository manager in the initiator
> # stops and destroys the AEQ in remote members (not including the underlying 
> PR)
> # closes the repository manager in the remote members
> # destroys the fileAndChunk region in the initiator
> Between steps 1 and 3, the region will be null in the remote members, so the 
> NPE can occur.
> A better sequence would be:
> # stops the AEQ in the initiator
> # stops the AEQ in remote members
> # closes the repository manager in the initiator
> # closes the repository manager in the remote members
> # destroys the AEQ in the initiator (including the underlying PR) 
> # destroys the AEQ in the remote members (not including the underlying PR)
> # destroys the fileAndChunk region in the initiator
> That would be 3 messages between the members.
> I think that can be combined into one remote message like:
> # stops the AEQ in the initiator
> # closes the repository manager in the initiator
> # stops the AEQ in remote members
> # closes the repository manager in the remote members
> # destroys the AEQ in the remote members (not including the underlying PR)
> # destroys the AEQ in the initiator (including the underlying PR) 
> # destroys the fileAndChunk region in the initiator





[jira] [Resolved] (GEODE-2848) While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in remote members before stopping the AsyncEventQueue

2017-05-03 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2848.
--
Resolution: Fixed

Fixed in revision d4ece31fa23bbe74c8be0a82ff4b9d143bad79b3

> While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in 
> remote members before stopping the AsyncEventQueue
> -
>
> Key: GEODE-2848
> URL: https://issues.apache.org/jira/browse/GEODE-2848
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>
> This causes a NullPointerException in BatchRemovalThread getAllRecipients 
> like:
> {noformat}
> [fine 2017/04/24 14:27:29.163 PDT gemfire4_r02-s28_3222  
> tid=0x6b] BatchRemovalThread: ignoring exception
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.getAllRecipients(ParallelGatewaySenderQueue.java:1776)
>   at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.run(ParallelGatewaySenderQueue.java:1722)
> {noformat}
> This message is currently only logged at fine level and doesn't cause any 
> real issues.
> The simple fix is to check for null in getAllRecipients like:
> {noformat}
> PartitionedRegion pReg = (PartitionedRegion) cache.getRegion((String) pr);
> if (pReg != null) {
>   recipients.addAll(pReg.getRegionAdvisor().adviseDataStore());
> }
> {noformat}
> Another more complex fix is to change the destroyIndex sequence.
> The current destroyIndex sequence is:
> # stops and destroys the AEQ in the initiator (including the underlying PR)
> # closes the repository manager in the initiator
> # stops and destroys the AEQ in remote members (not including the underlying 
> PR)
> # closes the repository manager in the remote members
> # destroys the fileAndChunk region in the initiator
> Between steps 1 and 3, the region will be null in the remote members, so the 
> NPE can occur.
> A better sequence would be:
> # stops the AEQ in the initiator
> # stops the AEQ in remote members
> # closes the repository manager in the initiator
> # closes the repository manager in the remote members
> # destroys the AEQ in the initiator (including the underlying PR) 
> # destroys the AEQ in the remote members (not including the underlying PR)
> # destroys the fileAndChunk region in the initiator
> That would be 3 messages between the members.
> I think that can be combined into one remote message like:
> # stops the AEQ in the initiator
> # closes the repository manager in the initiator
> # stops the AEQ in remote members
> # closes the repository manager in the remote members
> # destroys the AEQ in the remote members (not including the underlying PR)
> # destroys the AEQ in the initiator (including the underlying PR) 
> # destroys the fileAndChunk region in the initiator





[jira] [Assigned] (GEODE-2824) FunctionException: No target node found when executing hasNext on Lucene result

2017-05-02 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reassigned GEODE-2824:


Assignee: xiaojian zhou

> FunctionException: No target node found when executing hasNext on Lucene 
> result
> ---
>
> Key: GEODE-2824
> URL: https://issues.apache.org/jira/browse/GEODE-2824
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Jason Huynh
>Assignee: xiaojian zhou
>
> The stack trace below is thrown during a race condition, when a node is 
> closing while hasNext is called on a Lucene result.
> It looks like there was a CacheClosedException, but this execution was unable 
> to find a target node to retry on, so it threw a FunctionException.
> We have code to unwrap CacheClosedExceptions from function exceptions; 
> however, this was just an ordinary function exception. The underlying cause 
> is that the cache is closing at this time.
> We should probably wrap all function exceptions in a LuceneQueryException or 
> equivalent, since a user would probably not expect a FunctionException when 
> calling Lucene methods.
> The stack trace:
> {noformat}
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeOnMultipleNodes(PartitionedRegion.java:3459)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.executeFunction(PartitionedRegion.java:3367)
> at 
> org.apache.geode.internal.cache.execute.PartitionedRegionFunctionExecutor.executeFunction(PartitionedRegionFunctionExecutor.java:228)
> at 
> org.apache.geode.internal.cache.execute.AbstractExecution.execute(AbstractExecution.java:376)
> at 
> org.apache.geode.internal.cache.partitioned.PRFunctionStreamingResultCollector.getResult(PRFunctionStreamingResultCollector.java:178)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:112)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139)
> at 
> org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148)
> {noformat}





[jira] [Commented] (GEODE-2848) While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in remote members before stopping the AsyncEventQueue

2017-05-01 Thread xiaojian zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15991583#comment-15991583
 ] 

xiaojian zhou commented on GEODE-2848:
--

I don't think it's worth introducing the complexity of a new message or 
rearranging the message-processing sequence.

But the regionToDispatchedKeysMap will be cleared and temp will be lost, so the 
secondary at the remote site will not receive the ParallelQueueRemovalMessage.

There's a conservative, simple fix: in getAllRecipients(), detect that the 
region is gone and return an empty set. When recipients.isEmpty() is found, 
call regionToDispatchedKeysMap.putAll(temp).
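A minimal standalone sketch of that conservative fix (this is not Geode's actual code; the cache map, the member name, and the keys are placeholders standing in for the real PartitionedRegion lookup and adviseDataStore() call):

```java
import java.util.*;

public class BatchRemovalSketch {
  // stands in for ParallelGatewaySenderQueue.regionToDispatchedKeysMap
  static final Map<String, List<String>> regionToDispatchedKeysMap = new HashMap<>();

  // stands in for the getAllRecipients() null check: a missing region
  // (destroyed concurrently) yields an empty recipient set instead of an NPE
  static Set<String> getAllRecipients(Map<String, Object> cache, String prName) {
    Set<String> recipients = new HashSet<>();
    Object pReg = cache.get(prName);   // null when the shadow PR is already gone
    if (pReg != null) {
      recipients.add("dataStoreMember");   // placeholder for adviseDataStore()
    }
    return recipients;
  }

  public static void main(String[] args) {
    Map<String, Object> cache = new HashMap<>();      // region already destroyed
    Map<String, List<String>> temp = new HashMap<>();
    temp.put("shadowPR", List.of("key1", "key2"));

    Set<String> recipients = getAllRecipients(cache, "shadowPR");
    if (recipients.isEmpty()) {
      // conservative fix: restore the batched keys so they are not lost
      regionToDispatchedKeysMap.putAll(temp);
    }
    System.out.println(regionToDispatchedKeysMap.containsKey("shadowPR")); // prints "true"
  }
}
```

The point of the putAll(temp) step is that the dispatched keys go back into the map rather than being silently dropped when no recipient can be advised.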

> While destroying a LuceneIndex, the AsyncEventQueue region is destroyed in 
> remote members before stopping the AsyncEventQueue
> -
>
> Key: GEODE-2848
> URL: https://issues.apache.org/jira/browse/GEODE-2848
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Barry Oglesby
>
> This causes a NullPointerException in BatchRemovalThread getAllRecipients 
> like:
> {noformat}
> [fine 2017/04/24 14:27:29.163 PDT gemfire4_r02-s28_3222  
> tid=0x6b] BatchRemovalThread: ignoring exception
> java.lang.NullPointerException
>   at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.getAllRecipients(ParallelGatewaySenderQueue.java:1776)
>   at 
> org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue$BatchRemovalThread.run(ParallelGatewaySenderQueue.java:1722)
> {noformat}
> This message is currently only logged at fine level and doesn't cause any 
> real issues.
> The simple fix is to check for null in getAllRecipients like:
> {noformat}
> PartitionedRegion pReg = (PartitionedRegion) cache.getRegion((String) pr);
> if (pReg != null) {
>   recipients.addAll(pReg.getRegionAdvisor().adviseDataStore());
> }
> {noformat}
> Another more complex fix is to change the destroyIndex sequence.
> The current destroyIndex sequence is:
> # stops and destroys the AEQ in the initiator (including the underlying PR)
> # closes the repository manager in the initiator
> # stops and destroys the AEQ in remote members (not including the underlying 
> PR)
> # closes the repository manager in the remote members
> # destroys the fileAndChunk region in the initiator
> Between steps 1 and 3, the region will be null in the remote members, so the 
> NPE can occur.
> A better sequence would be:
> # stops the AEQ in the initiator
> # stops the AEQ in remote members
> # closes the repository manager in the initiator
> # closes the repository manager in the remote members
> # destroys the AEQ in the initiator (including the underlying PR) 
> # destroys the AEQ in the remote members (not including the underlying PR)
> # destroys the fileAndChunk region in the initiator
> That would be 3 messages between the members.
> I think that can be combined into one remote message like:
> # stops the AEQ in the initiator
> # closes the repository manager in the initiator
> # stops the AEQ in remote members
> # closes the repository manager in the remote members
> # destroys the AEQ in the remote members (not including the underlying PR)
> # destroys the AEQ in the initiator (including the underlying PR) 
> # destroys the fileAndChunk region in the initiator





[jira] [Reopened] (GEODE-1988) CI failure: RegisterInterestKeysPRDUnitTest fails intermittently

2017-05-01 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reopened GEODE-1988:
--

It was reproduced in CI FlakeyTest runs #569 and #564:

org.apache.geode.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest
 > testParallelPropagationWithClientServer FAILED
org.apache.geode.test.dunit.RMIException: While invoking 
org.apache.geode.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest$$Lambda$32/630572366.run
 in VM 7 running on Host 56dada81-012e-4ebc-6c30-8480d4e17975 with 8 VMs
at org.apache.geode.test.dunit.VM.invoke(VM.java:377)
at org.apache.geode.test.dunit.VM.invoke(VM.java:347)
at org.apache.geode.test.dunit.VM.invoke(VM.java:292)
at 
org.apache.geode.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest.testParallelPropagationWithClientServer(ParallelWANPropagationClientServerDUnitTest.java:56)

Caused by:
org.apache.geode.cache.NoSubscriptionServersAvailableException: 
org.apache.geode.cache.NoSubscriptionServersAvailableException: Primary 
discovery failed.
at 
org.apache.geode.cache.client.internal.QueueManagerImpl.getAllConnections(QueueManagerImpl.java:191)
at 
org.apache.geode.cache.client.internal.OpExecutorImpl.executeOnQueuesAndReturnPrimaryResult(OpExecutorImpl.java:570)
at 
org.apache.geode.cache.client.internal.PoolImpl.executeOnQueuesAndReturnPrimaryResult(PoolImpl.java:805)
at 
org.apache.geode.cache.client.internal.RegisterInterestOp.execute(RegisterInterestOp.java:58)
at 
org.apache.geode.cache.client.internal.ServerRegionProxy.registerInterest(ServerRegionProxy.java:362)
at 
org.apache.geode.internal.cache.LocalRegion.processSingleInterest(LocalRegion.java:3895)
at 
org.apache.geode.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3974)
at 
org.apache.geode.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3791)
at 
org.apache.geode.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3787)
at 
org.apache.geode.internal.cache.LocalRegion.registerInterest(LocalRegion.java:3783)
at 
org.apache.geode.internal.cache.wan.WANTestBase.createClientWithLocator(WANTestBase.java:2126)
at 
org.apache.geode.internal.cache.wan.parallel.ParallelWANPropagationClientServerDUnitTest.lambda$testParallelPropagationWithClientServer$998d73b4$1(ParallelWANPropagationClientServerDUnitTest.java:56)

Caused by:
org.apache.geode.cache.NoSubscriptionServersAvailableException: 
Primary discovery failed.


> CI failure: RegisterInterestKeysPRDUnitTest fails intermittently
> 
>
> Key: GEODE-1988
> URL: https://issues.apache.org/jira/browse/GEODE-1988
> Project: Geode
>  Issue Type: Bug
>  Components: client/server
>Reporter: Darrel Schneider
>  Labels: ci
>
> :geode-core:distributedTest
> org.apache.geode.internal.cache.tier.sockets.RegisterInterestKeysPRDUnitTest 
> > testRegisterCreatesInvalidEntry FAILED
> org.apache.geode.test.dunit.RMIException: While invoking 
> org.apache.geode.internal.cache.tier.sockets.RegisterInterestKeysDUnitTest$$Lambda$18/601024495.run
>  in VM 3 running on Host 583dcf0e97d9 with 4 VMs
> Caused by:
> java.lang.AssertionError: failed while registering interest
> Caused by:
> org.apache.geode.cache.NoSubscriptionServersAvailableException: 
> org.apache.geode.cache.NoSubscriptionServersAvailableException: Primary 
> discovery failed.
> Caused by:
> 
> org.apache.geode.cache.NoSubscriptionServersAvailableException: Primary 
> discovery failed.
> 7578 tests completed, 1 failed, 588 skipped





[jira] [Resolved] (GEODE-2806) when batch is dispatched, if the bucket is not primary, we should still destroy the event from queue

2017-04-21 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2806.
--
Resolution: Fixed

> when batch is dispatched, if the bucket is not primary, we should still 
> destroy the event from queue
> 
>
> Key: GEODE-2806
> URL: https://issues.apache.org/jira/browse/GEODE-2806
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>  Labels: lucene
>
> This is one of the root causes of data mismatch.
> When the AEQ dispatched a batch and tried to destroy the events from the 
> queue, the bucket might no longer have been primary. There's no need to let 
> the new primary re-dispatch the batch.





[jira] [Assigned] (GEODE-2806) when batch is dispatched, if the bucket is not primary, we should still destroy the event from queue

2017-04-20 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reassigned GEODE-2806:


Assignee: xiaojian zhou

> when batch is dispatched, if the bucket is not primary, we should still 
> destroy the event from queue
> 
>
> Key: GEODE-2806
> URL: https://issues.apache.org/jira/browse/GEODE-2806
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>  Labels: lucene
>
> This is one of the root causes of data mismatch.
> When the AEQ dispatched a batch and tried to destroy the events from the 
> queue, the bucket might no longer have been primary. There's no need to let 
> the new primary re-dispatch the batch.





[jira] [Created] (GEODE-2806) when batch is dispatched, if the bucket is not primary, we should still destroy the event from queue

2017-04-20 Thread xiaojian zhou (JIRA)
xiaojian zhou created GEODE-2806:


 Summary: when batch is dispatched, if the bucket is not primary, 
we should still destroy the event from queue
 Key: GEODE-2806
 URL: https://issues.apache.org/jira/browse/GEODE-2806
 Project: Geode
  Issue Type: Bug
Reporter: xiaojian zhou


This is one of the root causes of data mismatch.

When the AEQ dispatched a batch and tried to destroy the events from the queue, 
the bucket might no longer have been primary. There's no need to let the new 
primary re-dispatch the batch.
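The intended behavior can be sketched in isolation (a hypothetical simplification, not Geode's queue code; the deque and string events stand in for the bucket region and GatewaySenderEventImpl):

```java
import java.util.*;

public class BatchDispatchSketch {
  // After a batch has been dispatched, remove its events from the local
  // bucket queue even when this member has lost primary for the bucket,
  // so the new primary does not re-dispatch the same batch.
  static void removeDispatchedEvents(Deque<String> bucketQueue,
                                     List<String> dispatchedBatch,
                                     boolean isPrimary) {
    // The buggy behavior skipped removal when !isPrimary, leaving the
    // already-dispatched events for the new primary to send again.
    for (String event : dispatchedBatch) {
      bucketQueue.remove(event);   // destroy regardless of primary status
    }
  }

  public static void main(String[] args) {
    Deque<String> queue = new ArrayDeque<>(List.of("e1", "e2", "e3"));
    removeDispatchedEvents(queue, List.of("e1", "e2"), /* isPrimary = */ false);
    System.out.println(queue);   // prints "[e3]"
  }
}
```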





[jira] [Resolved] (GEODE-2787) state flush did not wait for notifyGateway

2017-04-14 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2787.
--
Resolution: Fixed

> state flush did not wait for notifyGateway
> --
>
> Key: GEODE-2787
> URL: https://issues.apache.org/jira/browse/GEODE-2787
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>  Labels: lucene
>
> When distribution happens, it calls startOperation() to increment a count, 
> then calls endOperation() to decrement the count.
> State flush will wait for this count to become 0.
> But notifyGateway() is called after distribute(), so there's a race where the 
> state flush finishes before notifyGateway has run.
> The fix is to move the endOperation() call after the callbacks.





[jira] [Updated] (GEODE-2787) state flush did not wait for notifyGateway

2017-04-14 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-2787:
-
Labels: lucene  (was: )

> state flush did not wait for notifyGateway
> --
>
> Key: GEODE-2787
> URL: https://issues.apache.org/jira/browse/GEODE-2787
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>  Labels: lucene
>
> When distribution happens, it calls startOperation() to increment a count, 
> then calls endOperation() to decrement the count.
> State flush will wait for this count to become 0.
> But notifyGateway() is called after distribute(), so there's a race where the 
> state flush finishes before notifyGateway has run.
> The fix is to move the endOperation() call after the callbacks.





[jira] [Assigned] (GEODE-2787) state flush did not wait for notifyGateway

2017-04-14 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reassigned GEODE-2787:


Assignee: xiaojian zhou

> state flush did not wait for notifyGateway
> --
>
> Key: GEODE-2787
> URL: https://issues.apache.org/jira/browse/GEODE-2787
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>  Labels: lucene
>
> When distribution happens, it calls startOperation() to increment a count, 
> then calls endOperation() to decrement the count.
> State flush will wait for this count to become 0.
> But notifyGateway() is called after distribute(), so there's a race where the 
> state flush finishes before notifyGateway has run.
> The fix is to move the endOperation() call after the callbacks.





[jira] [Created] (GEODE-2787) state flush did not wait for notifyGateway

2017-04-14 Thread xiaojian zhou (JIRA)
xiaojian zhou created GEODE-2787:


 Summary: state flush did not wait for notifyGateway
 Key: GEODE-2787
 URL: https://issues.apache.org/jira/browse/GEODE-2787
 Project: Geode
  Issue Type: Bug
Reporter: xiaojian zhou


When distribution happens, it calls startOperation() to increment a count, then 
calls endOperation() to decrement the count.

State flush will wait for this count to become 0.

But notifyGateway() is called after distribute(), so there's a race where the 
state flush finishes before notifyGateway has run.

The fix is to move the endOperation() call after the callbacks.
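A minimal sketch of that ordering fix (not Geode's actual state-flush code; the counter and runnable are simplifications of the operation count and the notifyGateway callback): the in-progress count must stay non-zero until the callback has run, so a concurrent state flush waiting for the count to reach 0 cannot complete in between.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StateFlushSketch {
  // stands in for the count that state flush waits on
  static final AtomicInteger operationCount = new AtomicInteger();

  static void distribute(Runnable notifyGateway) {
    operationCount.incrementAndGet();      // startOperation()
    try {
      // ... send the update to other members ...
      notifyGateway.run();                 // callback now runs inside the window
    } finally {
      operationCount.decrementAndGet();    // endOperation() moved after callbacks
    }
  }

  public static void main(String[] args) {
    distribute(() -> {
      // a state flush waiting for operationCount == 0 cannot finish here
      assert operationCount.get() > 0;
    });
    System.out.println(operationCount.get()); // prints "0"
  }
}
```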





[jira] [Commented] (GEODE-1894) SerialGatewaySenderOperationsDUnitTest test hangs

2017-03-29 Thread xiaojian zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15947572#comment-15947572
 ] 

xiaojian zhou commented on GEODE-1894:
--

It was not reproducible, but we found the root cause, fixed it, and committed 
the fix at revision 1938b386f1ed906452. 

> SerialGatewaySenderOperationsDUnitTest test hangs
> -
>
> Key: GEODE-1894
> URL: https://issues.apache.org/jira/browse/GEODE-1894
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Hitesh Khamesra
> Fix For: 1.0.0-incubating
>
> Attachments: threaddump.txt
>
>
> The test tries to stop the Serial Gateway Sender, and that thread just hangs. 
> Event processors are waiting to become primary. One AckReader thread is 
> waiting for an ack. It seems these threads need to be interrupted. Thread dump attached. 





[jira] [Issue Comment Deleted] (GEODE-1894) SerialGatewaySenderOperationsDUnitTest test hangs

2017-03-29 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-1894:
-
Comment: was deleted

(was: Re-ran the test using the current revision and even Sep 12's revision; no 
reproduction so far. )

> SerialGatewaySenderOperationsDUnitTest test hangs
> -
>
> Key: GEODE-1894
> URL: https://issues.apache.org/jira/browse/GEODE-1894
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Hitesh Khamesra
> Fix For: 1.0.0-incubating
>
> Attachments: threaddump.txt
>
>
> The test tries to stop the Serial Gateway Sender, and that thread just hangs. 
> Event processors are waiting to become primary. One AckReader thread is 
> waiting for an ack. It seems these threads need to be interrupted. Thread dump attached. 





[jira] [Resolved] (GEODE-2683) Lucene query did not match region values

2017-03-17 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2683.
--
Resolution: Fixed

> Lucene query did not match region values
> 
>
> Key: GEODE-2683
> URL: https://issues.apache.org/jira/browse/GEODE-2683
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
> Fix For: 1.2.0
>
>
> There are several root causes. This one is due to the fix in #45782, which 
> changed the order so that the primary bucket's gateway is notified before 
> distributing to the secondary. 
> The log is at /export/buglogs_bvt/xzhou/lucene/concParRegHA-0209-235804
> CLIENT vm_1_thr_17_dataStore1_ip-10-32-108-36_11189
> TASK[1] parReg.ParRegTest.HydraTask_HADoEntryOps
> ERROR util.TestException: util.TestException: Lucene query did not match 
> region values. missingKeys=[], extraKeys=[Object_13, Object_17, Object_952, 
> Object_550, Object_1876, Object_2732, Object_270, Object_4722, Object_4726, 
> Object_2537]
> at lucene.LuceneHelper.verifyLuceneIndex(LuceneHelper.java:88)
> at lucene.LuceneTest.verifyLuceneIndex(LuceneTest.java:128)
> at lucene.LuceneTest.verifyFromSnapshotOnly(LuceneTest.java:79)
> at parReg.ParRegTest.verifyFromSnapshot(ParRegTest.java:5638)
> at parReg.ParRegTest.concVerify(ParRegTest.java:6035)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at util.MethodCoordinator.executeOnce(MethodCoordinator.java:68)
> at parReg.ParRegTest.HADoEntryOps(ParRegTest.java:2273)
> at parReg.ParRegTest.HydraTask_HADoEntryOps(ParRegTest.java:1032)
> at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> The root cause is:
> T1: A putAll (or removeAll) operation arrives at the primary bucket on memberA.
> T2: BR.virtualPut() calls handleWANEvent() and creates a shadow key.
> T3: PutAll invokes the callback (i.e., writes into the AEQ) before 
> distribution. (Put/Destroy do not have this problem because they distribute 
> before the callback.)
> T4: handleSuccessfulBatchDispatch sends a ParallelQueueRemovalMessage to the 
> secondary bucket on memberB.
> T5: memberB has the data region's secondary bucket, but the brq is not created 
> yet (due to rebalance), so ParallelQueueRemovalMessage.process() only tries to 
> remove the event from the tempQueue, which does not contain the event, and 
> therefore does nothing.
> T6: Finally, BR.virtualPut()'s distribution arrives at the user region's 
> secondary bucket on memberB and is added to the AEQ (or the tempQueue, as the 
> case may be).
> T7: memberB becomes the new primary (due to rebalance) and re-dispatches the 
> shadow key, which was processed much earlier on memberA. The data mismatch 
> occurs because the replayed event overrides a newer event.
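The lost-removal part of that interleaving (T5 through T7) can be modeled in a few lines of plain Java. The names are illustrative only and this is not Geode's actual queue code; it just shows how a removal that arrives before the event it targets becomes a no-op, leaving a stale event queued for re-dispatch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model: the secondary's tempQueue holds shadow keys awaiting dispatch.
public class ReplaySketch {
    static final Deque<Long> tempQueue = new ArrayDeque<>();

    // T5: the brq does not exist yet, so only the tempQueue is checked;
    // removing a key that has not arrived removes nothing.
    static void processRemoval(long shadowKey) {
        tempQueue.remove(shadowKey);
    }

    // T6: the distribution finally arrives and the event is queued.
    static void processDistribution(long shadowKey) {
        tempQueue.add(shadowKey);
    }

    public static void main(String[] args) {
        long shadowKey = 42L;
        processRemoval(shadowKey);      // arrives first: no-op
        processDistribution(shadowKey); // arrives second: enqueues stale event
        // T7: after failover, the stale event is still queued for re-dispatch
        // and can override a newer value.
        System.out.println("stale events pending: " + tempQueue.size()); // prints 1
    }
}
```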





[jira] [Updated] (GEODE-2683) Lucene query did not match region values

2017-03-17 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-2683:
-
Description: 
There are several root causes. This one is due to the fix in #45782, which 
changed the order so that the primary bucket's gateway is notified before 
distributing to the secondary. 

The log is at /export/buglogs_bvt/xzhou/lucene/concParRegHA-0209-235804
CLIENT vm_1_thr_17_dataStore1_ip-10-32-108-36_11189
TASK[1] parReg.ParRegTest.HydraTask_HADoEntryOps
ERROR util.TestException: util.TestException: Lucene query did not match region 
values. missingKeys=[], extraKeys=[Object_13, Object_17, Object_952, 
Object_550, Object_1876, Object_2732, Object_270, Object_4722, Object_4726, 
Object_2537]
at lucene.LuceneHelper.verifyLuceneIndex(LuceneHelper.java:88)
at lucene.LuceneTest.verifyLuceneIndex(LuceneTest.java:128)
at lucene.LuceneTest.verifyFromSnapshotOnly(LuceneTest.java:79)
at parReg.ParRegTest.verifyFromSnapshot(ParRegTest.java:5638)
at parReg.ParRegTest.concVerify(ParRegTest.java:6035)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at util.MethodCoordinator.executeOnce(MethodCoordinator.java:68)
at parReg.ParRegTest.HADoEntryOps(ParRegTest.java:2273)
at parReg.ParRegTest.HydraTask_HADoEntryOps(ParRegTest.java:1032)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)

The root cause is:
T1: A putAll (or removeAll) operation arrives at the primary bucket on memberA.
T2: BR.virtualPut() calls handleWANEvent() and creates a shadow key.
T3: PutAll invokes the callback (i.e., writes into the AEQ) before distribution. 
(Put/Destroy do not have this problem because they distribute before the callback.)
T4: handleSuccessfulBatchDispatch sends a ParallelQueueRemovalMessage to the 
secondary bucket on memberB.
T5: memberB has the data region's secondary bucket, but the brq is not created 
yet (due to rebalance), so ParallelQueueRemovalMessage.process() only tries to 
remove the event from the tempQueue, which does not contain the event, and 
therefore does nothing.
T6: Finally, BR.virtualPut()'s distribution arrives at the user region's 
secondary bucket on memberB and is added to the AEQ (or the tempQueue, as the 
case may be).
T7: memberB becomes the new primary (due to rebalance) and re-dispatches the 
shadow key, which was processed much earlier on memberA. The data mismatch 
occurs because the replayed event overrides a newer event.

  was: There are several root causes. This one is due to the fix in #45782, 
which changed the order so that the primary bucket's gateway is notified before 
distributing to the secondary. 


> Lucene query did not match region values
> 
>
> Key: GEODE-2683
> URL: https://issues.apache.org/jira/browse/GEODE-2683
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
> Fix For: 1.2.0
>
>
> There are several root causes. This one is due to the fix in #45782, which 
> changed the order so that the primary bucket's gateway is notified before 
> distributing to the secondary. 
> The log is at /export/buglogs_bvt/xzhou/lucene/concParRegHA-0209-235804
> CLIENT vm_1_thr_17_dataStore1_ip-10-32-108-36_11189
> TASK[1] parReg.ParRegTest.HydraTask_HADoEntryOps
> ERROR util.TestException: util.TestException: Lucene query did not match 
> region values. missingKeys=[], extraKeys=[Object_13, Object_17, Object_952, 
> Object_550, Object_1876, Object_2732, Object_270, Object_4722, Object_4726, 
> Object_2537]
> at lucene.LuceneHelper.verifyLuceneIndex(LuceneHelper.java:88)
> at lucene.LuceneTest.verifyLuceneIndex(LuceneTest.java:128)
> at lucene.LuceneTest.verifyFromSnapshotOnly(LuceneTest.java:79)
> at parReg.ParRegTest.verifyFromSnapshot(ParRegTest.java:5638)
> at parReg.ParRegTest.concVerify(ParRegTest.java:6035)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at util.MethodCoordinator.executeOnce(MethodCoordinator.java:68)
> at parReg.ParRegTest.HADoEntryOps(ParRegTest.java:2273)
> at parReg.ParRegTest.HydraTask_HADoEntryOps(ParRegTest.java:1032)
> at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> The root cause is:
> T1: A putAll (or removeAll) operation 

[jira] [Updated] (GEODE-2617) LuceneResultStruct should be Serializable

2017-03-09 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-2617:
-
Fix Version/s: 1.2.0

> LuceneResultStruct should be Serializable
> -
>
> Key: GEODE-2617
> URL: https://issues.apache.org/jira/browse/GEODE-2617
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
> Fix For: 1.2.0
>
>
> Make LuceneResultStruct Serializable so that customers do not have to define 
> their own Serializable class to hold results. 





[jira] [Resolved] (GEODE-2617) LuceneResultStruct should be Serializable

2017-03-07 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2617.
--
Resolution: Fixed

> LuceneResultStruct should be Serializable
> -
>
> Key: GEODE-2617
> URL: https://issues.apache.org/jira/browse/GEODE-2617
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>
> Make LuceneResultStruct Serializable so that customers do not have to define 
> their own Serializable class to hold results. 





[jira] [Assigned] (GEODE-2617) LuceneResultStruct should be Serializable

2017-03-07 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reassigned GEODE-2617:


Assignee: xiaojian zhou

> LuceneResultStruct should be Serializable
> -
>
> Key: GEODE-2617
> URL: https://issues.apache.org/jira/browse/GEODE-2617
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>
> let LuceneResultStruct to be Serializable, then customer does not have to 
> defined their Serializable class to hold result.





[jira] [Created] (GEODE-2617) LuceneResultStruct should be Serializable

2017-03-07 Thread xiaojian zhou (JIRA)
xiaojian zhou created GEODE-2617:


 Summary: LuceneResultStruct should be Serializable
 Key: GEODE-2617
 URL: https://issues.apache.org/jira/browse/GEODE-2617
 Project: Geode
  Issue Type: Bug
Reporter: xiaojian zhou


Make LuceneResultStruct Serializable so that customers do not have to define 
their own Serializable class to hold results. 
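The point can be illustrated with a pared-down, hypothetical version of the struct. The real LuceneResultStruct lives in Geode and carries the entry key, the value, and the Lucene score; once it implements java.io.Serializable, its instances can be shipped to a client (e.g. inside a function result) without the user defining their own Serializable wrapper:

```java
import java.io.Serializable;

// Hypothetical stand-in for Geode's LuceneResultStruct.
class ResultStruct implements Serializable {
    private static final long serialVersionUID = 1L;

    final Object key;    // region entry key
    final Object value;  // region entry value
    final float score;   // Lucene relevance score

    ResultStruct(Object key, Object value, float score) {
        this.key = key;
        this.value = value;
        this.score = score;
    }
}

public class SerializableResultSketch {
    public static void main(String[] args) {
        // Because the struct itself is Serializable, it can be handed straight
        // to Java serialization; without the interface, serializing it throws
        // java.io.NotSerializableException.
        ResultStruct hit = new ResultStruct("jsondoc1", "PDX[3,__GEMFIRE_JSON]{...}", 1.0f);
        System.out.println(hit.key + " score=" + hit.score);
    }
}
```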







[jira] [Resolved] (GEODE-2471) AsyncEventListenerOffHeapDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching

2017-02-14 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2471.
--
Resolution: Fixed

> AsyncEventListenerOffHeapDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching
> --
>
> Key: GEODE-2471
> URL: https://issues.apache.org/jira/browse/GEODE-2471
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>  Labels: CI
>
> {noformat}
> found in concourse distributedTest #383
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.geode.internal.cache.wan.asyncqueue.AsyncEventListenerDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching(AsyncEventListenerDUnitTest.java:1675)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Assigned] (GEODE-2471) AsyncEventListenerOffHeapDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching

2017-02-13 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reassigned GEODE-2471:


Assignee: xiaojian zhou

> AsyncEventListenerOffHeapDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching
> --
>
> Key: GEODE-2471
> URL: https://issues.apache.org/jira/browse/GEODE-2471
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>  Labels: CI
>
> {noformat}
> found in concourse distributedTest #383
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.geode.internal.cache.wan.asyncqueue.AsyncEventListenerDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching(AsyncEventListenerDUnitTest.java:1675)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Created] (GEODE-2471) AsyncEventListenerOffHeapDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching

2017-02-12 Thread xiaojian zhou (JIRA)
xiaojian zhou created GEODE-2471:


 Summary: 
AsyncEventListenerOffHeapDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching
 Key: GEODE-2471
 URL: https://issues.apache.org/jira/browse/GEODE-2471
 Project: Geode
  Issue Type: Bug
  Components: core
Reporter: xiaojian zhou


{noformat}

found in concourse distributedTest #383

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.geode.internal.cache.wan.asyncqueue.AsyncEventListenerDUnitTest.testParallelAsyncEventQueueMoveBucketAndMoveItBackDuringDispatching(AsyncEventListenerDUnitTest.java:1675)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 

[jira] [Resolved] (GEODE-2400) PR accessors and client should have a way to wait for a lucene index to be flushed

2017-02-08 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2400.
--
Resolution: Fixed

> PR accessors and client should have a way to wait for a lucene index to be 
> flushed
> --
>
> Key: GEODE-2400
> URL: https://issues.apache.org/jira/browse/GEODE-2400
> Project: Geode
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Dan Smith
>Assignee: xiaojian zhou
>
> LuceneIndex.waitForFlushed can only be called on data stores. Since a user 
> could be doing puts from a client or a peer accessor, they may need to be 
> able to do some puts and then wait for the index to be flushed on the client 
> or the accessor.
> We should probably move waitForFlush to LuceneService instead, and then use a 
> function to route the wait call to a data store so that it can work from any 
> member.





[jira] [Assigned] (GEODE-2400) PR accessors and client should have a way to wait for a lucene index to be flushed

2017-01-31 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou reassigned GEODE-2400:


Assignee: xiaojian zhou

> PR accessors and client should have a way to wait for a lucene index to be 
> flushed
> --
>
> Key: GEODE-2400
> URL: https://issues.apache.org/jira/browse/GEODE-2400
> Project: Geode
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Dan Smith
>Assignee: xiaojian zhou
>
> LuceneIndex.waitForFlushed can only be called on data stores. Since a user 
> could be doing puts from a client or a peer accessor, they may need to be 
> able to do some puts and then wait for the index to be flushed on the client 
> or the accessor.
> We should probably move waitForFlush to LuceneService instead, and then use a 
> function to route the wait call to a data store so that it can work from any 
> member.





[jira] [Resolved] (GEODE-2384) Lucene index cannot be created on accessor if there's persistent region

2017-01-31 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2384.
--
Resolution: Fixed

> Lucene index cannot be created on accessor if there's persistent region
> ---
>
> Key: GEODE-2384
> URL: https://issues.apache.org/jira/browse/GEODE-2384
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>
> If LuceneQueriesPeerPRDUnitTest.java is modified to use 
> RegionShortcut.PARTITION_PERSISTENT, the test fails with: 
> java.lang.IllegalStateException: Cannot create Gateway Sender 
> AsyncEventQueue_index#_region with isPersistentEnabled 
> false because another cache has the same Gateway Sender defined 
> with isPersistentEnabled





[jira] [Created] (GEODE-2384) Lucene index cannot be created on accessor if there's persistent region

2017-01-27 Thread xiaojian zhou (JIRA)
xiaojian zhou created GEODE-2384:


 Summary: Lucene index cannot be created on accessor if there's 
persistent region
 Key: GEODE-2384
 URL: https://issues.apache.org/jira/browse/GEODE-2384
 Project: Geode
  Issue Type: Bug
Reporter: xiaojian zhou


If LuceneQueriesPeerPRDUnitTest.java is modified to use 
RegionShortcut.PARTITION_PERSISTENT, the test fails with: 
java.lang.IllegalStateException: Cannot create Gateway Sender 
AsyncEventQueue_index#_region with isPersistentEnabled 
false because another cache has the same Gateway Sender defined 
with isPersistentEnabled





[jira] [Resolved] (GEODE-2241) gfsh lucene will generate duplicate results

2017-01-04 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2241.
--
Resolution: Fixed

> gfsh lucene will generate duplicate results
> ---
>
> Key: GEODE-2241
> URL: https://issues.apache.org/jira/browse/GEODE-2241
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>Assignee: xiaojian zhou
>
> When 2 servers are started, running gfsh always displays duplicated results.
> The root cause: a function call is submitted to execute the query on each 
> member.
> gfsh>list lucene indexes
>  Index Name   | Region Path |       Indexed Fields       |    Field Analyzer    |   Status
> ------------- | ----------- | -------------------------- | -------------------- | -----------
> analyzerIndex | /Person     | [address, name, email]     | {address=MyCharact.. | Initialized
> analyzerIndex | /Person     | [address, name, email]     | {address=MyCharact.. | Initialized
> customerIndex | /Customer   | [symbol, revenue, SSN, n.. | {}                   | Initialized
> customerIndex | /Customer   | [symbol, revenue, SSN, n.. | {}                   | Initialized
> pageIndex     | /Page       | [symbol, name, email, ad.. | {}                   | Initialized
> pageIndex     | /Page       | [symbol, name, email, ad.. | {}                   | Initialized
> personIndex   | /Person     | [name, email, address, s.. | {}                   | Initialized
> personIndex   | /Person     | [name, email, address, s.. | {}                   | Initialized
> gfsh>search lucene --name=personIndex --region=/Person --defaultField=name --queryStrings="Tom*JSON"
>    key   |                                    value                                    | score
> -------- | --------------------------------------------------------------------------- | -----
> jsondoc1 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
> jsondoc2 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
> jsondoc1 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
> jsondoc2 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1





[jira] [Updated] (GEODE-2263) CliUtil.getRegionAssociatedMembers()'s returnAll parameter is not used

2017-01-03 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou updated GEODE-2263:
-
Description: DataCommands has a method with the same name that does use the 
returnAll parameter. The DataCommands version should be moved to replace the 
CliUtil one; we should not have two methods doing the same thing in two 
different classes.

> CliUtil.getRegionAssociatedMembers()'s returnAll parameter is not used
> --
>
> Key: GEODE-2263
> URL: https://issues.apache.org/jira/browse/GEODE-2263
> Project: Geode
>  Issue Type: Bug
>Reporter: xiaojian zhou
>
> DataCommands has a method with the same name that does use the returnAll 
> parameter. The DataCommands version should be moved to replace the CliUtil 
> one; we should not have two methods doing the same thing in two different 
> classes.
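The consolidation suggested above can be sketched like this (the method name matches the report, but the signature, the member-lookup representation, and the single-member selection rule are all hypothetical, not the real CliUtil/DataCommands code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of one shared utility that honors returnAll, replacing
// a CliUtil copy that ignores the flag and a DataCommands copy that uses it.
public class MemberLookupSketch {
    static Set<String> getRegionAssociatedMembers(
            Map<String, Set<String>> membersByRegion,
            String regionPath,
            boolean returnAll) {
        Set<String> members =
            membersByRegion.getOrDefault(regionPath, Collections.emptySet());
        if (returnAll || members.isEmpty()) {
            return members;
        }
        // returnAll == false: return a single member, e.g. when a gfsh data
        // command only needs one target to route the operation to.
        return Collections.singleton(members.iterator().next());
    }

    public static void main(String[] args) {
        Map<String, Set<String>> m = Collections.singletonMap(
            "/Person", new java.util.TreeSet<>(List.of("server1", "server2")));
        System.out.println(getRegionAssociatedMembers(m, "/Person", true));  // both members
        System.out.println(getRegionAssociatedMembers(m, "/Person", false).size()); // 1
    }
}
```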





[jira] [Created] (GEODE-2241) gfsh lucene will generate duplicate results

2016-12-21 Thread xiaojian zhou (JIRA)
xiaojian zhou created GEODE-2241:


 Summary: gfsh lucene will generate duplicate results
 Key: GEODE-2241
 URL: https://issues.apache.org/jira/browse/GEODE-2241
 Project: Geode
  Issue Type: Bug
Reporter: xiaojian zhou


When 2 servers are started, running gfsh queries always displays duplicated results.

The root cause: we submit a function call to execute the query on each member.

gfsh>list lucene indexes
 Index Name   | Region Path |       Indexed Fields       |    Field Analyzer    |   Status
------------- | ----------- | -------------------------- | -------------------- | -----------
analyzerIndex | /Person     | [address, name, email]     | {address=MyCharact.. | Initialized
analyzerIndex | /Person     | [address, name, email]     | {address=MyCharact.. | Initialized
customerIndex | /Customer   | [symbol, revenue, SSN, n.. | {}                   | Initialized
customerIndex | /Customer   | [symbol, revenue, SSN, n.. | {}                   | Initialized
pageIndex     | /Page       | [symbol, name, email, ad.. | {}                   | Initialized
pageIndex     | /Page       | [symbol, name, email, ad.. | {}                   | Initialized
personIndex   | /Person     | [name, email, address, s.. | {}                   | Initialized
personIndex   | /Person     | [name, email, address, s.. | {}                   | Initialized

gfsh>search lucene --name=personIndex --region=/Person --defaultField=name --queryStrings="Tom*JSON"
   key   |                                    value                                    | score
-------- | --------------------------------------------------------------------------- | -----
jsondoc1 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
jsondoc2 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
jsondoc1 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
jsondoc2 | PDX[3,__GEMFIRE_JSON]{address=PDX[1,__GEMFIRE_JSON]{city=New York, postal.. | 1
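One way to picture the fix (a sketch, not the actual Geode patch): since every member returns its full hit list, the aggregated output contains one copy per member, so de-duplicating by entry key before display removes the repeats. The class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: merge hit lists from all members, keeping the first
// occurrence of each entry key and preserving arrival order.
public class DedupeSketch {
    static List<String> dedupeByKey(List<String[]> hitsFromAllMembers) {
        Map<String, String> byKey = new LinkedHashMap<>();
        for (String[] hit : hitsFromAllMembers) {
            byKey.putIfAbsent(hit[0], hit[1]); // hit = {key, value}
        }
        return new ArrayList<>(byKey.keySet());
    }

    public static void main(String[] args) {
        // Two members each report the same two documents.
        List<String[]> hits = Arrays.asList(
            new String[] {"jsondoc1", "PDX[3,__GEMFIRE_JSON]{..}"},
            new String[] {"jsondoc2", "PDX[3,__GEMFIRE_JSON]{..}"},
            new String[] {"jsondoc1", "PDX[3,__GEMFIRE_JSON]{..}"},
            new String[] {"jsondoc2", "PDX[3,__GEMFIRE_JSON]{..}"});
        System.out.println(dedupeByKey(hits)); // [jsondoc1, jsondoc2]
    }
}
```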






[jira] [Resolved] (GEODE-2175) CI failure from TopEntriesFunctionCollectorJUnitTest.expectErrorAfterWaitTime

2016-12-08 Thread xiaojian zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojian zhou resolved GEODE-2175.
--
Resolution: Fixed

> CI failure from TopEntriesFunctionCollectorJUnitTest.expectErrorAfterWaitTime
> -
>
> Key: GEODE-2175
> URL: https://issues.apache.org/jira/browse/GEODE-2175
> Project: Geode
>  Issue Type: Bug
>  Components: lucene
>Reporter: Dan Smith
>Assignee: xiaojian zhou
>
> {noformat}
> org.apache.geode.cache.lucene.internal.distributed.TopEntriesFunctionCollectorJUnitTest
>  > expectErrorAfterWaitTime FAILED
> java.lang.Exception: Unexpected exception, 
> expected but 
> was
> Caused by:
> java.lang.AssertionError: expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.geode.cache.lucene.internal.distributed.TopEntriesFunctionCollectorJUnitTest.expectErrorAfterWaitTime(TopEntriesFunctionCollectorJUnitTest.java:195)
> {noformat}
> Looking at this test, it looks like a race condition waiting to happen, 
> because it does a bunch of 1-second awaits.
> I'm also suspicious of the functionality being tested here in the first 
> place. A user's result collector shouldn't have to contain logic to wait 
> for all of the results to be gathered; that's handled by the function 
> execution framework. So the real fix may be to remove these tests and the 
> logic in TopEntriesFunctionCollector that waits for the results to be 
> gathered.
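The flakiness argument above can be demonstrated in isolation: a single fixed 1-second wait fails whenever the background work happens to take longer, whereas polling until a generous deadline does not. The helper below is a minimal stand-in for a polling library such as Awaitility, not Geode test code.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

// Sketch: replace a fixed sleep-then-assert with a poll-until-deadline wait,
// which tolerates scheduling delays instead of racing against them.
public class AwaitSketch {
    static void awaitTrue(BooleanSupplier cond, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!cond.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("condition not met within " + timeoutMs + "ms");
            }
            Thread.sleep(10); // poll instead of a single fixed sleep
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger results = new AtomicInteger();
        Thread worker = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            results.set(1); // the result arrives "late"
        });
        worker.start();
        awaitTrue(() -> results.get() == 1, 5_000); // passes regardless of scheduling
        System.out.println(results.get()); // 1
        worker.join();
    }
}
```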


